Friday, November 14, 2025

LangWatch Scenario with Ollama

LangWatch Scenario is a pytest-based framework for agent testing. Scenario works with OpenAI-compatible APIs. Here we show how to run LangWatch Scenario against local LLMs served by Ollama.

The code in test_ollama_client.py follows the same lines as test_azure_api_gateway.py from the Scenario Python examples folder.

The changes specific to Ollama are:

1. Set-up

    pip3 install langwatch-scenario 

Environment variables

    export OPENAI_API_BASE_URL=http://localhost:11434/api/
    export OPENAI_API_KEY=NOTHING

2. Create Ollama client

    def ollama_client() -> OpenAI:
        return OpenAI(base_url=OLLAMA_BASE_URL, api_key=OLLAMA_API_KEY)
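
For the snippet above to run, it assumes imports and constants along these lines (a minimal sketch; the variable names are mine, the values mirror the environment variables from step 1, and Ollama ignores the dummy API key that the OpenAI SDK insists on):

    import os

    from openai import OpenAI

    # Mirror the environment variables exported in step 1.
    OLLAMA_BASE_URL = os.environ.get("OPENAI_API_BASE_URL", "http://localhost:11434/api/")
    OLLAMA_API_KEY = os.environ.get("OPENAI_API_KEY", "NOTHING")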

3. Configure the Ollama model (gemma, etc.) and custom_llm_provider ("ollama") in the agents (UserSimulatorAgent & JudgeAgent)

    scenario.UserSimulatorAgent(model=OLLAMA_MODEL, client=custom_client, custom_llm_provider=CUSTOM_LLM_PROVIDER)...

For the full details, see test_ollama_client.py; a rough sketch of how the pieces fit together follows.
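
The sketch below is hedged and illustrative only: the agent under test, model name, scenario description, and judge criteria are placeholders I made up, the UserSimulatorAgent/JudgeAgent keyword arguments follow the snippet above, and test_ollama_client.py remains the authoritative reference.

    import pytest
    import scenario

    OLLAMA_MODEL = "gemma3"           # any model already pulled into Ollama
    CUSTOM_LLM_PROVIDER = "ollama"
    custom_client = ollama_client()   # from step 2

    class EchoAgent(scenario.AgentAdapter):
        """Stand-in for the agent under test."""
        async def call(self, input: scenario.AgentInput) -> scenario.AgentReturnTypes:
            # A canned reply keeps the example self-contained.
            return "Hello! How can I help you today?"

    @pytest.mark.agent_test
    @pytest.mark.asyncio
    async def test_echo_agent_with_ollama():
        result = await scenario.run(
            name="ollama smoke test",
            description="The user greets the agent and asks a simple question.",
            agents=[
                EchoAgent(),
                scenario.UserSimulatorAgent(
                    model=OLLAMA_MODEL,
                    client=custom_client,
                    custom_llm_provider=CUSTOM_LLM_PROVIDER,
                ),
                scenario.JudgeAgent(
                    model=OLLAMA_MODEL,
                    client=custom_client,
                    custom_llm_provider=CUSTOM_LLM_PROVIDER,
                    criteria=["The agent responds politely to the user."],
                ),
            ],
        )
        assert result.success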

4. Offline LangWatch Scenario Reporter

On every run, LangWatch Scenario uploads the results to the app.langwatch.ai endpoint. For a truly offline run, point LANGWATCH_ENDPOINT at your own reporting endpoint:

    export LANGWATCH_ENDPOINT=https://YOUR_REPORTING_ENDPOINT

There is currently no option to disable Scenario reporting. The only workaround is to set LANGWATCH_ENDPOINT to an unreachable value (e.g. "http://localhost:2333/invalid").
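
One way to apply that workaround without touching your shell profile is to set the variable from a conftest.py so pytest picks it up before the test modules import Scenario. This is a generic pytest pattern, not an official Scenario switch:

    # conftest.py -- loaded by pytest before the test modules
    import os

    # Send reports to an unreachable endpoint so no results leave the machine.
    # This is only a workaround; there is no official off switch yet.
    os.environ.setdefault("LANGWATCH_ENDPOINT", "http://localhost:2333/invalid")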

 
