LangWatch Scenario is an agent-testing framework built on pytest. Scenario works with any OpenAI-compatible API, so here we show how to get LangWatch running against local LLMs served by Ollama.
The code in test_ollama_client.py follows the same lines as test_azure_api_gateway.py from the Scenario Python examples folder.
The changes specific to Ollama are:
1. Create an Ollama client: a factory such as ollama_client() that returns OpenAI(base_url=<OLLAMA_BASE_URL>), i.e. an OpenAI client pointed at the local Ollama server (see the sketch after this list).
2. Configure the Ollama model (gemma, etc.) and custom_llm_provider ("ollama") in both Scenario agents, UserSimulatorAgent and JudgeAgent: scenario.UserSimulatorAgent(model=OLLAMA_MODEL, client=custom_client, custom_llm_provider=CUSTOM_LLM_PROVIDER), and likewise for the judge (sketched further below).
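For the first change, a minimal client factory could look like the following. The base URL is Ollama's default OpenAI-compatible endpoint; adjust it if your server runs elsewhere:

```python
from openai import OpenAI

# Ollama's default OpenAI-compatible endpoint; change host/port for your setup.
OLLAMA_BASE_URL = "http://localhost:11434/v1"

def ollama_client() -> OpenAI:
    # Ollama ignores the API key, but the OpenAI client requires a non-empty value.
    return OpenAI(base_url=OLLAMA_BASE_URL, api_key="ollama")
```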
For the full details, see test_ollama_client.py.
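Putting the two changes together, the agent configuration in the test looks roughly like this. The model name and judge criteria below are placeholders, and the agent under test plus the scenario.run(...) call are omitted; test_ollama_client.py has the complete test:

```python
import scenario

OLLAMA_MODEL = "gemma3"          # placeholder: any model already pulled via `ollama pull`
CUSTOM_LLM_PROVIDER = "ollama"   # routes the agents' LLM calls to Ollama

custom_client = ollama_client()  # the factory from the sketch above

# Both the simulated user and the judge talk to the local Ollama server.
user_agent = scenario.UserSimulatorAgent(
    model=OLLAMA_MODEL,
    client=custom_client,
    custom_llm_provider=CUSTOM_LLM_PROVIDER,
)
judge_agent = scenario.JudgeAgent(
    model=OLLAMA_MODEL,
    client=custom_client,
    custom_llm_provider=CUSTOM_LLM_PROVIDER,
    criteria=["The agent's answers stay on topic"],  # placeholder criteria
)
```

With these two agents wired up, the rest of the test is the same as in the Azure API gateway example.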