Thursday, October 30, 2025

Langchain4j

LangChain is one of the leading Python-based AI/ML, agentic modelling and integration frameworks. LangChain (and allied frameworks like LangGraph) allows integration with almost all the LLMs, Python libraries and tools out there. 

Langchain4j is its Java counterpart. Langchain4j allows LLM integrations and workflows to be built using pure Java constructs. It primarily operates as a Java client to the various APIs exposed by the different LLM providers such as OpenAI, Azure, Bedrock, Gemini and so on. 

Langchain4j has covered a lot of ground in terms of supported modules from both the Python and Java ecosystems. It is actively maintained and should be around for the long run. 

To get a feel for Langchain4j against a local LLM, try out langchain4j-ollama.

This will get: 

    Java langchain4j-ollama to talk to  

        -> Ollama (deployed locally) 

                -> Hosting the llama3.2:1b  model  

(I) Get a local Ollama up & running

Refer to the previous post for getting Ollama installed and running locally. Once done, you should have a llama3.2:1b model running & ready to chat locally at:

    http://127.0.0.1:11434 
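Before wiring up langchain4j, it is worth confirming the Ollama endpoint is actually reachable. Ollama answers a plain GET on its base URL with the text "Ollama is running". A minimal sketch using the JDK's built-in HTTP client (the class and helper names here are illustrative, not part of any library):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OllamaHealthCheck {

    static final String BASE_URL = "http://127.0.0.1:11434";

    // Ollama's root endpoint replies with the plain-text body "Ollama is running".
    static boolean looksHealthy(String body) {
        return body != null && body.contains("Ollama is running");
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE_URL)).GET().build();

        // A successful round trip means the local Ollama server is up.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(looksHealthy(response.body())
                ? "Ollama is up"
                : "Unexpected response: " + response.body());
    }
}
```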

(II) Download & build langchain4j-ollama project

Clone langchain4j-ollama project & build:

    cd </download/folder/langchain4j-ollama> 

    mvn install 

(III) Run langchain4j-ollama tests

Run a couple of the langchain4j-ollama integration tests. Start with OllamaChatModelIT.java. Make sure to update the MODEL_NAME value to the llama3.2:1b model downloaded in step (I) above:

         static final String MODEL_NAME = "llama3.2:1b";

That's about it for getting the three pieces integrated & chatting! 
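The three pieces above can also be exercised outside the test suite with a short standalone program. A minimal sketch, assuming a langchain4j-ollama dependency on the classpath and the llama3.2:1b model from step (I); note that depending on your langchain4j version the single-string call is `generate(...)` (older releases) or `chat(...)` (newer ones):

```java
import dev.langchain4j.model.ollama.OllamaChatModel;

public class OllamaChatDemo {

    static final String BASE_URL = "http://127.0.0.1:11434";
    static final String MODEL_NAME = "llama3.2:1b";

    public static void main(String[] args) {
        // Build a chat model pointed at the locally running Ollama server.
        OllamaChatModel model = OllamaChatModel.builder()
                .baseUrl(BASE_URL)
                .modelName(MODEL_NAME)
                .temperature(0.2)
                .build();

        // Send a single-turn prompt to llama3.2:1b and print its reply.
        String answer = model.generate("Say hello in one short sentence.");
        System.out.println(answer);
    }
}
```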
