
Wednesday, November 5, 2025

Agent2Agent (A2A) with a2a-sdk and Http2

Continuing the A2A evaluation, next up is a2a-sdk (unrelated to the previously evaluated a2a-server). This evaluation is largely based on getting the hello world example from the a2a-samples project working as per the a2a-protocol instructions, with additional integration with non-Python HTTP/2 based clients.

(I) Installation

pip install a2a-sdk 

# uvicorn, python-dotenv (already installed) 

# For HTTP/2 support 

pip install hypercorn 

pip install h2==4.2.0  # See Issue 1 at the end for the bug details

git clone https://github.com/a2aproject/a2a-samples.git -b main --depth 1

(II) Replace the uvicorn server with hypercorn (for HTTP/2 support) 

The a2a-samples project makes use of the uvicorn Python server. However, uvicorn is an HTTP/1.x server and doesn't support HTTP/2. The following message keeps appearing when a client makes HTTP/2 requests: 

"WARNING:  Unsupported upgrade request. "

To support a wider & more up-to-date category of clients, uvicorn is replaced with hypercorn, which is HTTP/2 compliant.

To switch to hypercorn, the following changes are made to __main__.py of the helloworld project:

#import uvicorn
 

# Use Hypercorn for Http2
import asyncio
from hypercorn.config import Config
from hypercorn.asyncio import serve

 ....

    config = Config()
    config.bind = "127.0.0.1:8080"  # Bind to loopback on port 8080

    asyncio.run(serve(server.build(), config))
    # uvicorn.run(server.build(), host='127.0.0.1', port=8080, log_level='debug')
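For context, hypercorn serves any ASGI application; below is a minimal stand-alone sketch of the kind of app it expects (the app and its response text are illustrative, not part of the a2a sample). server.build() in the sample likewise returns an ASGI app, which is why it can be handed to serve() directly.

```python
import asyncio

# Minimal ASGI app; hypercorn can serve it over HTTP/2 with:
#   hypercorn --bind 127.0.0.1:8080 app:app
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"Hello World"})
```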

(III) Run helloworld

python a2a-samples/samples/python/agents/helloworld/__main__.py 

(IV) View AgentCard

Open in the browser or via curl:

curl http://127.0.0.1:8080/.well-known/agent-card.json

Response: 

{"capabilities":{"streaming":true},"defaultInputModes":["text"],"defaultOutputModes":["text"],"description":"Just a hello world agent","name":"Hello World Agent","preferredTransport":"JSONRPC","protocolVersion":"0.3.0","skills":[{"description":"just returns hello world","examples":["hi","hello world"],"id":"hello_world","name":"Returns hello world","tags":["hello world"]}],"supportsAuthenticatedExtendedCard":true,"url":"http://127.0.0.1:8080/","version":"1.0.0"} 
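The card can also be inspected programmatically; a minimal sketch parsing a trimmed copy of the JSON above with Python's json module:

```python
import json

# Trimmed copy of the agent card returned above
card_json = '''{"capabilities":{"streaming":true},
"name":"Hello World Agent","preferredTransport":"JSONRPC",
"protocolVersion":"0.3.0","url":"http://127.0.0.1:8080/","version":"1.0.0"}'''

card = json.loads(card_json)
streaming_supported = card["capabilities"]["streaming"]
print(card["name"], card["protocolVersion"])  # Hello World Agent 0.3.0
```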

For the Authenticated Extended Agent Card:

curl -H "Authorization: Bearer dummy-token-for-extended-card" --http2 http://127.0.0.1:8080/agent/authenticatedExtendedCard 

Response: 

{"capabilities":{"streaming":true},"defaultInputModes":["text"],"defaultOutputModes":["text"],"description":"The full-featured hello world agent for authenticated users.","name":"Hello World Agent - Extended Edition","preferredTransport":"JSONRPC","protocolVersion":"0.3.0","skills":[{"description":"just returns hello world","examples":["hi","hello world"],"id":"hello_world","name":"Returns hello world","tags":["hello world"]},{"description":"A more enthusiastic greeting, only for authenticated users.","examples":["super hi","give me a super hello"],"id":"super_hello_world","name":"Returns a SUPER Hello World","tags":["hello world","super","extended"]}],"supportsAuthenticatedExtendedCard":true,"url":"http://127.0.0.1:8080/","version":"1.0.1"} 

(V) Send/Receive a message to the Agent

curl -H "Content-Type: application/json"  http://127.0.0.1:8080 -d '{"jsonrpc":"2.0","id":"ee22f765-0253-40a0-a29f-c786b090889d","method":"message/send","params":{"message":{"role":"user","parts":[{"text":"hello there!","kind":"text"}],"messageId":"ccaf4715-712e-40c6-82bc-634a7a7136f2","kind":"message"},"configuration":{"blocking":false}}}' 

Response: 

 {"id":"ee22f765-0253-40a0-a29f-c786b090889d","jsonrpc":"2.0","result":{"kind":"message","messageId":"d813fed8-58cd-4337-8295-6282930d4d4e","parts":[{"kind":"text","text":"Hello World"}],"role":"agent"}}
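The request payload above can be assembled in code; a minimal sketch (the helper name is made up, the field layout is copied from the curl example):

```python
import json
import uuid

def build_send_message(text: str) -> dict:
    """Build a JSON-RPC 2.0 'message/send' request payload."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"text": text, "kind": "text"}],
                "messageId": str(uuid.uuid4()),
                "kind": "message",
            },
            "configuration": {"blocking": False},
        },
    }

# Serialize for sending as the request body
payload = json.dumps(build_send_message("hello there!"))
```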

(VI) Send/Receive via HTTP/2

curl -iv --http2 http://127.0.0.1:8080/.well-known/agent-card.json

curl -iv --http2  -H "Content-Type: application/json"  http://127.0.0.1:8080 -d '{"jsonrpc":"2.0","id":"ee22f765-0253-40a0-a29f-c786b090889d","method":"message/send","params":{"message":{"role":"user","parts":[{"text":"dragons and wizards","kind":"text"}],"messageId":"ccaf4715-712e-40c6-82bc-634a7a7136f2","kind":"message"},"configuration":{"blocking":false}}}'

(The responses are the same as shown above)

(VII) Send/ Receive from Java client

TBD

(VIII) Issues 

Issue 1: Compatibility issue between hypercorn (ver 0.17.3) & the latest h2 (ver 4.3.0)

Ran into the issue mentioned here:

    |   File "/home/algo/Tools/venv/langvang/lib/python3.13/site-packages/hypercorn/protocol/h2.py", line 138, in initiate
    |     event = h2.events.RequestReceived()
    | TypeError: RequestReceived.__init__() missing 1 required keyword-only argument: 'stream_id' 

Issue was resolved by downgrading to h2 (ver=4.2.0).

 

Tuesday, November 4, 2025

Agent2Agent (A2A) with a2a-server

Agent2Agent (A2A) is a protocol for AI agents to communicate among themselves. Agents built by different vendors, by subscribing to the common A2A protocol, get a standardized way of interoperating.

Getting going with A2A 

(I) As a starting point, install the Python a2a-server.

pip install a2a-server

Issue 1: Compatibility issue between latest a2a-server & a2a-json-rpc:

Installing a2a-server also brings in a2a-json-rpc, but there were compatibility issues between the latest a2a-json-rpc (ver 0.4.0) & a2a-server (ver 0.6.1):

        ImportError: cannot import name 'TaskSendParams' from 'a2a_json_rpc.spec' (.../python3.13/site-packages/a2a_json_rpc/spec.py) 

Downgrading a2a-json-rpc to the previous 0.3.0 fixed it:

pip install a2a-json-rpc==0.3.0 

(II) To get the a2a-server running, an agent.yaml file needs to be created with configs like host, port, handler, provider, model, etc.:

server:
  host: 127.0.0.1
  port: 8080

handlers:
  use_discovery: false
  default_handler: chuk_pirate
  chuk_pirate:
    type: a2a_server.tasks.handlers.chuk.chuk_agent_handler.ChukAgentHandler
    agent: a2a_server.sample_agents.chuk_pirate.create_pirate_agent
    name: chuk_pirate
    enable_sessions: false
    enable_tools: false
    provider: "ollama"
    model: "llama3.2:1b"
    version: "1.0.1"

    agent_card:
      name: Pirate Agent
      description: "Captain Blackbeard's Ghost with conversation memory"
      capabilities:
        streaming: false
        sessions: false
        tools: false 

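The YAML above deserializes to a plain nested mapping; a sketch of the same structure as a Python dict (values copied from the config), which can be handy for programmatic sanity checks before starting the server:

```python
# Same structure agent.yaml deserializes to (e.g. via yaml.safe_load)
config = {
    "server": {"host": "127.0.0.1", "port": 8080},
    "handlers": {
        "use_discovery": False,
        "default_handler": "chuk_pirate",
        "chuk_pirate": {
            "provider": "ollama",
            "model": "llama3.2:1b",
            "enable_sessions": False,
        },
    },
}

# Basic sanity check: the default handler must actually be configured
assert config["handlers"]["default_handler"] in config["handlers"]
```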
-- 

Next, start the server using:

a2a-server -c agent.yaml --log-level debug 

(III) Test a2a-server endpoint from browser

Open http://127.0.0.1:8080/ which lists the different Agents. 

Agent Card(s): 

http://127.0.0.1:8080/chuk_pirate/.well-known/agent.json 

(IV) Issues with a2a-server 

Issue 2: Agent Card endpoint URL 

Firstly, the Agent Card endpoint above is no longer valid. As per the latest Agent Card protocol, the Agent Card needs to be served from the location: http://<base_url>/.well-known/agent-card.json

  • agent-card.json (& not agent.json) 
  • Without the agent's name (i.e. without chuk_pirate) 

The valid one would look like:

http://127.0.0.1:8080/.well-known/agent-card.json 

Issue 3: Error message/send not found

The other issue is that there seems to be a lack of support for the method "message/send" used to send messages and chat with the agent. The curl request fails with an error: 

curl -iv -H "Content-Type: application/json"  http://127.0.0.1:8080/chuk_pirate -d '{"jsonrpc":"2.0","id":"ee22f765-0253-40a0-a29f-c786b090889d","method":"message/send","params":{"message":{"role":"user","parts":[{"text":"hello  there!","kind":"text"}],"messageId":"ccaf4715-712e-40c6-82bc-634a7a7136f2","kind":"message"},"configuration":{"blocking":false}}}' 

{"jsonrpc":"2.0","id":"ee22f765-0253-40a0-a29f-c786b090889d","result":null,"error":{"code":-32601,"message":"message/send not found"}} 
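The failure can be detected from the JSON-RPC error code; a minimal sketch (-32601 is the standard JSON-RPC 2.0 "method not found" code; the response literal is trimmed from the one above):

```python
import json

METHOD_NOT_FOUND = -32601  # standard JSON-RPC 2.0 error code

# Trimmed copy of the error response returned above
response = json.loads(
    '{"jsonrpc":"2.0","id":"1","result":null,'
    '"error":{"code":-32601,"message":"message/send not found"}}'
)

error = response.get("error")
method_missing = bool(error) and error["code"] == METHOD_NOT_FOUND
```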

Due to all these issues with a2a-server and its lack of documentation, there's no clarity on the library. So it's a no-go, for the moment at least.

Sunday, October 26, 2025

Mlflow Java client

Mlflow is a leading open source framework for managing AI/ML workflows. Mlflow allows tracking, monitoring and generally visualizing end-to-end ML project lifecycles. A handy ops-side tool that improves the interpretability of AI/ML projects.

Key Mlflow concepts include Projects, Models & Experiments, on which several Runs are conducted, to name a few. Experiments can also be Tagged with meaningful, humanly relevant labels.

While Mlflow is a Python native library with integrations for all the leading Python AI/ML frameworks such as OpenAI, Langchain, Llamaindex, etc., there are also Mlflow API endpoints for wider portability. 

There is also a specific Mlflow Java API for use from the Java ecosystem. The corresponding Mlflow Java client (a Maven artifact) works well with the API. To get started with Mlflow using Java:

(I) Install mlflow (Getting started guide)

        $ pip install mlflow 

 This installs mlflow into the user's .local folder:

        ~/.local/bin/mlflow 

(II) Start Local mlflow server (simple without authentication)

        $ mlflow server --host 127.0.0.1 --port 8080

mlflow server should be running on 

        http://127.0.0.1:8080

(III) Download mlflower repo (sample Java client code)

Next, clone the mlflower repo which has some sample code showing the workings of the mlflow Java client. 

  • The class Mlfclient shows a simple use case of Creating an Experiment:

            client.createExperiment(experimentName);

Followed by a few runs of logging some Parameters, Metrics, Artifacts:

     // Illustrative key/value arguments; actual ones are project-specific
     run.logParam("alpha", "0.5");

     run.logMetric("rmse", 0.25);

     run.logArtifact(Paths.get("model.txt"));

 

  • Run Hierarchy: The class NestedMlfClient shows the nesting hierarchy of Mlflow runs:

        Parent Run -> Child Run -> Grandchild Run -> ... & so on

(IV) Start Local mlflow server (with Basic Authentication)

While authentication is crucial for managing workflows, Mlflow only provided Basic Auth until very recently. Version 3.5 onwards has better support for various auth providers, SSO, etc. For now, only the mlflow Basic Auth integration is shown.

            # Start server with Basic Auth
            mlflow server --host 127.0.0.1 --port 8080 --app-name basic-auth

Like previously, mlflow server should start running on

            http://127.0.0.1:8080

This time a login credential is required to access the page. The default admin credentials are mentioned in the mlflow basic-auth-http docs.

  • The class BasicAuthMlfclient shows the Java client using BasicMlflowHostCreds to connect to Mlflow with basic auth. 

            new MlflowClient(new BasicMlflowHostCreds(TRACKING_URI, USERNAME, PASSWORD));

(V) Deletes: Soft/Hard

  • Experiments, Runs, etc. created within mlflow can be deleted from the UI (& client). These deletes are however only Soft; deleted items get moved to a Recycle Bin, not visible on the UI.
  • Hard/permanent deletes can be effected from the mlflow CLI:

    # Set mlflow server tracking uri 

    export MLFLOW_TRACKING_URI=http://127.0.0.1:8080

    # Clear garbage

    mlflow gc

  (VI) Issues

  • MlflowContext.withActiveRun() absorbs exceptions without any logs, and simply sets the run status to RunStatus.FAILED.
    • So in case runs show failure on the mlflow UI, it's best to put an explicit try-catch in the client to find the cause.
  • Unable to upload artifacts since the CLI looks for python (& not python3) on the path.
    • Error message: Failed to exec 'python -m mlflow.store.artifact.cli', needed to access artifacts within the non-Java-native artifact store at 'mlflow-artifacts:
    • The dev box (Ubuntu 20.04) has python3 (& not python) installed.
    • Without changing the dev box, a simple fix is to set/export the environment variable MLFLOW_PYTHON_EXECUTABLE (within the IDE, shell, etc.) to whichever Python binary is installed on the box:
               MLFLOW_PYTHON_EXECUTABLE=/usr/bin/python3 
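The same fix can be automated when launching from Python tooling; a minimal sketch using shutil.which to pick whichever interpreter exists on the box (the python3-before-python fallback order is an assumption):

```python
import os
import shutil

# Prefer python3, fall back to python; set the variable the mlflow
# artifact CLI reads before launching the Java client
python_exe = shutil.which("python3") or shutil.which("python")
if python_exe:
    os.environ["MLFLOW_PYTHON_EXECUTABLE"] = python_exe
```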
 
So with that, keep the AI/ML projects flowing!

Friday, April 18, 2025

AI Agentic Frameworks

With the proliferation of AI Agents, it's only logical that there will be attempts at standardization and at building protocols & frameworks: