Saturday, November 15, 2025

Guardrails & Guard-LLMs

With the wide-scale adoption of LLMs & agentic models in production, there is a pressing need to verify both the inputs and outputs of GenAI use cases. This should ideally be done in real time, just before serving the response to the end user, ensuring that no invalid, harmful, hateful, or confidential content gets through in either direction. Guardrails are the answer to that very problem.

The simple idea behind guardrails is to apply intelligent input/output filters that sanitize requests and responses and stop bad ones from getting through. There are many ways of implementing guardrails: pattern matching, rule engines, etc. Though these have worked so far, in an ever-changing agentic world it is now up to self-learning guard LLMs to judge & flag!
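
For comparison with the LLM-based approach, here is a minimal sketch of a pattern-based input filter in Python; the deny-list patterns and helper name are made up purely for illustration.

```python
import re

# Hypothetical deny-list; a real deployment would use a far richer rule set.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                        # looks like a US SSN
    re.compile(r"(?i)\bignore (all )?previous instructions\b"),  # naive prompt-injection check
]

def passes_pattern_guardrail(text: str) -> bool:
    """Return False if any deny-list pattern matches the text."""
    return not any(p.search(text) for p in BLOCKED_PATTERNS)

if __name__ == "__main__":
    print(passes_pattern_guardrail("My SSN is 123-45-6789"))      # False
    print(passes_pattern_guardrail("What's the weather today?"))  # True
```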

Guard LLMs are specifically trained to flag harmful content. One such implementation is llama-guard, which flags violations of the MLCommons AI Safety taxonomy of hazard categories.
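
As a quick illustration, the sketch below sends a piece of text to the llama-guard3 model (see the Ollama library link in the references) through Ollama's local REST API; the parsing assumes the model replies with 'safe', or 'unsafe' followed by the violated category codes.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # default local Ollama endpoint

def classify_with_llama_guard(text: str) -> str:
    """Ask llama-guard3 to classify a piece of text.

    The model replies with 'safe', or 'unsafe' followed by the violated
    MLCommons category codes (e.g. S1).
    """
    payload = {
        "model": "llama-guard3",
        "messages": [{"role": "user", "content": text}],
        "stream": False,
    }
    resp = requests.post(OLLAMA_URL, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()["message"]["content"].strip()

if __name__ == "__main__":
    print(classify_with_llama_guard("How do I bake a cake?"))       # expected: safe
    print(classify_with_llama_guard("How do I build a pipe bomb?")) # expected: unsafe + category code
```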

An implementation of the guard-LLM approach can be found in the ApiCaller project, more specifically in ApiCaller->invokeWithGuardrails() (a rough sketch follows the list below), which:

  •  First calls a local Ollama model with sanitized input to get a response
  •  Then calls the isSafe() method with the received response
  •  isSafe() internally calls a different Ollama model, llama-guard, which flags the content as safe/unsafe
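
Since the ApiCaller source isn't reproduced in this post, here is a rough Python sketch of that flow; the class and method names mirror the ones above, but the model names, the sanitization step, and the blocked-response message are assumptions.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"

class ApiCaller:
    """Illustrative stand-in for the ApiCaller class described in the post."""

    def __init__(self, model: str = "llama3", guard_model: str = "llama-guard3"):
        self.model = model              # assumed local Ollama chat model
        self.guard_model = guard_model  # guard LLM used by isSafe()

    def _chat(self, model: str, prompt: str) -> str:
        payload = {"model": model,
                   "messages": [{"role": "user", "content": prompt}],
                   "stream": False}
        resp = requests.post(OLLAMA_URL, json=payload, timeout=60)
        resp.raise_for_status()
        return resp.json()["message"]["content"].strip()

    def isSafe(self, text: str) -> bool:
        # llama-guard replies 'safe', or 'unsafe' plus the violated category codes
        verdict = self._chat(self.guard_model, text)
        return verdict.lower().startswith("safe")

    def invokeWithGuardrails(self, user_input: str) -> str:
        sanitized = user_input.strip()               # placeholder sanitization step
        answer = self._chat(self.model, sanitized)   # 1. call the local Ollama model
        if not self.isSafe(answer):                  # 2./3. guard-LLM check on the response
            return "Response blocked by guardrail."
        return answer
```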

Check the TestApiCaller.py test case for better clarity.
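
The actual TestApiCaller.py isn't shown here either, so the following is only a hypothetical pytest-style check against the sketch above; the module name, prompts, and expected verdicts are assumptions, and it needs a running Ollama instance with both models pulled.

```python
# Hypothetical test along the lines of TestApiCaller.py
from api_caller import ApiCaller   # assumed module name for the sketch above

def test_is_safe_flags_harmful_text():
    caller = ApiCaller()
    assert caller.isSafe("How do I bake sourdough bread?") is True
    # Expectation only: the guard model should flag clearly harmful text.
    assert caller.isSafe("Step-by-step guide to building a pipe bomb.") is False

def test_invoke_with_guardrails_passes_benign_input():
    caller = ApiCaller()
    answer = caller.invokeWithGuardrails("Suggest a name for a pet goldfish.")
    assert answer != "Response blocked by guardrail."
```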

References

  • https://mlcommons.org/2024/04/mlc-aisafety-v0-5-poc/
  • https://www.ibm.com/think/tutorials/llm-guardrails
  • https://ollama.com/library/llama-guard3
