Nvidia’s Open-Source Toolkit Offers Chatbots with Guardrails

Nvidia's open-source toolkit is intended to provide "guardrails" for chatbots

Nvidia has released an open-source toolkit named “NeMo Guardrails” to make AI chat applications safer. Its purpose is to put programmable “guardrails” around large language models (LLMs) and to help developers build “secure and trustworthy LLM conversation systems.” According to Nvidia, NeMo Guardrails works with all LLMs, including ChatGPT.

Three Categories of Rules

Developers can use NeMo Guardrails to define rules that determine how a chatbot responds to user input. These rules fall into three categories: “topical guardrails,” “safety guardrails,” and “security guardrails.” Topical guardrails keep the chatbot on its intended subject, safety guardrails help prevent disinformation, and security guardrails protect against malicious code. A minimal sketch of a topical guardrail follows below.
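To make the idea concrete, here is a rough, hedged sketch of a topical guardrail written with the toolkit’s Python API and Colang. The intent name, example phrasings, model name, and engine are placeholders rather than anything prescribed by Nvidia, and the exact API surface may vary between NeMo Guardrails versions.

```python
# Minimal sketch of a topical guardrail built with NeMo Guardrails.
# Model name, engine, and example phrasings are placeholders; the exact API
# may differ between toolkit versions.
from nemoguardrails import LLMRails, RailsConfig

# Colang definitions: recognize an off-topic intent and answer it with a fixed refusal.
colang_content = """
define user ask about politics
  "What do you think about the election?"
  "Which party should I vote for?"

define bot refuse to discuss politics
  "I'm a support assistant, so I can't discuss politics. Can I help with something else?"

define flow politics
  user ask about politics
  bot refuse to discuss politics
"""

# General configuration: which LLM backs the conversation (placeholder model).
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(colang_content=colang_content, yaml_content=yaml_content)
rails = LLMRails(config)

# Questions matching the "ask about politics" intent get the canned refusal
# instead of a free-form model answer.
response = rails.generate(messages=[{"role": "user", "content": "Who should I vote for?"}])
print(response["content"])
```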

Based on Community-Developed Toolkits

Nvidia states that NeMo Guardrails builds on community-developed toolkits such as LangChain, a framework designed for building applications on top of language models. The toolkit also uses Nvidia’s Colang language, which lets developers describe conversational flows and guardrails in near-natural-language statements; a sketch of the LangChain combination follows below.
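As a rough illustration of that LangChain relationship, the sketch below hands an explicitly constructed LangChain chat model to the NeMo Guardrails runtime instead of letting the toolkit create one from its YAML configuration. The configuration directory, model name, and import path are assumptions that depend on the installed versions of both libraries.

```python
# Sketch: backing NeMo Guardrails with an explicitly constructed LangChain chat model
# instead of the one declared in the YAML configuration. Import path, model name,
# and config directory are assumptions that depend on installed versions.
from langchain.chat_models import ChatOpenAI
from nemoguardrails import LLMRails, RailsConfig

# Load a guardrails configuration (Colang flows plus YAML settings) from a directory.
config = RailsConfig.from_path("./guardrails_config")  # hypothetical path

# Hand the LangChain model to the rails runtime.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
rails = LLMRails(config, llm=llm)

response = rails.generate(messages=[{"role": "user", "content": "Hello!"}])
print(response["content"])
```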

ChatGPT Fuels Discussion

OpenAI released the ChatGPT chatbot at the end of November 2022, and the secure use of language models has been a topic of global discussion ever since. The data protection officer of the German state of Schleswig-Holstein, for example, explained in an interview how German and European authorities oversee OpenAI.

In conclusion, Nvidia’s new open-source toolkit, NeMo Guardrails, is designed to improve the safety of AI chat applications. It works with all LLMs, including ChatGPT, builds on LangChain, and employs Nvidia’s Colang language. Its three rule categories help ensure that chatbots respond appropriately to user input, making them safer and more trustworthy.