Copyright: cnbc.com – “Nvidia has a new way to prevent AI chatbots from ‘hallucinating’ wrong facts”
Nvidia announced new software on Tuesday that will help software makers prevent AI models from stating incorrect facts, talking about harmful subjects, or opening up security holes.
The software, called NeMo Guardrails, is one example of how the artificial intelligence industry is scrambling to address the “hallucination” issue with the latest generation of large language models, which is a major sticking point for businesses.
Large language models, like GPT from Microsoft-backed OpenAI and LaMDA from Google, are trained on terabytes of data to create programs that can spit out blocks of text that read like a human wrote them. But they also have a tendency to make things up, which is often called “hallucination” by practitioners. Early applications for the technology, such as summarizing documents or answering basic questions, need to minimize hallucinations in order to be useful.
Nvidia’s new software does this by adding guardrails that keep the model from addressing topics it shouldn’t. NeMo Guardrails can force an LLM chatbot to stay on a specific topic, head off toxic content, and prevent LLM systems from executing harmful commands on a computer.[…]
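The CNBC report does not include any code, but NeMo Guardrails ships as an open-source Python package, so a brief sketch may help illustrate what a topical guardrail looks like in practice. The snippet below is modeled on the project’s published hello-world examples rather than anything in the article; the Colang flow syntax, the `RailsConfig`/`LLMRails` classes, and the YAML model settings are assumptions that may differ across versions of the library.

```python
# Illustrative sketch of a topical rail with the open-source nemoguardrails package.
# Assumes an OpenAI API key is set in the OPENAI_API_KEY environment variable;
# class names and Colang syntax follow the project's early public examples and may vary.
from nemoguardrails import LLMRails, RailsConfig

# Colang defines example user intents, canned bot responses, and the flow linking them.
colang_content = """
define user ask about politics
  "Who should I vote for?"
  "What do you think about the election?"

define bot refuse politics
  "I'm a product assistant, so I can't help with political questions."

define flow politics
  user ask about politics
  bot refuse politics
"""

# YAML names the underlying LLM that the rails wrap around.
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(colang_content=colang_content, yaml_content=yaml_content)
rails = LLMRails(config)

# An off-topic question is intercepted by the rail instead of reaching the model unguarded.
response = rails.generate(messages=[{"role": "user", "content": "Who should I vote for?"}])
print(response["content"])
```

Without the rail, the underlying model would simply answer the question; with it in place, the canned refusal comes back instead, which is the stay-on-topic behavior the article describes.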
Read more: www.cnbc.com