Supportsoft Glossary
Discover the language of innovation with our glossary, turning complex app development, web design, marketing and blockchain terms into clear, practical explanations.
Managing AI Hallucinations in Language Models
Hallucination in AI describes a situation where a model generates output that sounds plausible but is actually incorrect, misleading, or fabricated. Large language models are prone to hallucinating because they predict likely-sounding text rather than verify facts, and the problem is worst when they lack sufficient context or reliable data.
When LLMs are used in the workplace, hallucination is particularly problematic in customer communication, technical assistance, and business decision-making, where acting on fabricated information carries real consequences.
To reduce the chance of hallucination, organisations should combine technical and operational controls: improve the quality of the data the LLM is trained or fine-tuned on, limit the model's scope to topics where credible information is available, ground its outputs in reputable data sources, and put business-critical LLM-generated content through an end-to-end human review process.
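As a simple illustration of limiting scope and supplying reputable sources, the sketch below builds a grounded prompt that instructs the model to answer only from approved reference material and to admit when it does not know. The reference notes, prompt wording, and the call_llm placeholder are illustrative assumptions, not a prescribed implementation; an organisation would substitute its own vetted content and model API.

```python
# Minimal sketch: constrain an LLM to approved reference material.
# call_llm is a placeholder for whatever model API the organisation uses.

APPROVED_SOURCES = {
    "billing-policy": "Refunds are issued within 14 days of a cancelled order.",
    "support-hours": "Live technical assistance is available 09:00-17:00, Monday to Friday.",
}

def build_grounded_prompt(question: str) -> str:
    """Embed only vetted source text and forbid answers from outside it."""
    context = "\n".join(f"- {text}" for text in APPROVED_SOURCES.values())
    return (
        "Answer the question using ONLY the reference notes below.\n"
        "If the notes do not contain the answer, reply exactly: I don't know.\n\n"
        f"Reference notes:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real model call (e.g. an internal API client).
    return "I don't know."

if __name__ == "__main__":
    print(call_llm(build_grounded_prompt("How quickly are refunds issued?")))
```

Keeping the prompt this restrictive trades some flexibility for reliability: the model is explicitly told to refuse rather than improvise when the vetted material does not cover the question.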
IT service delivery organisations face the greatest risk of hallucination where LLMs are deployed for knowledge sharing and creation, documentation, and automated responses. Providing clear context to the model, using retrieval-based (retrieval-augmented) generation, and applying validation mechanisms to generated outputs all reduce the risk of inaccurate information reaching users.
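The sketch below illustrates that retrieval-plus-validation pattern under deliberately simple assumptions: a toy keyword retriever selects supporting passages, the model answers from them, and a naive word-overlap check escalates answers that do not appear to be grounded in any retrieved passage for human review. The knowledge base, scoring rule, and 0.3 threshold are placeholders; a production system would use embedding-based search and stronger validation.

```python
import re

# Sketch of retrieval-augmented answering with a grounding check.
# The retriever and validation rule are intentionally naive illustrations.

KNOWLEDGE_BASE = [
    "Password resets are performed from the account settings page.",
    "Service outages are reported on the public status page within 15 minutes.",
    "Enterprise customers are assigned a named support engineer.",
]

def _words(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by shared words with the question (toy retriever)."""
    q_words = _words(question)
    scored = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: len(q_words & _words(doc)),
                    reverse=True)
    return scored[:k]

def is_grounded(answer: str, passages: list[str], threshold: float = 0.3) -> bool:
    """Flag answers whose words barely overlap the retrieved passages."""
    a_words = _words(answer)
    support = _words(" ".join(passages))
    return bool(a_words) and len(a_words & support) / len(a_words) >= threshold

def answer_with_validation(question: str, call_llm) -> str:
    passages = retrieve(question)
    prompt = "Use only these passages:\n" + "\n".join(passages) + f"\n\nQ: {question}\nA:"
    answer = call_llm(prompt)
    if not is_grounded(answer, passages):
        return "[Escalated for human review: answer not supported by retrieved sources]"
    return answer

if __name__ == "__main__":
    # Stand-in model response, used only to exercise the pipeline.
    fake_llm = lambda prompt: "Resets happen from the account settings page."
    print(answer_with_validation("How do I reset my password?", fake_llm))
```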
By proactively addressing hallucination, organisations put themselves in a better position to use LLMs safely and successfully, producing accurate, reliable outputs that align with the needs of their business.