Lilian Weng 7/7/2024

Extrinsic Hallucinations in LLMs


This technical article examines hallucinations in large language models (LLMs), focusing on 'extrinsic hallucinations': outputs that are not grounded in the model's pre-training data, i.e. its world knowledge. It analyzes root causes, including pre-training data quality and the way fine-tuning on new knowledge can increase a model's tendency to hallucinate, citing recent research.
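To make the definition concrete, here is a minimal, hypothetical sketch of the grounding idea: decompose a model response into claims and flag any claim that no reference text appears to support as a candidate extrinsic hallucination. The crude lexical-overlap check and the function names (`is_supported`, `flag_extrinsic_hallucinations`) are illustrative assumptions, not the model-based verification methods the article actually surveys.

```python
from typing import List


def is_supported(claim: str, references: List[str], min_overlap: float = 0.5) -> bool:
    """Crude lexical check: does any reference share enough content words with the claim?"""
    claim_words = {w.lower().strip(".,") for w in claim.split() if len(w) > 3}
    if not claim_words:
        return True  # nothing substantive to verify
    for ref in references:
        ref_words = {w.lower().strip(".,") for w in ref.split()}
        overlap = len(claim_words & ref_words) / len(claim_words)
        if overlap >= min_overlap:
            return True
    return False


def flag_extrinsic_hallucinations(claims: List[str], references: List[str]) -> List[str]:
    """Return the claims that no reference appears to support."""
    return [c for c in claims if not is_supported(c, references)]


if __name__ == "__main__":
    references = ["The Eiffel Tower was completed in 1889 and stands in Paris."]
    claims = [
        "The Eiffel Tower was completed in 1889.",        # grounded in the reference
        "The Eiffel Tower was moved to Lyon in 1950.",    # fabricated; should be flagged
    ]
    print(flag_extrinsic_hallucinations(claims, references))
```

In practice, the verification step would use a retrieval system and a model-based fact checker rather than word overlap; the sketch only illustrates what "not grounded in world knowledge" means operationally.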
