Eugene Yan 11/5/2023

Out-of-Domain Finetuning to Bootstrap Hallucination Detection


This technical article details a machine learning experiment on bootstrapping hallucination detection models. It explains how finetuning a BART model on out-of-domain Wikipedia data before task-specific finetuning on a news-summary benchmark significantly improves its ability to identify factual inconsistencies, framing detection as a Natural Language Inference (NLI) task.
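
The core idea is that an NLI-style classifier scores whether a generated summary sentence is entailed by the source document; low entailment suggests a hallucination. Below is a minimal sketch of that inference step using an off-the-shelf NLI checkpoint (facebook/bart-large-mnli) via Hugging Face transformers; this is not the author's finetuned model, and the Wikipedia and news-summary finetuning stages described in the article are not shown here.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Off-the-shelf BART NLI checkpoint, standing in for the article's finetuned model.
MODEL_NAME = "facebook/bart-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)


def entailment_score(source: str, summary_sentence: str) -> float:
    """Return P(entailment) of the summary sentence given the source text."""
    inputs = tokenizer(source, summary_sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1)[0]
    # Label order for bart-large-mnli: contradiction, neutral, entailment.
    return probs[model.config.label2id["entailment"]].item()


source = "The company reported a 10% rise in quarterly revenue."
print(entailment_score(source, "Revenue grew by about ten percent."))  # high -> consistent
print(entailment_score(source, "Revenue fell sharply last quarter."))  # low -> likely hallucination
```

In practice, a summary would be split into sentences and each scored against the source, with a threshold on the entailment probability flagging potential factual inconsistencies.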
