Monday, November 6, 2023

Show HN: Open-source model and scorecard for measuring hallucinations in LLMs https://ift.tt/7zE8Ovr

Show HN: Open-source model and scorecard for measuring hallucinations in LLMs

Hi all! This morning we released a new Apache 2.0-licensed model on HuggingFace for detecting hallucinations in retrieval-augmented generation (RAG) systems. What we've found is that even when given a "simple" instruction like "summarize the following news article," every available LLM hallucinates to some extent, making up details that never existed in the source article -- and some of them quite a bit. As a RAG provider and proponents of ethical AI, we want to see LLMs get better at this.

We've published an open-source model, a blog post describing our methodology in more detail (with specific examples of these summarization hallucinations), and a GitHub repository containing our evaluation of the most popular generative LLMs available today. Links to all of them are referenced in the blog post, but for the technical audience here, the most interesting additional links might be:

- https://ift.tt/Gc6fWER...
- https://ift.tt/yfEVvds

By releasing these under a truly open source license and detailing the methodology, we hope to make it easier for anyone to quantitatively measure and improve the generative LLMs they're publishing.

https://ift.tt/U8na5f9 November 7, 2023 at 12:41AM
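
Since the post describes a HuggingFace-hosted model that scores whether a generated summary is consistent with its source, here is a minimal sketch of how such a cross-encoder-style detector might be used with the transformers library. The model ID, the single-logit head, and the "higher score = more consistent" convention are assumptions for illustration, not details confirmed by the post; substitute the actual checkpoint and output convention from the linked blog and repository.

    # Minimal sketch: scoring a (source, summary) pair for factual consistency
    # with a HuggingFace sequence-classification model. MODEL_ID is a
    # placeholder, not the real checkpoint name from the post.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    MODEL_ID = "your-org/hallucination-detection-model"  # hypothetical ID

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
    model.eval()

    source = "The city council voted 7-2 on Tuesday to approve the new budget."
    summary = "The council unanimously approved the budget on Tuesday."

    # Encode the pair; a cross-encoder attends over both texts jointly.
    inputs = tokenizer(source, summary, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits

    # Assuming a single-logit head where higher means more factually consistent.
    score = torch.sigmoid(logits[0]).item()
    print(f"Consistency score: {score:.3f}")  # a low score flags a likely hallucination

Run over many (source, summary) pairs per LLM, an average of such scores is the kind of per-model number a leaderboard or scorecard like the one in the linked GitHub repository could report.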


Show HN: I built a tool that makes it fast to onboard devs to your codebase https://ift.tt/BO2AhTb

Show HN: I built a tool that makes it fast to onboard devs to your codebase https://envkit.co/ April 14, 2025 at 11:29PM