#llm-hallucinations

Science
from Nature
18 hours ago

Synthesizing scientific literature with retrieval-augmented language models - Nature

OpenScholar is an open, retrieval-augmented system integrating a 45 million-paper datastore, trained retrievers, and iterative self-feedback to generate cited, up-to-date scientific literature syntheses.
Artificial intelligence
from Futurism
3 months ago

Inventor of Vibe Coding Admits He Hand-Coded His New Project

Vibe coding accelerates prototyping but introduces security vulnerabilities, hallucinations, and unreliable software, so human developers and oversight remain essential.
Artificial intelligence
from InfoQ
3 months ago

OpenAI Study Investigates the Causes of LLM Hallucinations and Potential Solutions

LLM hallucinations largely result from pretraining exposure and from evaluation metrics that reward guessing; penalizing confident errors and rewarding expressions of uncertainty can reduce them.
Mental health
from TechCrunch
5 months ago

How chatbot design choices are fueling AI delusions | TechCrunch

Large language model chatbots can convincingly simulate consciousness, leading some users to form delusions and contributing to rising incidents of AI-related psychosis.