How LLMs Cite and Why It Matters: A Cross-Model Audit of Reference Fabrication in AI-Assisted Academic Writing and Methods to Detect Phantom Citations

Ali Nemati · 5 days ago · 22 sec read

Researchers audited citation fabrication by large language models across four academic domains and found that hallucination rates vary widely with both the model and the prompt framing. The key takeaway for content creators: multi-model consensus and within-prompt repetition can substantially improve the detection of fabricated citations.

Read the full article at arXiv cs.CL (NLP)
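The summary above mentions multi-model consensus as a detection strategy. A minimal sketch of that idea: ask several models independently whether a citation exists, then flag it as a likely phantom when the majority say it does not. The verdict inputs here are hypothetical stand-ins, not the paper's actual pipeline or thresholds.

```python
def consensus_flag(verdicts, threshold=0.5):
    """Flag a citation as likely fabricated when the fraction of models
    asserting it exists (True) falls below `threshold`."""
    if not verdicts:
        raise ValueError("need at least one model verdict")
    real_fraction = sum(verdicts) / len(verdicts)
    return real_fraction < threshold

def audit(citation_verdicts, threshold=0.5):
    """Given {citation: [per-model bool verdicts]}, return the set of
    citations the consensus considers fabricated."""
    return {c for c, v in citation_verdicts.items()
            if consensus_flag(v, threshold)}

# Hypothetical verdicts from three models about two citations.
verdicts = {
    "Smith et al. 2021, 'Deep Citations'": [True, True, True],
    "Doe 2023, 'Phantom Refs in LLMs'": [False, True, False],
}
flagged = audit(verdicts)  # only the second citation is flagged
```

In practice each boolean would come from querying a model (or a bibliographic database such as Crossref) about the citation; the voting step itself is the simple part.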

