Overview
The CHOMPS Workshop on Confabulation, Hallucinations, & Overgeneration in Multilingual & Precision-critical Settings will be held alongside IJCNLP-AACL 2025 at IIT Bombay in Mumbai, India. The workshop addresses a central challenge of large language models (LLMs): their tendency to generate unsupported and unverifiable content, a phenomenon commonly known as hallucination. Its particular focus is on mitigating these failures in precision-critical applications such as healthcare, legal systems, and education.
Background & Relevance
In recent years, the rapid development of LLMs has transformed many fields, yet their propensity to produce misleading information remains a significant barrier to adoption in sensitive areas. Hallucination, confabulation, and overgeneration can have severe consequences in precision-critical domains. The workshop explores these challenges with particular attention to multilingual contexts, where resources for detecting and mitigating such failures are often limited. By fostering discussion around these topics, the CHOMPS Workshop aims to contribute to the responsible development and deployment of AI technologies.
Key Details
- Workshop Dates: December 23-24, 2025 (TBC)
- Location: IIT Bombay, Mumbai, India
- Workshop Website: CHOMPS 2025
- Submission Deadline: September 29, 2025
- Direct ARR Commitment Deadline: October 27, 2025
- Author Notification: November 3, 2025
- Camera-Ready Submission Due: November 11, 2025
Eligibility & Participation
The workshop welcomes submissions from researchers, practitioners, and students interested in addressing the challenges associated with hallucinations in AI. It targets those working in fields such as natural language processing, machine learning, and related disciplines, particularly in applications where accuracy is paramount.
Submission Guidelines
Submissions may be archival or non-archival. Long papers are limited to 8 pages and short papers to 4 pages; dissemination submissions are limited to 1 page. Authors of accepted papers may add one extra page to address reviewer feedback. All submissions must follow the ACL style templates and be submitted in PDF format via:
– Direct Submission
– ARR Commitment
More Information
The CHOMPS Workshop offers a timely opportunity to engage with pressing questions about the reliability of LLMs in critical applications. By bringing together experts from across fields, it aims to foster collaboration on hallucination mitigation and on improving the trustworthiness of AI systems, in line with broader efforts to develop AI responsibly and ethically in high-stakes environments.
Conclusion
The CHOMPS Workshop invites researchers and practitioners to contribute their insights and findings on hallucination mitigation in AI. It is an excellent opportunity to engage with peers, share knowledge, and explore ways to improve the reliability of AI systems in multilingual and precision-critical contexts. Interested parties are encouraged to submit their work and join this important dialogue.
Category: CFP & Deadlines
Tags: multilingual ai, hallucination detection, llms, natural language processing, machine learning, medical applications, legal systems, biotech, AI ethics, IJCNLP, AACL, conference