Overview
The CHOMPS Workshop, on Confabulation, Hallucinations, and Overgeneration in Multilingual and Precision-critical Settings, invites submissions for its upcoming edition at IJCNLP-AACL 2025. The workshop tackles a central challenge posed by large language models (LLMs): their tendency to produce misleading or unverifiable content, a failure mode that is especially consequential in fields where accuracy is paramount.
Background & Relevance
Even as AI capabilities advance, hallucination in LLMs remains a pressing concern. The problem is especially acute in precision-critical domains such as healthcare, legal systems, and education, where the consequences of misinformation can be severe. The CHOMPS workshop seeks to identify effective strategies for mitigating these risks, particularly in multilingual contexts where resources are often limited.
Key Details
- Workshop Dates: December 23-24, 2025 (tentative)
- Venue: IIT Bombay, Mumbai, India
- Workshop Website: CHOMPS 2025
- Submission Deadline: September 29, 2025
- Direct ARR Commitment Deadline: October 27, 2025
- Author Notification: November 3, 2025
- Camera-Ready Submission Due: November 11, 2025
Eligibility & Participation
The workshop is open to researchers and practitioners interested in addressing the challenges of hallucination in AI systems. It encourages contributions from various disciplines, particularly those focusing on multilingual applications and precision-critical scenarios.
Submission or Application Guidelines
Participants may submit archival or non-archival papers under the following page limits:
- Long Papers: Up to 8 pages
- Short Papers: Up to 4 pages
- Dissemination Submissions: Up to 1 page
Accepted papers may include one additional page to address reviewer feedback. Submissions must follow the ACL style templates (a minimal LaTeX skeleton is sketched below) and must be submitted in PDF format via either:
- Direct Submission
- ARR Commitment
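For authors new to the ACL format, the following is a minimal skeleton based on the publicly available ACL style files (acl.sty from the acl-org/acl-style-files repository). It is a sketch under those assumptions, not the workshop's official template; the title, author block, and package options are illustrative placeholders, so defer to the templates linked from the workshop site.

```latex
% Minimal ACL-style skeleton (a sketch, not the official workshop
% template). Assumes acl.sty from github.com/acl-org/acl-style-files
% is on the TeX path; verify all options against the CHOMPS
% submission instructions before use.
\documentclass[11pt]{article}
\usepackage[review]{acl}  % switch to [final] for the camera-ready
\usepackage{times}
\usepackage{latexsym}

\title{Paper Title (placeholder)}
\author{Anonymous ACL submission}

\begin{document}
\maketitle

\begin{abstract}
One-paragraph summary of the contribution.
\end{abstract}

\section{Introduction}
Body text, within the 8-page (long) or 4-page (short) limit.

\end{document}
```

Compiled with pdflatex against the unmodified style files, this should produce the standard ACL submission layout with review line numbering.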
Additional Context / Real-World Relevance
The CHOMPS workshop addresses a persistent gap between what LLMs can generate and the reliability required for deployment in high-stakes sectors. By focusing on hallucination mitigation, the workshop aims to foster discussions that improve the reliability of, and trust in, AI systems across applications.
Conclusion
Researchers and practitioners are encouraged to participate in this workshop to contribute to the ongoing discourse on hallucinations in AI. This is an excellent opportunity to share insights, collaborate, and advance the understanding of these challenges in multilingual and precision-critical environments.
Category: CFP & Deadlines
Tags: hallucination, multilingual ai, large language models, nlp, workshop, ijcnlp, aacl, healthcare, legal systems, biotech