Call for Papers: CHOMPS Workshop on Hallucination Mitigation in Multilingual AI

Overview

The CHOMPS workshop, titled “Confabulation, Hallucinations, & Overgeneration in Multilingual & Precision-critical Settings,” is set to take place in conjunction with the IJCNLP-AACL 2025 conference in Mumbai, India. This workshop aims to address the pressing issue of hallucination in large language models (LLMs), a phenomenon that can lead to the generation of misleading and unverifiable text, particularly in critical fields such as healthcare, law, and education.

Background & Relevance

As the capabilities of LLMs continue to evolve, their propensity for generating false or unsupported information remains a significant barrier to their adoption in real-world applications. This workshop seeks to explore effective strategies for mitigating hallucinations, especially in precision-critical environments where accuracy is paramount. The relevance of this topic spans various sectors, emphasizing the need for reliable AI systems that can operate in diverse linguistic contexts.

Key Details

  • Workshop Dates: December 23-24, 2025 (tentative)
  • Location: Mumbai, India, at the IJCNLP-AACL 2025 conference
  • Workshop Website: CHOMPS 2025
  • Submission Deadline: September 29, 2025
  • Direct ARR Commitment Deadline: October 27, 2025
  • Notification of Acceptance: November 3, 2025
  • Camera-Ready Submission Due: November 11, 2025

Eligibility & Participation

The workshop invites contributions from researchers, practitioners, and students interested in the challenges and solutions related to hallucination in LLMs. It aims to foster a collaborative environment for discussing innovative approaches and sharing insights across disciplines.

Submission Guidelines

Participants can submit their work in the following formats:

  • Long Papers: up to 8 pages
  • Short Papers: up to 4 pages
  • Dissemination Submissions: up to 1 page

Authors of accepted papers may revise their work based on reviewer feedback, with one additional page allowed to incorporate the suggested changes. Submissions must follow the ACL style templates and be submitted in PDF format through one of the following channels:

  • Direct Submission
  • ARR Commitment

More Information

The workshop will delve into various aspects of hallucination, including metrics for detection, mitigation strategies during model training, and the implications of these challenges in multilingual settings. By bringing together experts from different fields, the CHOMPS workshop aims to generate actionable insights that can enhance the reliability of AI systems in critical applications.

Conclusion

The CHOMPS workshop presents a unique opportunity for researchers and practitioners to engage in meaningful discussions about the challenges of hallucination in AI. Interested parties are encouraged to submit their work and contribute to this vital discourse in the AI/ML community. For further inquiries, please contact the workshop chairs: Aman Sinha, Raúl Vázquez, and Timothee Mickus.


Category: CFP & Deadlines
Tags: hallucination, multilingual ai, large language models, natural language processing, machine learning, medical applications, legal systems, biotech, confabulation, IJCNLP, AACL, AI ethics
