Call for Papers: CHOMPS Workshop on Hallucinations in Multilingual AI at IJCNLP-AACL 2025

Overview

The CHOMPS workshop, titled “Confabulation, Hallucinations, & Overgeneration in Multilingual & Precision-critical Settings,” invites submissions for its upcoming event at IJCNLP-AACL 2025. This workshop addresses the significant challenges posed by hallucinations in large language models (LLMs), particularly in critical applications where accuracy is paramount. The extended deadline for submissions is now set for October 3, 2025.

Background & Relevance

As AI systems continue to advance, the tendency of LLMs to generate inaccurate or misleading information remains a pressing concern. Hallucinations, confabulations, and overgeneration can have serious consequences in fields such as healthcare, law, and education. This workshop aims to explore methods for mitigating these issues, especially in multilingual contexts where resources may be limited. Addressing these challenges is crucial for the responsible deployment of AI technologies in real-world applications.

Key Details

Eligibility & Participation

The workshop is open to researchers and practitioners interested in addressing the challenges of hallucinations in AI. It aims to foster a diverse range of contributions, encouraging participation from various disciplines and backgrounds.

Submission or Application Guidelines

Submissions are welcome in two formats:
– Archival submissions: up to 8 pages (long) or 4 pages (short)
– Dissemination submissions: up to 1 page
Authors of accepted papers may add one additional page to address reviewer feedback. All submissions must adhere to the official ACL style templates.

Submissions can be made via:
– Direct submission: OpenReview Direct Submission
– ARR commitment: ARR Commitment Submission

Important dates:
– Direct submission deadline (extended): October 3, 2025
– ARR commitment deadline: October 27, 2025
– Author notification: November 3, 2025
– Camera-ready due: November 11, 2025

Additional Context / Real-World Relevance

The CHOMPS workshop is particularly relevant given the growing reliance on AI in critical sectors. By focusing on hallucination mitigation, the workshop seeks to enhance the reliability of AI systems, ensuring they can be safely integrated into high-stakes environments. This initiative aligns with broader efforts to improve AI accountability and transparency, fostering trust in AI technologies.

Conclusion

Researchers and practitioners are encouraged to submit their work to the CHOMPS workshop, contributing to the vital discourse on hallucinations in AI. This is an opportunity to engage with leading experts and share insights that could shape the future of AI applications in precision-critical fields. Explore the workshop details and prepare your submissions to be part of this important conversation.


Category: CFP & Deadlines
Tags: multilingual ai, hallucination detection, large language models, nlp, ai ethics, medical applications, legal technology, biotech, conference, ijcnlp, aacl
