Call for Papers: ESANN 2026 Special Session on AI Reliability and Safety

Overview

The upcoming ESANN 2026 will feature a special session dedicated to the critical themes of reliability, safety, and robustness in artificial intelligence applications. Scheduled for April 22 to 24, 2026, in Bruges, Belgium, this session aims to address the pressing need for AI systems that can perform reliably in real-world scenarios, particularly when faced with conditions that differ from their training environments.

Background & Relevance

Ensuring the reliability and safety of AI models is paramount, especially in safety-critical applications. As AI systems are increasingly deployed across sectors such as healthcare, autonomous vehicles, and finance, robust and reliable models have become a prerequisite for trustworthy operation. This special session will explore recent advancements and methodologies that enhance the safety and reliability of AI systems, making it a significant event for researchers and practitioners in the field.

Key Details

  • Event: ESANN 2026 Special Session on Reliability, Safety and Robustness of AI applications
  • Dates: April 22-24, 2026
  • Location: Bruges, Belgium
  • Submission Deadline: November 19, 2025
  • Notification of Decisions: January 23, 2026
  • Session Link: ESANN Special Sessions

Eligibility & Participation

This call for papers invites contributions from researchers, practitioners, and industry experts who are working on topics related to the reliability, safety, and robustness of AI applications. Participants are encouraged to submit their findings and methodologies that address these critical issues.

Submission Guidelines

Interested authors should prepare their submissions in accordance with the guidelines provided on the ESANN website. Submissions should focus on themes such as:
– Safety-Critical AI Applications: Including case studies and risk assessment frameworks.
– Robustness Under Distribution Shifts: Techniques for open-set recognition and domain adaptation.
– Adversarial Robustness and Stress Testing: Evaluating model behavior under challenging inputs.
– Reliability Testing and Evaluation Protocols: Approaches for validating model trustworthiness.
– Human-in-the-Loop Safety: Integrating expert oversight in high-risk AI deployments.
– Explainability for Safety-Critical Decisions: Ensuring model transparency.
– Formal Verification of AI Models: Methods for stability and fairness.
– Uncertainty Quantification and Calibration: Confidence-aware predictions for safe decision-making.
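To illustrate the uncertainty-quantification and calibration theme above, here is a minimal sketch of the expected calibration error (ECE), a common metric for how well a model's confidence scores match its observed accuracy. The function name and toy data are our own illustration, not part of the session materials or any prescribed evaluation protocol:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then take the bin-size-weighted
    average of |mean confidence - accuracy| across bins."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Indices of predictions whose confidence falls in this bin.
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if idx:
            avg_conf = sum(confidences[i] for i in idx) / len(idx)
            accuracy = sum(correct[i] for i in idx) / len(idx)
            ece += (len(idx) / n) * abs(avg_conf - accuracy)
    return ece

# A well-calibrated toy model: 80% confidence, 8 of 10 predictions correct,
# so confidence and accuracy agree and the ECE is (near) zero.
conf = [0.8] * 10
hits = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
print(expected_calibration_error(conf, hits))
```

A model that reports 80% confidence but is correct only 60% of the time would instead incur a gap of 0.2 in that bin, which is exactly the kind of miscalibration that confidence-aware, safety-critical decision-making must detect.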

Additional Context / Real-World Relevance

The focus on reliability and safety in AI is becoming increasingly relevant as AI technologies are integrated into everyday life. The ability to trust AI systems to operate safely and effectively in unpredictable environments is essential for their widespread adoption. This session at ESANN 2026 provides a platform for sharing innovative research and practical solutions that can help advance the field.

Conclusion

The ESANN 2026 special session on AI reliability, safety, and robustness represents an important opportunity for researchers to contribute to a critical area of AI development. Interested individuals are encouraged to prepare their submissions and participate in this vital discussion. Join the community in Bruges to explore the latest advancements in ensuring the safety and reliability of AI applications.


Category: CFP & Deadlines
Tags: ai, machine learning, safety-critical applications, robustness, reliability, esann, computational intelligence, explainability
