Call for Papers: IEEE SaTML 2026 on Secure and Trustworthy Machine Learning

Overview

The IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) will take place March 23–25, 2026, in Munich, Germany. The conference brings together researchers and practitioners to discuss advances and open challenges in secure and trustworthy AI systems, with a focus on security, privacy, and fairness in machine learning.

Background & Relevance

As machine learning is deployed across more sectors, ensuring that these systems are secure and trustworthy becomes essential. Algorithmic bias, data privacy violations, and system vulnerabilities all pose significant risks. The conference addresses these challenges by fostering discussion of new methods and best practices, contributing to the development of more secure and trustworthy AI technologies.

Key Details

  • Conference Dates: March 23–25, 2026
  • Location: Munich, Germany
  • Submission Deadline: September 24, 2025
  • Early Reject Notification: October 29, 2025
  • Interactive Author Discussion: November 19–December 3, 2025
  • Decision Notification: December 10, 2025
  • Submission Categories:
      • Research Papers (up to 12 pages)
      • Systematization of Knowledge (SoK) Papers (up to 12 pages, must include “SoK:” in title)
      • Position Papers (5 to 12 pages, must include “Position:” in title)
  • Submission Details: see the official Call for Papers

Eligibility & Participation

The conference invites submissions from researchers, practitioners, and students who are engaged in the fields of secure and trustworthy machine learning. Participants are encouraged to share their findings and insights, contributing to a collective understanding of the challenges and solutions in this domain.

Submission Guidelines

To submit a paper, authors should adhere to the following guidelines:
1. Choose the appropriate category for your submission (Research, SoK, or Position Paper).
2. Prepare your manuscript according to the specified page limits.
3. Ensure that your paper is well-argued and relevant to the themes of secure and trustworthy machine learning.
4. Submit your paper by the deadline of September 24, 2025.
5. For further details on the submission and review process, refer to the Call for Papers.

More Information

The SaTML conference serves as a vital forum for discussing the intersection of machine learning and security. As AI technologies evolve, the need for robust frameworks that ensure their safe deployment becomes increasingly critical. This conference will not only highlight current research but also inspire future innovations in the field.

Conclusion

Researchers and practitioners are encouraged to participate in this important event by submitting their work. Engaging with peers in the field will foster collaboration and advance the discourse on secure and trustworthy machine learning. Explore the opportunity to contribute to this pivotal conference and share your insights with the community.


Category: CFP & Deadlines
Tags: secure machine learning, trustworthy ai, privacy, fairness, machine learning security, IEEE, SaTML, algorithm verification, forensic analysis, research papers, position papers, SoK papers
