Call for Papers: Privacy in Machine Learning Workshop at NeurIPS 2019

Overview

This post highlights the upcoming workshop “Privacy in Machine Learning,” scheduled for December 13 or 14, 2019, as part of the NeurIPS conference in Vancouver. The workshop aims to address the pressing need for privacy-preserving techniques in machine learning, a need that is increasingly critical in today’s data-driven landscape.

Background & Relevance

Privacy in machine learning is a rapidly evolving area of research, driven by the need to protect sensitive data while still enabling effective data analysis. Techniques such as Differential Privacy (DP), Multi-Party Computation (MPC), and Homomorphic Encryption (HE) are at the forefront of this field. As machine learning applications expand across various sectors, ensuring data privacy becomes paramount, making this workshop particularly relevant for researchers and practitioners alike.
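To give a concrete flavor of one of these techniques, here is a minimal sketch of the Laplace mechanism from differential privacy: the true answer to a query is perturbed with noise scaled to the query’s sensitivity divided by the privacy budget ε. The function and toy data below are purely illustrative and are not taken from the workshop or any submission.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release an epsilon-differentially-private estimate of true_value
    by adding Laplace noise with scale sensitivity / epsilon."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Toy example: privately release a count. Adding or removing one record
# changes a count by at most 1, so its L1 sensitivity is 1.
ages = np.array([23, 35, 41, 29, 52, 61, 38])
true_count = int(np.sum(ages > 40))
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count: {true_count}, private count: {private_count:.2f}")
```

Smaller values of ε inject more noise and yield a stronger privacy guarantee; the same calibration idea underlies more elaborate mechanisms, including differentially private model training.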

Key Details

  • Event: Privacy in Machine Learning Workshop
  • Date: December 13 or 14, 2019
  • Location: Vancouver, Canada
  • Submission Deadline: September 9, 2019, at 11:59 PM UTC
  • Notification of Acceptance: October 1, 2019
  • Submission URL: EasyChair Submission
  • Format: Extended abstracts (maximum 4 pages, excluding references)

Eligibility & Participation

This workshop invites submissions from researchers and practitioners working on privacy-preserving methods in machine learning. It targets those exploring theoretical and practical aspects of privacy in data analysis, including but not limited to cryptographic techniques and empirical studies.

Submission or Application Guidelines

To submit an abstract, authors must adhere to the NeurIPS format and ensure that their submissions are non-anonymized. While the workshop will not have formal proceedings, accepted abstracts may be linked to arXiv or a PDF on the workshop webpage. Additional supplementary material can be submitted but may not be reviewed.

Additional Context / Real-World Relevance

The intersection of privacy and machine learning is crucial as organizations increasingly rely on data for decision-making. Understanding how to balance privacy with utility is essential for building robust machine learning systems. The workshop will foster discussion of various approaches to privacy, including its implications for fairness and transparency in AI systems.

Conclusion

Researchers interested in the intersection of privacy and machine learning are encouraged to submit their work to this workshop. This is an excellent opportunity to engage with leading experts in the field and contribute to the ongoing discourse on privacy in machine learning. Explore the submission guidelines and participate in shaping the future of privacy-preserving technologies in AI.


Category: CFP & Deadlines
Tags: privacy, machine learning, differential privacy, cryptography, neural networks, secure computation, data privacy, neurips
