Overview
The CogInterp Workshop at NeurIPS 2025 invites paper submissions at the intersection of cognitive science and deep learning interpretability. The workshop aims to bridge cognitive science and AI, addressing the challenge of understanding the internal processes of deep learning models. As AI systems continue to advance, understanding how they achieve complex behaviors is increasingly important for researchers and practitioners alike.
Background & Relevance
Cognitive interpretability is an emerging field that seeks to explain, in cognitive terms, the processes underlying the behaviors of deep learning models. By applying theories and frameworks from cognitive science, researchers can gain insight into how these models function, particularly in areas such as language processing, visual recognition, and reasoning. The workshop aims to foster collaboration between cognitive scientists and AI researchers, promoting a deeper understanding of model behaviors and the cognitive frameworks that can describe them.
Key Details
- Submission Deadline: August 15, 2025
- Venue: NeurIPS 2025
- Website: CogInterp Workshop
- Paper Length: up to 4 pages (both technical and position papers are welcome)
- Themes:
  - Behavioral accounts
  - Processing accounts
  - Learning accounts
Eligibility & Participation
The workshop is open to researchers from a range of disciplines, including machine learning, cognitive science, psychology, linguistics, vision science, neuroscience, and philosophy. Participants are encouraged to submit papers that advance the understanding of cognitive processes in deep learning models.
Submission or Application Guidelines
- Prepare a paper of up to 4 pages in length.
- Ensure your submission addresses one or more of the workshop themes: behavioral accounts, processing accounts, or learning accounts.
- Submit your paper through the workshop website by the deadline of August 15, 2025.
More Information
The CogInterp Workshop emphasizes the role of cognitive interpretability in understanding deep learning models. As AI systems become more complex, the need for interpretability grows, making this a timely event for researchers exploring the cognitive aspects of AI.
Conclusion
Researchers interested in cognitive science and AI interpretability are encouraged to participate in the CogInterp Workshop at NeurIPS 2025. It is an opportunity to contribute to a growing field and engage with experts across disciplines. Explore the themes, prepare your submission, and join the conversation on cognitive interpretability in deep learning models.
Category: CFP & Deadlines
Tags: cognitive science, deep learning, interpretability, neuroscience, machine learning, cognition, AI ethics, NeurIPS, behavioral science, learning theory, multimodal AI, psychology