Call for Papers: Exploring Optimization in Reinforcement Learning at NeurIPS 2019

Overview

The OptRL workshop at NeurIPS 2019 invites researchers to submit papers on the optimization foundations of reinforcement learning (RL). The workshop aims to strengthen collaboration between the RL and optimization communities, deepening the shared understanding of their foundational principles and bridging theory with practical applications.

Background & Relevance

Reinforcement learning has gained significant traction thanks to its practical successes, yet many of its methods remain complex and not well understood from an optimization standpoint. Recent advances in optimization and control theory offer new insights that can lead to more effective RL algorithms. Understanding the optimization aspects of RL is crucial for developing robust and efficient systems, which makes this workshop particularly relevant for researchers and practitioners in the field.

Key Details

  • Workshop Title: OptRL Workshop @ NeurIPS 2019
  • Submission Deadline: September 10th, 2019, 23:59 AoE
  • Notification of Acceptance: October 1st, 2019
  • Submission Link: CMT Submission
  • Paper Length: Up to 6 pages in NeurIPS style (excluding references and appendices)
  • Presentation Formats: Talks, spotlights, or poster presentations, selected based on novelty and technical merit.

Eligibility & Participation

This call for papers is open to researchers and practitioners working at the intersection of reinforcement learning and optimization. Contributions may include work in progress or position papers, allowing for a wide range of submissions that address both theoretical and practical aspects of the topic.

Submission or Application Guidelines

  1. Prepare your paper according to the NeurIPS style guidelines.
  2. Submit your work via the CMT platform by the deadline.
  3. Ensure your submission addresses topics relevant to the workshop’s focus on optimization in RL.
  4. Await notification of acceptance by October 1st, 2019.

Additional Context / Real-World Relevance

The integration of optimization techniques into reinforcement learning is vital for advancing the field. As RL continues to be applied in various domains, understanding its optimization foundations will enhance the development of more efficient algorithms. This workshop serves as a platform for discussing open problems and potential solutions that arise from the intersection of these two areas.

Conclusion

Researchers are encouraged to contribute to the OptRL workshop at NeurIPS 2019. By submitting your work, you can engage with leading experts and join the ongoing dialogue on the optimization foundations of reinforcement learning. Don’t miss the chance to share your insights and findings with the community.


Category: CFP & Deadlines
Tags: reinforcement learning, optimization, neurips, machine learning, algorithms, theory, empirical analysis, distributed systems
