Overview
The University of Copenhagen’s Department of Computer Science is inviting applications for several fully funded PhD positions in Natural Language Understanding (NLU). The positions focus on explainable AI and trustworthy NLP, two areas that address critical challenges in the field.
Background & Relevance
Natural Language Understanding is a vital area within AI and machine learning, focusing on enabling machines to comprehend and interpret human language effectively. As AI systems become more integrated into daily life, the demand for transparency and reliability in NLP models grows. The research conducted in this field is crucial for developing systems that can provide explanations for their decisions, ensuring trust and safety in AI applications.
Key Details
- Positions Available: several 3-year fully funded PhD fellowships
- Start Dates: Spring 2026 and Autumn 2026
- Research Areas:
  - Explainable Natural Language Understanding
  - Trustworthy Natural Language Processing
- Application Deadline: 31 October 2025
- Supervisors: Isabelle Augenstein and Pepa Atanasova
- Links for Application:
  - Explainable NLU Position
  - Trustworthy NLP Position
  - ELLIS PhD Programme
- More Information: CopenLU PhD Fellowships
Eligibility & Participation
These PhD positions are aimed at candidates who will have completed a Master’s degree by the start date. The fellowships are intended for students and researchers eager to develop innovative methods in explainable NLU and trustworthy NLP.
Submission or Application Guidelines
Interested candidates should follow these steps to apply:
1. Review the specific research areas and supervisors.
2. Prepare application materials as outlined on the provided links.
3. Submit applications by the deadline of 31 October 2025.
4. For the ELLIS PhD programme, be sure to name Isabelle Augenstein as a supervisor in your application.
Additional Context / Real-World Relevance
The focus on explainability and trustworthiness in AI is increasingly recognized as essential for the ethical deployment of AI technologies. Research in these areas not only contributes to academic knowledge but also has practical implications for industries reliant on AI, such as healthcare, finance, and customer service. By fostering research in trustworthy NLP, the University of Copenhagen is positioning itself at the forefront of addressing these pressing challenges.
Conclusion
The University of Copenhagen offers a unique opportunity for aspiring researchers to engage in impactful work in Natural Language Understanding. Applications are now open; prospective candidates are encouraged to explore these PhD positions and join a leading research group working on explainable and trustworthy AI.
Category: PhD & Postdoc Positions
Tags: nlp, natural language processing, explainable ai, trustworthy ai, university of copenhagen, phd fellowships, isabelle augenstein, pepa atanasova, machine learning