Overview
The Department of Informatics at King’s College London is inviting applications for a two-year postdoctoral position in Technical AI Safety. This role is part of a project funded by an Open Philanthropy grant aimed at enhancing the safety and reliability of large language models (LLMs).
Background & Relevance
AI safety is a critical research area, particularly as large language models are deployed more widely; ensuring that these models behave safely and in line with human intentions is paramount. The project's focus on latent probing and certification reflects two complementary ideas: latent probes are lightweight classifiers trained on a model's internal activations to flag signs of misaligned behaviour, such as harmful or deceptive outputs, while certification techniques attach statistical guarantees to those detectors so their reliability can be quantified rather than merely hoped for.
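The grant title mentions conformal probes. As a purely illustrative sketch (not the project's actual method), split-conformal calibration can turn any probe score into a detector with a finite-sample false-alarm guarantee: pick the flagging threshold from a held-out calibration set of safe inputs so that at most a fraction alpha of future safe inputs are flagged. All names and the synthetic score distributions below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "probe scores" standing in for a linear probe's output
# on a model's hidden activations (higher score = more likely unsafe).
safe_cal = rng.normal(0.0, 1.0, size=500)     # calibration scores, safe prompts
safe_test = rng.normal(0.0, 1.0, size=200)    # fresh safe prompts
unsafe_test = rng.normal(3.0, 1.0, size=200)  # unsafe prompts (shifted scores)

# Split conformal: choose the threshold as a conservative empirical quantile
# of the calibration scores, so that (under exchangeability) at most alpha
# of future safe inputs exceed it -- a finite-sample guarantee.
alpha = 0.05
n = len(safe_cal)
k = int(np.ceil((n + 1) * (1 - alpha)))       # conservative quantile rank
threshold = np.sort(safe_cal)[k - 1]

false_alarm = np.mean(safe_test > threshold)   # should be roughly <= alpha
detection = np.mean(unsafe_test > threshold)

print(f"threshold: {threshold:.3f}")
print(f"false-alarm rate on safe inputs: {false_alarm:.3f}")
print(f"detection rate on unsafe inputs: {detection:.3f}")
```

The guarantee bounds only the false-alarm rate; detection power depends on how well the probe separates the two populations, which is where the "verifiably robust" part of the research programme comes in.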
Key Details
- Position: Postdoctoral Research Associate/Fellow in Technical AI Safety
- Duration: Two years
- Funding: Open Philanthropy grant “Verifiably Robust Conformal Probes”
- Application Deadline: 20 November 2025
- Location: King’s College London
- Application Link: Apply Here
Eligibility & Participation
This position is aimed at candidates with a strong background in AI safety, machine learning, or a related field. Experience aligned with the project's goals is expected, particularly in probabilistic guarantees (such as conformal prediction) and adversarial robustness.
How to Apply
Interested candidates should submit their applications through the link above, ensuring that all required documents are prepared and submitted before the 20 November 2025 deadline.
Why This Research Matters
The research conducted in this role aims to make the monitoring of LLM behaviour measurably more reliable. As AI systems influence an ever-wider range of sectors, their safe operation underpins public trust and ethical deployment.
Conclusion
This postdoctoral position at King's College London is an opportunity to contribute to the vital field of AI safety, with implications reaching well beyond any single model or deployment. Interested candidates are encouraged to apply before the deadline.
Category: PhD & Postdoc Positions
Tags: ai safety, king’s college london, postdoc, large language models, technical ai safety, conformal prediction, latent probing, robustness