Adarga (Safety/NLP for Defence)
Startup. London-based AI safety and intelligence platform purpose-built for defence and national security, ensuring NLP and knowledge graph outputs are auditable.

Clues Intelligence LTD


Advai specializes in developing fair and responsible AI systems by identifying vulnerabilities and stress-testing AI models for security and compliance.
A research group at Imperial College London studying privacy and safety risks arising from AI systems, including jailbreaks and prompt injection attacks.
The UK AI Security Institute (formerly the AI Safety Institute, AISI) is a state-backed directorate within the Department for Science, Innovation and Technology (DSIT), established in London in November 2023. It conducts rigorous safety evaluations and research on frontier AI models, works to set global testing standards, and has 100+ technical staff, including alumni from OpenAI, DeepMind, and Anthropic.
The Alan Turing Institute, located within the British Library in London, is the UK's national institute for data science and AI and conducts dedicated safety research.
Aligned AI is a London-based AI safety company focused on building AI systems that are provably aligned with human values.
Non-profit research lab focused on long-term AI alignment. Published a framework for evaluating catastrophic risks in LLMs (Science, March 2026).
UK-based AI safety research group focused on technical alignment research, studying how to ensure that advanced AI systems pursue intended goals.
AI safety research organisation running hackathons and fellowship programmes to accelerate empirical alignment and interpretability research.
London-based AI safety research organisation focusing on AI deception, scheming, and evaluation of advanced model behaviours; a partner of the UK AI Safety Institute.
AI safety and responsible AI research group at Imperial College London, investigating interpretability, robustness, and governance of machine learning systems.
An independent think tank working with governments to improve resilience to extreme risks, including advanced artificial intelligence.
Research organization with a strong London presence dedicated to understanding and shaping the governance of advanced AI.
A research organization focused on the political and governance challenges posed by advanced AI.
An AI safety and alignment startup focused on building bounded, interpretable AI systems.
AI governance platform helping enterprises assess, govern, and audit AI systems for safety, fairness, and regulatory compliance.
DeepMind leads AI safety and alignment research in London. It spun up a dedicated AI safety team in March 2026 to work with the UK government on evaluating high-risk AI systems, and has partnered with the NCSC on red-teaming LLMs.