Faculty AI (Responsible AI Practice)
Enterprise AI consultancy with a dedicated responsible and explainable AI practice, translating advanced machine learning into auditable, fair, and accountable systems.

Clues Intelligence LTD


London-based applied AI firm with a dedicated responsible AI and safety practice, advising government and enterprise clients on AI governance.
One of the world's leading AI research labs, headquartered in London's King's Cross. Conducts foundational safety research, including alignment and interpretability.
Google DeepMind's London HQ hosts a dedicated AI safety research team working on alignment, interpretability, and specification.
Google DeepMind's London-based safety and alignment research team conducts fundamental research into AI alignment, interpretability, and evaluation.
University research group at Imperial College focused on technical AI safety, robustness, and fairness of machine learning systems.
Imperial College London is one of the UK's top-ranked universities for AI research and employs one of the largest numbers of machine learning professionals in the UK.
Imperial College London's AI research group conducts foundational and applied research across AI safety, robustness, and interpretability.
London-based AI startup founded by former DeepMind researcher David Silver (AlphaGo, AlphaStar) targeting $1 billion in seed funding to develop superhuman AI systems.
Research centre at King's College London supporting AI research with a focus on real-world applications, including machine learning, robotics, and NLP.
London-based AI safety research programme running 13-week residencies for technical researchers, producing papers on AI alignment and interpretability.
An AI safety research organisation focused on technical alignment and mitigating risks from advanced AI systems.
The London Initiative for Safe AI helps talented researchers enter high-impact AI safety roles through fellowships...
London-based independent AI safety research centre hosting individual researchers and small organisations pioneering technical AI safety research.
Runs a Generative AI Adoption Working Group and a Responsible AI programme for London boroughs.
London-based AI security company spun out of Lancaster University research, providing automated AI red-teaming, shadow AI detection, and protection against adversarial attacks.
London-based independent AI safety research organisation focused on evaluating dangerous capabilities in frontier models.
London-based grassroots AI safety advocacy organisation calling for a global pause on frontier AI development.
Imperial College London's AI safety research group conducts foundational work on verifiable AI, safe reinforcement learning, and responsible deployment...
Safe AI London is a non-profit that supports individuals and researchers interested in reducing risks from advanced AI, with a focus on r...
London-based AI safety startup providing an automated red-teaming platform for LLMs.
London-based lab focused on formal verification of AI systems. Hired 10 researchers from UCL and Imperial in March 2026 to work on automated theorem proving.
The Alan Turing Institute is the UK's national institute for data science and AI, based in the British Library in London, conducting foundational research across AI safety, ethics, and applications.
A leading university research hub at University College London focusing on foundational AI research, safety, and industry collaboration.