We build software for the automated generation of proof obligations for the safety verification of reactive and cyber-physical AI systems, and for checking the validity of proof certificates that discharge these obligations. We envision a technology in which AI systems produce formal proofs of probabilistic compliance with safety specifications, and we are working towards the seamless integration of automated reasoning and machine learning infrastructures.
Our team combines world-class research expertise with real-world industry experience to advance the safety and security of AI systems.
Luca is a computer science researcher specialising in cybersecurity, with a focus on safeguarding AI systems through rigorous verification. He leads efforts to make machine learning models demonstrably safe, grounded in real-world industrial applications.
Pascal is an expert in the security and privacy of AI and blockchain systems. He pioneered the concept of membership inference attacks against ML models and co-invented ML-Leaks.
Marco completed his PhD at Heriot-Watt University. His research interests span verification and machine learning, in particular the enforcement of logical constraints on neural networks through loss functions. His most recent work focused on ensuring the robustness of NLP systems.
Marek develops algorithms and tools for the verification and monitoring of systems, and led the development of the award-winning software verification tool Symbiotic.
Mirco specialises in the integration of machine learning and automated reasoning technologies for the formal verification of hardware and software and the safeguarding of cyber-physical systems.
Edoardo is an expert in neural network verification at both the software and hardware levels. He is an advocate for verifying the implementations of AI systems, and holds ARIA opportunity seed funding on the safety of closed-loop AI systems in finite precision.
Greg is a researcher in automated reasoning and formal methods, contributing to the development of safe and verifiable AI systems.
Simon develops technical AI-based solutions for customers. He obtained a PhD in mechanical engineering, specialising in computational mechanics with a focus on numerical simulation algorithms.
Ayberk is an expert in automated theorem proving, with a focus on constructive mathematics in the foundational setting of Homotopy Type Theory. He obtained his PhD at Birmingham, working on predicative pointfree topology.
We are looking for talented individuals to join us in advancing the safety and security of AI systems.
Zeroth Research is dedicated to the development of open infrastructure for the safety and security verification of AI systems. We are seeking a Research Engineer with a strong background in formal verification and model checking to support the design, implementation, and evaluation of robust verification tools and methods. This role combines software engineering with applied research and requires the ability to translate advanced technical concepts into reliable, well-documented systems that can be understood and used by a broad range of stakeholders.
Contract type: Temporary (until September 2026)
Starting Date: Immediate
Working pattern: Full-time (flexible working arrangements considered)
Location: Birmingham / hybrid / remote
Salary Range: £55,000–£75,000 per annum
Application Deadline: January 30, 2026
Zeroth Research is committed to fostering an inclusive and diverse working environment. We are an equal opportunities employer and welcome applications from all suitably qualified candidates regardless of age, disability, gender identity or expression, marital or civil partnership status, pregnancy or maternity, race, religion or belief, sex, or sexual orientation. We value diversity of thought, background, and experience, and we are committed to making reasonable adjustments throughout the recruitment process and employment to support accessibility and inclusion.