Department Seminar Series
Neural Network Verification for Formally Verifying Neuro-Symbolic Artificial Intelligence (AI)
12th December 2025, 13:00
Ashton Lecture Theatre
Taylor T. Johnson
Vanderbilt University, USA
Abstract
The ongoing renaissance in artificial intelligence (AI) has led to the advent of data-driven machine learning (ML) methods deployed within components for sensing, perception, actuation, and control in safety-critical cyber-physical systems (CPS). While such learning-enabled components (LECs) are enabling autonomy in systems such as autonomous vehicles and robots, ensuring these LECs operate reliably in all scenarios is extraordinarily challenging, as demonstrated in part by recent accidents in semi-autonomous/autonomous CPS and by adversarial ML attacks. We will discuss formal methods for assuring specifications (mostly robustness and safety) in autonomous CPS and subcomponents thereof using our NNV software tool (https://github.com/verivital/nnv), developed partly in the DARPA Assured Autonomy and Assured Neuro Symbolic Learning and Reasoning (ANSR) projects. These methods have been evaluated in CPS development with multiple industry partners in the automotive, aerospace, and robotics domains, and allow for formally analyzing neural networks and their usage in closed-loop systems. These methods are also enabling verification for the third wave of AI systems, namely neuro-symbolic systems. We will describe classes of neuro-symbolic models we have been developing, such as neuro-symbolic automata and behavior trees. Neuro-symbolic behavior trees (NSBTs) are behavior trees (a form of hierarchical state machine becoming widely used in robotics) that execute neural networks, and for which we have been developing verification methods implemented in a tool called BehaVerify (https://github.com/verivital/behaverify).
We will cover ongoing community activities we help organize, such as the Verification of Neural Networks Competition (VNN-COMP), held in recent years with the International Conference on Computer-Aided Verification (CAV) and the Symposium on AI Verification (SAIV), as well as the AI and Neural Network Control Systems (AINNCS) category of the hybrid systems verification competition (ARCH-COMP), also held in recent years. We will conclude with a discussion of future directions in the broader safe and trustworthy AI domain, such as ongoing projects verifying neural networks used in medical imaging analysis and malware classifiers.
Biography
Dr. Taylor T. Johnson, PE, is the A. James and Alice B. Clark Foundation Faculty Fellow, Associate Professor of Computer Science (CS), and Associate Dean for Graduate Education in the College of Connected Computing (CCC) at Vanderbilt University, where he directs the Verification and Validation for Intelligent and Trustworthy Autonomy Laboratory (VeriVITAL) and is a Senior Research Scientist in the Institute for Software Integrated Systems (ISIS). Dr. Johnson's research has been published in venues including AAAI, CAV, ESORICS, EMSOFT, ESOP, FM, FORMATS, HSCC, ICCPS, ICDCS, ICDM, ICSE, IJCAI, MEMOCODE, NDSS, NeurIPS, NFM, RTSS, SEFM, STTT, TNNLS, UAI, and WACV, several of which have received best paper and best artifact/repeatability awards. Dr. Johnson earned his PhD and MSc in Electrical and Computer Engineering (ECE) from the University of Illinois at Urbana-Champaign, where he worked in the Coordinated Science Laboratory with Prof. Sayan Mitra, and earlier earned a BSEE from Rice University. Dr. Johnson is a recipient of the Air Force Office of Scientific Research (AFOSR) Young Investigator Program (YIP) award and the National Science Foundation (NSF) Computer and Information Science and Engineering (CISE) Research Initiation Initiative (CRII) award, and his group's research is or has recently been supported by AFOSR, ARO, AFRL, DARPA, DOE, MathWorks, NSA, NSF, NVIDIA, ONR, Toyota, and USDOT.
Ashton Street, Liverpool, L69 3BX
United Kingdom
Call the department
+44 (0)151 795 4275