Evolution of AI in Automotive Safety

In recent years, headlines showcasing how AI technology is being incorporated into automotive software solutions have become increasingly common. The establishment of dedicated AI facilities (e.g. Izmo’s Automotive AI Factory, Qualcomm’s AI R&D Center) and collaborative initiatives in Advanced Driver Assistance System (ADAS) development (e.g. Bosch & Cariad, GM & NVIDIA) are just a few examples of how the automotive sector is rapidly embedding AI across the vehicle lifecycle.

When it comes to automotive safety software, AI adoption has advanced along two simultaneous fronts. In one dimension, AI is positioned as a Safety Enabler, actively embedded in tools and solutions to strengthen resilience, detect risks and improve the reliability of vehicle platforms. From another perspective, AI is treated as a Safety-Critical Element, subject to rigorous standards and certifications to ensure that its deployment is trustworthy, robust and auditable.

This blog aims to explore these two complementary perspectives on AI in automotive safety: one driven by industry innovation and the other shaped by regulatory and standards-based assurance. Together, they illustrate how AI has evolved from a promising technology to a core component of both engineering practice and compliance frameworks.

AI as a Safety Enabler  

Across both the development and operational stages, OEMs, Tier 1 suppliers and cybersecurity firms are applying AI to augment safety functions, strengthening resilience through proactive risk detection, automated testing and system-wide awareness.

AI as a Safety Enabler in Automotive Systems

I. Development Stage

In the development stage, AI is increasingly used to validate safety-critical components by automating test generation and expanding scenario coverage.

Fault Injection and Vulnerability Testing 

Traditional fuzzing relies on random or manually crafted test inputs, which can miss subtle flaws. AI-enabled fuzzing, by contrast, generates protocol-specific, context-aware test cases at scale, uncovering vulnerabilities more quickly and systematically. A representative example is the AutoCrypt CSTP Security Fuzzer Solution, which leverages AI-generated inputs to probe in-vehicle communication protocols and expose weaknesses in ECUs, braking controllers and telematics units with greater depth and coverage.
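
As a simplified illustration of the feedback loop behind such tools (a toy sketch, not AutoCrypt’s actual algorithm), the following Python fuzzer mutates seed inputs against a mock UDS parser and keeps any input that uncovers a new code path:

```python
import random

def parse_uds(frame: bytes) -> set:
    """Toy UDS-on-CAN request parser; returns the branches it exercised."""
    branches = set()
    if len(frame) < 2:
        branches.add("too_short")
        return branches
    sid = frame[0]
    if sid == 0x22:                     # ReadDataByIdentifier
        branches.add("read_did")
        if frame[1] == 0xF1:
            branches.add("did_f1xx")    # identification data identifiers
    elif sid == 0x27:                   # SecurityAccess
        branches.add("security_access")
    else:
        branches.add("unknown_sid")
    return branches

def mutate(seed: bytes) -> bytes:
    """Flip one random byte of the seed."""
    data = bytearray(seed)
    data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(seeds, rounds=2000):
    """Keep any mutated input that reaches a new branch (coverage feedback)."""
    corpus, covered = list(seeds), set()
    for _ in range(rounds):
        candidate = mutate(random.choice(corpus))
        branches = parse_uds(candidate)
        if not branches <= covered:     # new behavior: promote to the corpus
            covered |= branches
            corpus.append(candidate)
    return covered
```

Running `fuzz([b"\x22\x00"])` typically uncovers the unknown-service and DID-specific branches within a few hundred iterations; real AI-enabled fuzzers replace the random `mutate` with a learned, protocol-aware generator.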

Scenario Generation & Simulation 

Another area where AI enhances safety is in the generation of synthetic, edge-case scenarios that supplement baseline test datasets. Addressing a key challenge of ADAS and AV validation (the scarcity of rare, safety-critical scenarios in recorded data), AI allows engineers to proactively evaluate system safety under unusual conditions. The Gatik Arena platform illustrates this approach, employing techniques such as NeRFs, 3D Gaussian splatting and diffusion models to create synthetic scenarios, which are then fed into a modular simulation engine for end-to-end validation.

System-Level AI Safety Architecture 

Beyond individual tools, AI is also embedded into holistic safety frameworks that span the entire lifecycle of software-defined vehicles. These frameworks account for the multi-dimensional nature of automotive software, monitoring and validating AI performance from training to deployment. The NVIDIA AI Systems Inspection Lab highlights this application, offering a safety framework that integrates cloud-based training oversight, model inspection and in-vehicle runtime validation to ensure system-wide assurance.  

II. Operational Stage 

AI also plays a crucial role in maintaining and extending safety during vehicle operation, both at the individual and fleet level.  

Sensor-Aided Risk Detection  

Leveraging multi-modal data fusion, AI enables vehicles to analyze real-time inputs from tires, cameras, radar and LiDAR to identify conditions that could compromise safety. The collaboration between AEye and BlueBand illustrates this approach: by combining AEye’s OPTIS™ autonomous system and Apollo long-range LiDAR with BlueBand’s AI orchestration platform, the solution delivers real-time insights for traffic monitoring, incident detection, and adaptive road safety management.  
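
A minimal sketch of the confidence-weighted fusion idea follows; the `SensorReading` type, weights and readings are illustrative assumptions, not AEye’s or BlueBand’s actual interfaces:

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    source: str        # e.g. "camera", "radar", "lidar", "tire"
    risk: float        # normalized hazard estimate in [0, 1]
    confidence: float  # sensor self-assessed reliability in [0, 1]

def fuse_risk(readings):
    """Confidence-weighted average of per-sensor risk estimates."""
    total_weight = sum(r.confidence for r in readings)
    if total_weight == 0:
        return None                     # no trustworthy input: defer to fallback
    return sum(r.risk * r.confidence for r in readings) / total_weight

readings = [
    SensorReading("camera", 0.9, 0.4),  # low light: camera is unsure
    SensorReading("radar",  0.7, 0.9),
    SensorReading("lidar",  0.8, 0.8),
]
fused = fuse_risk(readings)             # ≈ 0.776: radar and lidar dominate
```

The key property is that an unreliable sensor (here, a camera in low light) contributes less to the fused hazard estimate than its raw risk value would suggest.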

Fail-Safe & Safety Redundancy Systems 

Overcoming a limitation of traditional automotive systems, which often fail to account for systemic decision-making errors, AI continuously interprets both the driving environment and system health to determine when fallback responses are necessary. The patent for Guident’s Remote Monitoring and Control Center (RMCC) represents this scenario: its AI-driven fusion system processes sensor data from multiple autonomous vehicles and can assume remote control when risk levels exceed predefined safety limits.
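
The threshold-triggered escalation logic can be sketched in a few lines; the thresholds and action names below are illustrative assumptions, not taken from Guident’s patent:

```python
def fallback_action(fused_risk: float, system_health: float) -> str:
    """Map a fused risk estimate and a system health score (both in [0, 1])
    to an escalating fallback response. Thresholds are illustrative only."""
    if system_health < 0.3:
        return "minimal_risk_maneuver"   # degraded platform: stop safely
    if fused_risk > 0.8:
        return "remote_takeover"         # hand control to a remote operator
    if fused_risk > 0.5:
        return "reduce_speed"
    return "nominal"
```

In a production system each branch would be backed by its own safety case; the point here is only that the fallback decision consumes both environment risk and platform health.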

Distributed Sensor Fusion & Fleet-Level Threat Analysis 

Reflecting the fact that environmental disruptions pose safety hazards to entire fleets, AI enables fleet-level data aggregation and threat analysis, transforming distributed sensor inputs into system-wide safety insights. NIRA Dynamics’ partnership with BANF demonstrates this with the integration of triaxial tire sensor data into fleet management systems, enabling large-scale hazard detection and broadcast-level warnings to improve fleet safety.

AI as a Safety-Critical Element  

While AI enables safer and more resilient automotive systems, it is also recognized as a safety-critical element requiring rigorous evaluation to ensure trustworthiness. This perspective is reflected in a series of international standards: ISO 26262: 2018, ISO 21448: 2022 and ISO/PAS 8800: 2024.

AI as a Safety-Critical Element in Automotive Systems

I. ISO 26262: 2018 (Functional Safety) 

The ISO 26262 standard focuses on addressing hardware and software faults inside road vehicles that can lead to hazardous behavior. While it does not directly reference AI or ML, AI modules are implicitly covered as safety-related components that may fail due to defects in software implementation, hardware execution, or system integration.

The first connection appears in the definition of a “safety-related item” under Part 3. System & Item Definition. Any component whose failure could lead to a hazard qualifies, and thus AI modules can be treated as such. Similarly, Part 3. System & Item Definition and Part 4. Hazard Analysis & Risk Assessment (HARA) define “hazards” as malfunctions requiring assignment of an Automotive Safety Integrity Level (ASIL). Under this framework, AI failures such as object misclassification or a neural network crash can be classified and addressed as safety hazards.
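
The HARA classification step can be made concrete. ISO 26262-3 assigns an ASIL from severity (S), exposure (E) and controllability (C) classes, and for the non-zero classes its lookup table coincides with a simple sum rule, sketched below:

```python
def asil(severity: int, exposure: int, controllability: int) -> str:
    """ASIL determination following the ISO 26262-3 classification table.
    severity: S0-S3, exposure: E0-E4, controllability: C0-C3."""
    if not (0 <= severity <= 3 and 0 <= exposure <= 4 and 0 <= controllability <= 3):
        raise ValueError("S/E/C class out of range")
    if 0 in (severity, exposure, controllability):
        return "QM"   # a zero class means no ASIL is assigned
    # For non-zero classes the standard's table reduces to the sum S + E + C
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(
        severity + exposure + controllability, "QM")

asil(3, 4, 3)  # "ASIL D": severe, frequent, uncontrollable
asil(3, 1, 1)  # "QM": severe but rare and easily controllable
```

An object-misclassification hazard with severe consequences in a common driving situation would thus land at a high ASIL, pulling the AI module into the standard’s most demanding development requirements.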

The standard also indirectly applies to AI within software and hardware development. For example, Part 5. Hardware Development requires diagnostic coverage and safety mechanisms for critical hardware faults. This extends to SoCs or accelerators running AI inference (e.g. GPUs, NPUs), which must be safeguarded to prevent silent failures that could compromise AI workflows.  

While ISO 26262 provides a baseline framework for addressing AI malfunction scenarios, it falls short in covering the non-deterministic behavior of AI systems. These gaps have prompted the development of complementary standards (ISO 21448, ISO/PAS 8800) to more fully address AI-related safety risks.

II. ISO 21448: 2022 (Safety of the intended functionality, SOTIF)

Whereas ISO 26262 focuses on risks from system malfunctions, ISO 21448 addresses situations where the system behaves as designed but still poses safety risks under certain conditions. As with ISO 26262, terms explicitly referencing AI or machine learning are absent. Nevertheless, the standard is widely recognized as highly relevant to AI-driven systems, which are especially sensitive to incomplete data, edge cases and unknown scenarios.

One key concept appears in Clause 11. Hazardous Scenarios, which introduces the distinction between “known hazards” (anticipated cases) and “unknown hazards” (unanticipated conditions). The latter is particularly relevant to AI, as machine learning models are prone to failure when exposed to out-of-distribution inputs. The standard emphasizes the need to achieve acceptable residual risk even in such unknown conditions.
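
One common baseline for catching such out-of-distribution inputs at runtime is a maximum-softmax-probability check: a prediction whose top class probability is low is treated as a potential “unknown”. This is a simple heuristic for illustration, not a method prescribed by ISO 21448:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of class logits."""
    peak = max(logits)
    exps = [math.exp(x - peak) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def is_unknown(logits, threshold=0.7):
    """Flag a potential out-of-distribution input when the classifier's
    maximum softmax probability falls below a calibrated threshold."""
    return max(softmax(logits)) < threshold

is_unknown([8.0, 1.0, 0.5])   # False: one class clearly dominates
is_unknown([2.0, 1.9, 1.8])   # True: no class stands out
```

Flagged inputs can then be routed to a fallback behavior, contributing to the acceptable residual risk the standard calls for even under unknown conditions.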

Expanding beyond definitions, Clause 9. Verification and Validation stresses the importance of robust validation strategies that go beyond normal operating conditions. This is especially critical for AI/ML systems, as traditional deterministic testing methods cannot guarantee complete coverage of rare, long-tail scenarios.

By incorporating concepts of non-deterministic behavior and unquantifiable risks, ISO 21448 plays a crucial role in framing AI-related safety challenges in automotive systems. It highlights how limitations in AI perception and decision-making can result in unsafe outcomes. However, with methodologies for residual risk evaluation still relying on conventional statistical methods, there remain limitations in guaranteeing coverage for rare or unforeseen inputs.

III. ISO/PAS 8800: 2024 (Safety and artificial intelligence)

Building on the foundations of ISO 26262 and ISO 21448, ISO/PAS 8800 provides the first global assessment framework dedicated to systematically evaluating AI systems in road vehicles. The document explicitly states its intent to extend and adapt the principles of functional safety (ISO 26262) and SOTIF (ISO 21448) to AI and machine learning elements.   

ISO/PAS 8800 raises AI-specific safety concerns directly, linking identified hazards to clear safety requirements and goals. It details procedures covering the entire lifecycle of AI systems including dataset quality management, model development and safe deployment practices. In addition, the standard also places emphasis on runtime monitoring and post-deployment governance, ensuring continuous oversight of AI performance.  

Through this framework, ISO/PAS 8800 ensures that AI safety measures are embedded from the earliest stages of system design through post-deployment operation, closing gaps left by prior standards and providing a structured foundation for AI assurance in automotive systems.  

AI Safety Standards for Automotive Systems

Future Progress of AI in Automotive Safety  

As illustrated in the previous sections, the automotive safety industry has approached AI from two contrasting angles: as a defense mechanism to strengthen safety levels, and as a potential risk factor requiring strict evaluation. Nevertheless, both perspectives converge on the same overarching goal: leveraging AI to improve the resilience of automotive systems against internal flaws (e.g. software errors, model weaknesses) and external risks (e.g. environmental hazards, cyber threats).

Looking ahead, the progress of AI in vehicle systems will center on two parallel developments: advancing innovation in AI-driven safety tools and establishing rigorous compliance and certification frameworks. As this dual evolution unfolds, AUTOCRYPT is committed to playing a leading role, not only by providing solutions that integrate AI to enhance safety and resilience, but also by staying closely aligned with the evolving regulatory landscape that governs the safe deployment of AI-embedded vehicle systems.

Learn more about our products and solutions at https://autocrypt.io/all-products-and-offerings/.

An Integrated Approach to Automated Driving System (ADS) Validation

As we enter an era increasingly populated by highly autonomous vehicles, there is a vast range of dynamic driving scenarios that Automated Driving Systems (ADS) may encounter. From hazardous environmental conditions to internal system failures and external cybersecurity risks, ensuring ADS safety across diverse operating situations is essential for enabling safe autonomous driving experiences.

The recent release of “ISO 34505: 2025” underscores this need by providing a structured framework for generating, evaluating and managing test scenarios that reflect real-world driving conditions. By standardizing how test scenarios should be defined and tested, the initiative aims to enable consistent, repeatable validation practices across the industry and thereby support the development of robust ADS.

As autonomous systems grow more complex, the need for robust, scalable validation practices becomes increasingly critical. In response, an integrated approach combining regulatory audits, system-level testing and adversarial simulations provides OEMs and Tier 1 suppliers with a structured path toward both vehicle safety and regulatory compliance. Focusing on cybersecurity, this blog outlines the key components and methodologies of ADS validation, and demonstrates how an integrated approach can be effectively executed.

Automated Driving System (ADS) Validation: Approach & Methodology  

According to “SAE J3016: 2021”, an Automated Driving System (ADS) refers to the collective technology stack responsible for performing the dynamic driving task (DDT) at SAE Level 3 and above. With the system taking full responsibility for autonomous decision-making and vehicle control, validating ADS safety calls for identifying diverse validation targets and a multidisciplinary process for executing them.

I. Approach  

The UNECE WP.29 Working Group emphasizes that ADS validation should be approached from multiple angles, including audit and assessment, simulation and virtual testing, real-world testing and more. Drawing on key industry whitepapers (e.g. The Autonomous Working Group, Association for Standardization of Automation and Measuring Systems, Mercedes-Benz), validation efforts can be broadly categorized into three core pillars: functional performance, internal system reliability and external cybersecurity resilience.

Automated Driving System (ADS) Validation Approach

The first pillar, Functional Performance, focuses on ensuring the embedded vehicle system behaves as expected across a full range of driving conditions, particularly under abnormal scenarios such as complex environments or sensor limitations. In alignment with the “ISO 34505: 2025” standard, which outlines scenario-based ADS testing, this pillar evaluates system capabilities in perception, decision-making and control execution under realistic conditions.

The second pillar, Internal System Reliability, addresses resilience against system-level faults. This includes the inspection of fault detection mechanisms, hardware failure mitigation strategies, and adherence to Automotive Safety Integrity Level (ASIL) requirements. Relevant to the “ISO 26262: 2018” standard defining the framework around electrical/electronic (E/E) system failures, this pillar assesses the system’s ability to maintain safety in the presence of internal malfunctions.

The third pillar, External Cybersecurity Resilience, evaluates the system’s tolerance against external cybersecurity threats. Verification of secure communication and data integrity under potential attacks such as vehicle hacking, spoofing and denial-of-service (DoS) is a key objective of this pillar. Associated with the “ISO/SAE 21434: 2021” standard illustrating cybersecurity risk management for vehicle E/E systems across the lifecycle, this phase assesses the system’s ability to proactively mitigate attack vectors targeting sensors, ECUs and OTA updates.

II. Techniques   

While various techniques exist to evaluate functional performance, system reliability and external attack resilience, this blog focuses on three core cybersecurity validation methods: Compliance Auditing, Software-in-the-Loop (SiL) Module Testing and Hardware-in-the-Loop (HiL) Penetration Testing, to better illustrate the differences across diverse validation approaches.

Automated Driving System Validation Techniques

The first technique, Compliance Auditing, focuses on verifying whether development practices and system architectures align with established safety and cybersecurity regulations (e.g. ISO/SAE 21434, UN R155). This method is widely used by OEMs and Tier 1 suppliers to conduct gap analyses during early-development stages or in preparation for CSMS Certification audits, to check whether internal processes conform to regulatory requirements.  

AutoCrypt CSTP Compliance serves as a representative tool to accommodate these needs by assessing vehicle vulnerabilities on a unified platform. It supports multiple testing domains, including Security Validation, Functional Testing, Penetration Testing, Fuzz Testing and Vulnerability Testing, and consolidates results into a comprehensive report suitable for regulatory submission. By combining testing execution and documentation, it reduces redundant tasks and streamlines the compliance process.

Architecture of AutoCrypt CSTP Platform

Another key validation technique is Software-in-the-Loop (SiL) Module Testing, which assesses the robustness of embedded security components in virtualized test environments before hardware integration. Commonly applied to TEE (Trusted Execution Environment) based key management testing and V2X certificate handling simulation, this technique enables rapid iteration and early validation of security logic in controlled conditions, before advancing to high-cost hardware testing.

In accordance with these needs, the AutoCrypt CSTP Functional Tester validates hardware-dependent security functions using virtual ECU models in a Software-in-the-Loop (SiL) environment. By integrating communication interfaces, debugging tools, ECU source code and test code, this solution facilitates early detection of design flaws and integration issues well before mass production.
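
The shape of such a SiL test can be shown in miniature. The `VirtualKeyStore` below is a hypothetical stand-in for a TEE-backed key store on a virtual ECU model, not the CSTP Functional Tester’s actual API; the unit test checks a write-once provisioning invariant entirely in software:

```python
import unittest

class VirtualKeyStore:
    """Stand-in for a TEE-backed key store running on a virtual ECU model."""
    def __init__(self):
        self._slots = {}

    def provision(self, slot: int, key: bytes) -> None:
        if slot in self._slots:
            raise PermissionError("slot already provisioned")  # write-once policy
        self._slots[slot] = key

    def has_key(self, slot: int) -> bool:
        return slot in self._slots

class KeyStoreSiLTest(unittest.TestCase):
    def test_write_once_policy(self):
        store = VirtualKeyStore()
        store.provision(1, b"\x00" * 16)
        self.assertTrue(store.has_key(1))
        with self.assertRaises(PermissionError):
            store.provision(1, b"\xff" * 16)  # re-provisioning must fail
```

Because the whole harness runs in software (e.g. via `python -m unittest`), a violated key-handling invariant surfaces long before the logic is flashed to real hardware.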

Testing Environment of AutoCrypt CSTP Functional Tester

Another core testing approach is Hardware-in-the-Loop (HiL) Penetration Testing, which evaluates the cybersecurity resilience of physical ECUs by simulating real-world attack vectors in controlled HiL testing environments. Often applied for in-vehicle network fuzz testing and Telematics Control Unit (TCU) penetration testing, this technique identifies system vulnerabilities under actual runtime configurations, moving beyond theoretical scenarios.

Serving this purpose, the AutoCrypt CSTP Fuzzer solution actively injects malformed, unexpected inputs into in-vehicle networks to test ECU-level resistance to cyber intrusions. Covering a broad spectrum of communication layers including the Network Layer (e.g. CAN, CAN-FD, Automotive Ethernet), Application Layer (e.g. UDSonCAN, UDSonCAN-FD) and Transport/Data Layer (e.g. VehicleCAN, VehicleCAN-FD), the tool enables precise testing of vehicle systems under a wide range of adversarial conditions. 
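
The kind of malformed input such a fuzzer injects can be illustrated with a small generator. The sketch below builds single-frame UDS-on-CAN payloads that may exceed the 8-byte classic CAN data limit or carry an inconsistent ISO-TP length nibble; the service IDs and probabilities are illustrative, not CSTP internals:

```python
import random

KNOWN_SIDS = [0x10, 0x22, 0x27, 0x3E]    # common UDS diagnostic services

def random_uds_frame() -> bytes:
    """Build a single-frame UDS-on-CAN payload that may violate the 8-byte
    classic CAN data limit or carry a lying ISO-TP length nibble."""
    sid = random.choice(KNOWN_SIDS + [random.randrange(256)])
    body = bytes(random.randrange(256) for _ in range(random.randrange(10)))
    length = len(body) + 1               # SF_DL counts the SID plus parameters
    if random.random() < 0.3:
        length = random.randrange(16)    # deliberately lie about the length
    return bytes([length & 0x0F, sid]) + body

frames = [random_uds_frame() for _ in range(100)]
```

A robust ECU should reject every such frame gracefully; a crash, hang or unexpected positive response flags a vulnerability worth investigating.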

Operational Flow of AutoCrypt CSTP Fuzzer

 

Effective ADS Validation through an Integrated Approach  

With a wide range of checkpoints to address and multiple techniques available, establishing a cohesive and effective strategy for ADS validation is essential. To meet this need, a structured progression from Compliance Auditing to Software-in-the-Loop (SiL) Module Testing and finally to Penetration Testing offers a practical pathway for comprehensive and efficient ADS validation.

  • At the first stage, Compliance Auditing defines the baseline and sets the strategic direction through regulatory compliance and process control.  
  • Next, software design implementation and testing activities are supported through Software-in-the-Loop (SiL) Module Testing, which enables validation before hardware integration.  
  • Lastly, Hardware-in-the-Loop (HiL) Penetration Testing can be used to assess real-world cybersecurity readiness under adversarial conditions.  
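
The staged progression above can be sketched as a fail-fast pipeline; the stage names and gate logic below are illustrative only:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    name: str
    gate: Callable[[], bool]   # returns True when the stage's exit gate passes

def run_pipeline(stages):
    """Run validation stages in order, failing fast: there is little value
    in pen-testing hardware whose design never passed its audit."""
    results = {}
    for stage in stages:
        results[stage.name] = stage.gate()
        if not results[stage.name]:
            break
    return results

pipeline = [
    Stage("compliance_audit", lambda: True),  # e.g. ISO/SAE 21434 gap analysis
    Stage("sil_module_test",  lambda: True),  # virtual-ECU security logic tests
    Stage("hil_pen_test",     lambda: True),  # adversarial tests on real ECUs
]
```

Ordering the stages this way keeps the cheapest checks earliest, so a failed audit or SiL gate is caught before expensive HiL resources are committed.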

This layered approach demonstrates how each phase builds upon and reinforces the next, enabling a robust and scalable validation framework.  

As an authorized Vehicle Type Approval (VTA) Technical Service (TS) provider, AUTOCRYPT is uniquely positioned to integrate diverse testing techniques and facilitate comprehensive ADS validation through the AutoCrypt CSTP Platform. From AutoCrypt CSTP Compliance, which ensures design-level safety, to the AutoCrypt CSTP Functional Tester, which verifies correct functional behavior, and the AutoCrypt CSTP Fuzzer, which tests attack resilience, the platform enables unified security analysis by consolidating all validation layers into a single, integrated platform.

Integrated ADS Validation using AutoCrypt CSTP Platform

By supporting a streamlined process for Vehicle Type Approval, from ADS validation to the export of results into compliance documents (e.g. TARA Report, Cybersecurity Test Report), the platform enables the entire approval process to be managed effectively.

To learn more about the AutoCrypt CSTP Platform, check this page. For more information about our comprehensive suite of automotive products & offerings, check this page.