The New Automotive Headache: When AI Breaks the V-Cycle
- Raghu Ram
- Dec 7
- 7 min read
Updated: Dec 8

During my real-vehicle tests, the workflow is simple and surgical. When I see an unexpected behavior, I:
trace the responsible subsystem
reproduce the bug
tune and test the subsystem in isolation
then move on to open-loop and closed-loop validation
This process works beautifully until the algorithm I am testing is built on deep learning. The bolt I want to tighten is no longer explicit. It is hidden inside an abstract mapping between input and output, shaped by data rather than deterministic logic. That is what made me question whether the classical V-Cycle of requirements, unit tests and validation can truly accommodate AI-driven automotive development.
Deep Learning and the Engineering Mindset
Classical engineering is built on predictability and determinism. Each subsystem has a single purpose, clear boundaries and explicit logic. Because the workflow is transparent, concepts like unit testing, traceability and root-cause isolation make complete sense.
Deep learning follows a very different lifecycle. A model passes through data curation, architecture design, training, validation and deployment, but none of these stages map cleanly to a unit with a single clear functionality. What connects input to output is an abstract learned representation, not a deterministic block of logic.
This is where the V-Cycle begins to crack.
The V-Cycle assumes that systems can be decomposed into well-defined components, each with a specific role. But an AI-based, end-to-end driving model does not expose such boundaries. So the question becomes: How do we fit an AI system, whose reasoning is learned and opaque, into an architecture designed for deterministic engineering?
How real is the problem?
Imagine testing a parking assist feature where the vehicle suddenly fails to detect a parking line it had identified correctly just a few minutes earlier. As an engineer, your instinct is clear: check the camera feed, test the fusion logic, verify calibration, inspect thresholds. The usual suspects.
But with a deep-learning perception module, the investigation quickly dissolves into uncertainty.
Is the model sensitive to a slight change in lighting?
Did the texture of the asphalt fall outside the training distribution?
Did deployment quantization alter the learned representation?
Is the embedding drifting over time due to noise?
There is no single subsystem to blame, no explicit logic block to tighten and no neat cause-effect chain to follow. The problem exists in a space that is abstract, distributed and often irreproducible. This is exactly the type of failure pattern noted in real investigations such as the Uber ATG crash and the disengagement reports from the California DMV.
This is the moment classical engineering confidence breaks.
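The closest I can get to the old workflow is to probe these hypotheses one at a time. For the lighting question, a brightness-perturbation sweep is a useful first check: run the same frame through the detector at several lighting levels and watch how far the confidence moves. The sketch below is purely illustrative; `brightness_sweep` and the `toy_detector` stand-in are assumptions, not the real perception stack.

```python
import numpy as np

def brightness_sweep(detect_fn, image, scales=(0.8, 0.9, 1.0, 1.1, 1.2)):
    """Run the same frame through the detector at several brightness levels
    and report how much the detection confidence moves."""
    confidences = []
    for s in scales:
        perturbed = np.clip(image.astype(np.float32) * s, 0, 255).astype(np.uint8)
        confidences.append(detect_fn(perturbed))
    spread = max(confidences) - min(confidences)
    return dict(zip(scales, confidences)), spread

# Hypothetical stand-in for the real perception model: a brittle detector
# whose "line found" confidence depends only on mean image brightness.
def toy_detector(img):
    return float(img.mean() > 120)

frame = np.full((480, 640, 3), 118, dtype=np.uint8)  # a dim parking-lot frame
per_scale, spread = brightness_sweep(toy_detector, frame)
print(per_scale)          # confidence at each brightness scale
print("spread:", spread)  # a large spread flags lighting sensitivity
```

A real perception module will not flip this cleanly, but a sweep like this at least turns "is the model sensitive to lighting?" into a number that can be tracked across releases.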
Is the industry facing the same problem?
Once you recognise this mismatch at the algorithm level, the next question naturally arises: How big is this problem? Are OEMs and Tier-1 suppliers ahead of this curve, or is everyone dealing with the same blind spots?
From what I have seen, and from what the data shows, everyone is dealing with the same issue.
The moment perception or planning passes through a deep neural network, the determinism stops at the black box. Inputs and outputs exist, but the reasoning in between is abstract. Even in a crash scenario, identifying where the failure originated becomes complicated. If it is a perception miss, at which stage of the V-Cycle should it have been caught?
Requirements can say, "The obstacle shall be correctly identified." But by the time we reach software design, the opaqueness of the model starts appearing. During unit testing, we can only test the outputs of the perception module. We cannot meaningfully analyse how the model interpreted textures, spatial structure or context.
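A minimal sketch of what such an output-level unit test looks like, assuming a hypothetical `detect_obstacles` inference call that returns labels, confidences and boxes; note that nothing in it can assert how the model reached its answer.

```python
def detect_obstacles(frame):
    # Placeholder for the deployed perception network; returns (label, confidence, bbox).
    return [("pedestrian", 0.91, (120, 40, 60, 150))]

def test_obstacle_detected_with_min_confidence():
    frame = [[0] * 640 for _ in range(480)]  # stand-in for a labelled test frame
    detections = detect_obstacles(frame)
    labels = {label for label, _, _ in detections}
    assert "pedestrian" in labels                         # output-level contract only
    assert all(conf >= 0.5 for _, conf, _ in detections)  # confidence floor
    # Nothing here can check how the model interpreted texture, spatial
    # structure or context; the test passes or fails on outputs alone.

test_obstacle_detected_with_min_confidence()
```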
A 2020 survey published in the IEEE/CAA Journal of Automatica Sinica reports that modern autonomous stacks rely heavily on deep learning for perception, localization, prediction and even parts of planning. This transition replaces deterministic logic with learned representations, which is exactly where traditional validation frameworks begin to struggle.
Regulators have noticed the same mismatch. The NTSB investigation of the Uber ATG crash showed that the perception network repeatedly switched classifications before impact. No unit-test framework could have anticipated that behaviour. Similarly, California DMV disengagement reports from Waymo and Cruise often mention perception uncertainty, planner hesitation and localization drift. These failures are rooted in ML-driven ambiguity, not classical software defects.
Even safety standards acknowledge this shift. ISO 21448 (SOTIF) recognises that machine-learning-based functions introduce behaviour that traditional verification techniques cannot fully cover on their own.
The automotive safety framework is being rewritten because the old one no longer fits the new category of algorithms we are using today.
In short: Yes, the entire industry is facing this problem. And as AI adoption increases, the gap between deterministic engineering and data-driven intelligence will only become larger.

Why the V-Cycle Cracks When AI Enters the Stack
When a deep-learning feature misbehaves, the first question that hits me is simple: Which part of my stack is failing?
With classical logic, there is a clear path. I can probe the smallest unit, print values, check scopes and verify the implementation against the mental model I carry in my head. With deep learning, that comfort disappears. I can inspect the visible components of a Deep RL system such as the reward function or the policy entropy, but the core learning process remains hidden. I cannot see how exactly the data is being interpreted or which internal representation caused the final output. That gap breaks the foundation of how the V-Cycle expects us to reason about software.
The mismatch becomes even clearer when I look at how many things can go wrong. In the data stage alone there are endless questions; a minimal balance check is sketched after this list.
Is the data relevant?
Is it balanced?
Did augmentation introduce bias?
Is the dataset skewed toward patterns we did not notice?
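A minimal sketch of the kind of check that starts to answer the balance question, over a hypothetical label list from a parking-line dataset; the class names and the ten percent warning ratio are assumptions chosen for illustration.

```python
from collections import Counter

def class_balance_report(labels, warn_ratio=0.1):
    """Flag classes that are badly under-represented relative to the largest class."""
    counts = Counter(labels)
    largest = max(counts.values())
    report = {}
    for cls, n in counts.items():
        ratio = n / largest
        report[cls] = (n, ratio, "UNDER-REPRESENTED" if ratio < warn_ratio else "ok")
    return report

# Hypothetical label counts from a parking-line dataset.
labels = ["clear_line"] * 9000 + ["faded_line"] * 600 + ["no_line"] * 400
for cls, (n, ratio, status) in class_balance_report(labels).items():
    print(f"{cls:12s} n={n:5d} ratio={ratio:.2f} {status}")
```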
Training introduces another layer of uncertainty.
Is the optimizer behaving well?
Is the learning rate appropriate?
Are the weights converging?
Is regularization helping or hurting?
Every input can change the system in its own unique way. None of this is traceable in the way classical engineering expects.
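What we can do is log the signals those questions point at. The toy gradient-descent loop below trains nothing useful; it is only a sketch of the habit of watching the loss trend and gradient norm to judge whether the learning rate is sane and the weights are converging.

```python
import numpy as np

# Toy linear-regression training loop, used only to show which signals to log.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.1 * rng.normal(size=256)

w, lr = np.zeros(4), 0.05
for step in range(200):
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(y)
    w -= lr * grad
    if step % 50 == 0:
        loss = float(np.mean((pred - y) ** 2))
        grad_norm = float(np.linalg.norm(grad))
        # A rising loss or an exploding gradient norm points at the learning
        # rate; a stalled gradient norm with high loss points at convergence.
        print(f"step={step:3d} loss={loss:8.4f} grad_norm={grad_norm:8.4f}")
```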
This is when I realised that the V-Cycle needs more blocks when AI is involved. Requirements, code development and model development are not enough. We need stages that verify data quality, analyse uncertainty, check distributions, validate drift and study parameters. Even then, there are places we simply cannot break down further.
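A drift-validation stage, for instance, can start as simply as comparing the distribution of one feature between the training set and the field. The sketch below uses a hand-rolled Population Stability Index on a hypothetical image-brightness feature; the feature choice and the usual 0.2 alert threshold are assumptions for illustration.

```python
import numpy as np

def population_stability_index(train_values, field_values, bins=10):
    """Crude drift score comparing binned distributions of one feature
    between the training set and data collected in the field."""
    edges = np.quantile(train_values, np.linspace(0, 1, bins + 1))
    lo = min(train_values.min(), field_values.min())
    hi = max(train_values.max(), field_values.max())
    edges[0], edges[-1] = lo - 1e-9, hi + 1e-9
    p = np.histogram(train_values, bins=edges)[0] / len(train_values)
    q = np.histogram(field_values, bins=edges)[0] / len(field_values)
    p, q = np.clip(p, 1e-6, None), np.clip(q, 1e-6, None)
    return float(np.sum((p - q) * np.log(p / q)))

rng = np.random.default_rng(1)
train_brightness = rng.normal(120, 15, 10_000)  # hypothetical training-set feature
field_brightness = rng.normal(100, 25, 2_000)   # same feature from deployed vehicles
psi = population_stability_index(train_brightness, field_brightness)
print(f"PSI = {psi:.3f}")  # a common rule of thumb treats > 0.2 as drift worth investigating
```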
For me, the weakest stages of the V-Cycle are validation and integration. Validation struggles because the real world is infinite while datasets are finite. A model with 98 percent accuracy still sees only the world contained in its dataset. Integration adds complexity because deployment introduces new compute constraints and quantization effects that can alter behaviour in unpredictable ways.
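The quantization concern, at least, can be made measurable at integration time. The sketch below quantizes one weight matrix to int8 and reports how far the deployed path drifts from the float model that was validated earlier; the layer size and the symmetric scheme are assumptions, not a real deployment toolchain.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric int8 quantization of a weight matrix, plus its scale factor."""
    scale = np.abs(w).max() / 127.0
    return np.round(w / scale).astype(np.int8), scale

rng = np.random.default_rng(2)
W = rng.normal(size=(64, 128))  # one layer of a hypothetical perception head
x = rng.normal(size=128)

y_float = W @ x
W_q, scale = quantize_int8(W)
y_quant = (W_q.astype(np.float32) * scale) @ x

# Integration-stage check: deviation between the float model that was
# validated and the quantized model that actually ships.
err = float(np.max(np.abs(y_float - y_quant)))
print(f"max abs deviation after int8 quantization: {err:.4f}")
```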
My biggest hesitation when signing off a deep-learning feature is whether it can handle new scenarios. The model only knows what it has seen. The real world is open-ended, and that mismatch makes safety assurance difficult. What I miss most from classical engineering is certainty. A unit test used to tell me exactly what worked and what did not. With deep learning, that clarity disappears. The V-Cycle was built for systems that could be decomposed, inspected and proven. AI-based systems simply do not fit that shape.
How should engineers equip themselves in the AI era?
The way forward is not to replace classical engineering but to combine it with the strengths of deep learning. A future automotive engineer must learn to limit the stochastic behaviour of AI models during development. The safest path is to anchor the core of the system in predictable classical methods and use deep learning only where it adds value. For example, an MPC controller is predictable but its tuning can be guided by a deep learning model. This keeps the guard rails in place.
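A minimal sketch of that split, with a simple deterministic control law standing in for the real MPC solve and a hypothetical `learned_tuner` proposing cost weights that are clamped to a proven envelope.

```python
# Sketch of the "classical core, learned tuning" idea. The control law below is
# a placeholder for a real MPC; the point is that the learned component only
# proposes cost weights, and hard guard rails keep them inside a safe envelope.

SAFE_Q_RANGE = (0.5, 5.0)   # allowed range for the state-error weight
SAFE_R_RANGE = (0.1, 2.0)   # allowed range for the control-effort weight

def learned_tuner(speed_kph, road_curvature):
    """Hypothetical stand-in for a network that suggests MPC cost weights."""
    q = 1.0 + 0.02 * speed_kph + 3.0 * abs(road_curvature)
    r = 1.0 - 0.005 * speed_kph
    return q, r

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def controller_step(error, q, r):
    """Deterministic control law (placeholder for the MPC solve)."""
    gain = q / (q + r)   # simple, predictable mapping from weights to gain
    return -gain * error

q_raw, r_raw = learned_tuner(speed_kph=90.0, road_curvature=0.01)
q = clamp(q_raw, *SAFE_Q_RANGE)  # guard rail: the learned suggestion can never
r = clamp(r_raw, *SAFE_R_RANGE)  # leave the envelope the classical design was proven in
print("steering command:", controller_step(error=0.3, q=q, r=r))
```

Whatever the model suggests, the behaviour stays inside bounds that were validated with classical methods.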
The biggest mindset shift is learning how to intuitively map where deep learning models fail. In classical systems you know exactly where the error lives. With AI, failure has its own patterns, and engineers need an internal map of these patterns to design system checks. Without this intuition, debugging becomes blind.
AI also introduces concerns that never mattered before. In classical development, if something works, you know why: the physics is written explicitly into the design. With AI, you constantly question whether the model understood the environment or simply memorised patterns.
If engineers hold on to the old V-Cycle without learning AI workflows, they will eventually fall behind in vehicle-level analysis and ADAS certification. The system will evolve, but their ability to evaluate it will not.
When someone asks where to start, my suggestion is simple. Build something and debug it. Learn where things go wrong in the AI pipeline. Develop the habit of checking distributions and understanding out-of-distribution behaviour. Explore explainable AI. Think about safety overrides. Ask what the system should do when the model fails.
Engineers also need to let go of old habits. In classical engineering you can report on the behaviour of a subsystem with complete confidence. With AI, confidence is not binary. Introduce a confidence parameter and analyse where the model is likely to fail. This becomes the basis of vulnerability analysis.
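A minimal sketch of that idea, with a hypothetical planner output and an assumed confidence floor; the fallback behaviour is a placeholder for whatever the classical safety layer actually provides.

```python
CONFIDENCE_FLOOR = 0.7  # assumed threshold, set from vulnerability analysis

def plan_with_fallback(model_output):
    """Use the learned planner only when its reported confidence is high enough;
    otherwise hand over to a conservative, deterministic fallback."""
    action, confidence = model_output
    if confidence >= CONFIDENCE_FLOOR:
        return action, "learned"
    return "hold_lane_and_reduce_speed", "fallback"

# Hypothetical outputs from a planning network at two moments in a drive.
print(plan_with_fallback(("change_lane_left", 0.92)))
print(plan_with_fallback(("change_lane_left", 0.41)))
```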
To me, an AI-ready automotive engineer is someone who understands the confidence of the system they built and has a mental map of its vulnerabilities.
The Road Ahead
If there is one message I want the reader to take away, it is responsibility. AI is not the final answer to automotive problems. It is a powerful tool, but only if we build the discipline and processes needed to use it safely in the real world. Without that, AI becomes a risk multiplier, not a capability booster.
The biggest truth I have learned is simple. We turn to AI to bypass complex mathematical modelling, but the cost we pay is predictability. The more the system learns on its own, the less direct control we have over its internal reasoning. That trade-off needs to be acknowledged.
I believe the future of automotive engineering will adopt a more sophisticated development cycle that merges classical methods with AI-focused stages. Data validation, model design, parameter tuning, scenario creation, explainability checks, safety overrides and vulnerability analysis will all become standard. Large volumes of synthetic data will support validation and provide confidence far beyond what known datasets can offer.
For engineers entering this field, you are standing at the point where the shift is happening. Along with technical knowledge, system-level intuition is essential. Understand where models fail, understand the confidence of your predictions and treat AI safety as a core requirement. The systems we build will go out into the real world. The responsibility that comes with that cannot be ignored.