Looking past cyber threat narratives to protect the real prize: process integrity and operational trust.
Most of the discussion around AI in OT cybersecurity is framed in threat terms. Smarter attackers. Faster exploits. More automated reconnaissance. Better malware. Better evasion.
It’s a familiar way to talk about the problem, but it leaves too much of the operational reality untouched.
What’s missing is any serious discussion of operational assurance. Not whether an attacker can get in, but whether the system continues to behave within the bounds defined by its engineering intent when things drift, degrade, miscalibrate, or quietly fail.
One place this shows up clearly, and where the consequences are hard to ignore, is in medical device manufacturing.
According to reporting highlighted by IEEE Spectrum and summarized in a recent piece from Control Global, many of these failures sit at the intersection of manufacturing systems, quality systems, and cyber-physical control integrity, and they are far more prevalent than most people realize. Roughly 15 percent of more than 56,000 medical device recalls were attributed to what regulators labeled “process control” errors. Not malware headlines. Not nation-state attacks. Process control.
These incidents have resulted in real harm. A recent vendor disclosure involved incorrect low-glucose readings that led to 736 serious injuries and seven deaths. That is not an abstract cyber risk. That is a control integrity failure with human consequences.
What’s striking is how rarely these events are discussed in cybersecurity terms at all, let alone in AI conversations.
Cyber incidents are usually defined in terms of electronic communications, between systems or between systems and people, that affect confidentiality, integrity, or availability. They can be malicious or unintentional. In medical device manufacturing, many of these incidents are not about compromise in the classic sense. They are about integrity loss that goes undetected.
Sensor accuracy. Calibration drift. Inconsistent readings. Silent failure modes.
I don’t need an academic paper to understand this. I experience it at the end-user level. I wear a continuous glucose monitor, and anyone who does knows the experience: one sensor may tell you you’re crashing while another says you’re fine. Readings drift, trends wobble, and confidence erodes.
In a consumer context, that might send me reaching for a glass of juice. In a manufacturing or clinical context, the same kind of drift at Level 0 and Level 1 can push safety margins, production tolerances, and patient outcomes out of bounds long before anyone declares an incident or files a report.
That is a cyber problem, but only if we’re clear about what we expect “cyber” to mean. The term began as defense against attackers and expanded to include protection from undesired system behavior, but in OT the expectations diverge sharply because the discussion is driven more by threat than by consequence.
AI is being applied aggressively to threat detection, anomaly detection on networks, log analysis, and incident response workflows. All useful. None of that addresses the core question raised by these medical device failures:
Are we continuously validating that the physical signals we depend on remain accurate, trustworthy, and fit for purpose?
AI could be extraordinarily powerful here. Not to hunt attackers, but to correlate sensor behavior, detect subtle drift, flag deviations from expected physical behavior, and continuously validate assumptions that engineering teams take for granted once systems are commissioned.

This is where AI can reinforce Management of Change, inform HAZOP and LOPA discussions, support safety and environmental protection objectives, and provide early evidence that controls are performing as intended. It can correlate physical observations with digital intent. In other words, AI becomes part of the operational assurance fabric that connects cyber, process safety, environmental stewardship, and resilience, rather than a separate layer bolted on to watch for intruders.
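To make that concrete, here is a minimal sketch of one such check, assuming nothing more than two redundant sensor streams measuring the same physical quantity: a two-sided CUSUM on the difference between them, which flags slow relative drift long before either reading crosses a fixed alarm limit. The simulated data, thresholds, and names are illustrative assumptions, not a reference implementation.

```python
# Minimal sketch: flag slow relative drift between two redundant sensors
# with a two-sided CUSUM on their difference. All signals, thresholds,
# and names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

# Simulated truth: a steady process value around 100 with measurement noise.
n = 2000
truth = 100.0 + rng.normal(0.0, 0.2, n)
sensor_a = truth + rng.normal(0.0, 0.3, n)
# Sensor B develops a slow calibration drift: 0.002 units per sample.
sensor_b = truth + rng.normal(0.0, 0.3, n) + 0.002 * np.arange(n)

def cusum_drift(residual, k=0.2, h=4.0, baseline=50):
    """Two-sided CUSUM on a residual stream.

    k: slack per sample (deviation smaller than this is tolerated)
    h: decision threshold; returns the first index where the cumulative
       deviation exceeds h, or None if it never does.
    """
    centered = residual - residual[:baseline].mean()
    pos = neg = 0.0
    for i, r in enumerate(centered):
        pos = max(0.0, pos + r - k)
        neg = max(0.0, neg - r - k)
        if pos > h or neg > h:
            return i
    return None

alarm_at = cusum_drift(sensor_a - sensor_b)
if alarm_at is None:
    print("No relative drift detected")
else:
    print(f"Relative drift flagged at sample {alarm_at}")
```

The point of the sketch is the asymmetry: the drift is flagged after a few hundred samples, when the two readings have diverged by a fraction of a percent of the operating value, far below anything a static alarm limit would catch.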
In part, the gap may reflect a broader discomfort with automation itself. AI is often framed as something that replaces judgment, rather than as a tool that does what it does best: correlating minutiae across large, diverse data sets and connecting signals that no individual or team could reasonably synthesize on their own.
The FDA’s current medical device cybersecurity requirements reflect this gap. They focus heavily on software lifecycle controls, vulnerability disclosure, and access management. They do not meaningfully address control system integrity, sensor accuracy assurance, or continuous validation of cyber-physical behavior. Training for manufacturers and end users follows the same pattern. Plenty of cyber awareness. Very little operational assurance.
If we want to be serious about safety, security, and resilience, especially in high-consequence environments, we need to stop treating AI as a better burglar alarm and start using it as a confidence engine.
- Confidence that sensors are telling the truth.
- Confidence that control loops behave as designed.
- Confidence that drift, degradation, or subtle manipulation will be detected before harm occurs.
For executives, this translates into confidence that safety margins are not quietly eroding, that product quality and regulatory commitments remain intact, that environmental boundaries are being respected, and that operational performance is not leaking out the door through small, compounding deviations no dashboard was built to see.
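One hedged sketch of the second point above, confidence that control loops behave as designed: keep replaying the simple process model captured at commissioning against live data and watch the residual. A loop can still reach setpoint, and so never alarm, while its dynamics quietly walk away from design. The model, parameters, and the stiction-like slowdown below are illustrative assumptions, not any vendor’s method.

```python
# Hedged sketch: compare a loop's live response to its commissioned
# first-order model and track the residual as a loop-health score.
# Gains, time constants, and signals are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)

def first_order_response(u, gain, tau, dt=1.0):
    """Explicit-Euler first-order model: y[t+1] = y[t] + dt/tau * (gain*u[t] - y[t])."""
    y = np.zeros(len(u))
    for t in range(len(u) - 1):
        y[t + 1] = y[t] + (dt / tau) * (gain * u[t] - y[t])
    return y

# Controller output recorded over a run: a unit step at t = 100.
u = np.where(np.arange(600) < 100, 0.0, 1.0)

# Expected behavior from commissioning: gain 2.0, time constant 20 s.
expected = first_order_response(u, gain=2.0, tau=20.0)

# Actual process: the valve has slowed (tau crept from 20 s to 35 s).
# The PV still reaches the same value eventually, so no alarm ever trips.
actual = first_order_response(u, gain=2.0, tau=35.0) + rng.normal(0.0, 0.02, 600)

# Rolling RMS of the model residual: a healthy loop sits near the noise
# floor; a degrading loop rises well above it during every transient.
residual = actual - expected
window = 50
rms = np.sqrt(np.convolve(residual**2, np.ones(window) / window, mode="valid"))
print(f"Peak residual RMS: {rms.max():.3f} (noise floor is about 0.02)")
```

A rolling residual statistic like this is exactly the kind of small, compounding deviation the executive framing above points at: nothing trips, yet the evidence of degradation is sitting in plain sight.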
Level 0 and Level 1 are not boring plumbing. They are where comfort zones quietly stretch until something breaks.
If AI has a role worth defending in OT, it starts there.