2026 State of Practice from a Functional Safety Engineer

Robotics has existed in industry for decades, yet the field feels newly emergent as it converges with artificial intelligence and machine learning. As a functional safety engineer preparing for 2026, I see firsthand how rapidly the industry is advancing and how urgently it must anchor that advancement in disciplined safety engineering. Robotics is entering a transformative era, but the future of the field will depend on whether we build systems that the public, regulators, and users can trust.

From Caged Machines to Autonomous Co-Workers
Industrial robots were historically predictable machines operating behind fences, executing repeatable motions in environments free from human contact. Today, the rise of industrial mobile robots, AI-driven manipulators, and humanoid robotics has introduced systems capable of autonomy, perception, and decision-making. These systems no longer operate in isolation; they share floors with people, learn from data, and adapt to conditions in real time.

AI and machine learning are on a similar trajectory. While the underlying techniques are decades old, accessibility and computational power have transformed them into engines of capability. As we enter 2026, the lines separating robotics and AI have collapsed. Robots perceive through neural networks, plan through probabilistic models, and interact through algorithms that do not always behave deterministically. This shift is exciting, but it also highlights why functional safety must be embedded from the start, not introduced as a late-stage requirement.

Why Progress Demands Disciplined Safety Engineering
There is a collective desire among engineers, designers, and manufacturers to push toward technologies that improve lives. Yet current industry standards and expert consensus point to the same conclusion: this progress must be grounded in disciplined safety engineering.

Nearly 60 percent of robot-related accidents occur during maintenance and debugging, not routine operation. This reinforces that in the coming year, safety must be understood as a lifecycle property, not a performance characteristic. A robot that behaves well during a demo or under controlled integration conditions may still pose significant risk when a technician is troubleshooting, reprogramming, or modifying it.

Standards Catching Up to Reality
The recent updates to ANSI/RIA R15.06 and ISO 10218 formalize this lifecycle mindset. Functional safety requirements, explicit PLr and SIL targets, and traceable validation are now foundational expectations. Cybersecurity threat assessments are also mandated, acknowledging that modern robotic systems can be compromised digitally in ways that directly affect physical safety. These revisions, developed under ISO/TC 299, emphasize that standards exist not to restrain innovation, but to encode the collective lessons of decades of engineering experience.
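The PLr targets mentioned above are typically derived from the risk graph in ISO 13849-1, which maps injury severity (S), frequency of exposure (F), and possibility of avoidance (P) to a required performance level from a to e. The sketch below encodes that well-known decision tree for illustration only; an actual assessment must follow the standard itself and be documented by a qualified assessor.

```python
# Illustrative sketch of the ISO 13849-1 risk graph for determining a
# required performance level (PLr). Not a substitute for the standard;
# it only encodes the familiar S/F/P decision tree.

RISK_GRAPH = {
    # (severity, frequency, avoidance) -> PLr
    ("S1", "F1", "P1"): "a",  # slight injury, seldom exposure, avoidable
    ("S1", "F1", "P2"): "b",
    ("S1", "F2", "P1"): "b",
    ("S1", "F2", "P2"): "c",
    ("S2", "F1", "P1"): "c",  # serious-injury branches start here
    ("S2", "F1", "P2"): "d",
    ("S2", "F2", "P1"): "d",
    ("S2", "F2", "P2"): "e",  # worst case: highest required level
}

def required_performance_level(severity: str, frequency: str, avoidance: str) -> str:
    """Return PLr ('a'..'e') for the given risk-graph parameters."""
    return RISK_GRAPH[(severity, frequency, avoidance)]

# Example: serious injury (S2), frequent exposure (F2), scarcely avoidable (P2)
print(required_performance_level("S2", "F2", "P2"))  # -> e
```

A safety function designed to this PLr must then demonstrate, through architecture, diagnostics, and reliability data, that it actually achieves the level the risk graph demands.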

Historically, products shipped only once nearly all known issues were resolved. In some robotics segments that balance has reversed; in extreme cases, more problems are outstanding at release than have been closed. This is not a sustainable or responsible model for technologies interacting directly with people. Videos of humanoids walking, lifting, or manipulating objects often mask the unresolved engineering problems beneath the surface. Stability under disturbance, fall management, actuator reliability, uncertainty in AI-based perception, and unpredictable behavior outside training conditions remain major challenges.

What Functional Safety Is Really For
Functional safety exists to prevent hazardous behavior when systems fail, behave unexpectedly, or encounter conditions outside their assumptions. This discipline has guided industries with far higher safety expectations, including aviation. Airliners earned public trust not because they are free of faults, but because every fault is anticipated, mitigated, and validated through rigorous engineering and regulation.

The robotics industry is now approaching its own aviation moment. If robots are to coexist safely with people, especially autonomous robots and humanoids, they must demonstrate reliability under failure conditions. Safety functions such as protective stops, monitored speeds, and software-based limiting must meet defined performance levels and be validated against real-world scenarios. The updated standards make this explicit.
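To make "monitored speed" concrete, here is a toy sketch of a software speed monitor that latches a protective-stop demand when measured tool speed exceeds a configured limit. The names and the 0.25 m/s threshold are hypothetical; a certified safety function would run on safety-rated hardware with redundant channels, diagnostics, and a validated stop-time budget.

```python
from dataclasses import dataclass

# Toy sketch of a speed-monitoring safety function. Hypothetical names
# and limits; a real implementation needs rated hardware, redundancy,
# and validated reaction times.

@dataclass
class SpeedMonitor:
    speed_limit_mps: float        # maximum permitted tool speed (m/s)
    stop_requested: bool = False  # latched protective-stop demand

    def check(self, measured_speed_mps: float) -> bool:
        """Latch a protective stop if the limit is exceeded.

        Returns True while operation may continue, False once stopped.
        The latch is deliberate: a safety demand must persist until an
        explicit reset, never clear itself automatically.
        """
        if measured_speed_mps > self.speed_limit_mps:
            self.stop_requested = True
        return not self.stop_requested

monitor = SpeedMonitor(speed_limit_mps=0.25)  # e.g. a collaborative-mode limit
print(monitor.check(0.20))  # within limit -> True
print(monitor.check(0.30))  # over limit: stop latched -> False
print(monitor.check(0.10))  # still stopped until deliberate reset -> False
```

The latching behavior is the point of the example: validation against real-world scenarios means showing not just that the stop triggers, but that it cannot silently self-clear while a hazard may still be present.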

Safety as the Enabler of Scalable Innovation
Functional safety is not a barrier to innovation. It is what allows innovation to scale. Without it, the industry risks losing public and regulatory confidence, especially after a single high-profile failure. When companies prioritize hype over engineering maturity, they jeopardize the trust the entire robotics sector depends on. The more human-like a robot is, the higher the public expectation for reliability and safety. A humanoid robot that falls unpredictably, misinterprets hazards, or can be misused in a dangerous way invites scrutiny that extends far beyond the manufacturer.

A setback in one company can set back the industry. In 2026, the industry must commit to the principle that safety enables innovation. This means:

  • Functional safety must be integrated from concept design, not retrofitted.
  • Risk assessments must consider maintenance, debugging, and changes over time.
  • AI and ML models must be evaluated not just on performance but on safety robustness.
  • Standards must continue to evolve to address mobility, autonomy, and learning.
  • Users, integrators, and manufacturers must share responsibility across the lifecycle.

Capability Builds Robots — Trust Builds Industries
The robotics field in 2026 will be defined by capability. The robotics field in the decades after will be defined by trust. Every robot placed into a factory, warehouse, or public space contributes to that trust, positively or negatively.

If we build responsibly, grounded in functional safety, robotics will become a cornerstone technology as transformative as commercial aviation or the internet. The future of robotics is bright, but its success hinges not only on what our machines can do, but on the discipline and foresight of the people building them.

Author

Kenny Nonso
Systems Safety Engineer, Reynolds & Moore
Richardson, Texas, USA
kenny.nonso@reynolds-moore.com