
From Controlled Spaces to Crowded Places: Keeping Robots Safe in the Real World
What modern safety thinking teaches us about robots in hospitals, warehouses, and our homes
For decades, robots operated within controlled environments such as factories, laboratories, and other industrial settings. These spaces were carefully designed, with trained personnel, consistent lighting, predictable floors, and well-defined rules. The boundaries were clear, and risks were managed through established procedures.
Now, robots are moving into the real world. They navigate hospital corridors, pick products in bustling warehouses, deliver packages along sidewalks, assist workers in factories, and even function in homes and schools. Unlike before, these environments are not controlled. The people they encounter are diverse and typically not robotics experts. Unpredictable events are now common, not rare.
This transition calls for a shift in safety thinking. The main safety concern is no longer simply about system failures. It is about robots performing as designed in settings that cannot be completely controlled or predicted by their designers. In other words, even if the system works exactly as designed, is the design itself safe enough for the real world?
Two Kinds of Safety
As robots move from controlled spaces into real-world environments, it becomes useful to distinguish between two different kinds of safety. Both are important, but they address different types of risk. They are Functional Safety and Operational Environment Safety.
Functional Safety
Functional safety focuses on preventing harm when systems fail. Traditional safety engineering has concentrated on ensuring that hardware and software behave safely under fault conditions. Standards such as IEC 61508 were developed to manage risks arising from component failures, software crashes, and electrical or electronic faults. Techniques like redundancy, monitoring, and fault handling remain essential, because failures still happen.
This foundation of functional safety is necessary, but it is no longer sufficient on its own. Modern robots increasingly rely on perception systems, complex software, and machine learning. In many real-world situations, the most significant risks arise not from failures, but from systems operating exactly as designed in environments that cannot be fully predicted or controlled. This leads to a second, equally important category: operational environment safety.
Operational Environment Safety
Operational environment safety focuses on how a robot behaves in the world it operates in. It addresses the limitations of sensors, algorithms, and design assumptions when robots encounter cluttered spaces, unexpected human behavior, unusual objects, or changing conditions. In these cases, the system may be fully functional, yet still make unsafe decisions because its understanding of the environment is incomplete.
In automated driving safety, this perspective is captured by frameworks such as ISO 21448, which address the safety of the intended functionality. The same principles apply to robots that share spaces with people. Safety is not only about ensuring systems fail safely. It is also about ensuring systems behave safely when nothing has technically failed.
Together, functional safety and operational environment safety provide a more complete framework for understanding and managing risk as robots move into crowded, dynamic, and human-centered environments.
Defining Where a Robot Belongs
Every robot is designed with an intended environment. For example, a warehouse robot anticipates smooth floors, marked aisles, standard pallet sizes, and consistent lighting. A sidewalk delivery robot expects reasonable weather, accessible curb ramps, and clear paths. But real environments are dynamic. Construction may block aisles, reflective materials can confuse sensors, objects may be left in unexpected places, weather can create new obstacles, and children or pets might move unpredictably.
Therefore, a crucial safety measure is to clearly define the operational boundaries for each robot: where it should operate, under what conditions, at what speeds, and with what assumptions about human behavior. Vague boundaries often lead to increased risk.
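One way to make such boundaries explicit rather than vague is to encode them as checkable data. The sketch below is a minimal illustration of that idea; the field names and threshold values are invented for this example, not taken from any standard or real product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OperationalBoundary:
    """Declared assumptions the robot's design relies on (illustrative)."""
    max_speed_mps: float   # speed limit inside the boundary
    min_lux: float         # minimum acceptable ambient lighting
    max_slope_deg: float   # steepest floor gradient assumed safe
    humans_expected: bool  # whether untrained people may be present

@dataclass(frozen=True)
class Conditions:
    """What the robot currently observes (illustrative)."""
    speed_mps: float
    lux: float
    slope_deg: float
    humans_present: bool

def within_boundary(b: OperationalBoundary, c: Conditions) -> bool:
    """Return True only if every declared assumption currently holds."""
    return (
        c.speed_mps <= b.max_speed_mps
        and c.lux >= b.min_lux
        and c.slope_deg <= b.max_slope_deg
        and (b.humans_expected or not c.humans_present)
    )

warehouse = OperationalBoundary(max_speed_mps=1.5, min_lux=150.0,
                                max_slope_deg=3.0, humans_expected=True)
now = Conditions(speed_mps=1.2, lux=90.0, slope_deg=1.0, humans_present=True)

# Lighting has dropped below the declared minimum: the robot is outside
# its boundary even though nothing has technically failed.
print(within_boundary(warehouse, now))  # prints False
```

The value of writing boundaries down this way is that "outside the boundary" becomes a testable condition the robot can react to, rather than an unstated assumption buried in the design.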
Understanding System Limitations
No perception system is flawless. Cameras can be affected by glare or shadows; lidar may be confused by reflective surfaces; radar has its own limitations. Machine learning systems depend on the data they were trained on. In real-world settings such as hospitals or factories, robots may encounter unusual equipment, unpredictable movements, or unfamiliar objects.
In such cases, the hardware and software may be fully operational, but the system’s understanding of its environment is limited. Effective safety begins by identifying and documenting these limitations, and designing the robot to act responsibly when those limits are reached.
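"Acting responsibly when limits are reached" can be made concrete by mapping documented perception limits to conservative behaviors. The following sketch assumes a single scalar confidence value for simplicity; the thresholds and behavior names are placeholders that a real system would derive from validation data.

```python
def choose_behavior(perception_confidence: float) -> str:
    """Map a 0..1 perception confidence to a conservative behavior.

    Thresholds here are illustrative placeholders, not validated values.
    """
    if perception_confidence >= 0.9:
        return "proceed"            # environment well understood
    if perception_confidence >= 0.6:
        return "slow_down"          # degraded sensing: reduce speed
    if perception_confidence >= 0.3:
        return "stop_and_reassess"  # near documented limits: pause
    return "request_human_help"     # beyond documented limits

print(choose_behavior(0.95))  # proceed
print(choose_behavior(0.45))  # stop_and_reassess
```

The key design point is that the degraded behaviors exist at all: the system has a documented answer for what to do when its understanding is poor, instead of silently proceeding at full capability.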
When Small Factors Combine
Risk often arises not from a single large failure, but from the combination of several small limitations. For example, dim lighting, a person moving quickly around a corner, a slightly higher speed setting, and a reflective floor may each be manageable alone. Together, however, they can create a hazardous situation. Safety thinking must therefore shift from analyzing individual components to understanding the system as a whole. Instead of only asking what happens if a part fails, we must also ask how various environmental conditions, human actions, and design assumptions might interact to create danger. This approach is scenario-driven and contextual.
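A toy model makes the combination effect tangible: treat each factor as consuming part of a safety margin. Every number below is invented purely for illustration; the point is the structure, not the values.

```python
# Each factor's cost against a notional safety-margin budget (illustrative).
MARGIN_COST = {
    "dim_lighting": 0.30,
    "fast_pedestrian": 0.25,
    "higher_speed": 0.20,
    "reflective_floor": 0.30,
}
BUDGET = 1.0  # total margin available in a scenario

def margin_exceeded(active_factors: list[str]) -> bool:
    """True when the combined factors exhaust the safety margin."""
    return sum(MARGIN_COST[f] for f in active_factors) > BUDGET

# Each factor alone leaves ample margin...
print(margin_exceeded(["dim_lighting"]))        # prints False
# ...but together they cross the threshold (0.30+0.25+0.20+0.30 = 1.05).
print(margin_exceeded(list(MARGIN_COST)))       # prints True
```

This is exactly why scenario-driven analysis asks about combinations: no single-factor review would flag any of these conditions, yet the scenario as a whole is hazardous.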
Designing for the Unknown
It is impossible to test every possible real-world scenario. The world is too complex and unpredictable. Robots might face street festivals, temporary fencing, individuals intentionally blocking their path, or layout changes that were not considered during development.
Modern safety strategies accept that uncertainty is unavoidable. The goal is not to eliminate uncertainty, but to ensure robots respond safely when it arises. This might involve slowing down, stopping, requesting human assistance, or entering a minimal-risk state. A mature robotic system is defined by its ability to handle the unexpected gracefully.
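The graded responses above (slow down, stop, request help, minimal-risk state) can be sketched as an escalation ladder: when an anomaly remains unresolved, the robot steps one rung toward its safest state. The state names and one-step policy below are illustrative, not a prescribed architecture.

```python
# Ordered from normal operation to the safest fallback (illustrative names).
ESCALATION = ["nominal", "reduced_speed", "safe_stop",
              "await_operator", "minimal_risk_state"]

def escalate(state: str) -> str:
    """Step one rung toward the minimal-risk state; never past the end."""
    i = ESCALATION.index(state)
    return ESCALATION[min(i + 1, len(ESCALATION) - 1)]

state = "nominal"
for _ in range(6):  # repeated unresolved anomalies keep escalating
    state = escalate(state)
print(state)  # prints minimal_risk_state
```

The essential property is that every rung is safer than the one before, so an uncertain robot degrades gracefully instead of guessing.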
Safety as a Continuous Process
Deploying robots in the real world is just the beginning of the safety journey. Field data reveals rare edge cases, patterns of human interaction, and environmental conditions that may not have been fully tested. Organizations that use this data as a safety resource improve their systems over time. They track incidents and near-misses, analyze behavior, refine perception models, and clarify operational boundaries.
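Treating field data as a safety resource can start very simply: aggregate near-miss reports by cause so the most frequent real-world issues drive the next design iteration. The record structure and causes below are assumptions made up for this sketch.

```python
from collections import Counter

# Hypothetical near-miss reports collected from deployed robots.
reports = [
    {"cause": "glare", "site": "warehouse_a"},
    {"cause": "blocked_aisle", "site": "warehouse_a"},
    {"cause": "glare", "site": "hospital_b"},
    {"cause": "glare", "site": "warehouse_c"},
]

# Count reports by cause to surface the dominant field issue.
by_cause = Counter(r["cause"] for r in reports)
top_cause, count = by_cause.most_common(1)[0]
print(top_cause, count)  # prints: glare 3
```

Even this minimal tally shows the pattern that matters: a cause recurring across multiple sites is a system limitation, not a one-off incident, and it should feed back into perception models and operational boundaries.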
This process of continuous improvement is similar to lessons learned in automotive and other safety-critical industries. The most reliable robots will be those that learn from real-world experience.
Why This Matters Now
Robots are no longer isolated from people; they are becoming active participants in human environments. As autonomy increases, structured thinking about operational environment safety becomes even more important. Principles from advanced vehicle safety, including those in ISO 21448, emphasize that safety is about more than preventing faults. It also involves understanding limitations, anticipating misuse, and designing for uncertainty.
Today, robotics has reached a point where systematic safety thinking, including defining operational boundaries, identifying and managing limitations, and continuously improving based on real-world data, is essential, not optional. As robots move from controlled spaces into our everyday environments, the future will be shaped by those systems that can operate safely among us. Companies that prioritize operational environment safety today will build the machines that earn human trust. Ultimately, trust, more than technical capability, will determine which robots make a lasting impact on the world.
Turning Safety Challenges into Engineering Reality
If the situations described here feel familiar, you are not alone. Many teams recognize these risks but conclude that the real world is simply too complex, too unpredictable, or too human to be addressed systematically. Others are told that once robots leave controlled environments, meaningful safety assurance is no longer possible. That conclusion is understandable, but it is also wrong.
The challenges of operating in crowded, dynamic environments can be addressed with the right safety frameworks, the right questions, and the right engineering discipline. Defining operational boundaries, identifying system limitations, and designing for uncertainty are not theoretical ideals; they are practical, achievable steps that leading teams are already taking.
If your organization is struggling with these questions, or if you are being told that “this is just how the real world is,” we invite you to talk with us. Reynolds & Moore helps teams move from uncertainty to clarity, and from perceived impossibility to demonstrable, trustworthy safety.
Author
Paul Schmitt
Director of Engineering, Reynolds & Moore
Boston, Massachusetts
paul.schmitt@reynolds-moore.com


