Probe Sensors for Simulating Warehouse Robot Fleets
March 15, 2022
· Written by
Massimo Isonni

Highly Detailed Environmental Perception Without Compromising System Performance

At Duality Robotics, we continuously seek out the most innovative and effective ways of simulating the components, systems, and environments that we might encounter in real-life commercial apps. Digital twin simulation provides the ability to accurately model virtual assets as precise representations of their physical counterparts, ultimately enabling data permeability of digital twins between the physical and virtual worlds.

Inherent to Duality’s approach is a disciplined emphasis on workflow efficiency – ours and our customers’. Imagine, if you will, a warehouse floor populated with autonomous robots handling fulfillment. Just as a customer needs to achieve the most efficient warehouse operations, so must we identify the most efficient robotic systems, the most efficient components to comprise them, and the most efficient use of the computing and power resources that underpin them – in simulation and in real life.

It's this ethos that drove Duality to develop and patent a novel approach to simulating a probe sensor – the kind that might help guide an autonomous robot through a warehouse, among many possible applications.  


The simulated probe sensor we developed can be used to emulate a perception stack in robotics: the returned information determines what objects are around the sensor and how they are moving – it informs how the machine views the world around it. For a robot in a warehouse floor application, a probe sensor can detect and identify obstacles in the robot’s path and help guide its rerouting, enabling the most efficient fulfillment.

With the probe sensor, rays are shot in a pyramid defined by horizontal and vertical fields of view (FOV). What’s fundamentally unique about our patented probe sensor simulation is the manner in which those rays are distributed for maximum performance and efficiency. We call this approach stochastic raytracing, and as the name implies, it leverages a randomized probability distribution of rays.

What’s the primary benefit of this approach? While shooting rays within this pyramid FOV, we don’t shoot a ray at every single position in the pyramid – that extreme workload would ultimately yield very low performance. With stochastic raytracing, we randomly distribute a configurable number of rays per frame in a manner that enables a precise view that’s both realistic and highly efficient.
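To make the idea concrete, here is a minimal sketch of stochastic ray sampling inside an FOV pyramid. The function name, coordinate convention, and parameters are illustrative assumptions, not Duality’s actual API: each frame, yaw and pitch angles are drawn uniformly within the FOV half-angles and converted to unit direction vectors.

```python
import math
import random

def sample_probe_rays(h_fov_deg, v_fov_deg, rays_per_frame, rng=None):
    """Sample random ray directions inside a pyramid defined by
    horizontal and vertical fields of view (hypothetical sketch)."""
    rng = rng or random.Random()
    rays = []
    for _ in range(rays_per_frame):
        # Draw a random yaw/pitch uniformly within the FOV half-angles.
        yaw = rng.uniform(-h_fov_deg / 2, h_fov_deg / 2)
        pitch = rng.uniform(-v_fov_deg / 2, v_fov_deg / 2)
        yaw_r, pitch_r = math.radians(yaw), math.radians(pitch)
        # Convert to a unit direction vector (x forward, y right, z up).
        rays.append((
            math.cos(pitch_r) * math.cos(yaw_r),
            math.cos(pitch_r) * math.sin(yaw_r),
            math.sin(pitch_r),
        ))
    return rays
```

Because the sample is re-drawn every frame, coverage of the pyramid accumulates over time while the per-frame raycast budget stays fixed – that trade-off is the heart of the performance gain described above.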

Each ray provides us with important information – including its origin, direction, and collision point – that we can use to calculate the distance from the detected object and the direction of the object relative to the probe sensor onboard the robot. Overall, it enables a more efficient use of available resources by leveraging a ‘narrower’ view that still provides the rich data quality that’s needed.

With Falcon, warehouse robots leverage probe sensors and stochastic raytracing to detect and identify surrounding objects, enabling efficient real-time path navigation.


This probe sensor technique stands in contrast to a more resource-intensive, omnidirectional lidar approach that samples every position within its field of view and can therefore provide a highly detailed point cloud depicting the surrounding environment. Mind you, this lidar approach is wholly appropriate for certain applications, particularly for robots navigating environments that human operators can’t effectively or safely navigate. For many applications, however, lidar is simply overkill! And no matter the application, lidar is extremely taxing on computing and power budgets.

In controlled environments like the aforementioned warehouse app, the robot can readily determine whether it’s encountering a wall, an object, or perhaps one of many additional robots working in close proximity – through the use of stencil IDs. For any given scenario, stencil IDs represent known objects and obstacles in the target environment, giving the robot the ability to quickly match detected object parameters against an onboard repository of corresponding stencil IDs that define what the object is and how to proceed when the robot encounters it.
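A stencil-ID lookup of this kind can be sketched as a simple table mapping IDs to handling policies. The IDs, labels, and actions below are invented for illustration – the real repository and its policies would be defined per deployment.

```python
# Hypothetical stencil-ID repository: each known obstacle class in the
# environment maps to a label and a handling policy for the robot.
STENCIL_REPOSITORY = {
    1: {"label": "wall",        "action": "reroute"},
    2: {"label": "pallet",      "action": "reroute"},
    3: {"label": "other_robot", "action": "yield_and_wait"},
}

def classify_hit(stencil_id):
    """Match a detected stencil ID against the onboard repository;
    unknown IDs fall back to a conservative stop."""
    return STENCIL_REPOSITORY.get(
        stencil_id, {"label": "unknown", "action": "stop"}
    )
```

Because the environment is controlled, the common case is a constant-time lookup rather than a full perception pipeline – which is exactly why this scheme pairs well with the lightweight probe sensor.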

Leveraging stochastic raytracing via the onboard probe sensor in combination with a stencil ID scheme, a robot can ‘know’ exactly what it’s encountering in any given environment, benefitting from a precise and realistic sense of the scene around it. Acting as the eye of the robot, the probe sensor provides a highly efficient and optimized means to gather and act on data for countless commercial apps in the metaverse.