Deep Vision's data abstraction technology delivers real-time passive ranging.
Deep Vision's novel concept of data abstraction quickly transforms abundant sensor data into a form that is easily classified and efficiently analysed.
The abstractions, created from the raw sensor data, are used to recognise the objects of interest. Each object's abstraction, coupled with its apparent size on the sensor and knowledge of its true size, provides the information required to passively gauge the line-of-sight distance to that object.
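Under a pinhole-camera model, this size-based ranging reduces to similar triangles. The sketch below is illustrative only and not Deep Vision's implementation; the function name and parameters (focal length in pixels, true size in metres, apparent size in pixels) are assumptions for the example.

```python
def line_of_sight_distance(true_size_m: float,
                           size_on_sensor_px: float,
                           focal_length_px: float) -> float:
    """Pinhole-camera range estimate: distance = f * H / h,
    where f is the focal length in pixels, H the object's known
    true size and h its apparent size on the sensor in pixels."""
    return focal_length_px * true_size_m / size_on_sensor_px

# Example: a 1 m wide sign spanning 100 px, with an assumed
# 800 px focal length, lies about 8 m from the sensor.
print(line_of_sight_distance(1.0, 100.0, 800.0))  # 8.0
```

The accuracy of such an estimate depends on how precisely the object's extent can be measured on the sensor, which is where the abstraction-based recognition comes in.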
Further to this, the rate at which the sensor data is acquired (i.e. the frame rate) is used to ascertain the relative bearing of each object.
Deep Vision's data abstraction technology operates with a throughput of 100+ frames per second†.
† Typical. Based on a 640 x 480 data set
The following videos illustrate the use of Deep Vision's technology for passive ranging. The technology determines an object of interest's bearing relative to the sensor. The bearing information includes:
- Line-of-sight distance between the object and the centre of the sensor
- Distance between the object and the centre of the sensor along all three axes: horizontal (X-axis), vertical (Y-axis) and depth (Z-axis)
- Velocity of the object (relative to the sensor)
- Acceleration of the object (relative to the sensor)
- Azimuth of the object (relative to the sensor)
- Elevation of the object (relative to the sensor)
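The quantities above can be sketched from first principles: azimuth and elevation follow from the object's pixel offset relative to the sensor centre, and line-of-sight velocity follows from the change in range between successive frames at a known frame rate. This is a minimal illustration, not Deep Vision's method; the function names and the pixel-based focal length are assumptions.

```python
import math

def bearing(dx_px: float, dy_px: float, focal_length_px: float):
    """Azimuth and elevation in degrees, from the object's pixel
    offset (dx, dy) relative to the sensor centre."""
    azimuth = math.degrees(math.atan2(dx_px, focal_length_px))
    elevation = math.degrees(math.atan2(dy_px, focal_length_px))
    return azimuth, elevation

def relative_velocity(d_prev_m: float, d_curr_m: float, fps: float) -> float:
    """Line-of-sight velocity in m/s from two consecutive range
    estimates and the frame rate; negative means closing."""
    return (d_curr_m - d_prev_m) * fps

# An object offset 800 px horizontally with an 800 px focal length
# sits at 45 degrees azimuth, 0 degrees elevation.
print(bearing(800.0, 0.0, 800.0))
# Range dropping from 10.0 m to 9.9 m between frames at 30 fps
# implies roughly -3 m/s along the line of sight.
print(relative_velocity(10.0, 9.9, 30.0))
```

Relative acceleration could be derived the same way, by differencing successive velocity estimates over the frame interval.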
Additional navigational information can be derived from the above. These videos use a single, static knowledge base consisting of the following description-label pairs:
- A cross: First Aid
- An airplane: Airport
- The letter H: Hospital
This video illustrates the recognition of the First Aid object. The bearing information and associated label of the object are displayed at the top. The 3D distance of the object, relative to the sensor, is presented in a tooltip attached to the object.
This video uses the same raw footage as the Object Tracking example. In this video, the First Aid and Hospital objects are recognised and their bearing information is displayed.
In this video the distance between two objects is displayed. When two or more recognised objects are present, the user must select the desired object in order to see its bearing information. In actuality, the bearing information for every recognised object is known; the selection mechanism is only a means of presentation.
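Since each recognised object's position relative to the sensor is known along all three axes, the distance between any two objects is simply the straight-line distance between those positions. A minimal sketch, with assumed (x, y, z) coordinates in metres:

```python
import math

def inter_object_distance(p1, p2):
    """Straight-line distance between two objects, given their
    (x, y, z) positions relative to the sensor in metres."""
    return math.dist(p1, p2)

# Two objects at assumed positions 5 m apart.
print(inter_object_distance((1.0, 0.0, 5.0), (4.0, 0.0, 9.0)))  # 5.0
```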
This video demonstrates how the technology could be used for interactive navigation. The application guides the user through a series of signs by evaluating the user's movements and describing any errors. The following capabilities are illustrated:
- Recognition of multiple signs
- Distance from the camera to each sign
- User instructions
- Validation of movement
- Explanation of error