AI Secondary: GeckoImager™

GeckoImager™ uses sensor fusion, incorporating structured-light machine vision and sonar range finding to complement GeckoOrient's™ fusion of solid-state compass, accelerometer, and odometry data. Sensor fusion is the combining of sensory data, or data derived from sensory data, from disparate sources such that the resulting information is more accurate than it would be if any of these sources were used individually.
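As a minimal sketch of why fused estimates are more accurate than any single source, the example below combines two independent, noisy range readings for the same obstacle using inverse-variance weighting, one common fusion technique; the fused variance is always lower than either input's. The sensor labels, noise figures, and function names are illustrative assumptions, not GeckoSystems specifications.

```python
import numpy as np

def fuse_estimates(measurements, variances):
    """Fuse independent range estimates by inverse-variance weighting.

    The fused estimate has lower variance than any single input,
    which is the core benefit of sensor fusion.
    """
    measurements = np.asarray(measurements, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused = np.sum(weights * measurements) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)
    return fused, fused_variance

# Hypothetical readings for the same obstacle: a structured-light
# depth camera and a sonar ranger (assumed values only).
camera_range_m, camera_var = 2.10, 0.04
sonar_range_m, sonar_var = 2.25, 0.09

fused_range, fused_var = fuse_estimates(
    [camera_range_m, sonar_range_m], [camera_var, sonar_var]
)
print(f"fused range: {fused_range:.2f} m, variance: {fused_var:.3f}")
```

The fused variance here (about 0.028) is smaller than either sensor's alone, which is the quantitative sense in which fusion makes the combined information "more accurate."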

This gives the automatic self-navigation artificial intelligence (AI) software, GeckoNav™, sufficient and timely data to achieve actionable situation awareness while providing a very safe, loose-crowd level of autonomy that is effectively "collision proof."

Examples of sensor fusion can be found in sonar and radar equipment as well as television cameras. Humans also use sensor fusion in everyday situations such as smoke detection and lip reading. We make everyday judgments, however mundane they may be, by combining data that is interdependent or incomplete rather than relying on only one of our five senses. Thus, the better the sensor fusion, the better the choices we make and the more "actionable" our "situation awareness" becomes.

Traditional video-centric machine vision is very expensive in dollars, power consumed, and time required to update. Taking a cue from the compound eyes of insects, we invented the GeckoImager, which creates 70 facets ten times per second. We fused the fields of view (FOVs) of two Microsoft Kinects with sonar range finders to give the GeckoImager greater range and reliability for locating stationary and/or moving obstacles quickly and at low cost, in both dollars and time.
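For illustration only, the sketch below shows one way depth returns from multiple sensors, already expressed as bearing/range pairs in a common robot frame, could be collapsed into 70 angular facets refreshed at 10 Hz. The function, field of view, and sample values are assumptions made for the example, not the GeckoImager's actual implementation.

```python
import numpy as np

NUM_FACETS = 70          # facet count mentioned in the text
UPDATE_RATE_HZ = 10      # facets refreshed ten times per second

def build_facets(angles_deg, ranges_m, fov_deg=(-90.0, 90.0)):
    """Collapse a cloud of (bearing, range) returns into NUM_FACETS
    angular sectors, keeping the nearest obstacle seen in each sector.

    The returns could come from any mix of sources, e.g. two depth
    cameras plus sonar, once expressed in a common robot frame.
    """
    lo, hi = fov_deg
    facets = np.full(NUM_FACETS, np.inf)          # inf = no obstacle seen
    width = (hi - lo) / NUM_FACETS
    for angle, rng in zip(angles_deg, ranges_m):
        if lo <= angle < hi:
            idx = int((angle - lo) / width)
            facets[idx] = min(facets[idx], rng)   # nearest return wins
    return facets

# Hypothetical merged returns from two depth cameras and sonar rangers.
angles = np.array([-45.0, -44.5, 0.0, 0.3, 30.0, 60.0])
ranges = np.array([3.2, 3.1, 1.8, 1.9, 2.4, 4.0])
print(build_facets(angles, ranges))
```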

Machine vision combines structured lighting, a detector, and a computer to gather and analyze data precisely. Scanning an object with the light builds up 3-D information about its shape. This is the basic principle behind depth perception for machines, or 3-D machine vision. Used this way, structured lighting is sometimes described as active triangulation.

Structured light is the projection of a light pattern (a plane, grid, or more complex shape) at a known angle onto an object. Although other types of light can be used for structured lighting, laser light is the best choice when precision and reliability are important. This technique can be very useful for imaging and acquiring dimensional information. The most commonly used pattern is generated by fanning a light beam out into a sheet of light. When the sheet of light intersects an object, a bright line of light can be seen on the surface of the object. By viewing this line of light from an angle, the observed distortions in the line can be translated into height and/or distance variations.
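The geometry can be made concrete with a small worked example. Assuming the simplest configuration, a light sheet projected parallel to the camera's optical axis and offset sideways by a known baseline, the stripe's horizontal offset in the image maps directly to depth via Z = f·b/x. The numbers and function below are illustrative assumptions, not a description of the GeckoImager's optics.

```python
import numpy as np

def depth_from_stripe(pixel_offset, focal_px, baseline_m):
    """Active triangulation for a sheet of light projected parallel to
    the camera's optical axis, offset sideways by `baseline_m`.

    A surface point at depth Z images the stripe at a horizontal
    offset x = f * b / Z from the principal point, so Z = f * b / x.
    Larger offsets mean closer surfaces; the distortion of the stripe
    across image rows therefore encodes the height/distance profile.
    """
    pixel_offset = np.asarray(pixel_offset, dtype=float)
    return focal_px * baseline_m / pixel_offset

# Assumed camera/laser geometry (illustrative numbers only).
focal_px = 600.0      # focal length in pixels
baseline_m = 0.10     # laser-to-camera separation in meters
observed_offsets = np.array([40.0, 30.0, 24.0, 20.0])  # stripe offsets per row

print(depth_from_stripe(observed_offsets, focal_px, baseline_m))
# -> depths of 1.5 m, 2.0 m, 2.5 m, 3.0 m
```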

GeckoSystems employs proprietary sensor fusion technologies not only in its flagship automatic self-navigation software, GeckoNav™, but also in GeckoTrak™, GeckoSPIO™, and GeckoOrient™.

Technically speaking, the addition of the GeckoImager allows the system to process over 2,000 times more raw data than the previous CompoundedSensorArray (CSA) system. GeckoImager provides an abstraction layer between this massive data stream and GeckoNav, delivering eight times as much data to GeckoNav as the CSA did, with better quality and reliability. Despite this increase in data flowing into GeckoNav, the abstraction provided by GeckoImager results in a reduction, not an increase, in GeckoNav's overall processing load.
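As a rough illustration of how such an abstraction layer can cut the navigator's processing load, the sketch below reduces a hypothetical raw depth frame to a compact per-sector summary: dozens of values per update instead of hundreds of thousands. The function name, frame size, and sector count are assumptions for the example only, not the GeckoImager's actual interface.

```python
import numpy as np

def summarize_frame(depth_frame_m, num_sectors=70, max_range_m=5.0):
    """Abstract a raw depth frame into a compact per-sector summary.

    Instead of handing the navigator every pixel, report only the
    nearest range in each horizontal sector, which is typically all a
    reactive obstacle-avoidance layer needs for its decisions.
    """
    h, w = depth_frame_m.shape
    sectors = np.array_split(np.arange(w), num_sectors)   # group image columns
    nearest = np.array([
        np.clip(depth_frame_m[:, cols].min(), 0.0, max_range_m)
        for cols in sectors
    ])
    return nearest

# Illustrative raw frame: a 480x640 depth image with values in meters.
rng = np.random.default_rng(0)
raw = rng.uniform(0.5, 5.0, size=(480, 640))
summary = summarize_frame(raw)

print("raw values per frame:    ", raw.size)      # 307,200 numbers
print("summary values per frame:", summary.size)  # 70 numbers
```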

Consequently, the new GeckoImager provides far more data than can reasonably be collected with fixed sensors, and at a much lower cost than the scanning laser rangefinding systems frequently used for this purpose.