Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a new system that could equip robots with something we take for granted: the ability to link multiple senses together.
The new system involves a predictive AI that can learn to see using its ‘sense’ of touch, and vice versa. That might sound confusing, but it mimics something people do every day: looking at a surface, object or material and anticipating how it will feel once touched, i.e. whether it’ll be soft, rough, squishy, etc.
The system can also take tactile, touch-based input and translate it into a prediction about what the object looks like – kind of like those kids’ discovery museums where you put your hands into closed boxes and try to identify the objects inside.
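To make the idea of cross-modal prediction concrete, here is a deliberately minimal sketch in Python. It is not the CSAIL system (which learns from real image and tactile-sensor data with deep generative models); it just illustrates the core pattern of learning paired mappings in both directions, vision-to-touch and touch-to-vision, from synthetic feature vectors. All names, dimensions, and data here are hypothetical.

```python
import numpy as np

# Illustrative only: learn two cross-modal maps (vision -> touch and
# touch -> vision) from a toy paired dataset via least squares.
rng = np.random.default_rng(0)

# Hypothetical paired observations: 200 samples with 8-dim "visual"
# features and 4-dim "tactile" features, related by an unknown linear
# map plus a little noise.
visual = rng.normal(size=(200, 8))
true_map = rng.normal(size=(8, 4))
tactile = visual @ true_map + 0.01 * rng.normal(size=(200, 4))

# Vision -> touch: predict tactile features from visual ones.
W_v2t, *_ = np.linalg.lstsq(visual, tactile, rcond=None)
# Touch -> vision: the reverse prediction, learned the same way.
W_t2v, *_ = np.linalg.lstsq(tactile, visual, rcond=None)

pred_tactile = visual @ W_v2t
mse = float(np.mean((pred_tactile - tactile) ** 2))
print(f"vision->touch prediction MSE: {mse:.5f}")
```

In the real system the two directions are far from linear, but the structure is the same: paired sensory data lets each modality serve as a training signal for predicting the other.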
These examples may not make it obvious why this is actually useful to build, but an example provided by CSAIL should. The research team used the system with a robot arm, helping it anticipate where an object would be without seeing it, and then recognize the object by touch. You can imagine this being useful when a robot appendage reaches for a switch, a lever or a part it intends to pick up, letting it verify that it has the right thing – and not, for example, the hand of a human operator it’s working alongside.
This type of AI could also help robots operate more efficiently and effectively in low-light environments without requiring advanced sensors, and it could serve as a component of more general systems when combined with other sensory technologies.
This post was originally published at http://feedproxy.google.com/~r/Techcrunch/~3/GXi0tCd-yEI/.