With all the recent advances in robotics, we knew it would not be long before we were introduced to robots that could see through walls. That time is now here. A team at the University of California, Santa Barbara, has designed robots that can see through solid walls using nothing more than Wi-Fi signals.
With an enormous range of potential applications in detection, surveillance, search and rescue, and archaeology, these robots can not only identify the position of an object on the other side of a wall, but also outline unseen objects within a scanned structure and categorize their composition as flesh, timber, or metal.
Working in pairs, the robots travel along the perimeter of the structure under consideration and alternately transmit and receive Wi-Fi signals between each other, letting the signals pass through the object being scanned. The differences between transmitted and received signal strengths allow the hidden object to be recognized. The system uses a wave-propagation model with a 2 cm (0.8 in) target resolution. By carefully measuring the received field strength of the wireless transmissions, the robots generate an accurate map of the structure, detailing the locations of solid objects and empty spaces.
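The wave-propagation model itself is not detailed here, but the underlying idea of attributing each transmission's measured signal loss to the grid cells along the straight transmitter-to-receiver path can be sketched roughly as follows (the function names and the uniform-loss assumption are illustrative, not the team's actual model):

```python
def cells_on_line(x0, y0, x1, y1, n=50):
    """Grid cells sampled along the straight TX -> RX path."""
    pts = set()
    for i in range(n + 1):
        t = i / n
        pts.add((int(round(x0 + t * (x1 - x0))),
                 int(round(y0 + t * (y1 - y0)))))
    return pts

def backproject(measurements, grid_w, grid_h):
    """Spread each measured loss (dB) evenly over the cells its path crosses.

    measurements: list of ((tx_x, tx_y), (rx_x, rx_y), loss_db) tuples,
    one per transmit/receive exchange between the two robots.
    Returns a grid_h x grid_w map; higher values suggest solid material.
    """
    grid = [[0.0] * grid_w for _ in range(grid_h)]
    hits = [[0] * grid_w for _ in range(grid_h)]
    for (tx, ty), (rx, ry), loss in measurements:
        path = cells_on_line(tx, ty, rx, ry)
        for cx, cy in path:
            if 0 <= cx < grid_w and 0 <= cy < grid_h:
                grid[cy][cx] += loss / len(path)
                hits[cy][cx] += 1
    # Average over however many paths crossed each cell.
    for y in range(grid_h):
        for x in range(grid_w):
            if hits[y][x]:
                grid[y][x] /= hits[y][x]
    return grid
```

As the robots circle the structure, transmissions that pass through a wall or object lose more power than those through empty space, so the corresponding cells accumulate higher values in the map.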
Although the concept of seeing through concrete isn't entirely original in robotics, this is by far the most efficient of such machines. The Cougar20-H surveillance robot achieved a similar goal a few years ago, but it relied on a number of GHz-range, high-power radio sensor arrays that were essentially intricate radar systems. Another attempt came from MIT, where a fixed Wi-Fi system was developed to detect movement behind walls using a Wi-Fi transmitter and receiver. Its resolution, however, was too low for the system to do anything more than detect movement, let alone categorize and identify objects.
The UCSB robots are different because they rely solely on Wi-Fi radio transmissions and their interpretation, which suggests that post-capture computation and signal processing are essential to their X-ray vision capabilities. It is worth noting that these signals have lower dynamic range and lower power than those of the high-powered arrays, yet the approach still works. The team attributes this to the total variation, wavelet, and spatial-domain filters applied in their processing and receiving equipment, as well as to a SLAM algorithm used in their on-the-fly mapping computations.
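The team's exact filtering pipeline is not spelled out, but a minimal sketch of one of the named ingredients, wavelet denoising via soft thresholding (here a one-level Haar transform, with illustrative parameter values), might look like this:

```python
def soft_threshold(v, t):
    """Shrink v toward zero by t; small coefficients become exactly zero."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def haar_denoise(signal, thresh=0.2):
    """One-level Haar wavelet denoising of a 1-D signal.

    Splits the signal into pairwise averages (coarse shape) and pairwise
    differences (fine detail, mostly noise), soft-thresholds the details,
    and reconstructs the signal.
    """
    n = len(signal) - (len(signal) % 2)  # drop a trailing odd sample
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, n, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, n, 2)]
    detail = [soft_threshold(d, thresh) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out
```

Suppressing small detail coefficients while keeping the coarse averages smooths out measurement noise without erasing the sharp transitions, such as a wall edge, that the mapping depends on.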
To map the object and the surrounding area spatially, each robot determines its own position and that of its partner from the distance travelled and its speed, using a gyroscope and a wheel encoder for positioning. Although this would seem to make precise measurement even more tedious, the system is easier to picture as a medical imaging setup, since the principle of moving in a concerted, parallel fashion is similar: like an MRI scanner, the transmitter and receiver work in unison. A parabolic antenna mounted on each robot allows for sufficiently high image resolution.
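As a rough illustration of this dead-reckoning step (the function and parameter names are hypothetical, not taken from the team's software), each pose update might combine the two sensors like so:

```python
import math

def update_pose(x, y, heading, dist, dtheta):
    """Advance a robot's pose by one dead-reckoning step.

    dist   -- distance travelled since the last update (wheel encoder)
    dtheta -- heading change over the same interval, in radians (gyroscope)
    Returns the new (x, y, heading).
    """
    heading += dtheta
    x += dist * math.cos(heading)
    y += dist * math.sin(heading)
    return x, y, heading
```

Driving 1 m east, then turning 90 degrees left and driving 1 m north, would leave the robot at roughly (1, 1). Because each robot knows both poses, every Wi-Fi measurement can be tagged with the transmitter and receiver positions that the mapping step needs.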
In terms of potential applications for this technology, the UCSB team sees search and rescue as the standout. More specifically, they envision these Wi-Fi-enabled robots searching through rubble in the aftermath of disasters such as earthquakes to look for survivors.
The team is also excited about using the robots at archaeological sites, where they could help detect and classify objects behind walls without damaging, or in some cases even removing, them. Such non-invasive mapping prior to digging could prove enormously helpful to archaeologists.
The researchers also suggest using this Wi-Fi scanning technique in building detection systems to warn of intruders who might be out of range of conventional surveillance sensors or infrared detectors. Preliminary body scans for health monitoring could be another use, if the technique were incorporated into handheld scanning devices.
The team further intends to explore other imaging applications for the technology, as well as the possibility of incorporating laser guidance, which would enhance spatial accuracy and improve the resolution of the captured image maps.
This might be what the future was supposed to look like.