Wired reports that Google’s new privacy-invading technology is capable of reading your body language without the use of cameras. One Google designer said, “We’re just pushing the boundaries of what we perceive as possible for human-computer interaction.”
According to the report, the technology uses radar to read users’ body language and then takes action based on what it detects. Google’s Advanced Technology and Projects (ATAP) division has spent more than a year researching how radar could help computers learn about people from their movements and respond to them.
This is not Google’s first use of radar. In 2015, the company unveiled Soli, a sensor that uses radar’s electromagnetic waves to pick up gestures and movements. The technology first shipped in the Pixel 4 smartphone, where it detected hand gestures so users could snooze alarms or pause music without touching the device.
The Soli sensor is now the basis for further research: ATAP is investigating whether input from radar sensors could be used to control a computer. Leonardo Giusti, ATAP’s head of design, said: “We believe that as technology becomes more present in our lives, it is fair to ask technology to take a few more cues from us.”
Much of this work draws on proxemics, the study of how people use the space around them to mediate social interactions. Moving closer to someone, for example, signals growing intimacy and engagement.
Radar can detect when you move closer to a computer and enter its personal space. The machine might then choose to perform certain actions, such as waking the screen, without you pressing a button. Google’s Nest Hub already does something similar, using ultrasonic sound waves rather than radar to gauge a person’s distance from the smart display. If the Nest Hub detects that you are moving closer, it highlights current reminders, calendar events and other important notifications.
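To make the idea concrete, here is a minimal sketch of proximity-triggered behavior of the kind described above. It is purely illustrative: the distance thresholds, the `DisplayState` type and the simulated readings are assumptions for the example, not details from Google’s or the Nest Hub’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical thresholds; the real distances Google uses are not public.
APPROACH_THRESHOLD_M = 1.2  # closer than this counts as entering the device's "personal space"
LEAVE_THRESHOLD_M = 1.8     # farther than this counts as leaving it (hysteresis avoids flicker)


@dataclass
class DisplayState:
    awake: bool = False


def update_display(state: DisplayState, distance_m: float) -> DisplayState:
    """Wake the screen when a person approaches; sleep it when they move away."""
    if not state.awake and distance_m < APPROACH_THRESHOLD_M:
        state.awake = True
        print(f"{distance_m:.1f} m away: waking screen, showing reminders")
    elif state.awake and distance_m > LEAVE_THRESHOLD_M:
        state.awake = False
        print(f"{distance_m:.1f} m away: sleeping screen")
    return state


if __name__ == "__main__":
    state = DisplayState()
    # Simulated distance readings as a person walks toward, then away from, the device.
    for reading in [3.0, 2.4, 1.6, 1.1, 0.9, 1.5, 2.0, 2.6]:
        update_display(state, reading)
```

The two separate thresholds mean the screen does not flicker on and off when someone hovers right at the boundary.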
Proximity alone is not enough, though; you might simply walk past the machine while looking in a completely different direction. To handle that, Soli runs its raw radar data through machine learning algorithms that refine it and capture subtleties in movement and gesture. This richer radar information lets Soli better predict whether you are about to engage with the device and what type of interaction it might be.
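The following sketch shows one way such cues could be combined to separate “walking up to the device” from “walking past it.” The feature names, the scoring formula and the example numbers are all assumptions for illustration; they are not Soli’s API or Google’s model, which relies on learned rather than hand-written rules.

```python
import math
from dataclasses import dataclass


@dataclass
class RadarFrame:
    """Features one might derive from radar returns (illustrative names only)."""
    distance_m: float          # how far the person is from the device
    approach_speed_mps: float  # positive when moving toward the device
    facing_angle_deg: float    # 0 means squarely facing the device


def engagement_score(frame: RadarFrame) -> float:
    """Combine cues into a rough 0..1 likelihood that the person is about to engage."""
    nearness = max(0.0, 1.0 - frame.distance_m / 3.0)                        # closer scores higher
    approaching = 1.0 / (1.0 + math.exp(-4.0 * frame.approach_speed_mps))    # moving toward the device
    facing = max(0.0, math.cos(math.radians(frame.facing_angle_deg)))        # oriented toward the device
    return nearness * approaching * facing


if __name__ == "__main__":
    walking_past = RadarFrame(distance_m=1.0, approach_speed_mps=0.0, facing_angle_deg=85.0)
    walking_up = RadarFrame(distance_m=1.0, approach_speed_mps=0.6, facing_angle_deg=10.0)
    for name, frame in [("walking past", walking_past), ("walking up", walking_up)]:
        print(f"{name}: engagement score {engagement_score(frame):.2f}")
```

Even with identical distances, the person walking past scores near zero while the person approaching and facing the device scores high, which is the kind of distinction the ATAP researchers describe.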
To gather data, the team performed a series of choreographed tasks in their living rooms, with overhead cameras tracking their movements and radar sensors capturing them in real time.
Lauren Bedal, senior interaction designer at ATAP, said the researchers were able “to move in different ways and perform different variations of that movement, and then, given that this was a real-time system that we were using, we were in a position to improvise and kind of build off of our findings in real time.”