by adgrooms on July 31, 2019

It's hard to remember how we used cell phones before the iPhone came along in 2007. Now Google is about to release its new Pixel 4 with gesture control. Yes, you won't have to touch your phone to use it. Although the functionality will be limited at first to a few basic controls (unlocking the phone, swiping between songs, silencing an alarm), it really gets the imagination going about the possibilities, especially if the technology is adopted in healthcare.

The technology behind Google's motion control, called Soli, relies on a tiny microchip that emits electromagnetic waves (radar) to track the smallest movements. Unlike the sweeping motions needed for camera-based motion control, Soli will be able to detect fine-grained motions such as turning a knob or blinking an eye. Apply machine learning to that motion data and you could have a gesture-based conversation with a device.
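To make that idea concrete, here is a toy sketch of the machine learning step: turning a burst of radar motion readings into a named gesture. Everything in it is invented for illustration (the feature values, the gesture names, the choice of a simple nearest-neighbor classifier); it is not Soli's actual pipeline or API.

```python
# Toy sketch: classifying gestures from summarized radar motion features.
# All feature values and gesture labels below are fabricated for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each row summarizes a short burst of radar readings:
# [mean velocity, motion energy, duration in seconds]
training_features = np.array([
    [0.02, 0.90, 0.3],   # quick micro-motion, e.g. a finger "knob turn"
    [0.01, 0.80, 0.4],
    [0.30, 0.20, 1.0],   # slower, larger hand sweep
    [0.35, 0.30, 1.2],
])
training_labels = ["turn_knob", "turn_knob", "swipe", "swipe"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(training_features, training_labels)

# A new burst of readings arrives from the sensor; predict the gesture.
new_burst = np.array([[0.03, 0.85, 0.35]])
print(model.predict(new_burst))  # -> ['turn_knob']
```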

A hands-off EHR comes to mind immediately. By removing the need to touch a mouse or keyboard, it would give the battle against spreading infection a little help. Gesture-based navigation could also dramatically speed up cumbersome operations: a series of keystrokes and clicks to reach a specific page in a patient file could be replaced by a single gesture. Perhaps double-tapping the middle of your chest could bring up cardiology information, or a biting hand gesture could bring up gastrointestinal information. And signing something in the air couldn't make our handwriting look any worse than it does on a checkout signature pad.
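Under the hood, that kind of navigation could be as simple as a lookup from recognized gestures to EHR pages. Here is a hypothetical sketch; the gesture names and page identifiers are invented, not part of any real EHR.

```python
# Hypothetical sketch: mapping recognized gestures to EHR navigation actions.
# Gesture names and page identifiers are invented for illustration only.
GESTURE_TO_PAGE = {
    "double_tap_chest": "cardiology_summary",
    "biting_hand": "gastroenterology_summary",
    "air_signature": "signature_capture",
}

def navigate(gesture: str) -> str:
    """Return the EHR page to open for a recognized gesture."""
    # Fall back to the home dashboard when the gesture isn't recognized.
    return GESTURE_TO_PAGE.get(gesture, "home_dashboard")

print(navigate("double_tap_chest"))  # -> cardiology_summary
print(navigate("unknown_wave"))      # -> home_dashboard
```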

Another application that could help fight cross-contamination would be building gesture control into facility infrastructure. What if you could open and close doors, flip light switches, or turn on the water without touching anything? That alone would dramatically reduce the number of surfaces everyone has to touch just to move around a facility, taking the idea of the smart home toward a "sterile smart hospital".

What about diagnostics? Motion-sensing detectors could map a patient's range of motion with higher accuracy, measure chest expansion during breathing, or precisely track and record movements during sleep. Machine learning could then take those inputs and produce motion-based diagnostic analysis.
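As one rough example of what such analysis might look like, a breathing rate could in principle be estimated from a chest-displacement signal with basic peak detection. The sketch below uses a simulated signal and an assumed 20 Hz sampling rate, just to show the shape of the idea.

```python
# Rough sketch: estimating breathing rate from a chest-displacement signal,
# as a radar sensor might record it. The signal here is simulated.
import numpy as np
from scipy.signal import find_peaks

sample_rate_hz = 20                        # assumed sensor sampling rate
t = np.arange(0, 60, 1 / sample_rate_hz)   # one minute of readings
# Simulated chest displacement: roughly 15 breaths per minute plus noise.
displacement = np.sin(2 * np.pi * (15 / 60) * t) + 0.05 * np.random.randn(t.size)

# Each peak corresponds to one full inhalation; require peaks to be at
# least two seconds apart to ignore small jitters.
peaks, _ = find_peaks(displacement, distance=sample_rate_hz * 2)
print(f"Estimated breathing rate: {len(peaks)} breaths per minute")
```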

Then, of course, there are the apps that patients could use on their phones outside of the clinical setting. We will save that topic for another post. Where do you see motion sensing bringing innovation and efficiency to the clinical environment?