AI and wearables
Today, wearables powered by AI continue to flourish as tools that enhance our quality of life. Whether it's a smartphone that pops up directions as you start your daily drive or an exercise ring that prompts you to stand at least once an hour, these devices are just that: pervasive, a persistent and powerful presence in our lives.
Wearables, devices that are seamlessly integrated into clothing, worn on the body, or even implanted in the body, are increasingly being used to enhance our quality of life and our productivity.
In the home environment, they can help us keep track of healthy habits, remind us about our medication doses or daily tasks, and even keep us honest about those pesky birthdays.
In the work environment, wearables can enhance productivity, whether by using data to nudge workers toward good behavior in order to increase their happiness or by monitoring their keystrokes to enhance computer security. In fact, a number of companies are even using wearables to track fatigue on the job.
Wearables, though, aren't just for adults; they also have the potential to help parents check on the safety of their children, especially during sleep time. A number of companies are already exploring wearable applications for children. Prior research, including ours, has even designed wearables for pre-term and at-risk infants that track their kicking movement patterns to enable early detection of a disability.
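To make the idea of tracking kicking movement patterns concrete, here is a minimal sketch of how a wearable might count candidate kicks from accelerometer readings. The sample rate, threshold, and data are illustrative assumptions, not details of any actual research device.

```python
import math

# Hypothetical accelerometer samples (x, y, z) in units of g.
# The threshold below is an illustrative assumption, not a real calibration.
KICK_THRESHOLD_G = 1.5  # magnitude above resting (~1 g) that counts as a kick


def count_kicks(samples):
    """Count upward threshold crossings in acceleration magnitude as kicks."""
    kicks = 0
    above = False
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude > KICK_THRESHOLD_G and not above:
            kicks += 1  # new crossing: register one candidate kick
            above = True
        elif magnitude <= KICK_THRESHOLD_G:
            above = False  # back near rest; ready for the next kick
    return kicks


# Two simulated kicks amid resting (~1 g) readings.
readings = [(0, 0, 1.0), (0.5, 0.2, 1.8), (0, 0, 1.0),
            (0, 0, 0.98), (1.2, 0.3, 1.6), (0, 0, 1.0)]
print(count_kicks(readings))  # 2
```

A real system would filter noise and learn movement patterns from data rather than use a fixed threshold, but the pipeline, from raw sensor signal to a clinically meaningful count, is the same in spirit.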
Although the use of AI in the domain of wearables continues to gain momentum, one challenge the industry is still struggling with is security. Wearables are a form of IoT (Internet of Things), physical devices connected to the Internet, and are known to have security vulnerabilities. Wearables collect massive amounts of data so that AI algorithms can personalize their interaction with users. That same information can also enable tracking and access to the user's connected profile, whether a banking account, a home security system, or their physical location. And most wearable applications, although optimized for personalization, are not optimized for privacy or safety.
Another exciting AI technology is the emergence of AI systems that can not only recognize human emotions but also emote in an equivalent, socially appropriate manner. Emotional AI, also known as affective computing, attempts to sense, understand, and react to human emotions.
Whether it's a customer service chatbot, a virtual therapist, or a math tutor, advancements in facial analysis, language understanding, voice recognition, and sentiment analysis have given AI the ability to decode human emotions at a level that is uncanny. By analyzing large amounts of data, AI has become very good at recognizing whether slight voice inflections reveal signs of stress or whether facial micro-expressions are masking a customer's exasperation.
By carefully deciphering the human’s emotional state, AI can enhance the interaction in order to keep users engaged in the task at hand. Of course, given that these AI systems are learning from human data, the challenge is to develop appropriate techniques to mitigate algorithmic biases that might be hidden in the system.
There are many other opportunities for using AI in our daily activities. The two aforementioned technologies represent only a small segment of the possibilities. As AI continues to operate in our human-centered world, adapting to our nuances, it can positively enhance our lives, as long as we also address its challenges, including resolving issues of privacy, security, and bias, to name a few.
Ayanna Howard, Ph.D. is the Linda J. and Mark C. Smith Professor and Chair of the School of Interactive Computing at the Georgia Institute of Technology. She is also the Chief Technology Officer of Zyrobotics. Dr. Howard’s career focus is on intelligent technologies that must adapt to and function within a human-centered world. Her work encompasses advancements in artificial intelligence (AI), assistive technologies, and robotics. Dr. Howard recently spoke at the IEEE Vision, Innovation, and Challenges Summit on ethics in AI.