The quest to crack the machine learning code has been a dream of scientists since the invention of the computer. In 1950 the Turing Test captivated the public’s imagination with a question that likely seemed more based in science fiction than reality: could a computer ever match human intelligence?
We’ve come a long way since the technology of the 1950s, but the goal for machine learning is essentially the same. And we are now living in an age when machine learning is bringing to life the kinds of solutions that were previously relegated to sci-fi.
Computers that can learn and adapt autonomously have fast become the new gold standard. And the startups leading the way into our AI future are answering questions even more interesting than whether androids dream of electric sheep. One of the most exciting industries ripe for an AI makeover is healthcare. Machine learning has the power to revolutionize how we detect and treat illness, and also how we approach patient care. Here are 5 machine learning startups in the healthcare sector to keep on your radar:
1. ID Avatars
ID Avatars recently raised $1M in funding to build emotionally intelligent avatars aimed at improving care for people with chronic diseases. By having avatars interact with patients directly, this startup hopes to lead the charge in creating technology capable of providing empathy. That would have a huge ripple effect across the healthcare industry and would be as useful to hospitals and pharmaceutical companies as to patients.
2. Arterys
The mission of Arterys is to make clinical care data-driven with a machine learning-based imaging platform. They have developed the first 4D Flow technology to measure blood flow more accurately, and less invasively, than current methods, and it works with any MRI. The tech is integrated with a SaaS platform so doctors can better interpret data on the go, and the potential applications for non-invasive blood analytics span all kinds of medical needs.
3. Babylon
London-based startup Babylon has raised more than $30M, and it’s easy to see why: the idea is simple but brilliant. Remote, app-based medical care. In other words, Babylon is your virtual doctor, nurse, and pharmacy in one. Imagine how much more streamlined emergency and on-demand care could be through app-based consultations. It’s already the highest-rated service in UK healthcare, and it aims to expand to areas where healthcare is even more essential, like rural Africa, where doctors are scarce but cell phones are becoming commonplace.
4. Nuritas
Utilizing machine learning to create custom diet plans, Nuritas will be a game-changer in preventative health and overall wellness. Based on AI plus DNA analysis, Nuritas is designed to identify the healthiest ingredients specific to your dietary needs. The future will be full of bioactive peptides.
5. Ginger.io
Developed by MIT scientists, Ginger.io uses predictive models to create a mental healthcare platform. This is a useful alternative to traditional mental health care, which can be prohibitively expensive and, frankly, not that convenient for the modern world. App users can arrange a video call with a therapist, text with a health coach, learn coping strategies, and analyze their mood over time with the help of embedded sensors. This is mental health care for the 21st century.
Sure, it’s convenient to have voice-controlled bots that can turn on our music or dictate emails, but the real dream for artificial intelligence has always been to create an emotionally intelligent machine. Only when A.I. tech can accurately read human emotions will we truly see the full potential for this technology to change our lives in fundamental ways.
We may be on the cusp of major A.I. breakthroughs, but culturally we have been fascinated by its possibilities at least since 1872, when Samuel Butler published the satire Erewhon to popular acclaim. “There is no security against the ultimate development of mechanical consciousness,” he wrote. And while A.I. has functioned as a sci-fi trope for good versus evil and human versus machine ever since, it’s only become more relevant as our technology has become more sophisticated. The ethics of A.I. are beginning to pose real and important questions now that our technology is catching up to our imagination.
It’s this convergence of the real and the impossible that makes A.I. so gripping. Take the instant popularity of the new HBO hit Westworld, which poses the creepy questions we’ll soon have to ask ourselves in real life. Questions like, can we empathize with robots? What nuances make a robot seem human? And have we reached the ultimate development of mechanical consciousness that Samuel Butler warned about more than a century ago?
In many ways, A.I. technology is still catching up to the sci-fi realities presented in shows like Westworld. But thanks to the new convergence of A.I. and emotion recognition software, we are closer than ever before. These three startups are at the forefront of bringing emotionally intelligent machines to life, stepping irrevocably across a line that has always separated fiction and reality.
By combining emotion research with big data and machine learning, Affectiva emerged from MIT’s Media Lab to lead the field in making software that can accurately read a wide range of facial expressions. Affectiva claims to have the world’s largest emotion database to date, with more than 4.5 million faces across 75 countries analyzed. Of the more than 30 patents it has filed, Affectiva has secured seven so far.
Think about it: virtually any app on your phone could be enhanced by Affectiva’s analytics, from dating apps that could read your interest not by a swipe but a smile, to games that self-adjust based on the player’s level of engagement. Thanks to Affectiva, we’ll become even more addicted to our phones. The company is also working on software that can read emotion through voice, which would have a host of practical applications, like better customer service interfaces. For now the technology is still making the rounds on the tech conference circuit, but we can expect to see Affectiva partner with advertisers, app developers, and more. After all, it’s all about the user experience and creating technology that feels even more personalized and indispensable.
Lightwave is an “applied neuroscience platform” that promises golden data to marketers keen to understand audience engagement at peak moments in sporting events, movies, and more. It measures that engagement by tracking real-time biometric data from wearable wristbands embedded with sensors. Created by 29-year-old former DJ Rana June, Lightwave grew out of her frustration at not being able to read her audience’s engagement in real time. Major companies including Pepsi and Unilever have already utilized Lightwave to measure fan engagement at events like the NCAA basketball championship. 20th Century Fox also used Lightwave to track audience reaction to specific moments throughout The Revenant, providing much more precise feedback for the movie industry than the self-reported metrics that were previously the benchmark for audience engagement.
Earlier this year Apple bought startup Emotient in a bid to take the lead in the emotion recognition race. Emotient uses facial recognition technology to read feelings. While we’re still waiting to see how Apple will integrate Emotient’s technology, it could be used to help advertisers read consumer reactions, or to help doctors interpret pain levels in patients. Emotient can accurately read different emotional responses based on faces in a large crowd of people, which could provide valuable feedback for marketing insights in a number of different fields.
Reading The Future of Feelings
Affectiva, Lightwave, and Emotient are just a few of the startups trying to solve the puzzle of how big data, artificial intelligence, and facial recognition algorithms can accurately interpret human emotion. The winning approach will likely combine facial recognition software with biometric sensors for more nuanced data feedback. It’s not hard to see how this would transform marketing, healthcare, and other industries. Once this technology goes mainstream, we’ll be one step closer to stepping over that invisible line that has long separated sci-fi’s version of A.I. from the very real future we are moving into.
Google has been on top of cutting-edge technology since its now-ubiquitous search engine rose to prominence. You may or may not have noticed, but the search engine has gotten smarter: it learns from your behavior and alters your results based on search trends and location.
Essentially, Google is the king of algorithms. And guess what? Its machine learning algorithms can belong to your business, too. Google recently opened its Cloud Machine Learning platform to all businesses in public beta, allowing them to train their own models on Google’s system far faster: just a few hours compared to days or more.
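To make “training a model” concrete, here’s a minimal sketch in plain Python. This is not Google’s actual API, and the features, labels, and thresholds are invented for illustration; it just shows the train-then-predict loop that cloud platforms run at much larger scale on managed hardware.

```python
# A toy "train your own model" example: a one-nearest-neighbor
# classifier that labels support messages as urgent or routine.
# Features (message length, exclamation marks) and labels are invented.

def predict(train_points, train_labels, query):
    """Return the label of the training point closest to `query`."""
    distances = [
        sum((a - b) ** 2 for a, b in zip(point, query))
        for point in train_points
    ]
    return train_labels[distances.index(min(distances))]

# "Training" here is simply storing labeled examples.
X = [(20, 0), (15, 1), (200, 4), (180, 5), (30, 0), (250, 6)]
y = ["routine", "routine", "urgent", "urgent", "routine", "urgent"]

# Predict on a new, unseen message
print(predict(X, y, (220, 5)))  # -> urgent
```

The point isn’t the algorithm (real services use far more sophisticated models); it’s that the workflow — feed in labeled examples, then ask for predictions on new data — is the same one a business would run, much faster, on Google’s infrastructure.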
Google, now part of conglomerate umbrella Alphabet, is much more than a search engine now, so this service has little to do with your daily queries. Still, it encompasses what we’ve come to expect from Google: speed, intelligence, and constant improvement. That is machine learning in a nutshell, too.
So, how can businesses utilize Google’s Cloud Machine Learning? Take one example for starters: Airbus Defense and Space. The company used the system to automate the process of detecting and correcting imperfections in satellite images, such as cloud formations.
According to Mathias Ortner, the company’s Data Analysis and Image Processing Lead, “Google Cloud Machine Learning enabled us to improve the accuracy and speed at which we analyze the images captured from our satellites. It solved a problem that has existed for decades.”
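Airbus’s actual model isn’t public, but a deliberately simplified sketch of the detect-and-correct idea might look like this. The pixel values, the fixed brightness threshold, and the “cloud-free backup pass” are all invented; a production system would use a learned model rather than a hand-set cutoff.

```python
# Toy "detect and correct" for satellite imagery: flag very bright
# pixels as likely cloud, then fill them in from a cloud-free pass
# of the same scene.
CLOUD_THRESHOLD = 0.8  # invented cutoff; a real system learns this

scene = [
    [0.20, 0.30, 0.95],
    [0.25, 0.90, 0.30],
    [0.20, 0.30, 0.28],
]
backup_pass = [[0.30] * 3 for _ in range(3)]  # earlier cloud-free imagery

corrected = [
    [backup if pixel > CLOUD_THRESHOLD else pixel
     for pixel, backup in zip(scene_row, backup_row)]
    for scene_row, backup_row in zip(scene, backup_pass)
]
print(corrected[0])  # the bright 0.95 pixel is replaced by the backup value
```

The machine learning part is in deciding what counts as “cloud”: instead of one threshold, a trained model learns that distinction from millions of labeled pixels, which is exactly the kind of job a managed platform speeds up.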
More problems can be solved with machine learning — this only scratches the surface. Google has launched two separate services: their Machine Learning Advanced Solutions Lab, which allows businesses to work with Google engineers to solve complex problems, and the Cloud Start program, which offers educational workshops to teach them the fundamentals.
Considering it was Google’s engineers who worked with Niantic before the launch of Pokemon Go, an opportunity to work with the tech giant could definitely be transformative for many. With more businesses in the machine learning game, more customers across the board will benefit from smarter services geared to get even smarter over time.
Thanks to the Internet of Things, we’re awash in more data than we know what to do with. Data from our cars, our watches, our toothbrushes — collecting information is the easy part, but handling it can be complicated. Machine learning can make sense of data, and the implications could help IoT — and wearables specifically — really take off.
When machines are designed to recognize patterns and update their algorithms accordingly, they can make wearable technology more useful for consumers. An article on Wareable.com explains some of the opportunities machine learning already presents for technology companies and their wearable products.
Take Google as one example. Android Wear’s “Google Now” app is getting a new feature called “Now On Tap” that uses machine learning to provide contextual suggestions without any prompting. Between search, email, text, location, calendar, and apps, the amount of data that can be mined is huge, and the software can learn a lot about you and what you need at any given moment. It might suggest movie times and trailers when a friend texts you about seeing a film, or quick lunch spots based on your physical location and schedule.
Apple is working on similar technology through which the Apple Watch will recommend more and more relevant apps based on user data. Machine learning could also empower Siri to make informed, proactive suggestions.
Since fitness trackers are the most popular form of wearable, machine learning is influencing the efficacy of health software as well. The Microsoft Band will use the company’s Intelligence Engine to learn how your daily activities influence your exercise routine. The system could, for example, determine whether a high number of meetings correlates with a slower run, or less sleep.
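The kind of meetings-versus-run-pace relationship the Intelligence Engine might look for can be sketched in a few lines of Python. The daily numbers below are made up purely to illustrate the idea:

```python
# Toy version of the correlation idea: do days with more meetings
# line up with slower runs? All data is invented for illustration.
import math

meetings = [1, 2, 5, 6, 3, 8, 0, 7]              # meetings per day
pace = [5.0, 5.1, 5.6, 5.8, 5.2, 6.1, 4.9, 5.9]  # run pace, min/km

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(meetings, pace)
print(round(r, 2))  # close to +1: more meetings track with slower runs
```

A real engine would of course look at many more signals than two, and correlation alone doesn’t prove the meetings cause the slower runs — but surfacing patterns like this is the useful first step.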
On the healthcare end, wearables with sensors can inform medical professionals of patient data such as air quality, humidity, step counts, and how often a patient opens the fridge or uses the bathroom. This provides a more comprehensive picture of a patient’s wellness.
Wearables can also track and learn about wearers’ emotions, body movements, fertility, and medication compliance.
All of this together may seem amazing and terrifying at the same time. I see it as an inevitability of technology: we have the data, so it follows that we’ll program the technology to take advantage of it. As long as it’s used with the user’s consent and to the user’s advantage, it has the potential to benefit all of mankind.