Among the most intriguing innovations in wearable technology is high-tech clothing. Will we all be decked out in smart clothes from head to toe in the future? Will our shirts and sneakers be collecting data and making suggestions? Some people think so, and it could be athletes who are the first to roll up their sensored sleeves and get down to business.
Companies are already designing high-tech clothing for athletes. For example, take “e-skin” by the wearable tech company Xenoma. This smart shirt tracks gestures and makes suggestions on form to athletes.
How does it work? E-skin is made with Printed Circuit Fabric, which has stretchable sensors and wires embedded into the textile during manufacturing. A centralized “hub” sits in the middle of the shirt, able to transmit data to a smartphone, tablet, or other devices.
Xenoma offers an e-skin software development kit so that developers can create apps that take advantage of e-skin’s innovative capabilities. The kit starts at $5000.
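Xenoma hasn’t published its SDK’s actual API here, so as a purely illustrative sketch, an app built on a shirt like this would consume frames of stretch-sensor readings from the hub and turn them into joint angles. Every name below (`SensorFrame`, the sensor keys, the calibration) is a hypothetical assumption, not Xenoma’s interface:

```python
# Hypothetical sketch only: Xenoma's real SDK API is not documented in this
# article, so SensorFrame and the sensor names are illustrative inventions.
from dataclasses import dataclass, field


@dataclass
class SensorFrame:
    """One reading from the shirt's hub: sensor name -> normalized stretch (0.0-1.0)."""
    stretch: dict = field(default_factory=dict)


def elbow_flexion_degrees(frame: SensorFrame, side: str = "right") -> float:
    """Map a normalized stretch value to an approximate joint angle.

    Assumes (for illustration) a linear calibration: 0.0 stretch at full
    extension (~0 degrees), 1.0 at full flexion (~150 degrees).
    """
    return frame.stretch[f"{side}_elbow"] * 150.0


frame = SensorFrame(stretch={"right_elbow": 0.5})
print(elbow_flexion_degrees(frame))  # 75.0
```

A real coaching app would run this kind of mapping across dozens of sensors per frame and compare the resulting pose against a reference form before suggesting corrections.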
One use of e-skin that you can witness on video? E-skin for golfers. The shirt analyzes the wearer’s swing, form, and stance, then provides feedback to help the athlete improve. It could do much the same for any sport.
Xenoma also showed off e-skin this January at the Consumer Electronics Show (CES), this time for its gaming capability. Able to record data at 60 frames per second, e-skin can translate your movements into digital form, animating game characters on screen.
What more do you need from e-skin? Try machine washable (check), rechargeable (check), and long-lasting (it has a four-hour battery life). Xenoma is expected to release a consumer version for around $600 by mid-2017.
For gamers, especially of the VR variety, it could be an amazing way to get an immersive and active experience out of gaming. I expect it will be even better for athletes, acting as a very personal trainer that knows your body better than you do. For the rest of us, it can remind us to breathe and relax, let us know if our posture is poor, and encourage us to live a more active lifestyle.
Every year, the Consumer Electronics Show unveils the hottest new tech products and trends, which predictably leave tech junkies drooling with anticipation. This is just as true for sports technology, which makes its debut in various iterations at CES to the delight of those like myself with interest in this growing field.
2017’s CES opened on January 5 in Las Vegas with just as much fanfare as usual, and with it a slew of amazing new products.
Fitbit, which recently acquired Pebble, unveiled its new personal trainer app. It is also partnering with nutrition app Habit, indoor training bike company Peloton, VR sports pioneer VirZOOM, and even Uber — though we may just have to wait and see what comes of these. Nonetheless, Fitbit has proven itself an industry leader in athletic wearable tech.
Another product that cropped up at CES was Athlete Recovery Sleepwear, introduced by Under Armour. Developed in partnership with Tom Brady, the sleepwear promises to help regulate body temperature and improve sleep for better daytime performance.
Other smart apparel at CES included the Pro Team Shirt, which comfortably doubles as a heart monitor and GPS tracker, a smart baseball training shirt from SwingIQ, and a smart running shoe from Sensoria and VIVOBAREFOOT.
Biometric data wearables are also proving to be a trend, and the less visible, the better. According to Sports Illustrated, a trend called “hearables” is on the rise, demonstrated by in-ear data trackers from Bodytrak, KUAI, and the Dash. These small devices are perfect for collecting internal metrics, like core body temperature, while also having the ability to play music. Fun!
All in all, it looks like another successful CES for athletes and for fans. It’s clearer than ever that as technology gets more advanced, so do the athletic prospects and fan experiences of those who adopt it.
If high stress levels tend to rule your days and you’re committed to living a healthier lifestyle, technology may be able to help you out. Cigna has unveiled the Cigna Virtual Relaxation Pod that gives you a chance to escape reality for a while — a two-minute break that works to push the reset button on your day.
The pod pairs an Oculus Rift headset and headphones with a sound-insulated enclosure for a multi-sensory experience. All you need to do is sit back, relax, breathe deeply, and choose your environment: a Zen garden, a tropical beach, or a woodland campsite. Expert-guided meditation then leads you on a two-minute sensory journey that allows you to just let go, escaping reality to settle into a state of mindfulness and relaxation.
In an interview with Medical Daily, Cigna Solution Architect Rachel Stein shares that Oculus technology helps transport the user to a relaxing environment where they can take advantage of the benefits of mindfulness therapy. The pod engages all the senses, making you aware of your current state of mind. The guided meditation helps you maintain a state of mindfulness, peace, and calm, in a relaxing setting.
The team behind the Cigna Virtual Relaxation Pod is hoping to see it installed in hospital lobbies and other places where people tend to feel tense, nervous, or anxious. Just a couple of minutes in the pod can promote a meditative state and induce natural calm and relaxation.
Would you be first in line to settle into a virtual relaxation pod? Innovative experiences like these could be the next generation of health and wellness. Virtual reality can influence the mind in ways that make significant changes to our health and well-being.
As technology becomes omnipresent in people’s lives–people go about their lives smartphone in hand–it follows that we would no longer have to carry technology, but simply slip it on. Smartwatches and fitness trackers are the first widespread wave of wearable technology, but they certainly will not be the last. These wrist devices are already gathering vast amounts of information about their wearers that can be translated into lifestyle research, which will inevitably lead to even more convenient tech accessories.
The trick with wearable technology, however, has not been ease of use, but style. The trendsetters who pioneer new fashion styles could also pave the way for wearable tech, but it will need to look the part. The contradiction is in the name: for technology to be wearable, it has to pass as fashion. But tech geeks have never been known as fashion plates, so what happens when these two worlds collide?
For decades, Apple has been the frontrunner for sleek product design, and indeed, their Apple Watch–like their phones–can now command a waiting list. However, a wearable not only needs to look good enough to be shown off, but needs to function well enough to become essential. Otherwise, why wear it in the first place? Fashion is famously ephemeral, but wearable technology cannot afford to be so short-lived. Although new models will be released, both the functionality and aesthetic need to meet certain standards for wearable tech to be fully integrated into people’s lives.
Wearable technology also offers a valuable service to athletes that could make it a functional part of sports uniforms. Major League Baseball players experimented with wearable technology this past year on a voluntary basis. Approved devices were evaluated and tested before they were allowed on the field, and rules govern the gathering and use of information. But wearable technology could greatly help athletes and their support staff monitor players’ health and performance when they are most active.
Technology isn’t going away, which means it will likely become even more integral to our daily lives. What better way to weave technology into the day-to-day than by wearing it?
It turns out that the future of space exploration may look more like Avatar than other popular science fiction depictions. Those cinematic organic blue avatars are a form of wearables–mechanical exoskeletons that allow for remote control of robots. The immediate implications of such technology include deep space exploration that would drastically reduce risk to humans, as well as ambulatory capabilities for those paralyzed or without full control of their limbs.
NASA worked for years with The Florida Institute for Human and Machine Cognition (IHMC) to build a robotic exoskeleton. Weighing in at 57 pounds, the X1 exoskeleton could both restrict and enable limb movement. The former capacity could provide valuable exercise for astronauts through muscle resistance, particularly during longer spells in space or on expeditions, replacing bulkier exercise machines. The exoskeleton can also transmit biometrics back to scientists planetside. As the technology is developed further, a machine exoskeleton could provide extra power for astronauts on the surface of a distant planet, helping them walk in low gravity, for example.
Astronauts at the International Space Station tested another form of wearable technology: a joystick developed by the European Space Agency. The METERON (Multi-Purpose End-To-End Robotic Operations Network) research collaboration between countries including the Netherlands, Germany, the United States, and Russia is interested in giving astronauts in orbit the ability to teleoperate robots on the planet’s surface. The highly sensitive joystick will simulate the force of objects on a planet’s or moon’s terrain, resisting the astronaut’s control.
Testing in the International Space Station will help METERON better understand how this kind of control–which most people experience in video games–feels for those operating in little or no gravity, with extended exposure to weightlessness. Officials from the European Space Agency explained their vision: “Future planetary exploration may well see robots on an alien surface being teleoperated by humans in orbit above them—close enough for real-time remote control, without any significant signal lag, to benefit from human resourcefulness without the expense and danger of a manned landing.”
Just a year ago, in late 2015, NASA gave MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) the R5 robot known as “Valkyrie,” which is expected to be part of Mars missions. These humanoid robots can pave the way for astronauts’ arrival, reducing human risk, and stick around to assist astronauts on missions. “Human-robotic collaboration” is seen as critical for longer missions like those to Mars.
Most recently, DFKI spearheaded the CAPIO project, which produced an exoskeleton that remotely steers a robot called AILA. This project improved on the fine motor skills of the robotic avatar: AILA can even send sensory feedback to the wearer of the CAPIO exoskeleton. The ideal test would involve sending AILA to the International Space Station and testing the remote connection from the planet’s surface.
All this research, development, and investment indicates that wearables are seen as the way forward for space exploration, at least until space travel can be made safer for humans. But even then, wearables will likely remain an invaluable resource for exploring the true unknown, as we venture deeper and farther into space.
Sure, it’s convenient to have voice-controlled bots that can turn on our music or dictate emails, but the real dream for artificial intelligence has always been to create an emotionally intelligent machine. Only when A.I. tech can accurately read human emotions will we truly see the full potential for this technology to change our lives in fundamental ways.
We may be on the cusp of major A.I. breakthroughs, but culturally we have been fascinated by its possibilities at least since 1872, when Samuel Butler published the satire Erewhon to popular acclaim. “There is no security against the ultimate development of mechanical consciousness,” he wrote. And while A.I. has functioned as a sci-fi trope for good versus evil and human versus machine ever since, it’s only become more relevant as our technology has become more sophisticated. The ethics of A.I. are beginning to pose real and important questions now that our technology is catching up to our imagination.
It’s this convergence of the real and the impossible that makes A.I. so gripping. Take the instant popularity of the new HBO hit Westworld, which poses the creepy questions we’ll soon have to ask ourselves in real life. Questions like, can we empathize with robots? What nuances make a robot seem human? And have we reached the ultimate development of mechanical consciousness that Samuel Butler warned about more than a century ago?
In many ways, A.I. technology is still catching up to the sci-fi realities presented in shows like Westworld. But thanks to the new convergence of A.I. and emotion recognition software, we are closer than ever before. These three startups are on the forefront of bringing emotionally intelligent machines to life, stepping irrevocably across a line that has always separated fiction and reality.
By combining emotion research with big data and machine learning, Affectiva emerged from MIT’s Media Lab to lead the field in making software that can accurately read a wide range of facial expressions. Affectiva claims to have the world’s largest emotion database to date, with more than 4.5 million faces across 75 countries analyzed. Of the more than 30 patents it has filed, Affectiva has secured seven so far.
Think about it: virtually any app on your phone could be enhanced by Affectiva’s analytics, from dating apps that could read your interest not by a swipe but a smile, to games that self-adjust based on the player’s level of engagement. Thanks to Affectiva, we’ll become even more addicted to our phones. The company is also working on software that can read emotion through voice, which would have a host of practical applications like better customer service interfaces. For now the technology is still making the rounds on the tech conference circuit, but we can expect to see Affectiva partner with advertisers, app developers, and more. After all, it’s all about the user experience and creating technology that feels even more personalized and indispensable.
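To make the “self-adjusting game” idea concrete, here is a toy sketch of the two pieces involved: picking the dominant emotion from a set of confidence scores, and nudging difficulty up or down based on engagement. This is illustrative only; Affectiva’s real SDK, models, and thresholds are not shown here, and all the numbers are made up:

```python
# Illustrative sketch only. Emotion-recognition systems typically output
# per-emotion confidence scores; an app then reacts to those scores.
# Affectiva's actual API and any real thresholds are assumptions here.

def dominant_emotion(scores: dict) -> str:
    """Return the emotion label with the highest confidence score."""
    return max(scores, key=scores.get)


def adjust_difficulty(engagement: float, current_level: int) -> int:
    """Toy engagement loop: ease off when the player disengages (< 0.3),
    ramp up when they are highly engaged (> 0.8), else hold steady."""
    if engagement < 0.3:
        return max(1, current_level - 1)
    if engagement > 0.8:
        return current_level + 1
    return current_level


scores = {"joy": 0.82, "surprise": 0.40, "anger": 0.03}
print(dominant_emotion(scores))   # joy
print(adjust_difficulty(0.9, 3))  # 4
```

The interesting design question is the feedback loop, not the classifier: a game polling scores like these a few times per second can react to boredom or frustration long before a player would think to change a settings menu.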
Lightwave is an “applied neuroscience platform” that promises golden data to marketers keen to understand audience engagement at peak moments in sporting events, movies, and more, tracking it through real-time biometric data from wearable wristbands embedded with sensors. Created by 29-year-old former DJ Rana June, Lightwave grew out of her frustration at not being able to read her audience’s engagement in real time. Major companies including Pepsi and Unilever have already used Lightwave to measure fan engagement at events like the NCAA basketball championship. 20th Century Fox also used Lightwave to track audience reaction to specific moments throughout The Revenant, providing much more precise feedback for the movie industry than the self-reported metrics that were previously the benchmark for audience engagement.
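The core idea behind flagging “peak moments” can be sketched very simply: scan a biometric time series for readings that spike well above the session baseline. Lightwave’s actual pipeline is proprietary, so the heart-rate data, the z-score threshold, and the whole approach below are assumptions for illustration:

```python
# Hypothetical sketch: flag "peak moments" as seconds where heart rate
# spikes more than z standard deviations above the session mean.
# Lightwave's real methods are not public; this is illustrative only.
from statistics import mean, stdev


def peak_moments(heart_rates: list, z: float = 1.5) -> list:
    """Return indices (e.g. seconds into the event) where the reading
    exceeds the session mean by more than z standard deviations."""
    mu = mean(heart_rates)
    sigma = stdev(heart_rates)
    return [i for i, hr in enumerate(heart_rates) if hr > mu + z * sigma]


# Simulated session: a crowd-wide spike around the 7-8 second mark.
session = [72, 74, 73, 71, 75, 74, 73, 110, 112, 76, 74]
print(peak_moments(session))  # [7, 8]
```

Aggregated across thousands of wristbands, timestamps like these can be lined up against the event footage, which is what makes this kind of data so much more precise than after-the-fact surveys.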
Earlier this year Apple bought startup Emotient in a bid to take the lead in the emotion recognition race. Emotient uses facial recognition technology to read feelings. While we’re still waiting to see how Apple will integrate Emotient’s technology, it could be used to help advertisers read consumer reactions, or to help doctors interpret pain levels in patients. Emotient can accurately read different emotional responses based on faces in a large crowd of people, which could provide valuable feedback for marketing insights in a number of different fields.
Reading The Future of Feelings
Affectiva, Lightwave, and Emotient are just a few of the startups trying to solve the puzzle of how big data, artificial intelligence, and facial recognition algorithms can accurately interpret human emotion. The winning combination will likely pair facial recognition software with biometric sensors for more nuanced data feedback. It’s not hard to see how this would transform marketing, healthcare, and other industries. Once this technology goes mainstream, we’ll be one step closer to stepping over that invisible line that has long separated sci-fi’s version of A.I. from the very real future we are living in.