
75F Team's "Deep See" Project Helps Alexa See

May 14, 2018 11:49:17 AM

75F Machine Learning Engineer Madhushan Tennakoon and Software Intern Myat Mo created a program together called "Deep See," originally developed as part of the 4th Annual IoT Fuse Hack Day. Deep See uses the intelligence of Amazon's Alexa to help you see. Madhushan and Myat presented a session on Deep See at the IoT Fuse Conference on May 3rd, 2018. The presentation fascinated much of the audience; however, as with any live demo, there was the risk that something wouldn't cooperate. We were thankful for the forgiving audience that waited patiently in anticipation. So, we decided to film a brief video segment of the Deep See demo at the 75F office and share it here so everyone can experience this passion project by Madhushan and Myat.

[Video: IoT Fuse Deep See demo, with Madhu and Myat wearing the glasses]

How does it work?

This project was based on the premise that Alexa can be made even smarter than she already is. Alexa is already a sophisticated voice assistant: she has been trained on one of the most complex data sets known to us, human speech. She can hear what you say and synthesize speech in an almost human-like manner. But what if she could also see?

For this project, the team used a Raspberry Pi connected to a camera mounted on a pair of glasses. The glasses take a snapshot when you ask Alexa a question. The Raspberry Pi then uploads the image to AWS, where the analysis runs. For example, if you want to find a friend in a crowd, Alexa uses Amazon Rekognition to scan all the faces in the snapshot and match them against an internal database. You can ask her questions like "What mood is my friend in?" or "What do you see?" and Alexa will describe what she sees as a narrative.
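To make the cloud-side step concrete, here is a minimal sketch in Python (boto3) of what the Rekognition analysis might look like once the snapshot reaches AWS. This is an illustration under stated assumptions, not the team's actual code: the face collection name "deep-see-friends", the local file path, and the narrative formatting are all hypothetical.

```python
# Hypothetical sketch of the Deep See analysis step using Amazon Rekognition.
# Assumptions: boto3 is configured with credentials, and a face collection
# named "deep-see-friends" has already been created and indexed.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def analyze_snapshot(image_path: str) -> str:
    """Return a short narrative describing the snapshot taken by the glasses."""
    with open(image_path, "rb") as f:
        image_bytes = f.read()

    # "What do you see?" -- general scene labels.
    labels = rekognition.detect_labels(
        Image={"Bytes": image_bytes}, MaxLabels=5, MinConfidence=80
    )["Labels"]

    # "What mood is my friend in?" -- face attributes include emotions.
    faces = rekognition.detect_faces(
        Image={"Bytes": image_bytes}, Attributes=["ALL"]
    )["FaceDetails"]

    parts = []
    if labels:
        parts.append("I can see " + ", ".join(l["Name"].lower() for l in labels) + ".")
    if faces:
        top_emotion = max(faces[0]["Emotions"], key=lambda e: e["Confidence"])
        parts.append(f"The nearest face looks {top_emotion['Type'].lower()}.")

        # "Find my friend in the crowd" -- match against the indexed collection.
        matches = rekognition.search_faces_by_image(
            CollectionId="deep-see-friends",  # assumed collection name
            Image={"Bytes": image_bytes},
            FaceMatchThreshold=85,
            MaxFaces=1,
        )["FaceMatches"]
        if matches:
            parts.append(f"I found {matches[0]['Face']['ExternalImageId']} in the crowd.")

    return " ".join(parts) or "I could not make anything out."

print(analyze_snapshot("snapshot.jpg"))
```

In a full build, a string like the one returned above would be handed back to an Alexa skill as the spoken response.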

 

The general aim of the project was to quickly build something that could help people stay connected to the physical world without having to rely on their eyes (i.e., see without looking) or having to be physically there at all. The underlying deep neural networks can be re-trained and fine-tuned with specific datasets, so they could be designed for applications in specific settings, such as remote monitoring or industrial environments that need real-time product insight; a sketch of that re-training idea follows below.
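As an illustration of what re-training could look like, here is a minimal transfer-learning sketch in PyTorch. It is an assumed approach rather than the project's own code: an ImageNet-pretrained backbone is frozen and only a new task-specific head is fine-tuned. The class count and the stand-in batch are placeholders for an application-specific dataset.

```python
# Minimal transfer-learning sketch (illustrative, not the Deep See codebase).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # hypothetical number of site-specific categories

# Load an ImageNet-pretrained backbone and freeze its feature extractor.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new head for the task-specific classes.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Stand-in batch; in practice this would come from a DataLoader over the
# application-specific dataset (e.g., images from a remote-monitoring camera).
images = torch.randn(8, 3, 224, 224)
targets = torch.randint(0, NUM_CLASSES, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
print(f"one fine-tuning step, loss = {loss.item():.3f}")
```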
 
How do we use the technology available to us today to make the lives of people with vision loss not just easier but also more meaningful? Maybe the answer lies in artificial intelligence. Maybe we can unlock the power of deep learning to help those without vision see and understand the breathtakingly complicated world in front of us, delivered as a narrative.
 
When Madhu and Myat aren't orchestrating machine learning to help with visual recognition, they are busy developing solutions to make the invisible qualities of air (temperature, humidity and indoor air quality) more comfortable for building occupants, and helping facility managers sense, visualize and efficiently manage HVAC and lighting with intelligent building software.
 
Written by Kelly Huang
