Plenary Speakers
Monday, March 6
Rana El-Kaliouby
Humanizing Technology with Emotion AI

Our emotions influence every aspect of our lives, from our health and wellbeing to the way we learn, the decisions we make, and how we communicate with one another. We are increasingly surrounded by advanced AI systems with cognitive and autonomous capabilities, and our interactions with devices are becoming more conversational and relational. Yet all these technologies have high IQ and no EQ, no emotional intelligence. Artificial emotional intelligence, or Emotion AI, promises to humanize technology, with applications ranging from advertising and autism to automotive and social robotics. The industry is growing quickly and is projected to reach $30 billion within the next five years. This talk will provide an overview of the state of the art in facial emotion and speech recognition, with a focus on commercial applications.

Rana el Kaliouby, PhD, is co-founder and CEO of Affectiva, the pioneer in Emotion AI, the next frontier of artificial intelligence. She leads the company's award-winning emotion recognition technology, which is built on a science platform that uses deep learning and the world's largest emotion data repository: nearly 4.9 million faces analyzed from 75 countries, amounting to more than 50 billion emotion data points. Prior to founding Affectiva, as a research scientist at the MIT Media Lab, Rana spearheaded the applications of emotion technology in a variety of fields, including mental health and autism research. Her work has appeared in numerous publications, including The New Yorker, Wired, Forbes, Fast Company, The Wall Street Journal, The New York Times, CNN, CBS, TIME Magazine, Fortune, and Reddit. A TED speaker, she was recognized by TechCrunch as a woman founder who crushed it in 2016, and Entrepreneur Magazine named her one of the "7 Most Powerful Women To Watch In 2014". She was inducted into the "Women in Engineering" Hall of Fame, received the 2012 Technology Review "Top 35 Innovators Under 35" Award, was listed on Ad Age's "40 under 40", and received Smithsonian magazine's 2015 American Ingenuity Award for Technology. Rana holds a BSc and MSc in computer science from the American University in Cairo and a PhD from the Computer Laboratory, University of Cambridge.
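As a concrete anchor for the facial-emotion side of the talk, the sketch below shows the general shape of a deep-learning emotion classifier: a detected, aligned face crop goes in, and a distribution over categorical emotions comes out. This is a generic, minimal sketch, not Affectiva's model; the 48x48 grayscale input, the seven-emotion taxonomy, and the network architecture are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# A seven-class emotion taxonomy, a common choice in academic face datasets (assumption).
EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise", "neutral"]

class EmotionCNN(nn.Module):
    """Tiny convolutional classifier over 48x48 grayscale face crops."""

    def __init__(self, num_classes=len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48x48 -> 24x24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24x24 -> 12x12
        )
        self.classifier = nn.Linear(64 * 12 * 12, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = EmotionCNN().eval()
face = torch.randn(1, 1, 48, 48)          # stand-in for a detected, aligned face crop
probs = model(face).softmax(dim=-1)       # categorical distribution over emotions
print(dict(zip(EMOTIONS, probs.squeeze().tolist())))
```

A production system of the kind described in the talk would add face detection and alignment in front of such a classifier and train it on large labeled corpora; this sketch only fixes the input/output shape of the problem.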
Tuesday, March 7
K. J. Ray Liu
Smart Radios for Smart Life

What smart impact will future 5G and IoT bring to our lives? Many may wonder, and even speculate, but do we really know? With more and more bandwidth readily available for the next generation of wireless applications, many smart applications and services unimaginable today may become possible. In this talk, we will show that with more bandwidth one can resolve many multipath components, which can serve as hundreds of virtual antennas and be leveraged as new degrees of freedom for smart life. Together with the fundamental physical principle of time reversal to focus energy at specific positions, or more generally by employing waveforming, a revolutionary smart radio platform can be built to enable many cutting-edge IoT applications that have long been envisioned but never achieved. We will show the world's first centimeter-accuracy wireless indoor positioning system, which offers indoor GPS-like capability to track humans or any indoor objects without any infrastructure, as long as WiFi or LTE is available. Such a technology forms the core of a smart radio platform that can be applied to home/office monitoring and security, radio human biometrics, vital sign detection, wireless charging, and 5G communications. In essence, in the future wireless world, communication will be just a small component of what's possible. Many more magic-like smart applications can be enabled, such as answering how many people are next door, who they are, and what they are doing, without any sensors deployed next door. Demo videos will be shown to illustrate the future of smart radios for smart life.

Dr. K. J. Ray Liu is the founder of Origin Wireless, Inc., a high-tech start-up developing smart radios for smart life. He was named a Distinguished Scholar-Teacher of the University of Maryland, College Park, in 2007, where he is Christine Kim Eminent Professor of Information Technology. Dr. Liu received the 2016 IEEE Leon K. Kirchmayer Graduate Teaching Award, an IEEE Technical Field Award recognizing graduate teaching and mentoring; the IEEE Signal Processing Society 2014 Society Award for "influential technical contributions and profound leadership impact"; the IEEE Signal Processing Society 2009 Technical Achievement Award; and more than a dozen best paper awards. Recognized by Thomson Reuters as a Highly Cited Researcher, he is a Fellow of the IEEE and AAAS. Dr. Liu is a member of the IEEE Board of Directors. He was President of the IEEE Signal Processing Society, where he also served as Vice President – Publications, and was Editor-in-Chief of IEEE Signal Processing Magazine. He has also received teaching and research recognitions from the University of Maryland, including the university-level Invention of the Year Award (three times) and, from the A. James Clark School of Engineering, the college-level Poole and Kent Senior Faculty Teaching Award, Outstanding Faculty Research Award, and Outstanding Faculty Service Award (each given once per year across the entire college).
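The time-reversal focusing principle in the abstract is easy to demonstrate numerically: if a transmitter sends the conjugated, time-reversed estimate of the channel impulse response, the many multipath components add coherently only at the position where that channel was measured, and incoherently elsewhere. The following is a toy simulation, not Origin Wireless code; the exponentially decaying random channel model and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def multipath_channel(num_taps=100, decay=0.02):
    """Random complex multipath impulse response with exponential power decay."""
    power = np.exp(-decay * np.arange(num_taps))
    h = rng.standard_normal(num_taps) + 1j * rng.standard_normal(num_taps)
    return h * np.sqrt(power / 2)

# Channel to the intended position, and to a different (bystander) position.
h_target = multipath_channel()
h_other = multipath_channel()

# Time-reversal waveform: conjugated, time-reversed channel estimate, unit energy.
g = np.conj(h_target[::-1])
g /= np.linalg.norm(g)

# Received signal at each position = convolution of the waveform with that channel.
y_target = np.convolve(g, h_target)   # autocorrelation: sharp peak at the focal lag
y_other = np.convolve(g, h_other)     # cross-correlation: diffuse and much weaker

peak_target = np.max(np.abs(y_target))
peak_other = np.max(np.abs(y_other))
print(f"focusing gain: {20 * np.log10(peak_target / peak_other):.1f} dB")
```

The peak at the intended position grows with the channel energy, so richer multipath (more bandwidth, hence more resolvable taps) yields sharper focusing; that is the "hundreds of virtual antennas" argument in the abstract in miniature.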
Wednesday, March 8
David Nahamoo
Spoken Systems and the Society of Human and Machine Knowledge Experts

In the early days of speech processing, technical innovation focused on the digitization, enhancement, and compression of speech signals to address storage and transmission needs. In the early 1970s, the application of statistical machine learning methods to speech recognition opened the way for higher-level automated cognitive capabilities, such as conversational interaction and speech analytics, addressing a variety of applications such as customer care. The future will no doubt push for still higher-level automated capabilities by integrating human spoken and linguistic capabilities with other human skills such as vision, motor skills, and emotions. This future will bring about a society of human and machine experts that collaborate for improved outcomes in complex processes such as decision making. In this talk we describe the progress in building spoken systems, covering speech-to-text and text-to-speech, spoken interaction and analytics, as well as emotion detection and generation. We examine the evolution of the learning techniques across these technologies, and we explore the interconnection between speech, vision, and motor skills and the potential for further improved learning in this society of human and machine experts.

David Nahamoo is an IBM Fellow and Chief Scientist for Conversational Systems. David joined the speech recognition effort at IBM Research in 1982 after obtaining his PhD from Purdue University. His passion has been the innovation of statistical machine learning techniques for improving speech recognition systems; recently, he has been working on cognitive conversational systems and dialog management. He was the interim General Manager of the IBM Speech Business Unit for three months in 1993, and from 1993 to 2006 he was responsible for delivering speech technologies to IBM divisions for desktop, embedded, and server-based products. In 2006 he became the IBM Research Speech CTO, responsible for IBM Research's technical strategy for speech technologies, and he was appointed an IBM Fellow in 2008. David holds more than 55 patents and has published more than 70 technical papers in scientific journals and conferences. He is a member of the IBM Academy of Technology, a Fellow of the IEEE, and an ISCA Fellow. In 2001 he received the IEEE Signal Processing Society Best Paper Award, and he is proud of the IEEE Corporate Innovation Award given to the IBM team in 2009 for long-term contributions in speech recognition.
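For readers who want a concrete reminder of the "statistical machine learning methods" that reshaped speech recognition in the 1970s, the sketch below implements the classic hidden Markov model forward recursion used to score an observation sequence against a model. It is a textbook illustration, not IBM's recognizer; the discrete (vector-quantized) emission model and all parameter values are assumptions.

```python
import numpy as np

def logsumexp(x, axis=0):
    """Numerically stable log-sum-exp reduction along one axis."""
    m = np.max(x, axis=axis)
    return m + np.log(np.sum(np.exp(x - np.expand_dims(m, axis)), axis=axis))

def forward_log_likelihood(log_pi, log_A, log_B, obs):
    """Forward algorithm: log P(obs) under a discrete-output HMM.

    log_pi : (S,)   initial state log-probabilities
    log_A  : (S, S) transition log-probabilities, log_A[i, j] = log P(state j | state i)
    log_B  : (S, V) emission log-probabilities over a discrete symbol vocabulary
    obs    : sequence of symbol indices (e.g., vector-quantized acoustic frames)
    """
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        # Sum over predecessor states, then absorb this frame's emission probability.
        alpha = logsumexp(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return logsumexp(alpha, axis=0)

# Toy usage: two states, three output symbols, four observed frames.
log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.7, 0.3], [0.4, 0.6]])
log_B = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(forward_log_likelihood(log_pi, log_A, log_B, [0, 1, 2, 1]))
```

In a recognizer of that era, each word or phone had its own small HMM, and recognition amounted to comparing such likelihoods across competing models; the deep-learning systems the talk goes on to discuss replaced the emission model while keeping much of this probabilistic scaffolding.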
Thursday, March 9
Jan Rabaey
IoT++: The Living Network of Everything and Everyone

Advances in every aspect of information technology are enabling entities such as robots, drones, and UAVs to become more ubiquitous, efficient, and effective, and to interact more effectively with each other and with humans. These networks of everyone and everything create entirely new options in areas such as augmented reality, tactile inter-networking, distributed manufacturing, and smart cities. Going one step further, the miniaturization of motor-sensory functions enables the swarm to encompass the human body ("The Human Intranet"), opening the potential of true introspection, extrospection, and augmentation, and may fundamentally alter how we humans interact with ourselves and with the world around us. Turning these opportunities into realities requires progress on many different fronts. The distributed systems of large numbers of interacting devices are extremely dynamic, governed by feedback loops of varying but often tight latencies (< 1 msec), and built-in guarantees of robustness, safety, security, and privacy are essential. The Human Intranet in itself forms a perfect case study.

Jan Rabaey holds the Donald O. Pederson Distinguished Professorship at the University of California, Berkeley. He is a founding director of the Berkeley Wireless Research Center (BWRC) and the Berkeley Ubiquitous SwarmLab, and is currently the Electrical Engineering Division Chair at Berkeley. Prof. Rabaey has made high-impact contributions to a number of fields, including advanced wireless systems, low-power integrated circuits, sensor networks, and ubiquitous computing. His current interests include the conception of next-generation integrated wireless systems over a broad range of applications, as well as exploring the interaction between the cyber and the biological worlds. He is the recipient of major awards, among which the IEEE Mac Van Valkenburg Award, the European Design Automation Association (EDAA) Lifetime Achievement Award, and the Semiconductor Industry Association (SIA) University Researcher Award. He is an IEEE Fellow and a member of the Royal Flemish Academy of Sciences and Arts of Belgium. He has been involved in a broad variety of start-up ventures.