Company

Founded in London, UK, in 2014, Emotech is the world’s first AI 2.0 startup dedicated to multimodality and proactivity in AI. We were selected by TechCrunch as one of the top 14 startup teams in Europe and are 2016 TechCrunch Disrupt alumni. In 2017 Emotech was recognised among the Top 30 Successful and Influential Business Cases in China, and in 2018 was selected by TechWorld as one of the 16 hottest robotics startups.

Our first product, Olly, is the world’s first robot with personality, and the first startup product in CES history to win innovation awards in four categories.

Technology

Reimagining AI interaction through inspired machine learning solutions.
Our groundbreaking approach to machine learning utilises multimodal design, unlocking a new way of interacting with AI.

  • Multi-Modal Emotion And Engagement Analysis

    To recognise and analyse nuances in emotion, we combine sight, sound & linguistic understanding, fusing modality-specific data streams into a complete picture.
  • Custom Voice Trigger

    Low-power voice-triggering solutions with easily customisable wake-up keywords, allowing you to define a unique name for your digital assistant application.
  • Real-Time Multi Object/Person Detection And Tracking

    Our tracking software is designed to detect people or objects across video shots in real time.
  • Spatial Audio-Visual Localisation And Recognition

    Rich multimodal spatial understanding, to recognise and localise users as well as their acoustic activity.
  • Situationally Aware Dialogue System

    Our dialogue system is designed to understand semantics in speech, necessary to determine an optimal response.
  • Small-Footprint Models And Algorithms

    Our solutions are optimised to operate effectively on local devices, with reduced memory footprint, and algorithms created for the available local resources.
  • Speaker Recognition Engine

    We create bespoke solutions for speaker identification and verification, with a small memory footprint and deployable on embedded devices, so that privacy is fully protected.
  • Audio-Visual Tracking

    We perform relationship mapping between recognised users and their activity in the audio-visual scene, so the users’ 2D or 3D pose can be automatically inferred even if the audio-visual data is only partially observable.
Meet Olly. The World’s First Home Robot With Personality.

Awards

Olly became the most-awarded robotic product in the history of the International Consumer Electronics Show (CES).

    • In December 2015, Emotech was selected by the leading technology publication TechCrunch as one of the top 14 entrepreneurial teams in Europe.

    • In December 2016, TechCrunch Disrupt invited Emotech CEO Zhuang Hongbin as a guest speaker. He was one of three sector leaders chosen to talk to an audience of over 600.

    • In November 2017, Emotech was selected together with leading companies such as Jingdong, Keda Xunfei and Tencent WeChat in the “Evaluation of the Times – Top 30 Chinese Business Cases” hosted by the Financial Times Chinese website.

    • In April 2018, Olly was selected as one of the “Top Ten Popular Projects” at the 6th China (Shanghai) International Technology Import and Export Fair, and went on to win the overall prize by the highest vote.

    • In June 2018, Emotech CEO Zhuang Hongbin and co-founder Chen Xi were both selected by the British government’s Diversity UK as among the top 100 Asian technology stars, with Chen Xi also named one of the top five in the creative industries.

    • In August 2018, Emotech CEO Hongbin was awarded the 2018 prize for Excellence in Innovation and Technology, at the Chinese Business Leaders Awards Ceremony by the Lord Mayor of the City of London in partnership with PwC.

    • In February 2019 Emotech was awarded the One to Watch Prize at the inaugural London Business Awards, organised by London & Partners. The awards highlighted success and innovative excellence across London’s business community, celebrating the growth and achievements of companies that are proud to call London their home.


Media Endorsement

    • TechCrunch

      "The most likable personal
      assistant around"

    • CNET

      "Home robot who will grow
      to be just like you"

    • BBC

      "Aim to be more personal
      than Amazon’s Echo"

    • Engadget

      "Everyone’s making a smart
      personal assistant these days, but most of
      them aren’t as adorable as the Olly"

    • Digital Trends

      "There are plenty of virtual assistants out there but few promise to be as personal and personalised as Olly"


Timeline

  • 2014.08

    Emotech was founded in London, England.

    2015.12

    Emotech was selected by TechCrunch as one of the top 14 entrepreneurial teams in Europe.

    2016.12

    Emotech was named one of the top three most influential startup teams at the TechCrunch Disrupt Conference.

  • 2017.01

    Olly named a CES 2017 Innovation Awards Honoree in four categories: Home Appliances, Smart Home, Drones and Unmanned Systems, and Home Audio-Video Accessories.

    2017.05

    Emotech co-founder Chelsea founded Meet AI, the first comprehensive platform for international artificial intelligence academic exchange in London.

    2017.06

    Olly was featured as the leading product in the BBC One ‘Invented in’ documentary. Co-founder Chelsea was invited to the Cannes International Festival and interviewed by CNBC.

  • 2017.07

    Emotech and The British Financial Times co-host the 2017 China Artificial Intelligence Frontier Development Forum, in Shanghai Zhangjiang.

    2017.09

    Olly officially launched on Indiegogo, reaching over 280% of goal.

    2017.09

    Emotech named ‘the most watched London smart company’ by the London Development Promotion Agency, second only to Google’s DeepMind.

  • 2017.11

    Emotech selected by the Financial Times alongside WeChat, Jingdong and iFLYTEK in ‘The Influence of the Times – Top 30 Chinese Business Cases’.

    2017.12

    The Emotech team received an invitation from the Royal Academy of Sciences to serve as guest speakers in the Royal Christmas Lecture Series.

    2018.01

    CEO Hongbin was invited by MIT Technology Review to deliver a speech at EmTech 2017 on ‘Building Products in the AI Era’. He was also interviewed by the Financial Times Chinese website, discussing his motivation for creating Olly and his vision for Emotech.

  • 2018.02

    Emotech named one of the best robotics companies in the UK by TechWorld.

    2018.04

    Olly awarded as one of the ‘Top 10 Popular Projects’ of the China International Technology Fair, before securing the overall prize of ‘The Best of 2018 among the 10 Most Popular Projects’, earning headline status on the front page of Xinmin News.

  • 2018.06

    Emotech CEO Hongbin and co-founder Chelsea were both selected by the British government’s Diversity UK as among the top 100 Asian technology stars in the UK, with Chelsea also named one of the top five in the innovative industries.

    2018.08

    Emotech CEO Hongbin was awarded the 2018 prize for Excellence in Innovation and Technology at the Chinese Business Leaders Awards Ceremony, by the Lord Mayor of the City of London in partnership with PwC.


University Collaboration

Emotech is proud to partner with some of the world’s leading academic institutions, including UCL, Imperial College and Carnegie Mellon University. Research areas include multimodal emotion recognition, personality systems, facial emotions and user recognition, personality impact and motor control.

  • Carnegie Mellon University
    The Robotics Institute

  • University Of Cambridge

  • Imperial College iBUG Group

  • University College London

  • University Of Edinburgh

  • University Of Sheffield

  • Heriot-Watt University

Discover A Different Way To Work. Join Our Eclectic Team.

Our eclectic mix of creatives & innovators cross borders and disciplines for a truly original team.
We’re always looking for the brightest talents to join us, and help shape the future of AI.

Team

The Emotech team is made up of an incredibly talented and diverse group of individuals who all share a common goal: to create technology that’s more human - technology that we can truly connect with, that understands us, and that ultimately improves our lives through innovative and more personal interactions. The company brings together more than 30 top artificial intelligence scientists and researchers from the fields of machine learning, speech technology, and computer vision, as well as highly experienced software and hardware engineers, designers, and globally integrated marketing specialists. This eclectic mix of innovators inspires a different way of approaching AI that will redefine the relationship between humans and technology. At Emotech, we come from over 20 countries and now work in London, Edinburgh, San Francisco, Taipei, and Shenzhen.
    • Hongbin Zhuang

      CO-FOUNDER

      Hongbin was the product director of Renren.com, facilitating its growth into one of the largest social networks in China, with over 35 million monthly users. This led to the company’s IPO on the NYSE in 2011.
      He graduated from UCL with distinction in MSc HCI & Ergonomics, and strongly believes that the revolution in human-computer interaction will make a better world.

    • Chelsea Chen

      CO-FOUNDER

      Chelsea is a technologist and creative, with diverse international experience in consumer strategy planning, marketing and brand communications. She previously led the OgilvyOne team for clients such as Diageo, VW, Nestle and P&G, to name a few.

    • Jan Medvesek

      CO-FOUNDER

      Jan Medvesek is a technical co-founder of Emotech & holds a PhD in Computer Science from UCL. His industrial experience includes Computer Vision projects across Europe and collaborations with Microsoft, MathWorks and BBC.

    • Indeera Munasinghe

      Software Architect

      Indeera is a computer scientist and software engineer with decades of experience in software design and development. During his decade at Microsoft, he was involved in large-scale machine learning projects in machine vision, and he holds several patents and awards.

    • Jake Lin

      Supply Chain Operations

      Jake has extensive experience in wearables, IoT, robotics and mobile. He was one of the earliest members of Pebble Tech, heading Pebble’s manufacturing, operations and APAC expansion, and helping manufacture and ship millions of smartwatches.

    • Jason Rentfrow

      Research Scientist

      Jason uses his incredible academic experience to guide Emotech’s exploration of personality & emotion. He is Reader in Personality and Individual Differences in the Psychology Department at Cambridge University, and the Director of Studies for Psychological and Behavioural Sciences at Fitzwilliam College.

    • Juris Laivins

      QA Engineer

      Juris is a QA software/hardware engineer at Emotech. Previously, Juris worked at Sony, where he was part of the team launching Sony’s VR platform. Before that, he also spent a few years at Google and Microsoft.

    • Mehul Shewakramani

      Product Manager

      Mehul is responsible for helping the team understand what to build, why it should be built and how it should behave. Prior to joining Emotech, Mehul spent a number of years building technology products for premium vehicles at Jaguar Land Rover in the UK.

    • Ondrej Miksik

      Research Scientist

      Ondra is Emotech’s leading computer vision research scientist. He holds a PhD in Engineering Science from the University of Oxford, in computer vision and machine learning applied to wearable and mobile robotics, and film post-production with a focus on understanding the dynamic aspects of videos.

    • Raymond W.M. Ng

      Research Scientist

      Raymond works on spoken language technology, focusing on speaker and language recognition and machine translation. He holds a PhD from CUHK. Prior to Emotech, he led the University of Sheffield’s research team in various international challenges on speech and language technology.

    • Rory Beard

      Research Scientist

      Rory joined Emotech as a research scientist, having completed his undergraduate degree, Master’s and PhD at the University of Oxford. Previously he was a researcher in Oxford’s Machine Learning Research Group.

    • Stefano Mezza

      Research Scientist

      Stefano specialised in Data Science and NLP at the University of Trento and at the University of Edinburgh, and was selected to take part in the Amazon Alexa Prize 2017, working on state-of-the-art dialogue systems and helping advance Conversational Artificial Intelligence.

    • Szu-Hung Lee

      Mechanical Engineer

      Szu holds a PhD in Design Engineering from Imperial College London. He is in charge of mechanical design, giving Olly a robust and manoeuvrable body. Szu headed several projects with Apple during his time at Chimei Innolux Corporation.

    • Zafeirios Fountas

      Research Scientist

      Zaf is Emotech's computational neuroscientist working on brain-inspired A.I. with focus on biological action selection, (time) perception and deep theories of predictive coding. He's an honorary research fellow at the Wellcome Trust Centre for Human Neuroimaging at UCL and a visiting lecturer at the Royal College of Art.

    • John Shawe-Taylor

      Advisory Board

      Professor Shawe-Taylor is the Director of the Centre for Computational Statistics and Machine Learning at University College London. His work has helped to drive a fundamental rebirth in the field of machine learning with the introduction of kernel methods and support vector machines.

    • Yvonne Rogers

      Advisory Board

      Professor Rogers is the Chair of Interaction Design and Deputy Head of the Computer Science Department at University College London. Her book Interaction Design: Beyond Human-Computer Interaction has sold more than 200,000 copies worldwide and has been translated into six other languages.

    • Maja Pantic

      Advisory Board

      Professor Pantic is the Head of the iBUG group, the world-leading intelligent behaviour understanding group. Maja received the BCS Roger Needham Award and is one of the 25 most-cited female researchers in computer science in the world.


Enterprise

Provide An Incredible User Experience Through Our Bespoke AI Services.

We are pioneering a new generation of AI, creating bespoke software solutions for your business. Our services blend various machine learning algorithms, allowing our clients to deliver incredible user experiences to their consumers.

Make Your Device Smarter By Integrating Emotech’s API / SDK


Publications

MULTI-MODAL EMOTION AND ENGAGEMENT ANALYSIS

Licensed services for automated emotion analysis of photographed subjects have started to appear ... but are so far focused purely on image/video data. Given the nuanced nature of emotion/engagement understanding, and its dependence on the sight, sound, and linguistics of the humans who carry this expertise, it stands to reason that a machine for automating this subtle analysis must be able to fuse the modality-specific data streams into a complete picture.

CASE STUDY: Such a tool would assist, for instance, in psychometric analysis of individuals engaging with a product, person or experience, to inform improvement and iteration (in an A/B testing fashion).
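The fusion step described above can be sketched as a simple weighted late fusion, where each modality contributes a probability distribution over emotion labels. The modality names, weights and label set below are illustrative assumptions, not Emotech's actual pipeline:

```python
# Hypothetical late-fusion sketch: combine per-modality emotion scores
# (vision, audio, text) into a single distribution.

def fuse_modalities(scores, weights):
    """Weighted late fusion of per-modality probability distributions."""
    labels = next(iter(scores.values())).keys()
    fused = {}
    for label in labels:
        fused[label] = sum(weights[m] * scores[m][label] for m in scores)
    total = sum(fused.values())
    return {label: v / total for label, v in fused.items()}  # renormalise

# Made-up per-modality distributions over three emotion labels.
vision = {"happy": 0.7, "neutral": 0.2, "sad": 0.1}
audio  = {"happy": 0.5, "neutral": 0.4, "sad": 0.1}
text   = {"happy": 0.6, "neutral": 0.3, "sad": 0.1}

fused = fuse_modalities(
    {"vision": vision, "audio": audio, "text": text},
    {"vision": 0.5, "audio": 0.3, "text": 0.2},  # assumed reliability weights
)
print(max(fused, key=fused.get))  # happy
```

In practice the weights would themselves be learned, or the fusion would happen at the feature level inside a model, but the principle of combining modality-specific evidence into one picture is the same.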

CUSTOM VOICE TRIGGER

Our low-power voice-triggering solution allows for easily customisable wake-up keywords, so whether you are an end user or a business customer, you can define a unique name for your digital assistant application.

It works data-free, but we can also leverage small amounts of audio data on an ongoing basis to adjust the models to ambient acoustic conditions, increasing overall robustness.

CASE STUDY: In a personal assistant robot, a custom voice trigger would enable users to select a nickname with which to address their device, enhancing the overall interaction experience.
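A minimal sketch of the triggering logic, assuming a keyword spotter that emits a per-frame confidence score: the trigger fires only when the score stays above a threshold for several consecutive frames, which suppresses spurious single-frame spikes. The threshold and frame counts are made-up values, not Emotech's tuning:

```python
# Hypothetical wake-word gating on top of per-frame keyword confidences.

def detect_trigger(frame_scores, threshold=0.8, min_frames=3):
    """Return the frame index at which the wake word fires, or -1."""
    run = 0
    for i, score in enumerate(frame_scores):
        run = run + 1 if score >= threshold else 0  # consecutive-hit counter
        if run >= min_frames:
            return i
    return -1

# A lone spike at frame 1 is ignored; a sustained run at frames 3-5 fires.
scores = [0.1, 0.9, 0.2, 0.85, 0.9, 0.95, 0.4]
print(detect_trigger(scores))  # 5
```

Real low-power triggers run a small acoustic model to produce the per-frame scores; the gating shown here is the cheap final stage that trades a few frames of latency for robustness.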

REAL-TIME MULTI OBJECT/PERSON DETECTION AND TRACKING

We can detect and track people and arbitrary objects across video shots in real-time.

Our service processes videos on-the-fly (i.e. immediately as they are being captured by a camera) and can be configured to work on a large variety of hardware ranging from low-spec devices to cloud solutions.

CASE STUDY: Real time tracking will play a key role in the evolution of surveillance services, where speed of recognition and uninterrupted monitoring enable prevention rather than retrospective analysis.
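One common building block for this kind of tracking is frame-to-frame association of detections by bounding-box overlap (intersection over union, IoU). The sketch below is a generic greedy matcher under that assumption, not Emotech's actual tracker:

```python
# Hypothetical IoU-based track-to-detection association for one frame.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, min_iou=0.3):
    """Greedily match existing tracks to new detections by best IoU."""
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_iou = None, min_iou
        for j, dbox in enumerate(detections):
            if j in used:
                continue
            v = iou(tbox, dbox)
            if v > best_iou:
                best, best_iou = j, v
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches

tracks = {1: (0, 0, 10, 10), 2: (50, 50, 60, 60)}      # last known boxes
dets = [(52, 51, 61, 62), (1, 0, 11, 10)]              # this frame's detections
print(associate(tracks, dets))  # {1: 1, 2: 0}
```

Production trackers add motion prediction and appearance features on top of this, and use optimal (rather than greedy) assignment, but the association step is where "detection" becomes "tracking".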

SPATIAL AUDIO-VISUAL LOCALISATION AND RECOGNITION

We offer rich multimodal spatial understanding of who is located around your device. We use vision cameras and multi-channel audio to recognise and localise users, as well as their acoustic activity.

On top of that, we offer acoustic event detection.

CASE STUDY: In smart home devices, spatial understanding allows context to be taken into account. For example, smart speakers can alter their acoustic settings to project sound according to the user’s location.
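On the audio side, a classic ingredient of acoustic localisation is estimating a direction of arrival from the time difference of arrival (TDOA) between microphones. A minimal far-field sketch for a two-microphone array; the mic spacing and sample rate are made-up values, not Emotech's hardware:

```python
import math

# Hypothetical far-field direction-of-arrival estimate from a two-mic TDOA.
SPEED_OF_SOUND = 343.0   # m/s, at roughly room temperature
MIC_SPACING = 0.1        # metres between the two microphones (assumed)
SAMPLE_RATE = 16000      # Hz (assumed)

def doa_degrees(delay_samples):
    """Source angle relative to the mic axis; 90 degrees = straight ahead."""
    tdoa = delay_samples / SAMPLE_RATE                    # seconds
    ratio = tdoa * SPEED_OF_SOUND / MIC_SPACING           # cos(angle)
    ratio = max(-1.0, min(1.0, ratio))                    # clamp for noise
    return math.degrees(math.acos(ratio))

print(round(doa_degrees(0)))  # 90 -> zero delay means the source is broadside
```

The inter-mic delay itself is typically found by cross-correlating the two channels; with more microphones, several such pairwise estimates are fused into a full 2D/3D bearing.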

SITUATIONALLY AWARE DIALOGUE SYSTEM

In modern dialogue systems, the broad aim is to extract from speech acts (whether spoken or textual) the semantics necessary to determine a reply that is appropriate or optimal, given some desired end.

Semantic understanding is typically limited to classifying a user’s intent, plus some values to fill slots, but ignores both (i) linguistic and paralinguistic cues that might add emotional / personality / psychological relevance to the speech act, and (ii) cues from vision and / or any other non-linguistic measurements of the user or the world around them.

In general these will be needed for an agent to be fully informed and situationally aware, before selecting the best action to take next. For instance, a user may become confused or frustrated at some point during an interaction, and this information is both hidden to a purely textual NLU and essential information for an agent attempting to provide a good user experience.

CASE STUDY: In education, situationally aware dialogue systems could enhance a teacher’s ability to analyse a student’s true intent, thus allowing for adjustments in teaching methodology, to optimise learning efficiency.
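To make the idea concrete, here is a toy sketch in which response selection depends not only on intent/slot NLU output but also on a frustration signal (here just a number; in a real system it would come from audio and vision cues). All names, rules and phrasings are hypothetical illustrations, not Emotech's dialogue system:

```python
# Toy "situationally aware" response selection: NLU output plus an
# affect signal jointly determine the reply.

def nlu(utterance):
    """Toy intent/slot extraction from a single utterance."""
    text = utterance.lower()
    if "light" in text:
        intent = "turn_on" if "on" in text else "turn_off"
        return intent, {"device": "light"}
    return "unknown", {}

def select_response(utterance, frustration):
    """Pick a reply; a frustrated user gets a softer acknowledgement."""
    intent, slots = nlu(utterance)
    if intent == "unknown":
        return "Sorry, could you rephrase that?"
    prefix = "No problem, right away. " if frustration > 0.7 else ""
    return prefix + f"Okay, I will {intent.replace('_', ' ')} the {slots['device']}."

print(select_response("Turn on the light", frustration=0.9))
```

The point of the sketch is the signature of `select_response`: the non-linguistic measurement enters the policy alongside the semantics, which is exactly the information a purely textual NLU would discard.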

SMALL-FOOTPRINT MODELS AND ALGORITHMS

Small-footprint models and algorithms are crucial when running services locally on IoT, wearable or consumer devices. These devices cannot offer large memory space or computational power, and are not continuously connected to a global network.

Therefore the models have to be reduced in size, and the algorithms optimised for the available local computational resources. A typical use case is running a voice trigger together with a domain-specific ASR and NLP locally, providing full control of a device such as a light controller.

CASE STUDY: Size matters. In the expanding world of wearable tech, a small footprint allows our technology to sit on the device itself. For rugged explorer gear, which will often find itself deployed outside WiFi zones, it is crucial that our software is not dependent on a cloud connection.
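One standard way to shrink a model's memory footprint for on-device inference is post-training weight quantisation. The sketch below shows symmetric per-tensor 8-bit quantisation on made-up weights; it is a generic illustration of the technique, not Emotech's optimisation pipeline:

```python
# Hypothetical symmetric int8 quantisation: store 1 byte per weight
# instead of a 4- or 8-byte float, plus a single float scale.

def quantize_int8(weights):
    """Map float weights to int8 values with one symmetric scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights at inference time."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 0.9]      # toy weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
print(q)  # integers in [-128, 127], one byte each
```

Real deployments combine this with pruning, weight sharing and operator-level optimisation, but even this simplest form cuts the weight storage by 4-8x at the cost of a bounded rounding error (at most half a quantisation step per weight).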

SPEAKER RECOGNITION ENGINE

We can build bespoke solutions for speaker identification and verification (SI). Our system works in a text-independent manner, which means speakers are not constrained to say pre-defined passphrases, and SI can be executed in parallel with other applications such as voice recognition.

Our SI systems are optimised towards low latency, have a small memory footprint, and can be deployed on embedded devices allowing for privacy to be fully protected.

CASE STUDY: Accurately identifying a user through voice is not just crucial in smart home devices, but is also applicable to the road. Dependable SRE in automobiles opens up the possibility for user specific settings to be deployed through voice command.
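Speaker verification is commonly framed as comparing a fixed-length voice embedding of a new utterance against the claimed speaker's enrolled embedding, e.g. by cosine similarity against a threshold. The embeddings and threshold below are toy values; real systems use learned embeddings with hundreds of dimensions:

```python
import math

# Toy speaker-verification decision via cosine similarity of embeddings.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(enrolled, utterance, threshold=0.8):
    """Accept the identity claim if the embeddings are similar enough."""
    return cosine(enrolled, utterance) >= threshold

enrolled = [0.9, 0.1, 0.4]    # made-up enrolled voiceprint
same     = [0.85, 0.15, 0.38] # another utterance by the same speaker
other    = [0.1, 0.9, 0.2]    # an impostor
print(verify(enrolled, same), verify(enrolled, other))  # True False
```

Because the comparison is on embeddings rather than raw audio, the matching itself is cheap and text-independent, which is what allows it to run in parallel with other audio applications on an embedded device.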

AUDIO-VISUAL TRACKING

The idea relies on mapping relationships between detected or recognised objects/users and their activity in the audio-visual scene, so that a user’s 2D or 3D pose can be automatically inferred and the belief maintained across longer time spans even if the audio-visual data is only partially observable (e.g. the user is not visible within the camera frustum, but the microphone array can recognise and localise the sound she makes by walking, speaking or performing some other action).

CASE STUDY: The device recognises an object/user, however this object is then no longer visible in the camera (e.g. limited field-of-view, occlusions, etc). The device is able to keep tracking the user based on the natural sound she makes (noise, speech, ...) during this period. Hence it is able to infer when the object/user appears again in the camera view and successfully performs data association.
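The belief-maintenance idea in the case study can be sketched as follows: a visual fix resets the position belief at full confidence, an audio-only fix updates the position at reduced confidence, and with no observation at all the belief simply decays rather than being dropped. The decay value and the (position, confidence) representation are illustrative assumptions:

```python
# Hypothetical cross-modal position belief: vision preferred, audio as
# fallback, decay when neither modality observes the user.

def update_belief(belief, vision_pos, audio_pos, decay=0.8):
    """belief is (position, confidence); either observation may be None."""
    pos, conf = belief
    if vision_pos is not None:
        return vision_pos, 1.0            # direct visual fix
    if audio_pos is not None:
        return audio_pos, conf * decay    # weaker, audio-only fix
    return pos, conf * decay              # no observation: decay only

belief = ((0.0, 0.0), 1.0)
frames = [((1.0, 0.0), None),   # user visible in camera
          (None, (1.2, 0.1)),   # occluded, but heard by the mic array
          (None, None)]         # silent and occluded
for vision, audio in frames:
    belief = update_belief(belief, vision, audio)
print(belief)  # position survives the occlusion, with lowered confidence
```

Keeping a decaying belief alive through the audio-only gap is what makes re-association cheap when the user reappears in the camera view: the tracker already has a strong prior on where (and who) she is.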

Privacy Policy

This privacy policy has been compiled to better serve those who are concerned with how their 'Personally Identifiable Information' (PII) is being used online. PII, as described in US privacy law and information security, is information that can be used on its own or with other information to identify, contact, or locate a single person, or to identify an individual in context. Please read our privacy policy carefully to get a clear understanding of how we collect, use, protect or otherwise handle your Personally Identifiable Information in accordance with our website.


What personal information do we collect from the people that visit our blog, website or app?

When ordering or registering on our site, as appropriate, you may be asked to enter your name, email address or other details to help you with your experience.


When do we collect information?

We collect information from you when you subscribe to a newsletter or enter information on our site.
Data is kept in perpetuity unless otherwise instructed.


How do we use your information?

We may use the information we collect from you when you register, make a purchase, sign up for our newsletter, respond to a survey or marketing communication, surf the website, or use certain other site features in the following ways:

To administer a contest, promotion, survey or other site feature.


How do we protect your information?

We do not use vulnerability scanning and/or scanning to PCI standards.
We only provide articles and information. We never ask for credit card numbers.
We do not use Malware Scanning.

Your personal information is contained behind secured networks and is only accessible by a limited number of persons who have special access rights to such systems, and are required to keep the information confidential. In addition, all sensitive/credit information you supply is encrypted via Secure Socket Layer (SSL) technology.
We implement a variety of security measures when a user enters, submits, or accesses their information to maintain the safety of your personal information.
All transactions are processed through a gateway provider and are not stored or processed on our servers.


Do we use 'cookies'?

We do not use cookies for tracking purposes
You can choose to have your computer warn you each time a cookie is being sent, or you can choose to turn off all cookies. You do this through your browser settings. Since each browser is a little different, look at your browser's Help Menu to learn the correct way to modify your cookies.
If you turn cookies off, some of the features that make your site experience more efficient may not function properly.


Third-party disclosure

We do not sell, trade, or otherwise transfer to outside parties your Personally Identifiable Information.


Third-party links

We do not include or offer third-party products or services on our website.


Google

Google's advertising requirements can be summed up by Google's Advertising Principles. They are put in place to provide a positive experience for users. https://support.google.com/adwordspolicy/answer/1316548?hl=en
We have not enabled Google AdSense on our site but we may do so in the future.


California Online Privacy Protection Act

CalOPPA is the first state law in the nation to require commercial websites and online services to post a privacy policy. The law's reach stretches well beyond California to require any person or company in the United States (and conceivably the world) that operates websites collecting Personally Identifiable Information from California consumers to post a conspicuous privacy policy on its website stating exactly the information being collected and those individuals or companies with whom it is being shared. - See more at: http://consumercal.org/california-online-privacy-protection-act-caloppa/#sthash.0FdRbT51.dpuf


According to CalOPPA, we agree to the following:

Users can visit our site anonymously.
Once this privacy policy is created, we will add a link to it on our home page or, as a minimum, on the first significant page after entering our website.
Our Privacy Policy link includes the word 'Privacy' and can easily be found on the page specified above.


You will be notified of any Privacy Policy changes:

On our Privacy Policy Page


Can change your personal information:

By emailing us


How does our site handle Do Not Track signals?

We honor Do Not Track signals and do not track, plant cookies, or use advertising when a Do Not Track (DNT) browser mechanism is in place.


Does our site allow third-party behavioral tracking?

It's also important to note that we allow third-party behavioral tracking.


COPPA (Children Online Privacy Protection Act)

When it comes to the collection of personal information from children under the age of 13 years old, the Children's Online Privacy Protection Act (COPPA) puts parents in control. The Federal Trade Commission, United States' consumer protection agency, enforces the COPPA Rule, which spells out what operators of websites and online services must do to protect children's privacy and safety online.
We do not specifically market to children under the age of 13 years old.
Do we let third parties, including ad networks or plug-ins, collect PII from children under 13?


Fair Information Practices

The Fair Information Practices Principles form the backbone of privacy law in the United States and the concepts they include have played a significant role in the development of data protection laws around the globe. Understanding the Fair Information Practice Principles and how they should be implemented is critical to comply with the various privacy laws that protect personal information.


In order to be in line with Fair Information Practices we will take the following responsive action, should a data breach occur:

We will notify the users via in-site notification within 7 business days.

We also agree to the Individual Redress Principle which requires that individuals have the right to legally pursue enforceable rights against data collectors and processors who fail to adhere to the law. This principle requires not only that individuals have enforceable rights against data users, but also that individuals have recourse to courts or government agencies to investigate and/or prosecute non-compliance by data processors.


CAN SPAM Act

The CAN-SPAM Act is a law that sets the rules for commercial email, establishes requirements for commercial messages, gives recipients the right to have emails stopped from being sent to them, and spells out tough penalties for violations.


We collect your email address in order to:

Send information, respond to inquiries, and/or other requests or questions
Market to our mailing list or continue to send emails to our clients after the original transaction has occurred.


To be in accordance with CAN-SPAM, we agree to the following:

Not use false or misleading subjects or email addresses.

Identify the message as an advertisement in some reasonable way.

Include the physical address of our business or site headquarters.

Monitor third-party email marketing services for compliance, if one is used.

Honor opt-out/unsubscribe requests quickly.

Allow users to unsubscribe by using the link at the bottom of each email.


If at any time you would like to unsubscribe from receiving future emails, you can email us at info@emotech.co and we will promptly remove you from ALL correspondence.


Contacting Us

If there are any questions regarding this privacy policy, you may contact us using the information below.

https://heyolly.com/
4-5 Bonhill Street
London EC2A 4BX
UK
info@emotech.co
Last Edited on 2018-05-25