Future Apple devices may be able to tell if you’re happy or sad and have your on-screen avatar respond accordingly. Apple has been granted a patent (number US 11727724 B1) for “emotion detection.”

About the patent

The patent relates generally to image processing, specifically to techniques and systems for estimating an emotion from an image of a face. In the patent, Apple says that avatars — computerized characters that represent and are controlled by users — can include avatars whose facial expressions are driven by a user’s facial expressions.

One use of facially-based avatars is in communication, where a camera and microphone in a first device transmit audio and a real-time 2D or 3D avatar of a first user to one or more second users on devices such as other mobile devices, desktop computers, videoconferencing systems, and the like. However, Apple says that known existing systems tend to be “computationally intensive, requiring high-performance general and graphics processors, and generally do not work well on mobile devices, such as smartphones or computing tablets.”

What’s more, existing avatar systems do not generally provide the ability to communicate nuanced facial representations or emotional states, the company says. Apple wants to overcome such issues.

Summary of the patent

Here’s Apple’s abstract of the patent: “Estimating emotion may include obtaining an image of at least part of a face, and applying, to the image, an expression convolutional neural network (“CNN”) to obtain a latent vector for the image, where the expression CNN is trained from a plurality of pairs each comprising a facial image and a 3D mesh representation corresponding to the facial image. Estimating emotion may further include comparing the latent vector for the image to a plurality of previously processed latent vectors associated with known emotion types to estimate an emotion type for the image.”
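The second step the abstract describes — comparing a latent vector against previously processed latent vectors tied to known emotion types — amounts to a nearest-neighbor lookup in embedding space. Here is a minimal sketch of that comparison step only; the expression CNN itself is not shown, and the vectors, labels, and function names below are hypothetical stand-ins, not anything from the patent:

```python
import math

def cosine_similarity(a, b):
    # Similarity between two vectors: dot product over product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def estimate_emotion(latent, reference_latents):
    # Return the emotion label whose stored latent vector is most
    # similar to the query latent (nearest-neighbor in embedding space).
    return max(reference_latents,
               key=lambda label: cosine_similarity(latent, reference_latents[label]))

# Hypothetical previously processed latents for known emotion types.
references = {
    "happy":   [0.9, 0.1, 0.0],
    "sad":     [0.1, 0.8, 0.2],
    "neutral": [0.3, 0.3, 0.3],
}

# Hypothetical latent vector produced by an expression CNN for a new face image.
query = [0.85, 0.15, 0.05]
print(estimate_emotion(query, references))  # prints "happy"
```

In practice the latent vectors would be high-dimensional CNN outputs rather than three-element lists, but the lookup logic is the same.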

Article provided with permission from AppleWorld.Today