With people wearing masks so much of the time now, it can often be difficult to tell what expression is on someone's face. A new system can reportedly do so, though, utilizing cameras mounted on the wearer's headphones.
Known as C-Face, the experimental setup was developed by a Cornell University team led by Assistant Prof. Cheng Zhang. It incorporates two miniature computer-connected RGB cameras, which are positioned below the subject's ears on a third-party set of headphones.
By analyzing images of the changing contours of the person's cheeks (which the cameras shoot from behind), the system is able to determine the current positions of 42 key facial feature points. This data is in turn used to determine the present shape of their mouth, eyes and eyebrows – together, these shapes make up their overall expression, which the system indicates by displaying one of eight corresponding emojis on a display screen.
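Very loosely, the final step described above – mapping a set of detected landmark positions to one of eight expression classes – could be sketched as a nearest-centroid classifier over landmark vectors. This is only an illustrative sketch under assumed details: the expression labels and the matching scheme below are hypothetical stand-ins, not the Cornell team's actual (deep-learning-based) models.

```python
# Illustrative sketch only: classify a vector of facial-landmark
# coordinates into one of eight expression classes by finding the
# nearest per-class "centroid" vector. The class labels and the
# nearest-centroid approach are assumptions for demonstration,
# not the method used by C-Face itself.
import math

# Hypothetical labels for the eight emoji classes (the article does
# not name them).
EXPRESSIONS = [
    "neutral", "happy", "sad", "angry",
    "surprised", "fearful", "disgusted", "winking",
]

def classify_expression(landmarks, centroids):
    """Return the index of the centroid nearest to `landmarks`.

    `landmarks` is a flat list of coordinates (42 points -> 84 values);
    `centroids` is one reference vector per expression class.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(centroids)), key=lambda i: dist(landmarks, centroids[i]))

# Toy demo with made-up data: 8 centroids of 84 values each.
centroids = [[float(i)] * 84 for i in range(len(EXPRESSIONS))]
observed = [2.1] * 84  # closest to centroid index 2
print(EXPRESSIONS[classify_expression(observed, centroids)])  # prints "sad"
```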
In tests conducted on nine volunteers, the new technology was used alongside an existing state-of-the-art setup that tracks the position of facial landmarks. The latter does so using front-facing cameras that capture images of the whole unmasked face.
Compared to that system, C-Face had a margin of error of less than 0.8 mm. Additionally, the emojis it displayed reflected the person's actual expression with an accuracy rate of over 88 percent. That figure should increase as the system is developed further.
Down the road, C-Face could also be used for applications such as the silent, hands-free control of computers in quiet settings like libraries, with users making specific expressions for specific commands.