from the I-saw-what-you-said-there dept
It’s hardly news to Techdirt readers that China carries out censorship on a massive scale. What may be more surprising is that its censorship extends to even the most innocuous aspects of life. The ChinAI Newsletter, which provides translations by Jeff Ding of interesting texts from the world of Chinese AI, flags up one such case. It concerns a Chinese online TV series called “The Bad Kids”. Here’s how the site Sixth Tone describes it:
Since its first episodes were released on China’s Netflix-like video platform iQiyi in mid-June, “The Bad Kids” has earned sweeping praise for its plot, cinematography, casting, dialogue, pacing, and soundtrack. It’s also generated wide-ranging online discussion on human nature due to the psychology and complex motivations of its characters.
However, as the Sixth Tone article points out, the authorities required “a lot of changes” for the series to be approved. One fan of “The Bad Kids”, Eury Chen, wanted to find out what exactly had been changed, and why that might be. In a blog post translated by ChinAI, Chen explained how he went about this:
Two days ago, I watched the TV series “The Bad Kids” in one go, and the plot was quite exciting. The disadvantage is that in order for the series to pass the review (of the National Radio and Television Administration), the edited sequences for episodes 11 and 12 were disrupted, even to the point that lines were modified, so that there are several places in the film where the actor’s mouth movements and lines are not matched, which makes the plot confusing to people. Therefore, I tried to restore the modified lines through artificial intelligence technology, thereby restoring some of the original plot, which contained a darker truth.
The AI technology involved Google’s Facemesh package, which can track key “landmarks” on faces in images and videos. By analyzing the lip movements, it is possible to predict the sound of each Chinese syllable. However, one feature of Chinese makes it particularly hard to lipread with AI: the language has a great many homophones (same sound, different meanings). To get around this problem, Chen explored the possible sequences of Chinese characters to find the ones that best matched the plot at that point. As his blog post (and the ChinAI translation) explains, this allowed him to work out why certain lines were blocked by the Chinese authorities. It turns out it was for totally petty reasons.
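Chen’s actual code isn’t reproduced here, but the disambiguation step he describes can be sketched in a few lines of Python: enumerate every character sequence consistent with the lipread syllables, then score each candidate against vocabulary known from the plot. Everything below — the tiny `HOMOPHONES` table, the helper names, and the crude overlap-based scoring — is illustrative only, not Chen’s method or data:

```python
from itertools import product

# Toy homophone table: a pinyin syllable maps to several plausible
# Chinese characters. A real pipeline would generate these hypotheses
# from lip-landmark analysis, not from a hand-written table.
HOMOPHONES = {
    "ta": ["他", "她", "它"],   # "he", "she", "it" -- all pronounced "tā"
    "shi": ["是", "事", "十"],  # "is", "matter", "ten"
}

def candidate_phrases(pinyin_seq):
    """Enumerate every character sequence consistent with the syllables."""
    options = [HOMOPHONES.get(s, [s]) for s in pinyin_seq]
    return ["".join(chars) for chars in product(*options)]

def best_match(pinyin_seq, context_vocab):
    """Pick the candidate sharing the most characters with known plot vocabulary."""
    def score(phrase):
        return sum(1 for ch in phrase if ch in context_vocab)
    return max(candidate_phrases(pinyin_seq), key=score)
```

So for the two-syllable input `["ta", "shi"]` there are nine possible character sequences, and supplying a context vocabulary containing 他 and 是 selects “他是” (“he is”). The human-in-the-loop step in Chen’s account plays the role of this scoring function, using knowledge of the plot rather than a simple overlap count.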
Perhaps more interesting than the details of this particular case is the fact that AI could carry out most of the lipreading, leaving human knowledge only to choose among the list of possible Chinese phrases. Most languages don’t require that extra stage, since few have as many homophones as Chinese. Indeed, for English phrases, researchers claimed as far back as 2016 that their AI-based LipNet achieved “95.2% accuracy in sentence-level, overlapped speaker split task, outperforming experienced human lipreaders”.
It’s clear that we are fast approaching a situation where AI is able to lipread a video in any language. That is obviously a boon for the deaf or hard of hearing, but there’s a serious downside. It means that soon all those millions of high-quality CCTV systems around the world will not only be able to use facial recognition software to work out who we are, but also run AI modules to lipread what we are saying.