Mind Matters Natural and Artificial Intelligence News and Analysis

So lifelike…

Another firm caught using humans to fake AI

iFlytek, a Chinese speech recognition software company, was accused last week of hiring human interpreters to fake the output of its supposedly AI-powered simultaneous interpretation tools:

In an open letter posted on Quora-like Q&A platform Zhihu, interpreter Bell Wang claimed he was one of a team of simultaneous interpreters who helped translate the 2018 International Forum on Innovation and Emerging Industries Development on Thursday. The forum claimed to use iFlytek’s automated interpretation service.

While a Japanese professor spoke in English at the conference on Thursday morning, a screen behind him showed both an English transcription of what he was saying, and what appeared to be a simultaneous translation into Chinese which was credited to iFlytek. Wang claims that the Chinese wasn’t a simultaneous translation, but was instead a transcription of an interpretation by himself and a fellow interpreter. “I was deeply disgusted,” Wang wrote in the letter.

Qian Zhecheng, “AI Company Accused of Using Humans to Fake Its AI” at Sixth Tone

Byzantine claims and counterclaims followed as other interpreters came forward with similar stories. According to Qian, something similar happened last year.

… the presenter stalled the audience with jokes until the girls who provided “Emotionally Meaningful Dialogue 3a” were back from coffee …

Jonathan Bartlett of the Blyth Institute has noted that “many apparently automated services that have a considerable ‘wow’ factor are actually outsourcing pieces of the puzzle to humans who do the creative work,” something along the lines of Amazon’s Mechanical Turk. In some cases, the issue is that a company has promised potential investors more high tech than it can deliver. In the meantime, such a firm may be exploiting low-paid workers stuck with dull jobs.

The drive to replace as many positions as possible with AI will slow because of the universal tendency for trends to level off. For one thing, not everything that is possible seems worthwhile. The robot priest will seem ridiculous to religious people; the politically correct chatbot isn’t a friend. And many people prefer not to work alone even if they could. But it takes a while for the novelty to wear off.

Note: Sixth Tone? “There are five tones in Mandarin Chinese. When it comes to coverage of China, Sixth Tone believes there is room for other voices that go beyond buzzwords and headlines to tell the uncommon stories of common people.”

Hat tip: Eric Holloway

See also: Sometimes the ‘bots turn out to be humans. That “lifelike” effect was easier to come by than some might think

and

“Artificial” artificial intelligence: What happens when AI needs a human I? (Jonathan Bartlett)
