Those sometimes annoying puzzles called CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) were originally created to prevent spammers and other Net bottom feeders from creating bogus webmail accounts and poisoning online forums and comment sections with spam. Some creative types, though, among them Google and the Civil Rights Defenders, have expanded the function of the technology beyond security.
When Google, for example, launched its free reCAPTCHA service, it added a component to help turn printed books into digital ones. It does that by creating CAPTCHAs that consist of two words. One word is known to the CAPTCHA engine; the other is taken from a printed work that Google's optical character recognition (OCR) software could not decipher.
When a solver types in the two words and the known word is entered correctly, the second word is assumed to be correct as well. In that way, the CAPTCHA not only thwarts spammers but also helps Google solve the knotty problem of what to do when its OCR software fails to recognize a scanned word.
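The two-word scheme described above can be sketched roughly like this. This is a minimal illustration of the idea, not Google's actual implementation; the function name and return shape are hypothetical:

```python
# Toy sketch of reCAPTCHA's two-word idea (hypothetical names,
# not Google's actual service).

def check_recaptcha(response, known_word):
    """Return (human_verified, guessed_unknown_word).

    The solver types two words separated by a space. Only the known
    (control) word can be checked; if it matches, the solver is treated
    as human and the second word is accepted as a plausible reading of
    the scanned word that OCR failed on.
    """
    parts = response.strip().split()
    if len(parts) != 2:
        return False, None
    first, second = parts
    if first.lower() == known_word.lower():
        # Trust the human's transcription of the unknown word.
        return True, second
    return False, None
```

So `check_recaptcha("morning upon", "morning")` yields `(True, "upon")`: the solver is verified, and "upon" is recorded as a candidate transcription of the word OCR couldn't read. (The real service also cross-checked transcriptions from many solvers before accepting one.)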
The Civil Rights Defenders (CRD) have added another wrinkle to CAPTCHA, one that’s likely to make these devilish puzzles even more frustrating than some of them already are.
The CRD CAPTCHA presents a solver with three words. The words, set in typical CAPTCHA distressed type, are possible answers to a question about a situation in which the United Nations Universal Declaration of Human Rights has been violated. If the solver chooses the word that shows compassion and empathy, the CAPTCHA is solved and the solver may continue with their business on the website. If they choose a word that suggests they aren't concerned about human rights, they'll be blocked from the function the CAPTCHA protects at the site.
For example, one sample CAPTCHA goes like this: Members of the Russian group Pussy Riot were sentenced to two years in prison after staging a 41-second performance in a Moscow church. How does that make you feel? The solver is given three choices: sorry, solid and proud. Even if you don't share the CRD's views, you can guess that the appropriate answer would be "sorry." That is, if you're not an auto bot trying to crack CAPTCHAs with machine intelligence and OCR, or a worker on a CAPTCHA plantation in Asia with a marginal knowledge of English solving puzzles for two dollars a day.
While the CRD’s CAPTCHA scheme allows webmasters to let their visitors know where their site stands on human rights, it’s not very practical for politically agnostic sites. However, the combination of using a question and distressed text in a CAPTCHA has some merit. For example, a question such as “Who is Popeye’s girlfriend?” could have three answers: beet, olive and lemon. Even if you don’t know the answer, you can always Google it. Auto bots don’t have that option.
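A question-and-choices CAPTCHA along those lines could be sketched as follows. This is a toy illustration of the concept, not CRD's or anyone's production scheme; the data structure and function names are invented for the example:

```python
import random

# Toy sketch of a question-plus-distressed-text CAPTCHA
# (hypothetical structure, invented for illustration).

QUESTIONS = [
    {
        "question": "Who is Popeye's girlfriend?",
        # In practice these would be rendered as distressed images.
        "choices": ["beet", "olive", "lemon"],
        "answer": "olive",
    },
]

def new_challenge():
    """Pick a question and shuffle the order of its choices."""
    q = random.choice(QUESTIONS)
    choices = q["choices"][:]
    random.shuffle(choices)
    return {"question": q["question"], "choices": choices}

def check_answer(question, response):
    """A bot that merely OCRs the three words still has to choose one;
    guessing blindly succeeds only one time in three."""
    for q in QUESTIONS:
        if q["question"] == question:
            return response.strip().lower() == q["answer"]
    return False
```

The point of the design is that deciphering the distressed text is no longer enough: the solver must also connect the question to the right word, which is trivial for a human (or a human with a search engine) but not for a character-recognition pipeline.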
Auto bots succeed because all they have to do is decipher some characters; they don't have to make choices that require human intelligence. Picking the correct answer to a simple question from three choices requires exactly that kind of intelligence.
CAPTCHA coolies would also be foiled. Most of them are simply typing the letters they see on their screens. They may guess the correct answer to a question occasionally, but not often enough to make it worthwhile to either them or their employers.