Why can't bots check “I am not a robot” checkboxes?  (Answer from Quora Social Media)

Discussion created by elstaci on Mar 16, 2019
Latest reply on Mar 16, 2019 by black_zion
I don't know how credible this answer is, but it seems fairly convincing. This doesn't concern GPUs or CPUs, but it's an interesting question about browser security that seems like it should be easy to answer and isn't:
Oliver Emberton, CEO at Silktide

How complicated can one little checkbox be? I mean it’s just OH MY GOD YOU CAN’T EVEN IMAGINE.

For starters, Google invented an entire virtual machine – essentially a simulated computer inside a computer – just to run that checkbox.

That virtual machine uses their own language, which they encrypt twice.

This is no simple encryption. Normally when you password protect something, you might use a key to decode it. Google’s invented language is decoded with a key that is changed by the process of reading the language, and the language also changes as it is read.
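To make the idea concrete, here is a toy "rolling key" decoder in the spirit of what's described above. The key mutates with every byte that is read, so the same byte value decodes differently at different positions. This is purely an illustrative sketch, not Google's actual scheme.

```python
# Toy rolling-key cipher: the act of decoding a byte mutates the key,
# so the decoding process itself changes as the data is read.
# Illustrative only -- NOT Google's real algorithm.

def rolling_encode(data: bytes, key: int) -> bytes:
    out = bytearray()
    for b in data:
        out.append(b ^ (key & 0xFF))
        # mutate the key using the plaintext byte just consumed
        key = (key * 31 + b) & 0xFFFFFFFF
    return bytes(out)

def rolling_decode(data: bytes, key: int) -> bytes:
    out = bytearray()
    for b in data:
        plain = b ^ (key & 0xFF)
        out.append(plain)
        # same mutation rule, driven by the recovered plaintext byte
        key = (key * 31 + plain) & 0xFFFFFFFF
    return bytes(out)
```

Because the key evolves with the data, you can't decode the middle of the stream without having read everything before it — which is exactly what makes this style of obfuscation painful to analyse.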

Google combines (hashes) that key with the web address you’re visiting, so you can’t use a CAPTCHA from one website to bypass another. It further combines that with “fingerprints” from your browser, catching microscopic variations in your computer that a bot would struggle to replicate (like CSS rules).
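The domain-binding trick can be sketched in a few lines: hash the session key together with the site it was issued for, and a token solved on one site won't verify on another. The function and key names here are made up for illustration.

```python
# Hypothetical sketch of binding a CAPTCHA token to a domain.
# SECRET, issue_token, and verify_token are illustrative names,
# not part of any real reCAPTCHA API.
import hashlib
import hmac

SECRET = b"server-side-secret"  # assumed secret held by the verifier

def issue_token(session_key: str, domain: str) -> str:
    # hash the session key together with the domain it is valid for
    msg = f"{session_key}|{domain}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_token(token: str, session_key: str, domain: str) -> bool:
    # constant-time comparison to avoid timing leaks
    return hmac.compare_digest(token, issue_token(session_key, domain))
```

A token minted for `example.com` simply fails verification when presented from any other domain, which is why you can't farm CAPTCHAs on one site to spam another.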

All this is just to make it hard for you to understand what Google is even doing. You need to write tools just to analyse it. (Fortunately people did just that).

It turns out they record and analyse:

  • Your computer’s timezone and time
  • Your IP address and rough location
  • Your screen size and resolution
  • What browser you’re using
  • What plugins you’re using
  • How long the page took to display
  • How many key presses, mouse clicks, and tap/scrolls were made

And … some other stuff we don’t quite understand.
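Signals like the ones above are typically boiled down into a single fingerprint value. A minimal sketch, with illustrative field names (not the actual signals reCAPTCHA collects):

```python
# Sketch: collapsing environment signals into one fingerprint hash.
# The field names below are examples, not reCAPTCHA's real schema.
import hashlib
import json

def fingerprint(signals: dict) -> str:
    # serialise deterministically so identical environments hash identically
    canonical = json.dumps(signals, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

env = {
    "timezone": "UTC-5",
    "screen": "1920x1080",
    "user_agent": "Mozilla/5.0 ...",
    "plugins": ["pdf-viewer"],
    "page_load_ms": 412,
    "key_presses": 14,
    "mouse_clicks": 3,
}
```

Change any one signal and the hash changes completely, so even "microscopic variations" between machines yield distinct fingerprints.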

We do know they ask your browser to draw an invisible image and send it to Google for verification. The image contains things like a nonsense font, which – depending on your computer – will fall back to a system font, and be drawn very differently. They add to this a 3D image with a special texture, drawn in such a way that the result varies between computers:

Source: https://hovav.net/ucsd/dist/canv...
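The server never needs to see *how* your machine renders; it only needs a digest of the result. A toy version, with the rendering step simulated as raw bytes:

```python
# Toy canvas fingerprinting: hash the rendered pixel buffer.
# Two machines with different fonts/GPU drivers produce different
# buffers for the same drawing instructions, hence different hashes.
# The "rendering" below is simulated for illustration.
import hashlib

def canvas_hash(pixel_buffer: bytes) -> str:
    return hashlib.sha256(pixel_buffer).hexdigest()

# pretend each machine rendered the same text with a different fallback font
machine_a = b"rendered with DejaVu Sans"
machine_b = b"rendered with Arial"
```

Two different machines drawing the same invisible image end up with different hashes, while the same machine reproduces its own hash consistently — a stable per-device signature.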

Finally they combine all of this data with their knowledge of the person using the computer. Almost everyone on the Internet uses something owned by Google – search, mail, ads, maps – and as you know Google Tracks All Of Your Things™️. When you click that checkbox, Google reviews your browser history to see if it looks convincingly human.

This is easy for them, because they’re constantly observing the behaviour of billions of real people.

How exactly they check all this information is impossible to know, but they’re almost certainly using machine learning (AI) on their private servers, which is impossible for an outsider to replicate. I wouldn’t be surprised if they also built an adversarial AI to try to beat their own AI, and have both learn from each other.

So why is all this hard for a bot to beat? Because now you’ve got a ridiculous amount of messy human behaviours to simulate, and they’re almost unknowable, and they keep changing, and you can’t tell when. Your bot might have to sign up for a Google service and use it convincingly on a single computer, which should look different from the computers of other bots, in ways you don’t understand. It might need convincing delays and stumbles between key presses, scrolling and mouse movements. This is all incredibly difficult to crack and teach a computer, and complexity comes at a financial cost for the spammer. They might break it for a while, but if it costs them (say) $1 per successful attempt, it’s usually not worth them bothering.
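One small piece of that "messy human behaviour" problem — timing — can be sketched. Real typing has noisy, irregular gaps between key presses; a naive bot fires events at machine-perfect intervals. Modelling human delays with a log-normal distribution is a common assumption here, not Google's actual check.

```python
# Sketch: human inter-keystroke delays are noisy; uniform timing is a
# giveaway. Log-normal is an illustrative modelling choice.
import random

def human_like_delays(n: int, seed: int = 0) -> list[float]:
    rng = random.Random(seed)
    # log-normal gives mostly ~100-300 ms gaps with occasional long pauses
    return [rng.lognormvariate(-1.8, 0.5) for _ in range(n)]

def looks_robotic(delays: list[float], tol: float = 1e-3) -> bool:
    # naive detector: perfectly uniform timing screams "bot"
    return max(delays) - min(delays) < tol
```

Of course, a real detector would look at far more than variance — which is the spammer's problem: every dimension of behaviour has to be faked convincingly, and the cost of faking them all adds up.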

Still, people do break Google’s protection. CAPTCHAs are an ongoing arms race that neither side will ever win. The AI technology which makes Google’s approach so hard to fool is the same technology that is adapted to fool it.

Just wait until that AI is convincing enough to fool you.

Sweet dreams, human.

(Everything here is my best understanding of reCAPTCHA 2, which has certainly got smarter since, and Google is on v3 now. I don’t have a clue how that works, but it probably involves 11-dimensional hypercubes in a virtual multiverse).