If you’ve been on edge about AI research and its progression, this may be a bit concerning to you. According to the programmers involved, an AI was so clever that it found a way to cheat while completing its task.

A machine learning agent that was meant to transform aerial images into street maps was found to be cheating by hiding information that it would need later. The findings come from researchers at Stanford and Google. In early tests the agent was performing extremely well, perhaps too well, according to TechCrunch. That eventually tipped the team off, as they noticed oddities in the reconstructed images.

TechCrunch noted that the agent wasn’t being malicious or anything of the sort; the behavior boiled down to a common problem with computers in general: you have to be extremely specific about what you want.

TechCrunch reported as follows on the topic at hand:

Although it is very difficult to peer into the inner workings of a neural network’s processes, the team could easily audit the data it was generating. And with a little experimentation, they found that the CycleGAN had indeed pulled a fast one.

The intention was for the agent to be able to interpret the features of either type of map and match them to the correct features of the other. But what the agent was actually being graded on (among other things) was how close an aerial map was to the original, and the clarity of the street map.

So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect.

In fact, the computer is so good at slipping these details into the street maps that it had learned to encode any aerial map into any street map! It doesn’t even have to pay attention to the “real” street map — all the data needed for reconstructing the aerial photo can be superimposed harmlessly on a completely different street map, as the researchers confirmed.

This practice of encoding data into images isn’t new; it’s an established science called steganography, and it’s used all the time to, say, watermark images or add metadata like camera settings. But a computer creating its own steganographic method to evade having to actually learn to perform the task at hand is rather new. (Well, the research came out last year, so it isn’t new new, but it’s pretty novel.)

One could easily take this as a step in the “the machines are getting smarter” narrative, but the truth is it’s almost the opposite. The machine, not smart enough to do the actual difficult job of converting these sophisticated image types to each other, found a way to cheat that humans are bad at detecting. This could be avoided with a more stringent evaluation of the agent’s results, and no doubt the researchers went on to do that.
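The trick described in the quote is, at heart, classic steganography. The CycleGAN’s learned encoding was far subtler than this, but a minimal sketch of the same idea, hiding one image in the low-order bits of another, shows why the changes are invisible to a person yet trivially recoverable by a machine (the code below is my own illustration, not the researchers’ implementation):

```python
import numpy as np

def hide(cover, secret):
    """Stash the secret image's 4 most-significant bits in the cover
    image's 4 least-significant bits. The result looks like the cover."""
    return (cover & 0xF0) | (secret >> 4)

def reveal(stego):
    """Recover an approximation of the secret from the low bits."""
    return (stego & 0x0F) << 4

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (8, 8), dtype=np.uint8)   # the "street map"
secret = rng.integers(0, 256, (8, 8), dtype=np.uint8)  # the "aerial photo"

stego = hide(cover, secret)
recovered = reveal(stego)

# Each pixel of the stego image differs from the cover by at most 15
# brightness levels (hard to see), yet the secret comes back to within
# 15 levels of the original.
assert np.max(np.abs(stego.astype(int) - cover.astype(int))) <= 15
assert np.max(np.abs(recovered.astype(int) - secret.astype(int))) <= 15
```

As the quote notes, the network didn’t use anything as crude as fixed bit positions; it invented its own encoding in subtle color variations, which is exactly why the humans grading its output missed it.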

While this isn’t anything to take too seriously, it does raise questions for the future. Extra care in programming and evaluation could come in handy in a number of areas. We have to be quite specific with AI and make sure we cover all possible situations and outcomes.
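The grading criterion the quote mentions is a cycle-consistency check: aerial → street → aerial should reproduce the original. A toy sketch (my illustration, with an assumed pixel-error score, not the researchers’ code) shows why that check alone under-specifies the task:

```python
import numpy as np

def cycle_score(original, round_trip):
    """Lower is better: mean absolute pixel error after the
    aerial -> street -> aerial round trip."""
    return float(np.mean(np.abs(original.astype(float) -
                                round_trip.astype(float))))

rng = np.random.default_rng(0)
aerial = rng.integers(0, 256, (32, 32), dtype=np.uint8)

# An honest translator must rebuild the aerial features from the street
# map alone. A "cheating" pipeline that smuggles the aerial data through
# hidden channels scores perfectly -- this metric can't tell them apart.
smuggled_round_trip = aerial.copy()
assert cycle_score(aerial, smuggled_round_trip) == 0.0
```

Being “quite specific,” then, means grading not just the round trip but also whether the intermediate street map genuinely carries only street-map information.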

