Artificial intelligence doesn't quite "dream" the same way we humans do, but that doesn't stop it from conjuring up some pretty wild visuals. Recently, Google has been testing artificial neural networks designed to recognize and describe images. To understand how the system "thinks," the team turned the process around and asked the network to generate images based on prompts. The results are a bizarre and often mesmerizing look into how an A.I. network interprets the world.
Titled "Inceptionism: Going Deeper into Neural Networks," the Google research team's new report delves deeper into how artificial neural networks, specifically for image recognition software, operate.
The team "trains" the network by showing it many example images of the concept they want it to learn. For example, to teach the A.I. what "mountain" means, they would use a variety of different photos of mountains.
However, things get weird when you ask the network to create images based on what it's learned. In one case, when asked to create images of "dumbbells," the network produced an amalgamation of metal and human arms, likely because every image of dumbbells it'd seen included an arm lifting one.
For some tests, the team asked the neural network to find examples of things in images that didn't actually contain them, forcing it to warp the image to produce the desired subject.
For other tests, Google had the network produce random images without prompts based on random neural static.
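The warping trick described above boils down to gradient ascent on the input pixels: repeatedly nudge the image so a chosen feature detector responds more strongly, whether you start from a real photo or from random static. Here is a minimal pure-Python sketch of that idea using a hypothetical linear "detector" (a toy stand-in, not Google's actual model or code):

```python
import random

def activation(image, feature):
    """Response of a toy linear feature detector: a dot product over pixels."""
    return sum(p * f for p, f in zip(image, feature))

def amplify(image, feature, steps=100, lr=0.1):
    """Gradient ascent on the pixels: warp the image so the detector
    'sees' more of its feature. For a linear detector, the gradient of
    the activation with respect to each pixel is just the feature itself."""
    img = list(image)
    for _ in range(steps):
        img = [min(1.0, max(0.0, p + lr * f))  # clip pixels to [0, 1]
               for p, f in zip(img, feature)]
    return img

random.seed(0)
noise = [random.random() for _ in range(16)]          # 4x4 "neural static", flattened
stroke = [1.0 if i % 5 == 0 else 0.0 for i in range(16)]  # detector for a diagonal stroke

before = activation(noise, stroke)
after = activation(amplify(noise, stroke), stroke)
print(after > before)  # True: the warped image excites the detector more
```

A real DeepDream run does the same loop, except the "detector" is a layer of a trained convolutional network and the gradient is computed by backpropagation, which is why recognizable shapes like eyes and animals bubble up out of the noise.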
Google refers to such randomly generated images as the artificial neural network's "dreams."
Turns out, A.I. may just dream of electric sheep, along with warped birds and lots of eyes...
...And plenty of other crazy visuals. This must be what watching "Lord of the Rings" on acid is like.
Being a piece of modernist art, it's probably acceptable to take a few liberties with Edvard Munch's "The Scream," but what's up with the aggressive use of eyes? And is that a...dog on the left side?
Other results are strangely beautiful and intricate. The Google team plans to keep observing what kinds of images the network's deeper layers produce as they continue training it to recognize images better.