Friday, June 19, 2015

Pig-snails in the sky: How Google is exploring the essence of artificial neural networks http://revealedtech.com/computer-system/pig-snails-in-the-sky-how-google-is-exploring-the-essence-of-artificial-neural-networks/

AI dreams

Google is deep into artificial neural network research, but it’s also not afraid to admit there is still a great deal of uncertainty over how these collections of synthetic neurons work. Neural networks have made great strides in image and speech recognition, with some models being better at certain tasks than others. To help find out why that is, Google Research is pulling apart the layers of neural networks, feeding in data at different points, and even running the whole thing in reverse. In addition to teasing out the mathematical intricacies of neural networks, this work also produces some very cool images.

Most neural networks consist of 10-30 layers of artificial neurons, each with a different task. So if you’re trying to design a network that recognizes objects in an image, you might start with a layer that looks for edges, then another layer that finds basic shapes or colors. The final layers use the data points provided by all the previous ones to search for more abstract things — a building, a face, or whatever else might be in the image.
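That layered pipeline can be sketched with a toy NumPy example. The two "layers" below (a hand-written edge filter followed by a pooling step) are illustrative stand-ins chosen for this post, not Google's actual architecture; real networks learn their filters from data.

```python
import numpy as np

def edge_layer(img):
    # Layer 1: respond to vertical edges via a simple Sobel-like filter
    kernel = np.array([[-1, 0, 1],
                       [-2, 0, 2],
                       [-1, 0, 1]], dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * kernel)
    return np.maximum(out, 0)  # ReLU nonlinearity

def shape_layer(feat):
    # Layer 2: pool edge responses into coarser 2x2 blocks --
    # a crude stand-in for detecting larger shapes
    h, w = feat.shape
    return feat[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).max(axis=(1, 3))

# A tiny 8x8 "image": a bright square on a dark background
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0

edges = edge_layer(img)      # low-level features
shapes = shape_layer(edges)  # coarser, more abstract features
print(edges.shape, shapes.shape)  # (6, 6) (3, 3)
```

Each layer's output is smaller and more abstract than its input, which is the pattern the article describes: edges first, shapes next, whole objects at the top.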

noise-to-banana

One way to test these networks is to run things backwards. Instead of giving the network an image and asking for an interpretation, you give it an interpretation and ask for an image. In the example above, researchers asked a neural network to turn an image of random noise into a banana. The system had been shown bananas during training, so it knows what sort of colors, edges, and shapes to look for. So what you’re looking at is a computer’s idea of the essence of a banana, which can be important in tweaking the design and training of the network to recognize things more effectively. There are some more examples of the same process that are pretty neat too.
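At its heart, the noise-to-banana trick is gradient ascent on the input rather than on the network's weights. Here is a minimal NumPy sketch of that idea; the single random linear layer stands in for a trained network, and the "banana" class index is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "trained network": one linear layer scoring 3 classes.
# Row 1 plays the role of the banana class learned during training.
W = rng.normal(size=(3, 16))

def class_score(x, k):
    return W[k] @ x

# Start from random noise and nudge the *input* (not the weights)
# in the direction that raises the banana score.
x = rng.normal(size=16)
banana = 1
before = class_score(x, banana)
for _ in range(100):
    grad = W[banana]   # d(score)/dx for a linear scorer
    x += 0.1 * grad    # gradient ascent on the image
after = class_score(x, banana)
print(before, after)   # the banana score climbs steadily
```

Real systems do the same thing through many nonlinear layers and add regularizers so the result looks like a natural image rather than adversarial noise, but the loop is the same: freeze the weights, differentiate with respect to the pixels, and climb.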

machine vision

It’s also possible to feed in images without indicating the feature the network is supposed to be amplifying. Then by selecting the enhanced output of different layers, you can see how each one works independently of the others. For example, the lower layers that deal with basic shapes and edges generate simple, ornament-like patterns on the image.

If you isolate the higher layers that deal with complicated objects, you get really bizarre stuff. If researchers ask for an enhancement of a basic image (like the clouds below), the layer basically gets into a feedback loop, enhancing the image over and over to make existing features look more like what the network thinks it should see. In this way, a neural network can over-interpret an image, finding entire objects that aren’t there. The network in the example below is trained to identify animals, so it has created several weird hybrids of animals out of the random shapes in the sky.
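That feedback loop can be sketched as gradient ascent on a layer's activation energy: whatever the layer already responds to gets amplified, step after step. In this toy NumPy version a random ReLU layer stands in for a trained one, so the sizes and the "cloud photo" input are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for one inner layer of a trained network
W = rng.normal(size=(8, 16))

def layer(x):
    return np.maximum(W @ x, 0.0)  # ReLU activations

def activation_energy(x):
    a = layer(x)
    return 0.5 * np.sum(a ** 2)

x = rng.normal(size=16) * 0.1  # a faint input, like a hazy cloud photo
before = activation_energy(x)
for _ in range(50):
    a = layer(x)
    grad = W.T @ a  # gradient of the energy w.r.t. the image
    x += 0.05 * grad / (np.linalg.norm(grad) + 1e-8)  # small normalized step
after = activation_energy(x)
print(before, after)  # the layer's response grows each iteration
```

Because each step pushes the image toward whatever excites the layer most, faint accidental patterns (a cloud that vaguely resembles a snout) get reinforced into full-blown objects.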

Clouds


Funny-Animals

This same process of relentless enhancement can also be applied iteratively on its own outputs, so you basically end up with images created completely by the bias of a network (at the top of this post). This process provides researchers with clues about how different models “think” and recognize patterns. It’s also strangely beautiful.


Source Article from http://www.extremetech.com/extreme/208595-pig-snails-in-the-sky-how-google-is-exploring-the-essence-of-artificial-neural-networks
