<p><img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8xODQwODUyNC9vcmlnaW4uanBnIiwiZXhwaXJlc19hdCI6MTYxOTg2NzE1NX0.7tSbcSewsmAEB3P1iNy-GK2ldYWEADF79AX2EcaKFzg/img.jpg?width=980" id="eb01e" class="rm-shortcode" data-rm-shortcode-id="31450292a3bc90ae1e70b193e7db3460" data-rm-shortcode-name="rebelmouse-image"><br><br>Google wanted its neural network to become so good at detection that it could pick out objects an image didn’t actually contain (think of seeing the outline of a dog in the clouds). Deep Dream lets the network alter the image itself, amplifying whatever patterns it thinks it recognizes, so the AI ends up “finding” objects that were never really there. An image might show a foot, but in a patch of a few pixels the network might see the outline of a dog’s nose, and then strengthen that impression.<br><br>So when researchers asked the network what else it could see in an image of a mountain, tree, or plant, it came up with interpretations like these:<br><br><img type="lazy-image" data-runner-src="https://assets.rebelmouse.io/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8xODQwODUyNS9vcmlnaW4ucG5nIiwiZXhwaXJlc19hdCI6MTYxOTA1NjkwMX0.Il_qt54jJhjT735wcfnpXwDC5-CH_zqLh5cPQ1M2ofc/img.png?width=980" id="b6106" class="rm-shortcode" data-rm-shortcode-id="77ec0e5c396700a47391f3894ec48319" data-rm-shortcode-name="rebelmouse-image"><br><em>(Photo Credit: Michael Tyka/Google)</em></p> <p>“The techniques presented here help us understand and visualize how neural networks are able to carry out difficult classification tasks, improve network architecture, and check what the network has learned during training,” software engineers Alexander Mordvintsev and Mike Tyka, and intern Christopher Olah <a 
href="https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html" target="_blank">wrote in a post about Deep Dream</a>. “It also makes us wonder whether <strong>neural networks could </strong>become a tool for artists—a new way to remix visual concepts—or perhaps even <strong>shed a little light on the roots of the creative process in general.”</strong></p> <p>Just for fun, Google has opened the tool up to the public; you can generate your own Deep Dream art at <a href="http://deepdreamgenerator.com/" target="_blank">deepdreamgenerator.com</a>.</p> <div class="video-full-card-placeholder" data-slug="the-future-of-machine-learning" style="border: 1px solid #ccc;">
<div class="rm-shortcode" data-media_id="Xf4Q6c3a" data-player_id="FvQKszTI" data-rm-shortcode-id="431dd6c8e88b4ff5fb9280ca053d6666">
<div id="botr_Xf4Q6c3a_FvQKszTI_div" class="jwplayer-media" data-jwplayer-video-src="https://content.jwplatform.com/players/Xf4Q6c3a-FvQKszTI.js">
<img src="https://cdn.jwplayer.com/thumbs/Xf4Q6c3a-1920.jpg" class="jwplayer-media-preview">
</div>
<script src="https://content.jwplatform.com/players/Xf4Q6c3a-FvQKszTI.js"></script>
</div>
<strong>the-future-of-machine-learning</strong></div>
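The core move described above, nudging an image’s pixels uphill along the gradient of a network activation so that faint patterns get amplified, can be sketched with a toy example. Everything here is illustrative: a single hand-written 3×3 filter stands in for one channel of a trained network, whereas the real Deep Dream runs gradient ascent through a deep convolutional network such as Inception.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one channel of a trained network: a 3x3
# edge-detecting filter (hypothetical, for illustration only).
kernel = np.array([[-1., -1., -1.],
                   [-1.,  8., -1.],
                   [-1., -1., -1.]])

def activation(img):
    """Sum of squared filter responses over every 3x3 window."""
    h, w = img.shape
    total = 0.0
    for i in range(h - 2):
        for j in range(w - 2):
            resp = np.sum(img[i:i+3, j:j+3] * kernel)
            total += resp ** 2
    return total

def activation_grad(img):
    """Gradient of activation() with respect to the image pixels."""
    grad = np.zeros_like(img)
    h, w = img.shape
    for i in range(h - 2):
        for j in range(w - 2):
            resp = np.sum(img[i:i+3, j:j+3] * kernel)
            grad[i:i+3, j:j+3] += 2 * resp * kernel
    return grad

# Start from noise (in practice you would start from a photo).
img = rng.normal(size=(16, 16))
before = activation(img)

# Gradient *ascent* on the pixels: change the image, not the
# network, so whatever the filter "sees" gets exaggerated.
for _ in range(20):
    g = activation_grad(img)
    img += 0.01 * g / (np.abs(g).mean() + 1e-8)

after = activation(img)
print(after > before)  # True: the filter's activation grew
```

The step size is normalized by the mean absolute gradient, a common trick in Deep Dream implementations that keeps the per-iteration pixel change at a roughly constant scale regardless of how strong the activation already is.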