# Deep Dream over Text
*Image by Author | 100-dimensional embeddings in 2-D space*

Moreover, the final sentence embedding is now more similar to the red dots (negative words) than to the green dots (positive words). The graph clearly shows that the embedding moved away from the positive words and toward the negative words, which is in tune with the model's prediction. Try the embeddings yourself on the TensorFlow Projector.
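The Projector loads embeddings from plain TSV files, so one minimal way to explore your own dreamed embeddings there is to export a vectors file and a matching metadata file. The sketch below is illustrative; the function, file names, and sample words are not from the original article.

```python
# A minimal sketch of exporting embeddings for the TensorFlow Projector
# (projector.tensorflow.org). Names and paths here are illustrative.
import numpy as np

def export_for_projector(vectors, labels,
                         vec_path="vectors.tsv", meta_path="metadata.tsv"):
    """Write an (n, dim) array and its n labels as the two TSV files
    the Projector's "Load" dialog accepts."""
    np.savetxt(vec_path, vectors, delimiter="\t")
    with open(meta_path, "w") as f:
        f.write("\n".join(labels))

# Example with made-up 100-dimensional vectors for a handful of words.
export_for_projector(np.random.rand(5, 100),
                     ["unique", "great", "mistake", "dirty", "bad"])
```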
Although the initial embeddings had more or less the same similar words for both sentences, even though the two sentences conveyed very different meanings, the final sentence embeddings showed some interesting patterns:

1. The key observation is that initially the sentence embedding sat in between the positive and the negative words, but as dreaming progressed the embedding was pushed away from the negative words.
2. The word embeddings after dreaming became similar to words that match the model's prediction: a positive prediction was pushed near words like unique, great, and celebrated, while a negative prediction was pushed near words like mistake, dirty, and bad. These "similar words" can be found with a simple nearest-neighbour lookup, as sketched below.
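The "similar words" used throughout these observations are simply nearest neighbours in embedding space. Here is a minimal sketch of such a lookup, assuming `embedding_matrix` holds the model's word vectors and `index_to_word` maps row indices back to vocabulary words; both names are hypothetical.

```python
# Hedged sketch: cosine-similarity nearest neighbours in embedding space.
# `embedding_matrix` (vocab_size, dim) and `index_to_word` are assumed to
# come from the trained model's embedding layer and its tokenizer.
import numpy as np

def nearest_words(vector, embedding_matrix, index_to_word, k=5):
    """Return the k vocabulary words whose embeddings have the highest
    cosine similarity to `vector`."""
    norms = np.linalg.norm(embedding_matrix, axis=1) * np.linalg.norm(vector)
    sims = embedding_matrix @ vector / (norms + 1e-8)
    top = np.argsort(-sims)[:k]
    return [(index_to_word[i], float(sims[i])) for i in top]
```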
The sentence was classified as negative, and the embeddings after dreaming reflect negative sentiment.

Because the model had no clear understanding of these sentences, the sentence embeddings of the two sentences are almost similar after dreaming (look at their similar words after dreaming). This is because the model does not have a rich representation of these sentences in its hidden layers.

## Conclusion

We started by looking at how deep dream works on images, then we proposed how deep dream can be implemented over text. Finally, we showed how to correctly interpret the results.
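To make that recap concrete, here is a minimal sketch of the core gradient-ascent loop, assuming a Keras-style sentiment model split into an embedding lookup and a `model_body` that maps embeddings to class probabilities. It illustrates the general technique rather than the article's exact code.

```python
# Hedged sketch of deep dream over text: instead of updating pixels, run
# gradient ascent on a sentence's embedding vectors so that the model's
# probability for `target_class` grows. `model_body` (everything after the
# embedding lookup) is a hypothetical stand-in for the article's model.
import tensorflow as tf

def dream_over_text(embeddings, model_body, target_class,
                    steps=100, step_size=0.01):
    emb = tf.Variable(embeddings)              # (1, seq_len, dim)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            probs = model_body(emb)            # (1, num_classes)
            loss = probs[0, target_class]      # activation to maximize
        grads = tape.gradient(loss, emb)
        grads /= tf.norm(grads) + 1e-8         # normalize, as in image deep dream
        emb.assign_add(step_size * grads)      # gradient *ascent* step
    return emb.numpy()
```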