
The Machine is Dreaming

Name: Anonymous 2015-06-19 14:42

http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html
https://photos.google.com/share/AF1QipPX0SCl7OzWilt9LnuQliattX4OUCj_8EP65_cTVnBmS1jnYgsGQAieQUc1VQWdgQ?key=aVBxWjhwSzg2RjJWLWRuVFBBZEN1d205bUdEMnhB
Google gives a neural network acid and asks it to return images. It combines dogs and fish, plants and buildings, birds and insects into a single image to optimize its memory usage.

Fucking trippy.
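The trick behind these images is running a trained network "in reverse": gradient ascent on the input so that whatever a layer already responds to gets amplified. A minimal sketch, with a random linear map standing in for a real convnet layer (the update rule is the point, not the model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for one layer of a trained network: a random linear map.
# (The real thing backpropagates through a deep convnet; this toy
# only demonstrates the amplification step.)
W = rng.standard_normal((32, 64))

def activations(x):
    return W @ x

def dream_step(x, lr=0.01):
    """One gradient-ascent step on 0.5 * ||W x||^2: nudge the input
    toward whatever the layer already 'sees' in it."""
    a = activations(x)
    grad = W.T @ a                                   # d/dx of 0.5*||W x||^2
    return x + lr * grad / (np.abs(grad).mean() + 1e-8)  # normalized step

x = rng.standard_normal(64)          # "image" = flat 64-dim vector
before = np.linalg.norm(activations(x))
for _ in range(100):
    x = dream_step(x)
after = np.linalg.norm(activations(x))
print(before < after)  # True: each step strictly increases the activation norm
```

Iterating this on a real network, with the image fed back in at multiple scales, is what produces the dog-fish hybrids.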

Name: Anonymous 2015-06-19 15:04

Who would hold the copyright to images produced by a Turing-test-passing AI?

Should AI have any legal rights, like animal rights? When will we see PETA advocating a ban on experiments on AI?

Name: Anonymous 2015-06-19 15:06

nice post, I thought this website was all bullshit

Name: Anonymous 2015-06-19 15:13

yea i did like ten grams of lsd n dats exactly wat i saw peddy borin if u ask me id rather get drunk cos billy fell off da chair wen he was drunk

Name: Anonymous 2015-06-19 15:21

Why does it have an eye fetish?

Name: Anonymous 2015-06-19 15:22

it is extremely shocking how similar these images are to the effects of hallucinogens on vision

Name: Anonymous 2015-06-19 15:24

Name: Anonymous 2015-06-19 15:32

There is already a thread about that

http://bbs.progrider.org/prog/read/1427762124
https://www.youtube.com/watch?v=-yX1SYeDHbg#t=2964

Google has just plagiarized the research.

Name: Anonymous 2015-06-19 15:47

They should train it on images of snakes, roaches, and spiders, then see what it thinks.

Name: Anonymous 2015-06-19 15:48

This is a more relatable training domain than >>8. Also, color.

Name: Anonymous 2015-06-19 15:58

>>7
Trust me, I'm an en/g/ineer

Name: Anonymous 2015-06-19 16:14

>>11
en/g/inigger

Name: Anonymous 2015-06-20 1:05

http://arxiv.org/abs/1506.03340
Oh God, it's happening and Google's the one behind it!
Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation. In this work we define a new methodology that resolves this bottleneck and provides large scale supervised reading comprehension data. This allows us to develop a class of attention based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure.
Pretty soon, computers will be able to comprehend a text for us and reading will be outlawed in favor of just asking the machine. Eventually, things will be so bad it will be like http://archive.ncsa.illinois.edu/prajlich/forster.html, except with more consumerism mixed in with the vapidness.
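The "attention based" reading the abstract describes boils down to scoring each document token against a query and taking a softmax-weighted sum of the token embeddings. A minimal sketch with made-up orthonormal token embeddings (real systems learn these; the mechanics are the same):

```python
import numpy as np

def attend(doc_vecs, query):
    """One attention read: score each token embedding against the query,
    softmax the scores, and return the attention-weighted summary."""
    scores = doc_vecs @ query
    scores = scores - scores.max()                   # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights, weights @ doc_vecs

# Toy "document": 5 tokens with orthonormal embeddings (dim 8).
doc = np.eye(5, 8)
query = doc[3].copy()        # a question whose embedding matches token 3
weights, summary = attend(doc, query)
print(weights.argmax())      # -> 3: the model "reads" the relevant token
```

Stack a few of these reads conditioned on the question and you have the class of model the paper trains, minus the learned embeddings and recurrent encoder.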

Name: Anonymous 2015-06-20 1:39

>>13
I doubt their algorithm is capable of building a spatial model of a book's contents and then comparing that model against the real world. I.e. if a book describes a "large to the north-west", there should be an understanding of what a "large city" is, while "north-west" is declared by tricky context.

The most important books (like math and physics books) rely heavily on visual cues and illustrations. These hierarchical networks are useless without an algorithm to connect them and guide them towards some goal.

Name: Anonymous 2015-06-20 1:40

>>14
I.e. if books describes "a large city to the north-west",
self fix

Name: Anonymous 2015-06-20 5:01

>>15
I thought it had a literal meaning for a few minutes and tried to figure it out. After failing I concluded I was as dumb as a computer.

Name: Anonymous 2015-06-20 5:47

>>16
Now imagine an AI trying to analyze such a text, where later pages correct the meaning of earlier ones or, even worse, cross-reference each other.

Name: Anonymous 2015-06-20 7:04

>>13
http://archive.ncsa.illinois.edu/prajlich/forster.html
It's freaking eerie that this was written in 1909. It's becoming more relevant each year.

Name: Anonymous 2015-06-20 13:52

>>18
Every society has thought that the dystopian, apocalyptic fiction and predictions of its day were becoming more relevant, and they have been wrong every time. After all, that story would not have been written if the author didn't think it relevant then. What this shows is that we have the same worries and fears as they did back then, not that there are now problems spiraling out of control.

Name: Anonymous 2015-06-20 15:23

This has already been done though:
https://www.youtube.com/watch?v=M2IebCN9Ht4

Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana (see related work in [1], [2], [3], [4]). By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.

Their definition of "work very well" seems to be "produce images that don't make neural networks look retarded". I'm guessing 99% of the innovation here was crafting a prior constraint to that effect.

Verdict: Just more smoke and mirrors from the neural net crowd.
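The quoted recipe (start from noise, gradient-ascend on the class score, add a prior that neighboring pixels be correlated) can be sketched in one dimension. Everything here is a toy: the random vector `w` stands in for the class-score gradient that a trained network would supply, and the prior is a simple squared-difference smoothness penalty:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
w = rng.standard_normal(n)   # stand-in for d(class score)/d(pixel);
                             # the real gradient comes from backprop

def visualize(steps=200, lr=0.05, lam=0.0):
    """Gradient ascent on  w.x - lam * sum((x[i+1]-x[i])^2):
    maximize the 'banana' score while (optionally) penalizing
    uncorrelated neighboring pixels."""
    x = rng.standard_normal(n) * 0.01        # start from near-noise
    for _ in range(steps):
        d = np.diff(x)
        grad_prior = np.zeros_like(x)
        grad_prior[1:] += 2 * d              # d/dx of the smoothness penalty
        grad_prior[:-1] -= 2 * d
        x += lr * (w - lam * grad_prior)
    return x

def roughness(x):
    return np.sum(np.diff(x) ** 2)

plain = visualize(lam=0.0)
smooth = visualize(lam=1.0)
print(roughness(smooth) < roughness(plain))  # the prior yields a smoother "image"
```

Without the prior the ascent just marches along `w` and the result stays noise-like; with it, the "image" is forced toward locally correlated values, which is the whole difference between garbage and the pictures in the video.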

Name: Anonymous 2015-06-21 2:27

>>20
https://www.youtube.com/watch?v=M2IebCN9Ht4
Do they also feed in various scales and rotations of the image during training?

Name: Anonymous 2016-07-11 5:47

(stopping the dubsfaggot from dubsbumping)
