; Alyssa P. Hacker doesn't see why if needs to be provided as a special form.
; "Why can't I just define it as an ordinary procedure in terms of cond?" she
; asks. Alyssa's friend Eva Lu Ator claims this can be done, and she defines a
; new version of if:
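; Eva's definition, as given in the exercise (SICP 1.6):

(define (new-if predicate then-clause else-clause)
  (cond (predicate then-clause)
        (else else-clause)))

; Delighted, Alyssa uses new-if to rewrite the square-root program:

(define (sqrt-iter guess x)
  (new-if (good-enough? guess x)
          guess
          (sqrt-iter (improve guess x)
                     x)))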
; What happens when Alyssa attempts to use this to compute square roots?
; Explain.
Why does every exercise keep circling back to evaluation order? I wasn't happy with the applicative-order vs. normal-order one and I'm not happy with this.
The operands matching then-clause and else-clause are both evaluated before new-if's body ever runs, because new-if is an ordinary procedure, not a special form. So in sqrt-iter the recursive call in the else-clause gets evaluated unconditionally and the recursion never terminates. How many more exercise questions are going to be oriented around this same concept? I get it already. Quit wasting my time with these exercises. Tired of this.
>>4 Do you even know how to derive the value of an endotensor that is closed under its parent eigenfold? No? Then go scrub my toilet.
Name:
Anonymous2014-06-06 1:59
>>4 aww... I had a go at using RBMs as a recurrent / forward-encoder network. It kind of succeeded at training multiple side-stacked layers concurrently, using shared weights. (Usually you train one layer at a time with an RBM or it goes a bit haywire =)
It wasn't a particularly good series to test with, though =/
Name:
Anonymous2014-06-06 2:20
The other trick was to use Gradient Splicing =D, which allows all sorts of non-auto-encodings =) i.e., you can give a semi-blank input and splice the target onto the input gradient for blind reconstruction ^^ or input t-1 and splice target t.
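If I'm reading the splicing idea right, it amounts to a CD-1 step where the data-side visible statistics come from the target instead of the input, so a semi-blank input learns to reconstruct the full target. A minimal numpy sketch of that guess (all names, sizes, and the learning rate are mine, not from the post):

import numpy as np

rng = np.random.default_rng(0)
def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))

# toy RBM: 20 visible, 10 hidden units (sizes are arbitrary)
n_vis, n_hid = 20, 10
W = rng.normal(0.0, 0.1, (n_vis, n_hid))
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)

# blind reconstruction: the input is the target with half blanked out
target = (rng.random(n_vis) > 0.5).astype(float)
v_in = target.copy()
v_in[n_vis // 2:] = 0.0                      # semi-blank input

lr = 0.05
for _ in range(500):
    h_pos = sigmoid(v_in @ W + b_hid)        # positive phase from partial input
    h_samp = (rng.random(n_hid) < h_pos).astype(float)
    v_neg = sigmoid(h_samp @ W.T + b_vis)    # reconstruction
    h_neg = sigmoid(v_neg @ W + b_hid)
    # the "splice": the data-side visible statistics use the target, not
    # the input, so the gradient pulls reconstructions toward the target
    W += lr * (np.outer(target, h_pos) - np.outer(v_neg, h_neg))
    b_vis += lr * (target - v_neg)
    b_hid += lr * (h_pos - h_neg)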
Can you hint at how to do deep learning without hardcoding what is important? Like, when you do OCR you have to manually specify that lines are important, so basically the first layer parses all kinds of lines, while the second parses their permutations and topology. Is it possible to design a system that would figure out by itself what it should look for?
Name:
Anonymous2014-06-06 2:33
It even has hidden->hidden connections, because I left a large blank in the first input layer to make room for the whole previous hidden layer in the second and third pass =) For that it probably helps to blank the first-pass reconstruction's negative gradient for the empty hidden-layer input (it cancels the negative bias on the hidden1->hidden2 // hidden2->hidden3 weights).
Name:
Anonymous2014-06-06 3:03
>>10 Computer vision tends to use a lot more hand-crafted lower layers (HOG, SIFT, RIFT, ...) because pixels are spatially correlated, and it would be difficult to learn those custom feature detectors automatically? You can still just drop the raw pixels into a supervised network, or an unsupervised layer (like with an RBM).
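Concretely, the contrast looks something like this sketch with scikit-image's HOG on sklearn's toy digits (the dataset and HOG parameters are placeholders, not a benchmark):

import numpy as np
from skimage.feature import hog
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)          # 8x8 grayscale digits
images = X.reshape(-1, 8, 8)

# hand-crafted lower layer: one HOG descriptor per image
feats = np.array([hog(im, orientations=8, pixels_per_cell=(4, 4),
                      cells_per_block=(1, 1)) for im in images])

# versus dropping the raw pixels straight into a supervised model
clf_hog = LogisticRegression(max_iter=2000).fit(feats[:1000], y[:1000])
clf_raw = LogisticRegression(max_iter=2000).fit(X[:1000], y[:1000])
print("hog:", clf_hog.score(feats[1000:], y[1000:]),
      "raw:", clf_raw.score(X[1000:], y[1000:]))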
Name:
Anonymous2014-06-06 3:03
you will never read SICP for the first time, ever again
> Computer vision tends to use a lot more hand-crafted lower layers (HOG, SIFT, RIFT, ...) because pixels are spatially correlated, and it would be difficult to learn those custom feature detectors automatically?
Anyway, is there a way to infer them automatically? Perhaps using some genetic algorithm? Because the programmer doesn't know what is important and can easily miss cues that could have made the upper layers an order of magnitude more robust.
Name:
Anonymous2014-06-06 4:34
>>14 you can use a convolutional layer, which is sort of the midway point between hand-crafted feature detectors and raw inputs.
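A sketch of that midway point in plain numpy/scipy (filter count and sizes are arbitrary): the filter weights would be learned, but the locality and weight sharing are still hand-imposed structure.

import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
n_filters, ksize = 4, 3
filters = rng.normal(0.0, 0.1, (n_filters, ksize, ksize))   # learnable

def conv_layer(image):
    # one feature map per filter, shared across all positions; ReLU on top
    return np.stack([np.maximum(convolve2d(image, f, mode="valid"), 0.0)
                     for f in filters])

img = rng.random((10, 10))
print(conv_layer(img).shape)    # (4, 8, 8): 4 maps over an 8x8 grid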
It makes sense to automatically select all features that are invariant under a function (like linear transformations). In general, we can start by sampling random inputs and then throwing away the non-orthogonal combinations.
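One way to read that, as a numpy sketch (the rejection threshold is arbitrary): sample random candidates and keep only those with a large component orthogonal to everything kept so far, i.e. Gram-Schmidt with a cutoff.

import numpy as np

rng = np.random.default_rng(0)
dim, n_candidates, tol = 100, 500, 0.3
basis = []
for _ in range(n_candidates):
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    # project out the components along the features already kept
    for b in basis:
        v -= (v @ b) * b
    residual = np.linalg.norm(v)
    if residual > tol:          # reject near-parallel combinations
        basis.append(v / residual)

print(len(basis), "orthogonal features kept out of", n_candidates)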
Name:
Anonymous2014-06-06 9:26
>>16 you can probably get para-orthogonal eigenvectors using pairs of centroids which pivot on & are equidistant to the origin =D
Name:
Anonymous2014-06-06 9:35
Also, if you have a 10x10 px image, doesn't orthogonality limit you to 100 features max?
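That's the right bound: in a 100-dimensional pixel space an orthogonal set can hold at most 100 nonzero vectors. A quick numpy check, with the sizes from the question:

import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(100, 100)))   # 100 orthonormal vectors
v = rng.normal(size=100)                           # any 101st candidate
residual = v - Q @ (Q.T @ v)                       # part outside their span
print(np.linalg.norm(residual))                    # ~0: nothing orthogonal left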