DON'T FUCK WITH THE ZUCK http://techcrunch.com/2014/11/07/codecademy-reskillusa/ Forget the curryniggers. Pretty soon, you'll be being asked to get your replacement up to speed and turn in your security pass. This replacement will be a 45 year old nigger truck driver who trained online for three months and managed to earn a certificate saying he can implement bubblesort, and he works for $35000 a year! Aren't you glad you spent all that time studying the mechanics of computer science and the mathematics of function? It'll keep you entertained when you're waiting on the results to your welfare application (Denied, Welfare is only for niggers, Mexicans, and women). Maybe if your a good goy who always remembers to check his privilege and brush his teeth before bed, Zuckerkike will give you a job removing child porn from Facebook or something.
Name:
Anonymous 2014-11-10 23:03
You can always be a janitor.
Name:
Anonymous 2014-11-10 23:09
Thank you, Stephanie. Why don't you go do your nails? The men are eating.
Used to be people absolutely insisted on manual memory allocation, but relatively few do any more. It’s much more efficient, both in run-time measurement and in programmer effort, to use automatic storage allocation, in almost every situation. (The above-mentioned paper shows that it remains true even when cache effects are considered.) It’s the same with multiprocessors. Used to be that people thought it was absolutely essential to allocate processors yourself, lay out memory yourself, control data flow yourself. But it’s not; it’s better to leave that to a scheduler, and to make a clean separation between the conceptual parallelism at the level of a language model and its implementation on a hardware platform.
Garbage collection = scheduling of storage, parallelism = scheduling of processors. That’s the whole idea.
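To make the point concrete, here is a minimal sketch (my own illustration, not from the post above) of what "leave it to a scheduler" looks like in Go: you state the conceptual parallelism (one goroutine per independent task) and the runtime decides which processor runs what. The function names and workload are made up for illustration.

// Minimal sketch: express the parallelism, let the runtime schedule it.
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// square stands in for any independent unit of work.
func square(n int) int { return n * n }

func main() {
	inputs := []int{1, 2, 3, 4, 5, 6, 7, 8}
	results := make([]int, len(inputs))

	var wg sync.WaitGroup
	for i, v := range inputs {
		wg.Add(1)
		go func(i, v int) { // one goroutine per task; the Go scheduler
			defer wg.Done() // multiplexes them onto OS threads and cores
			results[i] = square(v)
		}(i, v)
	}
	wg.Wait()

	fmt.Println(results)
	fmt.Println("cores available:", runtime.NumCPU()) // never assigned by hand
}

Nothing in the program says which core runs which task, just as nothing in a garbage-collected program says which address holds which object; both decisions are left to the runtime.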
Name:
Anonymous 2014-11-18 23:07
There is a fallacy among programmers that better performance is achieved by writing programs in "low-level" languages that are "closer to the metal." In fact, better (and more predictable) performance is achieved by using languages with well-defined cost models. It happens that many programmers have a reasonably accurate cost model for microprocessors, and that this model can also be used for many imperative languages, including C and Java. Unfortunately, this model does not easily extend to parallel architectures without requiring programmers to also understand (and implement) load balancing, communication, and synchronization among tasks. In fact, there is no contradiction in achieving good performance with high-level (or highly abstract) languages, so long as there is a well-defined cost semantics for such languages. This claim is supported not only by the work in this thesis but also by an example given in the introduction: the use of locality to predict the performance of memory accesses. While the cost semantics in this thesis does not provide a mechanism as powerful as locality, it represents an important step in reasoning about the performance, including memory use, of parallel programs.
Models, such as a cost semantics, are what distinguishes computer science from computer programming. These models give us theories which, through the use of formal proofs and empirical tests, can be verified or refuted. These models also reveal something of the human character of the study of abstraction: that science, as a human endeavor, reflects our desire to seek out simplicity and beauty in the world around us, including truths about computation.
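As a small illustration of "a cost model predicts performance better than being close to the metal" (my own sketch, not from the thesis), a locality-based cost model predicts that row-major traversal of a 2D array is faster than column-major traversal, because it touches consecutive addresses. The array size and variable names are made up for illustration.

// Minimal sketch: locality as a cost model for memory accesses.
package main

import (
	"fmt"
	"time"
)

const n = 4096

func main() {
	grid := make([][]int64, n)
	for i := range grid {
		grid[i] = make([]int64, n)
	}

	var sum int64

	start := time.Now()
	for i := 0; i < n; i++ { // row-major: consecutive addresses, good locality
		for j := 0; j < n; j++ {
			sum += grid[i][j]
		}
	}
	rowMajor := time.Since(start)

	start = time.Now()
	for j := 0; j < n; j++ { // column-major: strided accesses, poor locality
		for i := 0; i < n; i++ {
			sum += grid[i][j]
		}
	}
	colMajor := time.Since(start)

	fmt.Println(sum, "row-major:", rowMajor, "column-major:", colMajor)
}

The prediction comes from the model, not from writing the loop in assembly; the same reasoning applies whether the program is written in C, Java, or Go.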
>>1 Not everything improves with more automation. Maybe this will change with advances in artificial intelligence, but a simple algorithm won't outperform a human at memory management or parallelism. It's like saying a machine should do the typing for you, and then holding an electric dildo in front of you and having it repeatedly smash the keyboard.
Name:
Anonymous 2014-11-20 3:43
Hard AI turns out to be a fantasy, so now they come up with ASI.