
Hennessy and Patterson

Name: Cudder !MhMRSATORI 2014-07-13 3:42

Ostensibly this is one of the most widely used books for studying computer architecture, so I had a look, and... WTF? Future CPU designers are being fed tripe like this?

http://i62.tinypic.com/xakqr.png

Despite all the focus on MIPS and performance, it is suspiciously missing any real benchmarks of MIPS processors.

They have an interesting definition of a "desktop computer":
http://i60.tinypic.com/4lq2j7.png

"Heineken and Pilsner" would be a better name for this book, as its authors appear to be as knowledgeable about real-world computer architecture as drunken fools.

Name: Cudder !MhMRSATORI 2014-07-16 10:58

>>32
OpenRISC is just another MIPS clone (complete with branch delay slot!) with some even more stupid design decisions like defaulting to big-endian and an Asm syntax where every single goddamn instruction has a mnemonic starting with "l" or "l.". WTF.

>>34
The difference is that, when the hardware can implement more complex operations, it's much harder to recognise a series of simple instructions and recombine them into one op for the dedicated hardware unit than it is to split a complex instruction into simpler uops when dedicated hardware units aren't available. And the multiple simple instructions take more space in the cache and more memory bandwidth to fetch.

A good example of this is the integer division instruction. x86 has had one ever since the 8086, and while it was slow (still faster than a manual shift-subtract loop if you don't know the divisor in advance), software could use it whenever it needed to divide, and its performance improved enormously over the years as newer models came out. Many RISCs started out without any divide instruction because it didn't fit the definition of "simple" and hardware at the time couldn't divide in one clock cycle, so software had to either call a divide routine in a library or inline a shift-subtract loop.
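
For reference, here's a sketch in C of the kind of loop being described (not from the book or any particular library, and the name shift_sub_div is made up). It's a bit-at-a-time restoring divide:

#include <stdint.h>

/* Unsigned 32-bit restoring ("shift-subtract") division: the fallback
   software had to inline or call on RISCs that shipped without a divide
   instruction. Roughly what the 8086's DIV did internally, producing one
   quotient bit per iteration. Divisor must be non-zero. */
uint32_t shift_sub_div(uint32_t dividend, uint32_t divisor, uint32_t *remainder)
{
    uint32_t quotient = 0;
    uint32_t rem = 0;

    for (int i = 31; i >= 0; i--) {
        rem = (rem << 1) | ((dividend >> i) & 1);  /* bring down next dividend bit */
        if (rem >= divisor) {                      /* trial subtraction succeeds? */
            rem -= divisor;
            quotient |= 1u << i;                   /* set this quotient bit */
        }
    }
    if (remainder)
        *remainder = rem;
    return quotient;
}

e.g. shift_sub_div(100, 7, &r) gives 14 with r == 2. That's 32 iterations of shift/compare/subtract every single time, no matter how fast the rest of the CPU gets.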

At some point they figured out how to build faster division hardware and added instructions for it, which means no performance gain at all for software that inlined a division loop or statically linked a library containing one, while code that can pick up an updated library still pays the extra call/return overhead. Trying to recognise the near-infinite variety of instruction sequences that implement a divide loop and route them to the hardware divider would take far more complex circuitry than a divide instruction that simply expands internally to the equivalent shift-subtract loop on CPUs without fast divide hardware (which is what the 8086's DIV did), and there's no way to recover the cost of fetching all those instructions. It took ARM seven architecture revisions to add an integer divide, and it's still considered "optional". Maybe that's OK for small embedded cores, but it's absolutely idiotic for anything intended to do serious general-purpose computation.
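
To make that concrete, here's the same thing from the software side (again just a sketch; the comments describe typical outcomes, not the guaranteed output of any particular compiler):

/* The same portable C source; what it turns into depends entirely on
   whether the target has a divide instruction. */
uint32_t udiv(uint32_t a, uint32_t b)
{
    return a / b;
    /* x86 (since the 8086):   a single DIV instruction, which gets faster
                               for free on every new core.
       ARMv7+ with UDIV:       a single UDIV instruction.
       older ARM / early RISC: a call to a runtime helper like __aeabi_uidiv,
                               or an inlined loop like the one above, frozen
                               at whatever speed it had when compiled. */
}

Code written against the instruction rides every hardware improvement; code written against the software loop never does.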
