>>1,4,5 I cannot think of any situation where it would strongly benefit you to do this sort of ordering by hand. If it really matters, most compilers will let you declare an output section for each function so you can set the ordering at link time. You'd probably be better off just using assembly for the entire hot path, though.
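Something like this, in GCC/Clang attribute syntax -- a sketch only, assuming a linker script (not shown) that emits the .text.ordered.* sections in ascending name order (e.g. via SORT() in GNU ld); the section and function names are made up:

/* Each function lands in its own named output section; the linker script
   fixes the final ordering, so source order and compiler reordering of
   functions stop mattering. */
__attribute__((section(".text.ordered.000"))) void first_in_memory(void)  { /* ... */ }
__attribute__((section(".text.ordered.001"))) void second_in_memory(void) { /* ... */ }
__attribute__((section(".text.ordered.002"))) void third_in_memory(void)  { /* ... */ }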
>>14 You can't rely on this behavior if your toolchain re-orders functions based on input from a profile-guided optimizer, or if it generates a separate section for each function (for link-time dead code elimination, etc.). If your compiler doesn't support either of these, it's crap.
>>6 Too bad the standard only guarantees 0 (or EXIT_SUCCESS) for success; you can't assume 1 means failure, since EXIT_FAILURE needn't be 1. Does anyone know of a platform where it actually isn't? (Non-hosted environments where returning from main crashes your program don't count.)
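For reference, the only values the standard actually blesses (trivial sketch):

#include <stdlib.h>

int main(void)
{
    /* 0 and EXIT_SUCCESS are the portable "it worked" values; EXIT_FAILURE
       is the portable failure value, and nothing says it has to be 1. */
    return EXIT_SUCCESS;
}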
>>19 That doesn't address >>17's ``strongly'', since I can't (off the top of my head) think of any situation in which you would want in-memory ordering to matter AND in which there's no way to do it in a more portable manner, barring IOCCC-level wanking. Can you give a concrete example of a problem in which 1) you need to encode information by ordering, 2) you cannot rely on an optimizer to do so, and 3) there is not another (relatively simple) method of encoding this information?
Name: Anonymous 2014-05-05 14:28
#include <stdio.h>
int main() { if ((int)-1 >> 1 == -1) { printf("sound compiler! knows people use shift to div their ints.\n"); } else { printf("your GCC can't even div a number by two! use MSVC\n"); } return -123; }
> Can you give a concrete example of a problem in which 1) you need to encode information by ordering, 2) you cannot rely on an optimizer to do so, and 3) there is not another (relatively simple) method of encoding this information?
Let's say we have a number of memory pools used to do BIBOP allocation. Each pool has a corresponding handler, which hardcodes stuff like memcpy and memcmp for its pool size. We want these pool handlers to be ordered in memory according to their pool sizes, so that determining whether an object can be copied from one pool to another requires just a handler pointer comparison, instead of calling their length functions and comparing the results.
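Roughly like this -- just a sketch with made-up names and sizes, and it leans on the non-portable assumption that the toolchain actually keeps the handlers in this order:

#include <stdint.h>
#include <string.h>

typedef void (*handler_t)(void *dst, const void *src);

/* One hardcoded handler per pool size. */
static void copy_16(void *dst, const void *src) { memcpy(dst, src, 16); }
static void copy_32(void *dst, const void *src) { memcpy(dst, src, 32); }
static void copy_64(void *dst, const void *src) { memcpy(dst, src, 64); }

/* If the handlers really sit in ascending pool-size order, "will the source
   object fit in the destination pool" is a single pointer comparison, with no
   calls to per-pool length functions.  (ISO C doesn't define ordering
   comparisons between unrelated functions -- which is the whole argument of
   this thread.) */
static int fits(handler_t src, handler_t dst)
{
    return (uintptr_t)src <= (uintptr_t)dst;
}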
> there is not another (relatively simple) method of encoding this information?
Yes. You can store a 4-byte length to allocate a 1-byte object, wasting 4 bytes per allocation (thank you, dear GCC!).
Name: Anonymous 2014-05-06 2:24
>>24 You can trivially accomplish that without resorting to implementation-defined behavior by dynamically allocating all the pools from the same array of pages. In practice you must do that anyway since portable code can't make assumptions about the size of a page.
> You can trivially accomplish that without resorting to implementation-defined behavior by dynamically allocating all the pools from the same array of pages.
How does that give you order?
> In practice you must do that anyway since portable code can't make assumptions about the size of a page.
portable code != efficient code. In general, portable code is an order of magnitude less efficient than native code that uses the full capability of x86 CPUs.
>>28 An array element at a lower index is guaranteed to have an address that is lower than an array element at a higher index. So if you allocate all the pools from the same array, total ordering is guaranteed.
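Sketch of that, with fixed counts purely for illustration:

#include <stddef.h>

#define POOL_SIZE  512
#define POOL_COUNT 1024

/* All pools are elements of one array object, so comparing their addresses
   with < is fully defined by the standard and agrees with the index order. */
static unsigned char pools[POOL_COUNT][POOL_SIZE];
static size_t next_free = 0;

/* Hand pools out in ascending index order; any two pool base addresses can
   then be compared directly instead of asking each pool for its rank. */
static unsigned char (*alloc_pool(void))[POOL_SIZE]
{
    return (next_free < POOL_COUNT) ? &pools[next_free++] : NULL;
}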
>>30 I don't know what you are trying to say here, and I suspect you don't either. You are aware that page boundaries and cache line boundaries are different, right? Most compilers will put things on a cache line boundary if asked; in practice, however, this can hurt more than help, because the padding required to ensure alignment reduces total cache utilization.
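For reference, asking for it looks like this in C11 -- the 64 is an assumption about the target's cache line size:

#include <stdalign.h>

struct counter {
    /* every instance now starts on a 64-byte boundary, so two counters never
       share a line -- and sizeof(struct counter) gets padded up to 64, which
       is exactly the utilization cost described above */
    alignas(64) unsigned long value;
};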
> An array element at a lower index is guaranteed to have an address that is lower than an array element at a higher index. So if you allocate all the pools from the same array, total ordering is guaranteed.
You don't know in advance how many pools you will need or their sizes. A pool has a size of, say, 512 bytes, and when it becomes full you replace it with an empty one, while the older ones get garbage collected.
> in practice, however, this can hurt more than help, because the padding required to ensure alignment reduces total cache utilization.
That is why you have to manually optimize everything down to assembly. Otherwise your code will be inefficient.
Name: Anonymous 2014-05-07 19:10
This bullshit wouldn't have happened if you stopped programming for iOS earlier, you maggot dickface.
Name: Anonymous 2014-05-07 19:30
public class JavaSevenIsTheFuture { public static void foo() { } public static void bar() { }
public static void main(String args[]) { if (JavaSevenIsTheFuture::foo < JavaSevenIsTheFuture::bar) { System.out.println("Gay, doesn't even compile."); } } }
Name: Anonymous 2014-05-08 0:44
>>32 And your proposed solution to this is... what, exactly? Encoding information about the pool size in the entry-point addresses of the pool copy code doesn't gain anything over just storing the length of the pool: in the former case you still have to store a pointer to the handler somewhere, which costs the same word.
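(Both options cost one word of metadata, i.e. something like this, names made up:)

#include <stddef.h>

typedef void (*pool_handler)(void *dst, const void *src);

struct pool_meta_len     { size_t       length;  };  /* store the size directly      */
struct pool_meta_handler { pool_handler handler; };  /* or a pointer that encodes it */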