
IDE with advanced code analysis?

Name: Anonymous 2016-02-21 11:23

/prog/, what's the IDE with the most advanced code analysis features out there? I'm talking about things like:

- listing all the free variables that occur in a selected code term
- warning the user when a binding shadows one from an outer scope
- finding all the places where a variable gets mutated
- showing all possible execution paths through a code term

Name: Anonymous 2016-08-29 2:28

None of this is true for a recompiling JITted language.

Compilation is always going to be a computationally intensive task. JIT compilation means you're either going to have to deal with a compilation delay at startup, or turn off/reduce optimization to allow your code to be JIT-compiled quickly. Besides, JIT compilation isn't even done with most languages - most are either distributed as pre-compiled binaries, or interpreted. JIT is the exception, rather than the rule.

False for anything but stupid microbenchmarks. Compilers can match Real Work to the processor better than you can.

Okay, if you're talking about the average programmer, who doesn't understand low-level programming. But that's kind of a self-fulfilling prophecy - the reason the average programmer can't write good assembly is that for the last forty years we've trained them to write good high-level code and let the compiler worry about the low-level detail. In all honesty, though, where a compiled HLL beats hand-written assembly is in speed of development, not quality of output. An assembly programmer might be able to write very highly optimized code, but it could take them eight days, versus two days for unoptimized assembly. The compiler's output isn't as highly optimized as what the assembly programmer can manage, but the trade is considered worthwhile since compilation takes far less time (30 seconds or so).

You really don't know anything, you fucking moron. Die.
Compilers are still fairly limited in the circumstances under which they can optimize away unnecessary function calls (e.g. tail-call optimization). When the output of one function call is passed to another function, both functions' local variables generally need space on the stack, even if the two functions have locals that notionally refer to the same object. This is where macros have an advantage - they have the same syntax as function calls, and are equally useful for breaking a complex computation into smaller parts, but they don't instruct the compiler to construct a stack frame. Functions do make sense where the algorithm calls for a genuinely separate local variable. Say you want to repeat some action n times without changing the value of n; the logical way is to copy n into i and do while(i--). This could be done either as
#define REPEAT(n) do { int i = (n); while (i--) dosomething(); } while (0)
or
void repeat(int n) {
    while (n--) dosomething();
}

In this case the function is better: you need a "local variable" either way, but the function's n is its own private copy, deallocated as soon as the call returns, while the macro's i is expanded into the caller's code, where it shadows any i the caller already has.
