You know, a true statement doesn't literally become false just because you write "False".
And a statement is not true just because you say it is true.
This optimization is not always premature.
You or someone else can always extend a program to a point where the type constraint must be undone.
Type systems, if they're decent, offer the escape hatch of parametric polymorphism.
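A minimal TypeScript sketch of that escape hatch (the function names are mine, purely illustrative): a function written against one concrete type has to be undone when the program grows, while a parametrically polymorphic one does not.

```typescript
// A function over a concrete type: extending the program to other
// element types would force this signature to be rewritten.
function firstNumber(xs: number[]): number {
  return xs[0];
}

// The escape hatch: parametric polymorphism. The type parameter T
// defers the choice of element type to each call site, so the
// constraint never has to be rolled back later.
function first<T>(xs: T[]): T {
  return xs[0];
}

const n = first([1, 2, 3]);        // T inferred as number
const s = first(["a", "b", "c"]);  // T inferred as string
console.log(n, s);
```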
So a type system is good if it's easy to pretend you don't have it in the first place.
If it is a trivial invariant, then unityped languages are shit for not letting us express even such a trivial invariant.
Correct types don't get you much unless you have dependent types, and those aren't worth it. Use a better mechanism to test invariants, such as compiler-aware assertions.
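One concrete reading of "compiler-aware assertions" (my interpretation, not necessarily the poster's) is TypeScript's `asserts` signatures, where a runtime check also feeds the static checker: after the assertion succeeds, the compiler narrows the type for the rest of the scope.

```typescript
// A runtime check the compiler understands: after a successful call,
// the checker treats `value` as a number downstream.
function assertIsNumber(value: unknown): asserts value is number {
  if (typeof value !== "number") {
    throw new Error(`expected a number, got ${typeof value}`);
  }
}

function double(value: unknown): number {
  assertIsNumber(value);   // runtime test doubles as a static fact
  return value * 2;        // type-checks: value is narrowed to number
}

console.log(double(21));
```

The invariant is checked once at runtime, but the compiler propagates it statically, so you get some of the payoff of richer types without a dependently typed language.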
Moreover, despite the triviality of that invariant, it actually accounts for the majority of errors during development.
You don't need to declare static types to detect type errors. Recursive type inference is sufficient for detecting type errors at compile time. You can do this in dynamically typed languages.
If "recursive type inference" really worked, the overhead in "dynamic languages" wouldn't be there. Run some benchmarks, will you?
You mean "if recursive type inference were actually implemented for the mainstream compiler of the shit dynamic language I'm thinking of, then it would be faster". If you want an example, see sbcl. Your reasoning is also a fallacy: just because an implementation does not yet realize an idea does not mean the idea is impossible to implement.
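As a loose analogue of what a type-inferring compiler for a dynamic-looking language does (sbcl derives types by flow analysis; this TypeScript sketch is only an illustration of the same idea), the checker below infers the types of the local and the return value with no annotations on them, and rejects ill-typed calls before anything runs.

```typescript
// Only the parameter carries an annotation; the type of `total`
// and the function's return type are inferred by the checker.
function sumLengths(words: string[]) {
  let total = 0;
  for (const w of words) {
    total += w.length;   // inferred: number += number
  }
  return total;          // inferred return type: number
}

// The same inference rejects ill-typed code at compile time:
// `sumLengths([1, 2, 3])` fails to check, because number[] is
// not assignable to string[].
console.log(sumLengths(["type", "inference"]));
```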
You know, a non-trivial problem doesn't become trivial just because you once write "It's trivial".
And a problem is not non-trivial just because you say it is non-trivial.
Etc etc, your whole post is a bunch of baseless assertions, while Harper always gives clear, well-argued opinions. You're full of shit.
I provided counterexamples to Harper's supporting points. That is all. Get what you want out of that, or nothing at all. It's up to you.