
Is there such a thing as a good low-level language?

Name: Anonymous 2018-02-01 9:37

At work, I occasionally have to do some low-level bit-fucking. I do it in C, and so does everyone - it's the de facto standard language for doing this sort of thing. and not to get into mental midget-kun tier C hate, but the more I do bit-fucking in C, the more I notice its flaws when it comes to bit-fucking.

but this is not the thread about complaining about C's shortcomings. my question is: is there an actual programming language (other than some specific assemblers, but this isn't ideal because I'd need to write a lot of code to support different OSes and architectures) that is good for those kinds of things? I mean supporting things like:
- variables with programmer-specified number of bits; I don't mean just uint8, I mean 3-bit, 4-bit, 5-bit etc.
- direct access to specific bits in the variable's binary representation - bitmasks and shits may be how people achieve this but isn't it a bit of a hack? why not accessing bits the way you access bytes
- variable-length integers (like Exp-Golomb) as a native data type - would be useful in codecs because e.g. H264 uses them
- instructions like circular shifts (often used in crypto) being directly supported instead of compiler folding your logical shifts and ORs into them - 'some architectures don't have this instruction' is a bad argument against this; some architectures don't have mul and yet C has * for multiplication instead of folding looped additions into one
- explicit support for vector processing - see above for why 'compiler will fold your code into one' is stupid
- inlining being a command, not a suggestion
- standards-defined way to use inline assembly (I'm not sure that C doesn't have it, I don't remember if it's da standard or GNU shit)

Name: Anonymous 2018-02-01 10:29

Fortran, Pascal

Name: Anonymous 2018-02-01 10:43

>>2
do they have all (or at least some) of the things mentioned in the OP? I know a bit of Pascal but never did any serious bit-fucking in it

Name: Anonymous 2018-02-01 10:51

PHP is pretty good, it has nice support for bit operations.

Name: Anonymous 2018-02-01 10:52

it's impossible to have types smaller than one word

Name: Anonymous 2018-02-01 10:54

>>5
but of course it is possible, /g/-kun! many languages have them - bool is an obvious example. of course the compiler will extend those types to a word or, if it's smart, fold them into a bitmap - but from the programmer's perspective, those types are smaller than one word.

Name: Anonymous 2018-02-01 10:57

bit-fuck my anus!

Name: Anonymous 2018-02-01 11:05

you want all these types. guess what? that's why people use C++. you can make template classes in C++ for them with operator overloading so that they have the same semantics as a native data type.

oh, and it'll compile down to fast code. oh, and inline asm is in the C++ standard.
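a rough sketch of the idea (uint_n is a made-up name, and a real one would want the full operator set plus maybe a proxy type for single-bit references):

#include <cstdint>

// hypothetical N-bit unsigned integer, wraps around at 2^N
template <unsigned Bits>
class uint_n {
    static_assert(Bits >= 1 && Bits <= 32, "1..32 bits only");
    static constexpr std::uint32_t mask =
        Bits >= 32 ? ~std::uint32_t(0) : (std::uint32_t(1) << Bits) - 1u;
    std::uint32_t v;  // still stored in a full word, of course
public:
    constexpr uint_n(std::uint32_t x = 0) : v(x & mask) {}
    constexpr operator std::uint32_t() const { return v; }
    uint_n &operator+=(uint_n o) { v = (v + o.v) & mask; return *this; }
    uint_n &operator<<=(unsigned n) { v = (v << n) & mask; return *this; }  // caller keeps n < Bits
    // single-bit access, "the way you access bytes"
    bool bit(unsigned i) const { return (v >> i) & 1u; }
    void set_bit(unsigned i, bool b) {
        v = (v & ~(std::uint32_t(1) << i)) | (std::uint32_t(b) << i);
    }
};

then uint_n<5> x = 31; x += 1; wraps x back to 0, like a real 5-bit register would.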

Name: Anonymous 2018-02-01 11:29

>>8
but extracting single bits, making variables smaller than a byte and using Exp-Golomb are all as tedious in Sepples as they are in C. you can deal with the first thing only once and wrap it in a class, I'll give you that - but dealing with non-byte-aligned data is still going to suck
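e.g. decoding one unsigned Exp-Golomb value by hand ends up as something like this (a sketch with a made-up bit reader, no bounds checking):

#include <stddef.h>
#include <stdint.h>

/* hypothetical MSB-first bit reader over a byte buffer */
struct bitreader {
    const uint8_t *buf;
    size_t pos;  /* position in bits */
};

static unsigned getbit(struct bitreader *br) {
    unsigned bit = (br->buf[br->pos >> 3] >> (7 - (br->pos & 7))) & 1u;
    br->pos++;
    return bit;
}

/* ue(v): count leading zero bits, then that many bits follow the terminating 1 */
static uint32_t read_ue(struct bitreader *br) {
    unsigned zeros = 0;
    while (getbit(br) == 0)
        zeros++;
    uint32_t val = 1;
    while (zeros--)
        val = (val << 1) | getbit(br);
    return val - 1;  /* codeNum = 2^zeros - 1 + suffix */
}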

Name: Anonymous 2018-02-01 11:44

>- variables with programmer-specified number of bits; I don't mean just uint8, I mean 3-bit, 4-bit, 5-bit etc.
Programmers are not real scientists, which is why they just put shit like uint16_t/u16 in languages instead of a generic Z/nZ set.

>I'm not sure that C doesn't have it, I don't remember if it's da standard or GNU shit
It's mentioned in da standard, but without any detail on it. GNU implements their own version (which Clang also supports) and the MS compilers implement another.

Name: Anonymous 2018-02-01 11:47

unchecked dubs is also less than one byte in this case.

Name: Anonymous 2018-02-01 14:19

unironically, C++ with operator overloading.
You can compile C++ code via GCC in a way that doesn't link the bloated libstdc++, and enjoy the best of both worlds.
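roughly like this (file names made up; the point is that the gcc driver doesn't add -lstdc++ at link time, and this only works if your C++ doesn't actually need the runtime - no exceptions, no RTTI, no new/delete):

g++ -O2 -fno-exceptions -fno-rtti -c bitfuck.cpp
gcc bitfuck.o -o bitfuck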

Name: Anonymous 2018-02-01 14:33

>>1
Bitfields
#include <stdint.h>

__attribute__((always_inline)) static inline
uint32_t rotl32a(uint32_t x, uint32_t n)
{
    n &= 31;  /* avoid shifting by 32 (UB) when n == 0 */
    return (x << n) | (x >> ((32 - n) & 31));
}
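and for the sub-byte widths, e.g. (names made up; how the compiler packs and orders the fields is implementation-defined, so don't map this straight onto wire formats):

struct example_fields {
    unsigned a : 3;   /* 3-bit field */
    unsigned b : 4;   /* 4-bit field */
    unsigned c : 5;   /* 5-bit field */
};  /* the whole thing typically still lands in one word */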

Name: Anonymous 2018-02-01 14:34

Name: Anonymous 2018-02-01 15:49

Ada does most of that, but some requirements are plain weird. A low-level bignum library will be rather annoying to use if it isn't supposed to be some Unix-tier broken garbage — consider that you either have to handle OOM conditions or allocate by hand. A language that does this for you will not be very low level on current crippled machines because you need hardware support for such things or something like conditions and restarts.

Also, standardized inline assembly is a fucking stupid idea, think about it. Either the language will be forever limited to current architectures (why even bother with something above macro assembly then?), or you will have nothing except a hook for asm. But a hook on its own buys you nothing if the assembly part itself isn't standardized.

Name: Anonymous 2018-02-01 17:01

>- variables with programmer-specified number of bits; I don't mean just uint8, I mean 3-bit, 4-bit, 5-bit etc.
>- direct access to specific bits in the variable's binary representation - bitmasks and shits may be how people achieve this but isn't it a bit of a hack? why not accessing bits the way you access bytes
Ada and PL/I support these.

>- variable-length integers (like Exp-Golomb) as a native data type - would be useful in codecs because e.g. H264 uses them
Big integers can probably work similarly to arrays and strings to avoid implicit allocation, but there doesn't seem to be anything that does it this way besides some assembly language libraries.

>- instructions like circular shifts (often used in crypto) being directly supported instead of compiler folding your logical shifts and ORs into them - 'some architectures don't have this instruction' is a bad argument against this; some architectures don't have mul and yet C has * for multiplication instead of folding looped additions into one
You can use bit string concatenation and selection to get circular shifts.

>- explicit support for vector processing - see above for why 'compiler will fold your code into one' is stupid
Fortran and PL/I support this and I know Fortran compilers optimize it.

>- standards-defined way to use inline assembly (I'm not sure that C doesn't have it, I don't remember if it's da standard or GNU shit)
Ada supports inline assembly or machine code intrinsics in System.Machine_Code but it's all implementation defined.

Name: Anonymous 2018-02-01 18:42

When I didn't know any better, I thought C was good for low-level programming because I confused low-level with C, not knowing anything else, but now I know the truth. I used C because I didn't know any better. They use C because they enjoy the bugs. They don't want to be helped or cured. They don't want better even if it's faster. These bugchasers are not content with giving themselves AIDS, they want 100% of the world's population to have it too.

Name: Anonymous 2018-02-01 21:10

Assembly

Name: Anonymous 2018-02-01 22:26

>>18
not a language, that's just machine code

Name: Anonymous 2018-02-02 0:09

>>19
um no sweetie

machine code is binary

Name: Anonymous 2018-02-02 0:47

>>1
Bitfields, and...

>instructions like circular shifts (often used in crypto) being directly supported instead of compiler folding your logical shifts and ORs into them - 'some architectures don't have this instruction' is a bad argument against this; some architectures don't have mul and yet C has * for multiplication instead of folding looped additions into one
Rolling your own, like >>13-san did, will usually produce a rotate instruction with any somewhat optimizing compiler. Unfortunately no infix then...

>explicit support for vector processing - see above for why 'compiler will fold your code into one' is stupid
<xmmintrin.h>?
for(int i = 0; i < 4; ++i)?
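e.g. the intrinsics version of a 4-float add looks roughly like this made-up add4(), versus just writing the loop and letting -O3 vectorize it:

#include <xmmintrin.h>

/* c[i] = a[i] + b[i] for i in 0..3, spelled out with SSE intrinsics */
static void add4(const float *a, const float *b, float *c) {
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(c, _mm_add_ps(va, vb));
}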

>inlining being a command, not a suggestion
There are some extensions to force this, but most compilers will do what you want with static inline. Or with a macro.

>standards-defined way to use inline assembly (I'm not sure that C doesn't have it, I don't remember if it's da standard or GNU shit)
asm("implementation-specific"); is *almost* standard.



#include <stdint.h>
#include <limits.h>

/* rotate left by n; fine here since it's only ever called with n = 7 */
static inline unsigned rotlu(unsigned x, unsigned n) {
    const unsigned width = sizeof(unsigned) * CHAR_BIT;
    return (x << n) | (x >> (width - n));
}

int main(int argc, char **argv) {
    /* union so we can feed an int through the unsigned rotate and back */
    union {
        unsigned u;
        int i;
    } x;
    x.i = argc;
    x.u = rotlu(x.u, 7);
    return x.i;
}


will produce for x86 on gcc -O3:

main:
mov eax, edi
rol eax, 7
ret

Name: Anonymous 2018-02-02 2:07

>>1
There is no theoretical reason why a language can't do that which you desire. The reason is a practical reason, nobody cares enough to invest into that new language. As for me, my perfect language is Lisp with full support for macros and domain specific languages. In actual practice, I would write a DSL with Scheme.

Name: Anonymous 2018-02-02 5:24

>>21
asm("implementation-specific"); is *almost* standard.

It's only really available in gcc and compilers that try to emulate it (Clang). MSVC has never understood it.

Which is unfortunate really. Despite its ugliness, gcc extended asm is stupidly useful.
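e.g. the classic rotate, roughly (rotl32_asm is a made-up name; x86 AT&T syntax, gcc/Clang only, the constraints do the register allocation for you):

#include <stdint.h>

static inline uint32_t rotl32_asm(uint32_t x, uint32_t n) {
    /* "+r": x is read and written in a register; "c": n goes in ecx, so cl holds the count */
    __asm__("roll %%cl, %0" : "+r"(x) : "c"(n) : "cc");
    return x;
}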

Name: Anonymous 2018-02-02 8:28

>>20
Binary is an encoding, not a programming language.

Name: Anonymous 2018-02-02 13:56

>>24
encode my anus

Name: Anonymous 2018-02-02 14:27

>>24
No, it's one way to represent an encoding.

Name: Anonymous 2018-02-02 18:38

Check these dubs.
