
Writing an emulator for a slightly unusual abstract FSM

Name: Anonymous 2016-05-12 15:20

what would be a relatively hassle-free way of writing elegant, intuitive code that emulates a finite state machine (no jumps or conditionals) with a self-modifying instruction set (or maybe just two instruction sets: one for modifying data, the other for modifying the first instruction set)? I don't care about performance right now (I doubt it will get to the stage where I need to care about that, but if it does, some fuckery with C function pointers will be inevitable; right now I care about readability and being able to prototype quickly), so functional is OK, but I'm thinking of four-bit instructions and I'm not sure how good purefuncs are at handling that.
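something like this is roughly what I have in mind, as a Python sketch (all names made up, just to show the shape of it): 4-bit opcodes packed two per byte, dispatched through a table that instructions themselves can rewrite.

    def unpack_nibbles(program):
        """yield each program byte as two 4-bit opcodes, high nibble first"""
        for byte in program:
            yield (byte >> 4) & 0xF
            yield byte & 0xF

    def inc(data, table):
        """example data instruction: increment the first byte mod 256"""
        data[0] = (data[0] + 1) & 0xFF
        return data, table

    def swap01(data, table):
        """example table instruction: swap what opcodes 0 and 1 mean"""
        table[0], table[1] = table[1], table[0]
        return data, table

    def nop(data, table):
        return data, table

    def run(program, data):
        # the table is state too: instructions return a possibly modified
        # table, which changes the meaning of every later opcode
        table = [inc, swap01] + [nop] * 14
        for opcode in unpack_nibbles(program):
            data, table = table[opcode](data, table)
        return data

    print(run(bytes([0x01]), bytearray([7])))  # -> bytearray(b'\x08')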

Name: Anonymous 2016-05-14 6:58

>>26
Cannot be done like this. 2+x=4 can be uniquely solved, but y+x=4 cannot be. Even AND, OR, and XOR cannot be uniquely mapped back to their original inputs from an output alone. Just thinking about it now, I don't see how any binary function can avoid information loss without preserving one of the inputs, but that would defeat the purpose of being reversible.

the program consists of instructions and left-hand operands. the input data consists of right operands. the output consists of results. when you have the left operands, the instructions, and the results, and each instruction (with its left operand fixed) is a bijection, finding the right operands is first-grade math.

think about this: a program consists of instructions which translate to things like:
add 0x01 to first byte of input
bitwise rotate right by 0x02 the second byte of input
xor 0x03 to the second byte of input
output results

when you have an output stream and a program, the program to calculate the initial input stream would look like:
xor 0x03 to the second byte of output
bitwise rotate left by 0x02 the second byte of output
subtract 0x01 from the first byte of output
output results
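to make that concrete, here's the same pair of programs as a runnable Python sketch (assuming 8-bit bytes; note the inverse steps run in reverse order, each replaced by its own inverse, since the rotate and the xor on the second byte don't commute):

    def rotr(b, n):
        """rotate an 8-bit value right by n"""
        n &= 7
        return ((b >> n) | (b << (8 - n))) & 0xFF

    def rotl(b, n):
        """rotate an 8-bit value left by n"""
        return rotr(b, 8 - (n & 7))

    def forward(data):
        data[0] = (data[0] + 0x01) & 0xFF   # add 0x01 to first byte
        data[1] = rotr(data[1], 0x02)       # rotate second byte right by 2
        data[1] ^= 0x03                     # xor 0x03 into second byte
        return data

    def inverse(data):
        data[1] ^= 0x03                     # xor is its own inverse
        data[1] = rotl(data[1], 0x02)       # undo the rotation
        data[0] = (data[0] - 0x01) & 0xFF   # undo the addition
        return data

    original = bytearray([0x10, 0x5A])
    assert inverse(forward(bytearray(original))) == original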

That's a waste of time. A naive implementation of multiplication (the one your processor uses, not the fancy shit in bignum libraries) is O(n^2), while addition is O(n), where n is the number of bits. Addition (or any other simple instruction, such as a logical operation) would have to stall for 240 operations to take as long as a single multiplication on a 16-bit number (16^2 = 256 bit operations versus 16).

who said I need multiplication? this is not a general purpose computer and it's not supposed to be programmed by humans. it's designed with a specific goal in mind and this goal doesn't require multiplication.

Did you mean "self-modifying instruction set" literally, as in, the instructions that opcodes map to are changed?

yes, and the initial mappings are a part of each program.

Why not just change the program itself?

imagine you have a 2GB program (which is not that unlikely in this case) and each instruction modifies every instruction that comes after it. execution time is then nearly quadratic in the program size, and modifications that happen early in the program take longer to execute than those near the end.

now, imagine you have 16 opcodes, each mapped to a single instruction. each instruction modifies the opcode array/list/whatever other data structure I pick. complexity is now linear in the program size and, if the modifications are chosen carefully, each modification can happen in constant time.
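roughly like this, as a Python sketch (names made up) contrasting the two approaches: patching the tail of the program costs O(remaining instructions) per modification, which goes quadratic over a whole run, while editing a 16-entry opcode table costs O(1) per modification, keeping the run linear.

    def rewrite_tail(program, pos, f):
        """the rejected approach: patch every instruction after pos"""
        for i in range(pos + 1, len(program)):  # O(len(program) - pos)
            program[i] = f(program[i])

    def swap_opcodes(table, a, b):
        """the proposed approach: one table edit retargets every later
        occurrence of opcodes a and b without touching the program"""
        table[a], table[b] = table[b], table[a]  # O(1)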
