Message from @DanielKO
Discord ID: 429490836133117962
had to look this part up
https://c9x.me/x86/html/file_module_x86_id_180.html
Most of the other instructions are somewhat familiar
it's using single-precision floating point, so if you typed in different numbers it may have automatically chosen a format with more precision?
That's governed by how arithmetic on basic types works in C++.
Replace the `float` with `double`, and call `::sqrt()`, to see it use different instructions.
The `pxor`, `ucomiss` and `ja` serve to check whether the argument is non-negative; if so, it can just use the `sqrtss` instruction; otherwise, it needs to call `sqrt()` from the standard library, which handles all the nasty cases: NaN, infinity, negative arguments.
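For reference, something like this on Compiler Explorer shows the difference (a sketch with my own function names, assuming gcc -O2 without -ffast-math; the library fallback is there because `sqrt()` has to set errno for negative inputs):
```cpp
#include <cmath>

float  f(float x) { return std::sqrt(x); }  // sqrtss, guarded by pxor/ucomiss/ja
double g(double x) { return ::sqrt(x); }    // sqrtsd, with the analogous guard
```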
okay. so the chip has its own primitive math
Yeah, it's a CISC architecture.
Switch to the MIPS gcc, and you'll get a very different result.
RISC doesn't have its own maths instructions?
I've only read a couple of pages about MIPS so far
Reduced Instruction Set Computer: the whole point is to have so few instructions in the architecture that the circuitry is very small.
Being small means there's less need for synchronization, thus it can run faster, and there are more transistors that can be used for caches.
PowerISA stands for Performance Optimization With Enhanced Reduced Instruction Set Computer Instruction Set Architecture
So a typical RISC arch won't have any advanced instructions. No specialized math, no instructions that mix register operands with memory, etc.
they don't use the same design pillars when picking acronyms
Of course, at some point you end up with extra room in the silicon, so some complex instructions sneak back in, just because they can.
so all RISC programmers need to have their maths in standard libraries?
It's not like there's a circuit that does math functions in Intel chips. It also runs some software to calculate them.
It's just that it's built into the chip.
Downside is, if the manufacturer didn't pay much attention to details, you can get bad results. Fast, but wrong.
what do you call that? firmware? embedded process??
That would be the CPU's microcode.
ok
Intel CPUs were notorious for having bad trig instructions when the argument falls outside the normalized range.
"Bad" meaning they didn't calculate all the bits they promised.
I remember seeing a paper a while back, about how those math functions in CPUs had some unexpected precision problems that didn't even match what the manual promised.
IIRC, for sine/cosine, Intel uses a lookup table, then interpolates the values.
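Just to illustrate the general idea (a toy sketch, not Intel's actual table size or interpolation scheme):
```cpp
#include <array>
#include <cmath>

// Toy table-plus-linear-interpolation sine, valid on [0, pi/2].
const double HALF_PI = 1.57079632679489661923;
constexpr int N = 256;

double sin_lut(double x) {                  // assumes 0 <= x <= pi/2
    static const auto table = [] {
        std::array<double, N + 1> t{};
        for (int i = 0; i <= N; ++i)
            t[i] = std::sin(i * HALF_PI / N);
        return t;
    }();
    double pos = x / HALF_PI * N;
    int i = static_cast<int>(pos);
    if (i >= N) i = N - 1;                  // clamp so i + 1 stays in range
    double f = pos - i;                     // fractional position in the cell
    return table[i] + f * (table[i + 1] - table[i]);
}
```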
wild guess, did this come to light during the early 3D era?
Not really, it was a paper from the scientific computing / simulations field.
Obviously you can't get infinite precision on a computer, but you can keep track of how big the error is. There are a few different approaches for that.
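One classic building block for that kind of tracking is Knuth's TwoSum, an error-free transformation that recovers the exact rounding error of a single addition (a sketch; the function name is mine):
```cpp
#include <utility>

// TwoSum: s = fl(a + b), and e is the exact rounding error,
// so a + b == s + e holds exactly in real arithmetic.
// Only meaningful if the compiler doesn't reorder the FP operations.
std::pair<double, double> two_sum(double a, double b) {
    double s  = a + b;
    double bv = s - a;                     // the part of s that came from b
    double e  = (a - (s - bv)) + (b - bv); // what each operand lost
    return {s, e};
}
```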
But it's all for nothing if the operations are not delivering the precision they promise.
One of the reasons people who do scientific computing hate Intel's compiler: it tends to gratuitously rewrite floating point expressions to make them faster.
Which, if you spent a significant amount of time making sure every evaluation is careful so you keep track of the error, is a kick in the balls.
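Kahan summation is the classic victim; a sketch of what that rewriting destroys (function name mine):
```cpp
#include <vector>

// Kahan compensated summation: c carries the rounding error from one
// iteration into the next. A compiler that treats FP math as associative
// (e.g. -ffast-math, or icc's default fp-model) may fold (t - sum) - y
// down to 0 and silently turn this back into a naive sum.
double kahan_sum(const std::vector<double>& xs) {
    double sum = 0.0, c = 0.0;
    for (double x : xs) {
        double y = x - c;                // apply last round's correction
        double t = sum + y;
        c = (t - sum) - y;               // error introduced by sum + y
        sum = t;
    }
    return sum;
}
```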
Yeah, I think that's the article.
I've actually been interested in Intel's C++ compiler because I've read it's one of the reasons PRBoom+ runs so much faster than GZDoom
I mean, it's just a game, so sure, worst that can happen is what? Render a pixel wrong?
projectiles will have inaccurate trajectories at long range, I guess