Not that I expect Mitch Alsup to approve!
The 6600 had several I/O processors with a 12-bit word length that
were really one processor, basically using SMT.
Well, if I have a processor with an ISA that involves register banks
of 32 registers each... an alternate instruction set involving
register banks of 8 registers each would let me allocate either one
compute thread or four threads with the I/O processor instruction set.
And what would the I/O processor instruction set look like?
Think of the PDP-11 or the 9900 but give more importance to
floating-point.
With modern technology allowing 32-128 CPUs on a single die--there is
no reason to limit the width of a PP to 12-bits (1965:: yes there was
ample reason:: 2024 no reason whatsoever.) There is little reason to
even do 32-bit PPs when it cost so little more to get a 64-bit core.
As Scott stated:: there does not seem to be any reason to need FP on a
core only doing I/O and kernel queueing services.
The 6600 had several I/O processors with a 12-bit word length that were really one processor, basically using SMT.
On Wed, 17 Apr 2024 23:32:20 +0000, mitchalsup@aol.com (MitchAlsup1)
wrote:
With modern technology allowing 32-128 CPUs on a single die--there is
no reason to limit the width of a PP to 12-bits (1965:: yes there was
ample reason:: 2024 no reason whatsoever.) There is little reason to
even do 32-bit PPs when it cost so little more to get a 64-bit core.
Well, I'm not. The PP instruction set I propose uses 16-bit and 32-bit instructions, and so uses the same bus as the main instruction set.
As Scott stated:: there does not seem to be any reason to need FP on a
core only doing I/O and kernel queueing services.
That's true.
This isn't about cores, though. Instead, a core running the main ISA
of the processor will simply have the option to replace one
regular-ISA thread by four threads which use 8 registers instead of
32, allowing SMT with more threads.
So we're talking about the same core. The additional threads will get
to execute instructions 1/4 as often as regular threads, so their
performance is reduced, matching an ISA that gives them fewer
registers.
Since the design is reminiscent of the 6600 PPs, these threads might
be used for I/O tasks, but nothing stops them from being used for
other purposes for which access to the FP capabilities of the chip may
be relevant.
John Savard
--- Synchronet 3.20a-Linux NewsLink 1.114
On Wed, 17 Apr 2024 15:19:03 -0600, John Savard wrote:
The 6600 had several I/O processors with a 12-bit word length that were
really one processor, basically using SMT.
Originally these “PPUs” (“Peripheral Processor Units”) were for running
the OS, while the main CPU was primarily dedicated to running user
programs.
Apparently this idea did not work out so well, and in later versions of
the OS, more code ran on the CPU instead of the PPUs.
Yes, exactly, and it is for those other purposes that you want these
device cores to operate on the same ISA as the big cores. This way if
anything goes wrong, you can simply lob the code back to a CPU-centric
core and finish the job.
Each core can just switch between compute duty with N threads, and I/O
service duty with 4*N threads - or anywhere in between.
On Thu, 18 Apr 2024 23:42:15 -0600, John Savard <quadibloc@servername.invalid> wrote:
Each core can just switch between compute duty with N threads, and I/O
service duty with 4*N threads - or anywhere in between.
So I hope it is clear now I'm talking about SMT threads, not cores.
Threads are orthogonal to cores.
But I did make one oversimplification that could be confusing.
The full instruction set assumes banks of 32 registers, one each for
integer and floats, the reduced instruction set assumes banks of 8
registers, one each for integer and floats.
So one thread of the full ISA can be replaced by four threads of the
reduced ISA; both use the same number of registers.
That's all right for an in-order design. But in real life, computers
are out-of-order. So the *rename* registers would have to be split up.
Since the reduced ISA threads are four times greater in number, their instructions have four times as long to finish executing before their
thread gets a chance to execute again.
So presumably reduced ISA
threads will need less aggressive OoO, and 1/4 the rename registers
might be adequate, but there's obviously no guarantee that this would
indeed be an ideal fit.
John Savard
So how does a 32-register thread "call" an 8 register thread ?? or vice
versa ??
John Savard wrote:
So presumably reduced ISA
threads will need less aggressive OoO, and 1/4 the rename registers
might be adequate, but there's obviously no guarantee that this would
indeed be an ideal fit.
LoL.
On Fri, 19 Apr 2024 18:40:45 +0000, mitchalsup@aol.com (MitchAlsup1)
wrote:
John Savard wrote:
So presumably reduced ISA
threads will need less aggressive OoO, and 1/4 the rename registers
might be adequate, but there's obviously no guarantee that this would
indeed be an ideal fit.
LoL.
Well, yes. The fact that pretty much all serious high-performance
designs these days _are_ OoO basically means that my brilliant idea is
DoA.
Of course, instead of replacing 1 full-ISA thread with 4 light-ISA
threads, one could use a different number, based on what is optimum
for a given implementation. But that ratio would now vary from one
chip to another, being model-dependent.
So it's not *totally* destroyed, but this is still a major blow.
On Sat, 20 Apr 2024 01:09:53 -0600, John Savard <quadibloc@servername.invalid> wrote:
And, hey, I'm not the first guy to get sunk because of forgetting what
lies under the tip of the iceberg that's above the water.
That also happened to the captain of the _Titanic_.
John Savard
John Savard wrote:
On Sat, 20 Apr 2024 01:09:53 -0600, John Savard
<quadibloc@servername.invalid> wrote:
And, hey, I'm not the first guy to get sunk because of forgetting what
lies under the tip of the iceberg that's above the water.
That also happened to the captain of the _Titanic_.
Concer-tina-tanic !?!
On 4/20/2024 12:07 PM, MitchAlsup1 wrote:
John Savard wrote:
On Sat, 20 Apr 2024 01:09:53 -0600, John Savard
<quadibloc@servername.invalid> wrote:
And, hey, I'm not the first guy to get sunk because of forgetting what
lies under the tip of the iceberg that's above the water.
That also happened to the captain of the _Titanic_.
Concer-tina-tanic !?!
Seems about right.
Seems like a whole lot of flailing with designs that seem needlessly complicated...
Meanwhile, has looked around and noted:
In some ways, RISC-V is sort of like MIPS with the field order reversed,
and (ironically) actually smaller immediate fields (MIPS was using a lot
of Imm16 fields, whereas RISC-V mostly used Imm12).
But, seemed to have more wonk:
A mode with 32x 32-bit GPRs;
A mode with 32x 64-bit GPRs;
Apparently a mode with 32x 32-bit GPRs that can be paired to 16x 64-bits
as needed for 64-bit operations?...
Integer operations (on 64-bit registers) that give UB or trap if values
are outside of signed Int32 range;
Other operations that sign-extend the values but are ironically called "unsigned" (apparently, similar wonk to RISC-V by having sign-extended Unsigned Int);
Branch operations are bit-sliced;
....
I had preferred a different strategy in some areas:
Assume non-trapping operations by default;
Sign-extend signed values, zero-extend unsigned values.
Though, this is partly the source of some operations in my case assuming
33 bit sign-extended: This can represent both the signed and unsigned
32-bit ranges.
One could argue that sign-extending both could save 1 bit in some cases. But, this creates wonk in other cases, such as requiring an explicit
zero extension for "unsigned int" to "long long" casts; and more cases
where separate instructions are needed for Int32 and Int64 cases (say,
for example, RISC-V needed around 4x as many Int<->Float conversion operators due to its design choices in this area).
Say:
RV64:
Int32<->Binary32, UInt32<->Binary32
Int64<->Binary32, UInt64<->Binary32
Int32<->Binary64, UInt32<->Binary64
Int64<->Binary64, UInt64<->Binary64
BJX2:
Int64<->Binary64, UInt64<->Binary64
With the UInt64 case mostly added because otherwise one needs a wonky
edge case to deal with this (but is rare in practice).
The separate 32-bit cases were avoided by tending to normalize
everything to Binary64 in registers (with Binary32 only existing in SIMD form or in memory).
Annoyingly, I did end up needing to add logic for all of these cases to
deal with RV64G.
Currently no plans to implement RISC-V's Privileged ISA stuff, mostly because it would likely be unreasonably expensive.
It is in theory
possible to write an OS to run in RISC-V mode, but it would need to deal with the different OS level and hardware-level interfaces (in much the
same way, as I needed to use a custom linker script for GCC, as my stuff uses a different memory map from the one GCC had assumed; namely that of
RAM starting at the 64K mark, rather than at the 16MB mark).
In some cases in my case, there are distinctions between 32-bit and
64-bit compare-and-branch ops. I am left thinking this distinction may
be unnecessary, and one may only need 64 bit compare and branch.
In the emulator, the current difference ended up mostly that the 32-bit
version sees whether the 32-bit and 64-bit versions would give a
different result, faulting if so, since this generally means that there
is a bug elsewhere (such as other code producing out-of-range values).
For a few newer cases (such as the 3R compare ops, which produce a 1-bit output in a register), had only defined 64-bit versions.
One could just ignore the distinction between 32 and 64 bit compare in hardware, but had still burnt the encoding space on this. In a new ISA design, I would likely drop the existence of 32-bit compare and use exclusively 64-bit compare.
In many cases, the distinction between 32-bit and 64-bit operations, or between 2R and 3R cases, had ended up less significant than originally thought (and now have ended up gradually deprecating and disabling some
of the 32-bit 2R encodings mostly due to "lack of relevance").
Though, admittedly, part of the reason for a lot of separate 2R cases existing was that I had initially had the impression that there may have been a performance cost difference between 2R and 3R instructions. This ended up not really the case, as the various units ended up typically
using 3R internally anyways.
So, say, one needs an ALU with, say:
2 inputs, one output;
Ability to bit-invert the second input
along with inverting carry-in, ...
Ability to sign or zero extend the output.
So, say, operations:
ADD / SUB (Add, 64-bit)
ADDSL / SUBSL (Add, 32-bit, sign extend)
ADDUL / SUBUL (Add, 32-bit, zero extend)
AND
OR
XOR
CMPEQ
CMPNE
CMPGT (CMPLT implicit)
CMPGE (CMPLE implicit)
CMPHI (unsigned GT)
CMPHS (unsigned GE)
....
Where, internally compare works by performing a subtract and then
producing a result based on some status bits (Z,C,S,O). As I see it,
ideally these bits should not be exposed at the ISA level though (much
pain and hair results from the existence of architecturally visible ALU status-flag bits).
Some other features could still be debated though, along with how much simplification could be possible.
If I did a new design, would probably still keep predication and jumbo prefixes.
Explicit bundling vs superscalar could be argued either way, as
superscalar isn't as expensive as initially thought, but in a simpler
form is comparably weak (the compiler has an advantage that it can
invest more expensive analysis into this, reorder instructions, etc; but this only goes so far as the compiler understands the CPU's pipeline,
ties the code to a specific pipeline structure, and becomes effectively
moot with OoO CPU designs).
So, a case could be made that a "general use" ISA be designed without
the use of explicit bundling. In my case, using the bundle flags also requires the code to use an instruction to signal to the CPU what configuration of pipeline it expects to run on, with the CPU able to
fall back to scalar (or superscalar) execution if it does not match.
For the most part, thus far nearly everything has ended up as "Mode 2", namely:
3 lanes;
Lane 1 does everything;
Lane 2 does Basic ALU ops, Shift, Convert (CONV), ...
Lane 3 only does Basic ALU ops and a few CONV ops and similar.
Lane 3 originally also did Shift, dropped to reduce cost.
Mem ops may eat Lane 3, ...
Where, say:
Mode 0 (Default):
Only scalar code is allowed, CPU may use superscalar (if available).
Mode 1:
2 lanes:
Lane 1 does everything;
Lane 2 does ALU, Shift, and CONV.
Mem ops take up both lanes.
Effectively scalar for Load/Store.
Later defined that 128-bit MOV.X is allowed in a Mode 1 core.
Had defined wider modes, and ones that allow dual-lane IO and FPU instructions, but these haven't seen use (too expensive to support in hardware).
Had ended up with the ambiguous "extension" to the Mode 2 rules of
allowing an FPU instruction to be executed from Lane 2 if there was not
an FPU instruction in Lane 1, or allowing co-issuing certain FPU instructions if they effectively combine into a corresponding SIMD op.
In my current configurations, there is only a single memory access port.
A second memory access port would help with performance, but is
comparably a rather expensive feature (and doesn't help enough to
justify its fairly steep cost).
For lower-end cores, a case could be made for assuming a 1-wide CPU with
a 2R1W register file, but designing the whole ISA around this limitation
and not allowing for anything more is limiting (and mildly detrimental
to performance). If we can assume cores with an FPU, we can probably
also assume cores with more than two register read ports available.
....
BGB wrote:
Sign-extend signed values, zero-extend unsigned values.
Another mistake I made in Mc 88100.
On Fri, 19 Apr 2024 18:40:45 +0000, mitchalsup@aol.com (MitchAlsup1)
wrote:
So how does a 32-register thread "call" an 8 register thread ?? or vice
versa ??
That sort of thing would be done by supervisor mode instructions,
similar to the ones used to start additional threads on a given core,
or start threads on a new core.
Since the lightweight ISA has the benefit of having fewer registers
allocated, it's not the same as, say, a "thumb mode" which offers
more compact code as its benefit. Instead, this is for use in classes
of threads that are separate from ordinary code.
I/O processing threads being one example of this.
On Sat, 20 Apr 2024 22:03:21 +0000, mitchalsup@aol.com (MitchAlsup1)
wrote:
BGB wrote:
Sign-extend signed values, zero-extend unsigned values.
Another mistake I made in Mc 88100.
As that is a mistake the IBM 360 made, I make it too. But I make it
the way the 360 did: there are no signed and unsigned values, in the
sense of a Burroughs machine, there are just Load, Load Unsigned - and
Insert - instructions.
Index and base register values are assumed to be unsigned.
John Savard
BGB wrote:
On 4/20/2024 12:07 PM, MitchAlsup1 wrote:
John Savard wrote:
On Sat, 20 Apr 2024 01:09:53 -0600, John Savard
<quadibloc@servername.invalid> wrote:
And, hey, I'm not the first guy to get sunk because of forgetting what >>>> lies under the tip of the iceberg that's above the water.
That also happened to the captain of the _Titanic_.
Concer-tina-tanic !?!
Seems about right.
Seems like a whole lot of flailing with designs that seem needlessly
complicated...
Meanwhile, has looked around and noted:
In some ways, RISC-V is sort of like MIPS with the field order reversed,
They, in effect, Little-Endian-ed the fields.
and (ironically) actually smaller immediate fields (MIPS was using a
lot of Imm16 fields, whereas RISC-V mostly used Imm12).
Yes, RISC-V took a step back with the 12-bit immediates. My 66000, on
the other hand, only has 12-bit immediates for shift instructions--
allowing all shifts to reside in one Major OpCode; the rest inst[31]=1
have 16-bit immediates (universally sign extended).
But, seemed to have more wonk:
A mode with 32x 32-bit GPRs; // unnecessary
A mode with 32x 64-bit GPRs;
Apparently a mode with 32x 32-bit GPRs that can be paired to 16x
64-bits as needed for 64-bit operations?...
Repeating the mistake I made on Mc 88100....
Integer operations (on 64-bit registers) that give UB or trap if
values are outside of signed Int32 range;
Isn't it just wonderful ??
Other operations that sign-extend the values but are ironically called
"unsigned" (apparently, similar wonk to RISC-V by having
sign-extended Unsigned Int);
Branch operations are bit-sliced;
....
I had preferred a different strategy in some areas:
Assume non-trapping operations by default;
Assume trap/"do the expected thing" under a user accessible flag.
Sign-extend signed values, zero-extend unsigned values.
Another mistake I made in Mc 88100.
Do you sign extend the 16-bit displacement on an unsigned LD ??
Though, this is partly the source of some operations in my case
assuming 33 bit sign-extended: This can represent both the signed and
unsigned 32-bit ranges.
These are some of the reasons My 66000 is 64-bit register/calculation only.
One could argue that sign-extending both could save 1 bit in some
cases. But, this creates wonk in other cases, such as requiring an
explicit zero extension for "unsigned int" to "long long" casts; and
more cases where separate instructions are needed for Int32 and Int64
cases (say, for example, RISC-V needed around 4x as many Int<->Float
conversion operators due to its design choices in this area).
It also gets difficult when you consider EADD Rd,Rdouble,Rexponent ??
is it a FP calculation or an integer calculation ?? If Rdouble is a
constant is the constant FP or int, if Rexponent is a constant is it
double or int,..... Does it raise FP overflow or integer overflow ??
Say:
RV64:
Int32<->Binary32, UInt32<->Binary32
Int64<->Binary32, UInt64<->Binary32
Int32<->Binary64, UInt32<->Binary64
Int64<->Binary64, UInt64<->Binary64
BJX2:
Int64<->Binary64, UInt64<->Binary64
My 66000:
int64_t -> { uint64_t, float, double }
uint64_t -> { int64_t, float, double }
float -> { uint64_t, int64_t, double }
double -> { uint64_t, int64_t, float }
With the UInt64 case mostly added because otherwise one needs a wonky
edge case to deal with this (but is rare in practice).
The separate 32-bit cases were avoided by tending to normalize
everything to Binary64 in registers (with Binary32 only existing in
SIMD form or in memory).
I saved LD and ST instructions by leaving float 32-bits in the registers.
Annoyingly, I did end up needing to add logic for all of these cases
to deal with RV64G.
No rest for the wicked.....
Currently no plans to implement RISC-V's Privileged ISA stuff, mostly
because it would likely be unreasonably expensive.
The sea of control registers or the sequencing model applied thereon ??
My 66000 allows access to all control registers via memory mapped I/O
space.
It is in theory
possible to write an OS to run in RISC-V mode, but it would need to
deal with the different OS level and hardware-level interfaces (in
much the same way, as I needed to use a custom linker script for GCC,
as my stuff uses a different memory map from the one GCC had assumed;
namely that of RAM starting at the 64K mark, rather than at the 16MB
mark).
In some cases in my case, there are distinctions between 32-bit and
64-bit compare-and-branch ops. I am left thinking this distinction may
be unnecessary, and one may only need 64 bit compare and branch.
No 32-bit stuff, thereby no 32-bit distinctions needed.
In the emulator, the current difference ended up mostly that the
32-bit version sees if the 32-bit and 64-bit version would give a
different result, faulting if so, since this generally means that
there is a bug elsewhere (such as other code is producing out-of-range
values).
Saving vast amounts of power {{{not}}}
For a few newer cases (such as the 3R compare ops, which produce a
1-bit output in a register), had only defined 64-bit versions.
Oh what a tangled web we.......
One could just ignore the distinction between 32 and 64 bit compare in
hardware, but had still burnt the encoding space on this. In a new ISA
design, I would likely drop the existence of 32-bit compare and use
exclusively 64-bit compare.
In many cases, the distinction between 32-bit and 64-bit operations,
or between 2R and 3R cases, had ended up less significant than
originally thought (and now have ended up gradually deprecating and
disabling some of the 32-bit 2R encodings mostly due to "lack of
relevance").
I deprecated all of them.
Though, admittedly, part of the reason for a lot of separate 2R cases
existing was that I had initially had the impression that there may
have been a performance cost difference between 2R and 3R
instructions. This ended up not really the case, as the various units
ended up typically using 3R internally anyways.
So, say, one needs an ALU with, say:
2 inputs, one output;
Ability to bit-invert the second input
along with inverting carry-in, ...
Ability to sign or zero extend the output.
You forgot carry, and inversion to perform subtraction.
So, My 66000 integer adder has 3 carry inputs, and I discovered a way to
perform these that takes no more gates of delay than the typical 1-carry-in
64-bit integer adder. This gives me a = -b - c; for free.
So, say, operations:
ADD / SUB (Add, 64-bit)
ADDSL / SUBSL (Add, 32-bit, sign extend) // nope
ADDUL / SUBUL (Add, 32-bit, zero extend) // nope
AND
OR
XOR
CMPEQ // 1 ICMP inst
CMPNE
CMPGT (CMPLT implicit)
CMPGE (CMPLE implicit)
CMPHI (unsigned GT)
CMPHS (unsigned GE)
....
Where, internally compare works by performing a subtract and then
producing a result based on some status bits (Z,C,S,O). As I see it,
ideally these bits should not be exposed at the ISA level though (much
pain and hair results from the existence of architecturally visible
ALU status-flag bits).
I agree that these flags should not be exposed through ISA; and I did not.
On the other hand multi-precision arithmetic demands at least carry {or
some other means which is even more powerful--such as CARRY.....}
Some other features could still be debated though, along with how much
simplification could be possible.
If I did a new design, would probably still keep predication and jumbo
prefixes.
I kept predication but not the way most predication works.
My work on Mc 88120 and K9 taught me the futility of things in the instruction stream that provide artificial boundaries. I have a suspicion that if you have the FPGA capable of allowing you to build an 8-wide
machine, you would do the jumbo stuff differently, too.
Explicit bundling vs superscalar could be argued either way, as
superscalar isn't as expensive as initially thought, but in a simpler
form is comparably weak (the compiler has an advantage that it can
invest more expensive analysis into this, reorder instructions, etc;
but this only goes so far as the compiler understands the CPU's pipeline,
Compilers are notoriously unable to outguess a good branch predictor.
ties the code to a specific pipeline structure, and becomes
effectively moot with OoO CPU designs).
OoO exists, in a practical sense, to abstract the pipeline out of the compiler; or conversely, to allow multiple implementations to run the
same compiled code optimally on each implementation.
So, a case could be made that a "general use" ISA be designed without
the use of explicit bundling. In my case, using the bundle flags also
requires the code to use an instruction to signal to the CPU what
configuration of pipeline it expects to run on, with the CPU able to
fall back to scalar (or superscalar) execution if it does not match.
Sounds like a bridge too far for your 8-wide GBOoO machine.
For the most part, thus far nearly everything has ended up as "Mode
2", namely:
3 lanes;
Lane 1 does everything;
Lane 2 does Basic ALU ops, Shift, Convert (CONV), ...
Lane 3 only does Basic ALU ops and a few CONV ops and similar.
Lane 3 originally also did Shift, dropped to reduce cost.
Mem ops may eat Lane 3, ...
Try 6-lanes:
1,2,3 Memory ops + integer ADD and Shifts
4 FADD ops + integer ADD and FMisc
5 FMAC ops + integer ADD
6 CMP-BR ops + integer ADD
Where, say:
Mode 0 (Default):
Only scalar code is allowed, CPU may use superscalar (if available).
Mode 1:
2 lanes:
Lane 1 does everything;
Lane 2 does ALU, Shift, and CONV.
Mem ops take up both lanes.
Effectively scalar for Load/Store.
Later defined that 128-bit MOV.X is allowed in a Mode 1 core.
Modeless.
Had defined wider modes, and ones that allow dual-lane IO and FPU
instructions, but these haven't seen use (too expensive to support in
hardware).
Had ended up with the ambiguous "extension" to the Mode 2 rules of
allowing an FPU instruction to be executed from Lane 2 if there was
not an FPU instruction in Lane 1, or allowing co-issuing certain FPU
instructions if they effectively combine into a corresponding SIMD op.
In my current configurations, there is only a single memory access port.
This should imply that your 3-wide pipeline is running at 90%-95% memory/cache saturation.
A second memory access port would help with performance, but is
comparably a rather expensive feature (and doesn't help enough to
justify its fairly steep cost).
For lower-end cores, a case could be made for assuming a 1-wide CPU
with a 2R1W register file, but designing the whole ISA around this
limitation and not allowing for anything more is limiting (and mildly
detrimental to performance). If we can assume cores with an FPU, we
can probably also assume cores with more than two register read ports
available.
If you design around the notion of a 3R1W register file, FMAC and INSERT
fall out of the encoding easily. Done right, one can switch it into a 4R
or 4W register file for ENTER and EXIT--lessening the overhead of call/ret.
....
On 4/20/2024 5:03 PM, MitchAlsup1 wrote:
BGB wrote:
Compilers are notoriously unable to outguess a good branch predictor.
Errm, assuming the compiler is capable of things like general-case
inlining and loop-unrolling.
I was thinking of simpler things, like shuffling operators between independent (sub)expressions to limit the number of register-register dependencies.
Like, in-order superscalar isn't going to do crap if nearly every instruction depends on every preceding instruction. Even pipelining
can't help much with this.
The compiler can shuffle the instructions into an order to limit the
number of register dependencies and better fit the pipeline. But, then,
most of the "hard parts" are already done (so it doesn't take much more
for the compiler to flag which instructions can run in parallel).
Meanwhile, a naive superscalar may miss cases that could be run in
parallel, if it is evaluating the rules "coarsely" (say, evaluating what
is safe or not safe to run things in parallel based on general groupings
of opcodes rather than the rules of specific opcodes; or, say, a false-positive register alias if part of the Imm field of a 3RI instruction is interpreted as a register ID, ...).
Granted, seemingly even a naive approach is able to get around 20% ILP
out of "GCC -O3" output for RV64G...
But, the GCC output doesn't seem to be quite as weak as some people are claiming either.
ties the code to a specific pipeline structure, and becomes
effectively moot with OoO CPU designs).
OoO exists, in a practical sense, to abstract the pipeline out of the
compiler; or conversely, to allow multiple implementations to run the
same compiled code optimally on each implementation.
Granted, but OoO isn't cheap.
So, a case could be made that a "general use" ISA be designed without
the use of explicit bundling. In my case, using the bundle flags also
requires the code to use an instruction to signal to the CPU what
configuration of pipeline it expects to run on, with the CPU able to
fall back to scalar (or superscalar) execution if it does not match.
Sounds like a bridge too far for your 8-wide GBOoO machine.
For sake of possible fancier OoO stuff, I upheld a basic requirement for
the instruction stream:
The semantics of the instructions as executed in bundled order needs to
be equivalent to that of the instructions as executed in sequential order.
In this case, the OoO CPU can entirely ignore the bundle hints, and
treat "WEXMD" as effectively a NOP.
This would have broken down for WEX-5W and WEX-6W (where enforcing a parallel==sequential constraint effectively becomes unworkable, and/or renders the wider pipeline effectively moot), but these designs are
likely dead anyways.
And, with 3-wide, the parallel==sequential order constraint remains in effect.
For the most part, thus far nearly everything has ended up as "Mode
2", namely:
3 lanes;
Lane 1 does everything;
Lane 2 does Basic ALU ops, Shift, Convert (CONV), ...
Lane 3 only does Basic ALU ops and a few CONV ops and similar.
Lane 3 originally also did Shift, dropped to reduce cost.
Mem ops may eat Lane 3, ...
Try 6-lanes:
1,2,3 Memory ops + integer ADD and Shifts
4 FADD ops + integer ADD and FMisc
5 FMAC ops + integer ADD
6 CMP-BR ops + integer ADD
As can be noted, my thing is more a "LIW" rather than a "true VLIW".
So, MEM/BRA/CMP/... all end up in Lane 1.
Lanes 2/3 effectively ending up used to fold over most of the ALU ops, turning Lane 1 mostly into a wall of Load and Store instructions.
Where, say:
Mode 0 (Default):
Only scalar code is allowed, CPU may use superscalar (if available).
Mode 1:
2 lanes:
Lane 1 does everything;
Lane 2 does ALU, Shift, and CONV.
Mem ops take up both lanes.
Effectively scalar for Load/Store.
Later defined that 128-bit MOV.X is allowed in a Mode 1 core.
Modeless.
Had defined wider modes, and ones that allow dual-lane IO and FPU
instructions, but these haven't seen use (too expensive to support in
hardware).
Had ended up with the ambiguous "extension" to the Mode 2 rules of
allowing an FPU instruction to be executed from Lane 2 if there was
not an FPU instruction in Lane 1, or allowing co-issuing certain FPU
instructions if they effectively combine into a corresponding SIMD op.
In my current configurations, there is only a single memory access port.
This should imply that your 3-wide pipeline is running at 90%-95%
memory/cache saturation.
If you mean that execution is mostly running end-to-end memory
operations, yeah, this is basically true.
Comparably, RV code seems to end up running a lot of non-memory ops in
Lane 1, whereas BJX2 is mostly running lots of memory ops, with Lane 2 handling most of the ALU ops and similar (and Lane 3, occasionally).
If you design around the notion of a 3R1W register file, FMAC and INSERT
fall out of the encoding easily. Done right, one can switch it into a 4R
or 4W register file for ENTER and EXIT--lessening the overhead of call/ret.
Possibly.
It looks like some savings could be possible in terms of prologs and epilogs.
As-is, these are generally like:
MOV LR, R18
MOV GBR, R19
ADD -192, SP
MOV.X R18, (SP, 176) //save GBR and LR
MOV.X ... //save registers
WEXMD 2 //specify that we want 3-wide execution here
//Reload GBR, *1
MOV.Q (GBR, 0), R18
MOV 0, R0 //special reloc here
MOV.Q (GBR, R0), R18
MOV R18, GBR
//Generate Stack Canary, *2
MOV 0x5149, R18 //magic number (randomly generated)
VSKG R18, R18 //Magic (combines input with SP and magic numbers)
MOV.Q R18, (SP, 144)
...
function-specific stuff
...
MOV 0x5149, R18
MOV.Q (SP, 144), R19
VSKC R18, R19 //Validate canary
...
*1: This part ties into the ABI, and mostly exists so that each PE image
can get GBR reloaded back to its own ".data"/".bss" sections (with
multiple program instances in a single address space). But, does mean
that pretty much every non-leaf function ends up needing to go through
this ritual.
*2: Pretty much any function that has local arrays or similar, serves to protect register save area. If the magic number can't regenerate a
matching canary at the end of the function, then a fault is generated.
The cost of some of this starts to add up.
In isolation, not much, but if all this happens, say, 500 or 1000 times
or more in a program, this can add up.
--- Synchronet 3.20a-Linux NewsLink 1.114....
BGB wrote:
On 4/20/2024 5:03 PM, MitchAlsup1 wrote:
BGB wrote:
Compilers are notoriously unable to outguess a good branch predictor.
Errm, assuming the compiler is capable of things like general-case
inlining and loop-unrolling.
I was thinking of simpler things, like shuffling operators between
independent (sub)expressions to limit the number of register-register
dependencies.
Like, in-order superscalar isn't going to do crap if nearly every
instruction depends on every preceding instruction. Even pipelining
can't help much with this.
Pipelining CREATED this (back to back dependencies). No amount of
pipelining can eradicate RAW data dependencies.
The compiler can shuffle the instructions into an order to limit the
number of register dependencies and better fit the pipeline. But,
then, most of the "hard parts" are already done (so it doesn't take
much more for the compiler to flag which instructions can run in
parallel).
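As a rough illustration of the kind of shuffling meant here (a C sketch, not compiler output; function names are invented), summing an array the obvious way creates one long RAW chain, while splitting it across independent accumulators gives an in-order superscalar adjacent adds with no register-register dependency:

```c
#include <stddef.h>

/* Naive sum: every addition depends on the previous one (a RAW chain),
   so adjacent adds cannot issue in parallel lanes. */
long sum_chain(const long *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Same result, but four independent accumulators: the four adds in the
   loop body have no dependency on each other, only on the prior
   iteration, so multiple lanes (or pipeline slots) can overlap them. */
long sum_split(const long *a, size_t n) {
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)          /* tail elements */
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}
```

The same transformation at the instruction level is what a scheduling compiler (or a flagging pass) would do to the emitted code.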
Compiler scheduling works for exactly 1 pipeline implementation and
is suboptimal for all others.
Meanwhile, a naive superscalar may miss cases that could be run in
parallel, if it is evaluating the rules "coarsely" (say, evaluating
what is safe or not safe to run things in parallel based on general
groupings of opcodes rather than the rules of specific opcodes; or,
say, false-positive register alias if, say, part of the Imm field of a
3RI instruction is interpreted as a register ID, ...).
Granted, seemingly even a naive approach is able to get around 20% ILP
out of "GCC -O3" output for RV64G...
But, the GCC output doesn't seem to be quite as weak as some people
are claiming either.
ties the code to a specific pipeline structure, and becomes
effectively moot with OoO CPU designs).
OoO exists, in a practical sense, to abstract the pipeline out of the
compiler; or conversely, to allow multiple implementations to run the
same compiled code optimally on each implementation.
Granted, but OoO isn't cheap.
But it does get the job done.
So, a case could be made that a "general use" ISA be designed
without the use of explicit bundling. In my case, using the bundle
flags also requires the code to use an instruction to signal to the
CPU what configuration of pipeline it expects to run on, with the
CPU able to fall back to scalar (or superscalar) execution if it
does not match.
Sounds like a bridge too far for your 8-wide GBOoO machine.
For sake of possible fancier OoO stuff, I upheld a basic requirement
for the instruction stream:
The semantics of the instructions as executed in bundled order needs
to be equivalent to that of the instructions as executed in sequential
order.
In this case, the OoO CPU can entirely ignore the bundle hints, and
treat "WEXMD" as effectively a NOP.
This would have broken down for WEX-5W and WEX-6W (where enforcing a
parallel==sequential constraint effectively becomes unworkable, and/or
renders the wider pipeline effectively moot), but these designs are
likely dead anyways.
And, with 3-wide, the parallel==sequential order constraint remains in
effect.
For the most part, thus far nearly everything has ended up as "Mode
2", namely:
3 lanes;
Lane 1 does everything;
Lane 2 does Basic ALU ops, Shift, Convert (CONV), ...
Lane 3 only does Basic ALU ops and a few CONV ops and similar.
Lane 3 originally also did Shift, dropped to reduce cost.
Mem ops may eat Lane 3, ...
Try 6-lanes:
1,2,3 Memory ops + integer ADD and Shifts
4 FADD ops + integer ADD and FMisc
5 FMAC ops + integer ADD
6 CMP-BR ops + integer ADD
As can be noted, my thing is more a "LIW" rather than a "true VLIW".
Mine is neither LIW or VLIW but it definitely is LBIO through GBOoO
So, MEM/BRA/CMP/... all end up in Lane 1.
Lanes 2/3 effectively ending up used for fold over most of the ALU ops
turning Lane 1 mostly into a wall of Load and Store instructions.
Where, say:
Mode 0 (Default):
Only scalar code is allowed, CPU may use superscalar (if
available).
Mode 1:
2 lanes:
Lane 1 does everything;
Lane 2 does ALU, Shift, and CONV.
Mem ops take up both lanes.
Effectively scalar for Load/Store.
Later defined that 128-bit MOV.X is allowed in a Mode 1 core.
Modeless.
Had defined wider modes, and ones that allow dual-lane IO and FPU
instructions, but these haven't seen use (too expensive to support
in hardware).
Had ended up with the ambiguous "extension" to the Mode 2 rules of
allowing an FPU instruction to be executed from Lane 2 if there was
not an FPU instruction in Lane 1, or allowing co-issuing certain FPU
instructions if they effectively combine into a corresponding SIMD op.
In my current configurations, there is only a single memory access
port.
This should imply that your 3-wide pipeline is running at 90%-95%
memory/cache saturation.
If you mean that execution is mostly running end-to-end memory
operations, yeah, this is basically true.
Comparably, RV code seems to end up running a lot of non-memory ops in
Lane 1, whereas BJX2 is mostly running lots of memory ops, with Lane 2
handling most of the ALU ops and similar (and Lane 3, occasionally).
One of the things that I notice with My 66000 is when you get all the
constants you ever need at the calculation OpCodes, you end up with
FEWER instructions that "go random places" such as instructions that
<well> paste constants together. This leaves you with a data dependent
string of calculations with occasional memory references. That is::
universal constants get rid of the easy to pipeline extra instructions
leaving the meat of the algorithm exposed.
If you design around the notion of a 3R1W register file, FMAC and INSERT
fall out of the encoding easily. Done right, one can switch it into a 4R
or 4W register file for ENTER and EXIT--lessening the overhead of
call/ret.
Possibly.
It looks like some savings could be possible in terms of prologs and
epilogs.
As-is, these are generally like:
MOV LR, R18
MOV GBR, R19
ADD -192, SP
MOV.X R18, (SP, 176) //save GBR and LR
MOV.X ... //save registers
Why not an instruction that saves LR and GBR without wasting instructions
to place them side by side prior to saving them ??
WEXMD 2 //specify that we want 3-wide execution here
//Reload GBR, *1
MOV.Q (GBR, 0), R18
MOV 0, R0 //special reloc here
MOV.Q (GBR, R0), R18
MOV R18, GBR
MOV.Q (R18, R0), R18
It is gorp like that that led me to do it in HW with ENTER and EXIT.
Save registers to the stack, setup FP if desired, allocate stack on SP,
and decide if EXIT also does RET or just reloads the file. This would
require 2 free registers if done in pure SW, along with several MOVs...
//Generate Stack Canary, *2
MOV 0x5149, R18 //magic number (randomly generated)
VSKG R18, R18 //Magic (combines input with SP and magic numbers)
MOV.Q R18, (SP, 144)
...
function-specific stuff
...
MOV 0x5149, R18
MOV.Q (SP, 144), R19
VSKC R18, R19 //Validate canary
...
*1: This part ties into the ABI, and mostly exists so that each PE
image can get GBR reloaded back to its own ".data"/".bss" sections (with
Universal displacements make GBR unnecessary as a memory reference can
be accompanied with a 16-bit, 32-bit, or 64-bit displacement. Yes, you
can read GOT[#i] directly without a pointer to it.
multiple program instances in a single address space). But, does mean
that pretty much every non-leaf function ends up needing to go through
this ritual.
Universal constant solves the underlying issue.
*2: Pretty much any function that has local arrays or similar, serves
to protect register save area. If the magic number can't regenerate a
matching canary at the end of the function, then a fault is generated.
My 66000 can place the callee save registers in a place where user cannot access them with LDs or modify them with STs. So malicious code cannot
damage the contract between ABI and core.
The cost of some of this starts to add up.
In isolation, not much, but if all this happens, say, 500 or 1000
times or more in a program, this can add up.
Was thinking about that last night. H&P "book" statistics say that
call/ret represents 2% of instructions executed. But if you add up the
prologue and epilogue instructions you find 8% of instructions are
related to calling and returning--taking the problem from (at 2%)
ignorable to (at 8%) a big ticket item demanding something be done.
8% represents saving/restoring only 3 registers via stack and
associated SP arithmetic. So, it can easily go higher.
....
On 4/21/2024 1:57 PM, MitchAlsup1 wrote:
BGB wrote:
One of the things that I notice with My 66000 is when you get all the
constants you ever need at the calculation OpCodes, you end up with
FEWER instructions that "go random places" such as instructions that
<well> paste constants together. This leaves you with a data dependent
string of calculations with occasional memory references. That is::
universal constants get rid of the easy to pipeline extra instructions
leaving the meat of the algorithm exposed.
Possibly true.
RISC-V tends to have a lot of extra instructions due to lack of big constants and lack of indexed addressing.
And, BJX2 has a lot of frivolous register-register MOV instructions.
If you design around the notion of a 3R1W register file, FMAC and INSERT
fall out of the encoding easily. Done right, one can switch it into a 4R
or 4W register file for ENTER and EXIT--lessening the overhead of
call/ret.
Possibly.
It looks like some savings could be possible in terms of prologs and
epilogs.
As-is, these are generally like:
MOV LR, R18
MOV GBR, R19
ADD -192, SP
MOV.X R18, (SP, 176) //save GBR and LR
MOV.X ... //save registers
Why not an instruction that saves LR and GBR without wasting instructions
to place them side by side prior to saving them ??
I have an optional MOV.C instruction, but would need to restructure the
code for generating the prologs to make use of them in this case.
Say:
MOV.C GBR, (SP, 184)
MOV.C LR, (SP, 176)
Though, MOV.C is considered optional.
There is a "MOV.C Lite" option, which saves some cost by only allowing
it for certain CR's (mostly LR and GBR), which also sort of overlaps
with (and is needed) by RISC-V mode, because these registers are in GPR
land for RV.
But, in any case, current compiler output shuffles them to R18 and R19 before saving them.
WEXMD 2 //specify that we want 3-wide execution here
//Reload GBR, *1
MOV.Q (GBR, 0), R18
MOV 0, R0 //special reloc here
MOV.Q (GBR, R0), R18
MOV R18, GBR
Correction:
MOV.Q (R18, R0), R18
It is gorp like that that led me to do it in HW with ENTER and EXIT.
Save registers to the stack, setup FP if desired, allocate stack on SP,
and decide if EXIT also does RET or just reloads the file. This would
require 2 free registers if done in pure SW, along with several MOVs...
Possibly.
The partial reason it loads into R0 and uses R0 as an index, was that I defined this mechanism before jumbo prefixes existed, and hadn't updated
it to allow for jumbo prefixes.
Well, and if I used a direct displacement for GBR (which, along with PC,
is always BYTE Scale), this would have created a hard limit of 64 DLL's
per process-space (I defined it as Disp24, which allows a more
reasonable hard upper limit of 2M DLLs per process-space).
Granted, nowhere near even the limit of 64 as of yet. But, I had noted
that Windows programs would often easily exceed this limit, with even a fairly simple program pulling in a fairly large number of random DLLs,
so in any case, a larger limit was needed.
One potential optimization here is that the main EXE will always be 0 in
the process, so this sequence could be reduced to, potentially:
MOV.Q (GBR, 0), R18
MOV.C (R18, 0), GBR
Early on, I did not have the constraint that main EXE was always 0, and
had initially assumed it would be treated equivalently to a DLL.
//Generate Stack Canary, *2
MOV 0x5149, R18 //magic number (randomly generated)
VSKG R18, R18 //Magic (combines input with SP and magic numbers)
MOV.Q R18, (SP, 144)
...
function-specific stuff
...
MOV 0x5149, R18
MOV.Q (SP, 144), R19
VSKC R18, R19 //Validate canary
...
*1: This part ties into the ABI, and mostly exists so that each PE
image can get GBR reloaded back to its own ".data"/".bss" sections (with
Universal displacements make GBR unnecessary as a memory reference can
be accompanied with a 16-bit, 32-bit, or 64-bit displacement. Yes, you
can read GOT[#i] directly without a pointer to it.
If I were doing a more conventional ABI, I would likely use (PC,
Disp33s) for accessing global variables.
Problem is:
What if one wants multiple logical instances of a given PE image in a
single address space?
PC REL breaks in this case, unless you load N copies of each PE image,
which is a waste of memory (well, or use COW mappings, mandating the use
of an MMU).
ELF FDPIC had used a different strategy, but then effectively turned
each function call into something like (in SH):
MOV R14, R2 //R14=GOT
MOV disp, R0 //offset into GOT
ADD R0, R2 //adjust by offset
//R2=function pointer
MOV.L (R2, 0), R1 //function address
MOV.L (R2, 4), R3 //GOT
JSR R1
In the callee:
... save registers ...
MOV R3, R14 //put GOT into a callee-save register
...
In the BJX2 ABI, had rolled this part into the callee, reasoning that handling it in the callee (per-function) was less overhead than handling
it in the caller (per function call).
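The SH sequence above is a call through an FDPIC function descriptor: a (code address, GOT pointer) pair. In C terms, roughly (field and function names are invented for illustration; the real ABI fixes the layout and keeps the GOT in a register rather than a global):

```c
/* An FDPIC-style function descriptor: the code entry point paired with
   the GOT base the callee's module expects. */
typedef struct {
    int (*entry)(int);   /* loaded from (R2, 0) in the SH sequence */
    void *got;           /* loaded from (R2, 4), passed in R3      */
} funcdesc;

static void *current_got;    /* stands in for the GOT register (R14) */

/* Caller side: every indirect call switches to the callee module's
   GOT before entry and restores its own afterwards. */
static int call_via_desc(const funcdesc *d, int arg) {
    void *saved = current_got;
    current_got = d->got;        /* callee's module data pointer */
    int r = d->entry(arg);
    current_got = saved;         /* caller reloads its own GOT   */
    return r;
}

/* A stand-in for an exported function and its descriptor. */
static int demo_fn(int x) { return x * 2; }
static const funcdesc demo_desc = { demo_fn, 0 };
```

The BJX2 choice described above moves the GOT switch from this caller-side wrapper into each callee's prolog, which pays once per function body instead of once per call site.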
Though, on the RISC-V side, it has the relative advantage of compiling
for absolute addressing, albeit still loses in terms of performance.
I don't imagine an FDPIC version of RISC-V would win here, but this is
only assuming there exists some way to get GCC to output FDPIC binaries (most I could find, was people debating whether to add FDPIC support for RISC-V).
PIC or PIE would also sort of work, but these still don't really allow
for multiple program instances in a single address space.
multiple program instances in a single address space). But, does mean
that pretty much every non-leaf function ends up needing to go through
this ritual.
Universal constant solves the underlying issue.
I am not so sure that they could solve the "map multiple instances of
the same binary into a single address space" issue, which is sort of the whole thing for why GBR is being used.
Otherwise, I would have been using PC-REL...
*2: Pretty much any function that has local arrays or similar, serves
to protect register save area. If the magic number can't regenerate a
matching canary at the end of the function, then a fault is generated.
My 66000 can place the callee save registers in a place where user cannot
access them with LDs or modify them with STs. So malicious code cannot
damage the contract between ABI and core.
Possibly. I am using a conventional linear stack.
Downside: There is a need either for bounds checking or canaries.
Canaries are the cheaper option in this case.
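VSKG/VSKC are BJX2-specific and their mixing function isn't given here, but the shape of the scheme can be sketched in C (the mix below is an arbitrary stand-in, not the real instruction's behavior):

```c
#include <stdint.h>

/* Hypothetical stand-in for VSKG: combine the magic number with the
   frame's SP so the canary differs per frame.  Multiplying by an odd
   constant and xor-folding keeps distinct SPs producing distinct
   canaries; the real hardware mix is unspecified in the post. */
static uint64_t canary_gen(uint64_t magic, uint64_t sp) {
    uint64_t x = magic ^ (sp * 0x9E3779B97F4A7C15ull);
    x ^= x >> 29;            /* cheap avalanche step */
    return x;
}

/* VSKC analog: regenerate from the same inputs and compare with the
   value stored above the register save area; a mismatch means the
   locals overflowed into it (the hardware would raise a fault). */
static int canary_check(uint64_t magic, uint64_t sp, uint64_t stored) {
    return canary_gen(magic, sp) == stored;  /* 1 = intact, 0 = smashed */
}
```

The point is that the stored value alone is useless to an attacker: without SP and the per-build magic, a matching value can't be forged by a plain sequential overwrite.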
The cost of some of this starts to add up.
In isolation, not much, but if all this happens, say, 500 or 1000
times or more in a program, this can add up.
Was thinking about that last night. H&P "book" statistics say that
call/ret represents 2% of instructions executed. But if you add up the
prologue and epilogue instructions you find 8% of instructions are
related to calling and returning--taking the problem from (at 2%)
ignorable to (at 8%) a big ticket item demanding something be done.
8% represents saving/restoring only 3 registers via stack and
associated SP arithmetic. So, it can easily go higher.
I guess it could make sense to add a compiler stat for this...
The save/restore can get folded off, but generally only done for
functions with a larger number of registers being saved/restored (and
does not cover secondary things like GBR reload or stack canary stuff,
which appears to possibly be a significant chunk of space).
Goes and adds a stat for averages:
Prolog: 8% (avg= 24 bytes)
Epilog: 4% (avg= 12 bytes)
Body : 88% (avg=260 bytes)
With 959 functions counted (excluding empty functions/prototypes).
BGB wrote:
Like, in-order superscalar isn't going to do crap if nearly every
instruction depends on every preceding instruction. Even pipelining
can't help much with this.
Pipelining CREATED this (back to back dependencies). No amount of
pipelining can eradicate RAW data dependencies.
On Sun, 21 Apr 2024 18:57:27 +0000, mitchalsup@aol.com (MitchAlsup1)
wrote:
BGB wrote:
Like, in-order superscalar isn't going to do crap if nearly every
instruction depends on every preceding instruction. Even pipelining
can't help much with this.
Pipelining CREATED this (back to back dependencies). No amount of
pipelining can eradicate RAW data dependencies.
This is quite true. However, in case an unsophisticated individual
might read this thread, I think that I shall clarify.
Without pipelining, it is not a problem if each instruction depends on
the one immediately previous, and so people got used to writing
programs that way, as it was simple to write the code to do one thing
before starting to write the code to begin doing another thing.
This remained true when the simplest original form of pipelining was
brought in - where fetching one instruction from memory was overlapped
with decoding the previous instruction, and executing the instruction
before that.
It's only when what was originally called "superpipelining" came
along, where the execute stages of multiple successive instructions
could be overlapped, that it was necessary to do something about
dependencies in order to take advantage of the speedup that could
provide.
John Savard
BGB wrote:
On 4/20/2024 5:03 PM, MitchAlsup1 wrote:
Like, in-order superscalar isn't going to do crap if nearly every
instruction depends on every preceding instruction. Even pipelining
can't help much with this.
Pipelining CREATED this (back to back dependencies). No amount of
pipelining can eradicate RAW data dependencies.
The compiler can shuffle the instructions into an order to limit the
number of register dependencies and better fit the pipeline. But,
then, most of the "hard parts" are already done (so it doesn't take
much more for the compiler to flag which instructions can run in
parallel).
Compiler scheduling works for exactly 1 pipeline implementation and
is suboptimal for all others.
MitchAlsup1 wrote:
BGB wrote:
On 4/20/2024 5:03 PM, MitchAlsup1 wrote:
Like, in-order superscalar isn't going to do crap if nearly every
instruction depends on every preceding instruction. Even pipelining
can't help much with this.
Pipelining CREATED this (back to back dependencies). No amount of
pipelining can eradicate RAW data dependencies.
The compiler can shuffle the instructions into an order to limit the
number of register dependencies and better fit the pipeline. But,
then, most of the "hard parts" are already done (so it doesn't take
much more for the compiler to flag which instructions can run in
parallel).
Compiler scheduling works for exactly 1 pipeline implementation and
is suboptimal for all others.
Well, yeah.
OTOH, if your (definitely not my!) compiler can schedule a 4-wide static ordering of operations, then it will be very nearly optimal on 2-wide
and 3-wide as well. (The difference is typically in a bit more loop
setup and cleanup code than needed.)
Hand-optimizing Pentium asm code did teach me to "think like a cpu",
which is probably the only part of the experience which is still kind of relevant. :-)
Terje
BGB wrote:
On 4/21/2024 1:57 PM, MitchAlsup1 wrote:
BGB wrote:
One of the things that I notice with My 66000 is when you get all the
constants you ever need at the calculation OpCodes, you end up with
FEWER instructions that "go random places" such as instructions that
<well> paste constants together. This leaves you with a data dependent
string of calculations with occasional memory references. That is::
universal constants get rid of the easy to pipeline extra instructions
leaving the meat of the algorithm exposed.
Possibly true.
RISC-V tends to have a lot of extra instructions due to lack of big
constants and lack of indexed addressing.
You forgot the "everyone and his brother" design of the ISA.
And, BJX2 has a lot of frivolous register-register MOV instructions.
I empower you to get rid of them....
<snip>
If you design around the notion of a 3R1W register file, FMAC and INSERT
fall out of the encoding easily. Done right, one can switch it into a 4R
or 4W register file for ENTER and EXIT--lessening the overhead of
call/ret.
Possibly.
It looks like some savings could be possible in terms of prologs and
epilogs.
As-is, these are generally like:
MOV LR, R18
MOV GBR, R19
ADD -192, SP
MOV.X R18, (SP, 176) //save GBR and LR
MOV.X ... //save registers
Why not an instruction that saves LR and GBR without wasting
instructions
to place them side by side prior to saving them ??
I have an optional MOV.C instruction, but would need to restructure
the code for generating the prologs to make use of them in this case.
Say:
MOV.C GBR, (SP, 184)
MOV.C LR, (SP, 176)
Though, MOV.C is considered optional.
There is a "MOV.C Lite" option, which saves some cost by only allowing
it for certain CR's (mostly LR and GBR), which also sort of overlaps
with (and is needed) by RISC-V mode, because these registers are in
GPR land for RV.
But, in any case, current compiler output shuffles them to R18 and R19
before saving them.
WEXMD 2 //specify that we want 3-wide execution here
//Reload GBR, *1
MOV.Q (GBR, 0), R18
MOV 0, R0 //special reloc here
MOV.Q (GBR, R0), R18
MOV R18, GBR
Correction:
MOV.Q (R18, R0), R18
It is gorp like that that led me to do it in HW with ENTER and EXIT.
Save registers to the stack, setup FP if desired, allocate stack on
SP, and decide if EXIT also does RET or just reloads the file. This
would require 2 free registers if done in pure SW, along with several
MOVs...
Possibly.
The partial reason it loads into R0 and uses R0 as an index, was that
I defined this mechanism before jumbo prefixes existed, and hadn't
updated it to allow for jumbo prefixes.
No time like the present...
Well, and if I used a direct displacement for GBR (which, along with
PC, is always BYTE Scale), this would have created a hard limit of 64
DLL's per process-space (I defined it as Disp24, which allows a more
reasonable hard upper limit of 2M DLLs per process-space).
In my case, restricting myself to 32-bit IP relative addressing, GOT can
be anywhere within ±2GB of the accessing instruction and can be as big
as one desires.
Granted, nowhere near even the limit of 64 as of yet. But, I had noted
that Windows programs would often easily exceed this limit, with even
a fairly simple program pulling in a fairly large number of random
DLLs, so in any case, a larger limit was needed.
Due to the way linkages work in My 66000, each DLL gets its own GOT.
So there is essentially no bounds on how many can be present/in-use.
A LD of a GOT[entry] gets a pointer to the external variable.
A CALX of GOT[entry] is a call through the GOT table using std ABI.
{{There is no PLT}}
One potential optimization here is that the main EXE will always be 0
in the process, so this sequence could be reduced to, potentially:
MOV.Q (GBR, 0), R18
MOV.C (R18, 0), GBR
Early on, I did not have the constraint that main EXE was always 0,
and had initially assumed it would be treated equivalently to a DLL.
//Generate Stack Canary, *2
MOV 0x5149, R18 //magic number (randomly generated)
VSKG R18, R18 //Magic (combines input with SP and magic numbers)
MOV.Q R18, (SP, 144)
...
function-specific stuff
...
MOV 0x5149, R18
MOV.Q (SP, 144), R19
VSKC R18, R19 //Validate canary
...
*1: This part ties into the ABI, and mostly exists so that each PE
image can get GBR reloaded back to its own ".data"/".bss" sections
(with
Universal displacements make GBR unnecessary as a memory reference can
be accompanied with a 16-bit, 32-bit, or 64-bit displacement. Yes,
you can read GOT[#i] directly without a pointer to it.
If I were doing a more conventional ABI, I would likely use (PC,
Disp33s) for accessing global variables.
Even those 128GB away ??
Problem is:
What if one wants multiple logical instances of a given PE image in a
single address space?
Not a problem when each PE has a different set of mapping tables (at
least the entries pointing at GOTs).
PC REL breaks in this case, unless you load N copies of each PE image,
which is a waste of memory (well, or use COW mappings, mandating the
use of an MMU).
ELF FDPIC had used a different strategy, but then effectively turned
each function call into something like (in SH):
MOV R14, R2 //R14=GOT
MOV disp, R0 //offset into GOT
ADD R0, R2 //adjust by offset
//R2=function pointer
MOV.L (R2, 0), R1 //function address
MOV.L (R2, 4), R3 //GOT
JSR R1
Which I do with::
CALX [IP,R0,#GOT+index<<3-.]
In the callee:
... save registers ...
MOV R3, R14 //put GOT into a callee-save register
...
In the BJX2 ABI, had rolled this part into the callee, reasoning that
handling it in the callee (per-function) was less overhead than
handling it in the caller (per function call).
Though, on the RISC-V side, it has the relative advantage of compiling
for absolute addressing, albeit still loses in terms of performance.
Compiling and linking to absolute addresses works "really well" when one
needs to place different sections in different memory every time the
application/kernel runs due to malicious code trying to steal
everything. ASLR.....
I don't imagine an FDPIC version of RISC-V would win here, but this is
only assuming there exists some way to get GCC to output FDPIC
binaries (most I could find, was people debating whether to add FDPIC
support for RISC-V).
PIC or PIE would also sort of work, but these still don't really allow
for multiple program instances in a single address space.
Once you share the code and some of the data, the overhead of using different
mappings for special stuff {GOT, local thread data,...} is
multiple program instances in a single address space). But, does
mean that pretty much every non-leaf function ends up needing to go
through this ritual.
Universal constant solves the underlying issue.
I am not so sure that they could solve the "map multiple instances of
the same binary into a single address space" issue, which is sort of
the whole thing for why GBR is being used.
Otherwise, I would have been using PC-REL...
*2: Pretty much any function that has local arrays or similar,
serves to protect register save area. If the magic number can't
regenerate a matching canary at the end of the function, then a
fault is generated.
My 66000 can place the callee save registers in a place where user
cannot
access them with LDs or modify them with STs. So malicious code cannot
damage the contract between ABI and core.
Possibly. I am using a conventional linear stack.
Downside: There is a need either for bounds checking or canaries.
Canaries are the cheaper option in this case.
The cost of some of this starts to add up.
In isolation, not much, but if all this happens, say, 500 or 1000
times or more in a program, this can add up.
Was thinking about that last night. H&P "book" statistics say that
call/ret
represents 2% of instructions executed. But if you add up the
prologue and
epilogue instructions you find 8% of instructions are related to
calling and returning--taking the problem from (at 2%) ignorable to
(at 8%) a big
ticket item demanding something be done.
8% represents saving/restoring only 3 registers via stack and
associated SP
arithmetic. So, it can easily go higher.
I guess it could make sense to add a compiler stat for this...
The save/restore can get folded off, but generally only done for
functions with a larger number of registers being saved/restored (and
does not cover secondary things like GBR reload or stack canary stuff,
which appears to possibly be a significant chunk of space).
Goes and adds a stat for averages:
Prolog: 8% (avg= 24 bytes)
Epilog: 4% (avg= 12 bytes)
Body : 88% (avg=260 bytes)
With 959 functions counted (excluding empty functions/prototypes).
....
John Savard wrote:
And, hey, I'm not the first guy to get sunk because of forgetting what
lies under the tip of the iceberg that's above the water.
That also happened to the captain of the _Titanic_.
Concer-tina-tanic !?!
On Sat, 20 Apr 2024 17:07:11 +0000, mitchalsup@aol.com (MitchAlsup1)
wrote:
John Savard wrote:
And, hey, I'm not the first guy to get sunk because of forgetting what
lies under the tip of the iceberg that's above the water.
That also happened to the captain of the _Titanic_.
Concer-tina-tanic !?!
Oh, dear. This discussion has inspired me to rework the basic design
of Concertina II _yet again_!
The new design, not yet online, will have the following features:
The code stream will continue to be divided into 256-bit blocks.
However, block headers will be eliminated. Instead, this functionality
will be subsumed into the instruction set.
Case I:
Indicating that from 1 to 7 32-bit instruction slots in a block are
not used for instructions, but instead may contain pseudo-immediates
will be achieved by:
Placing a two-address register-to-register operate instruction in the
first instruction slot in a block. These instructions will have a
three-bit field which, if nonzero, indicates the amount of space
reserved.
To avoid waste, when such an instruction is present in any slot other
than the first, that field will have the following function:
If nonzero, it points to an instruction slot (slots 1 through 7, in
the second through eighth positions) and a duplicate copy of the
instruction in that slot will be placed in the instruction stream
immediately following the instruction with that field.
The following special conditions apply:
If the instruction slot contains a pair of 16-bit instructions, only
the first of those instructions is so inserted for execution.
The instruction slot may not be one that is reserved for
pseudo-immediates, except that it may be the _first_ such slot, in
which case, the first 16 bits of that slot are taken as a 16-bit
instruction, with the format indicated by the first bit (as opposed to
the usual 17th bit) of that instruction slot's contents.
So it's possible to reserve an odd multiple of 16 bits for
pseudo-immediates, so as to avoid waste.
Case II:
Instructions longer than 32 bits are specified by being of the form:
The first instruction slot:
11111
00
(3 bits) length in instruction slots, from 2 to 7
(22 bits) rest of the first part of the instruction
All remaining instruction slots:
11111
(3 bits) position within instruction, from 2 to 7
(24 bits) rest of this part of the instruction
This mechanism, however, will _also_ be used for VLIW functionality or
prefix functionality which was formerly in block headers.
In that case, the first instruction slot, and the remaining
instruction slots, no longer need to be contiguous; instead, ordinary
32-bit instructions or pairs of 16-bit instlructions can occur between
the portions of the ensemble of prefixed instructions formed by this
means.
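Reading Case II literally, the slot fields can be picked apart with a few shifts and masks. A sketch in C, with the caveat that the exact bit placement (fields packed from the top of each 32-bit slot, first slot distinguished by the `00` after the prefix) is an assumption; the post only gives field widths:

```c
#include <stdint.h>

/* Assumed Case II first-slot layout, packed from the top:
   [31:27]=11111, [26:25]=00, [24:22]=length in slots (2..7),
   [21:0]=first part of the instruction. */
static int is_long_first_slot(uint32_t w) {
    return (w >> 27) == 0x1F && ((w >> 25) & 0x3) == 0;
}
static unsigned long_insn_length(uint32_t w) {
    return (w >> 22) & 0x7;          /* 2..7 slots */
}

/* Assumed continuation-slot layout:
   [31:27]=11111, [26:24]=position within instruction (2..7),
   [23:0]=this part of the instruction. */
static unsigned cont_position(uint32_t w) {
    return (w >> 24) & 0x7;
}
```

The position field is what lets continuation slots be non-contiguous with the first slot, as the VLIW/prefix reuse described next requires.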
And there is a third improvement.
When Case I above is in effect, the block in which space for
pseudo-immediates is reserved will be stored in an internal register
in the processor.
Subsequent blocks can contain operate instructions with
pseudo-immediate operands even if no space for pseudo-immediates is
reserved in those blocks. In that case, the retained copy of the last
block encountered in which pseudo-immediates were reserved shall be
referenced instead.
I think these changes will improve code density... or, at least, they
will make it appear that no space is obviously forced to be wasted,
even if no real improvement in code density results.
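The Case I reservation field can be sketched as below. That the reserved slots sit at the tail of the block is my assumption (the post only says 1 to 7 slots are reserved, not where); the helper name is hypothetical.

```python
# Sketch of Case I: a three-bit field in the first slot's operate
# instruction reserves 0..7 slots of the 8-slot block for
# pseudo-immediates. Assumption (mine): reserved slots are the last n.

def reserved_slots(field: int) -> list[int]:
    """Return slot indices (1..7) held for pseudo-immediates."""
    n = field & 0x7
    return list(range(8 - n, 8)) if n else []
```

For example, a field value of 3 would mark slots 5, 6, and 7 as pseudo-immediate storage rather than instructions.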
On Mon, 22 Apr 2024 14:13:41 -0600, John Savard <quadibloc@servername.invalid> wrote:
The first instruction slot:
11111
00
(3 bits) length in instruction slots, from 2 to 7
(22 bits) rest of the first part of the instruction
All remaining instruction slots:
11111
(3 bits) position within instruction, from 2 to 7
(24 bits) rest of this part of the instruction
The page has now been updated to reflect this modified design.
On Sat, 20 Apr 2024 17:07:11 +0000, mitchalsup@aol.com (MitchAlsup1)
wrote:
John Savard wrote:
And, hey, I'm not the first guy to get sunk because of forgetting what
lies under the tip of the iceberg that's above the water.
That also happened to the captain of the _Titanic_.
Concer-tina-tanic !?!
Oh, dear. This discussion has inspired me to rework the basic design
of Concertina II _yet again_!
The new design, not yet online, will have the following features:
The code stream will continue to be divided into 256-bit blocks.
However, block headers will be eliminated. Instead, this functionality
will be subsumed into the instruction set.
Case I:
Indicating that from 1 to 7 32-bit instruction slots in a block are
not used for instructions, but instead may contain pseudo-immediates
will be achieved by:
Placing a two-address register-to-register operate instruction in the
first instruction slot in a block. These instructions will have a
three-bit field which, if nonzero, indicates the amount of space
reserved.
To avoid waste, when such an instruction is present in any slot other
than the first, that field will have the following function:
If nonzero, it points to an instruction slot (slots 1 through 7, in
the second through eighth positions) and a duplicate copy of the
instruction in that slot will be placed in the instruction stream
immediately following the instruction with that field.
The following special conditions apply:
If the instruction slot contains a pair of 16-bit instructions, only
the first of those instructions is so inserted for execution.
The instruction slot may not be one that is reserved for
pseudo-immediates, except that it may be the _first_ such slot, in
which case, the first 16 bits of that slot are taken as a 16-bit
instruction, with the format indicated by the first bit (as opposed to
the usual 17th bit) of that instruction slot's contents.
So it's possible to reserve an odd multiple of 16 bits for
pseudo-immediates, so as to avoid waste.
Case II:
Instructions longer than 32 bits are specified by being of the form:
The first instruction slot:
11111
00
(3 bits) length in instruction slots, from 2 to 7
(22 bits) rest of the first part of the instruction
All remaining instruction slots:
11111
(3 bits) position within instruction, from 2 to 7
(24 bits) rest of this part of the instruction
This mechanism, however, will _also_ be used for VLIW functionality or
prefix functionality which was formerly in block headers.
In that case, the first instruction slot, and the remaining
instruction slots, no longer need to be contiguous; instead, ordinary
32-bit instructions or pairs of 16-bit instructions can occur between
the portions of the ensemble of prefixed instructions formed by this
means.
And there is a third improvement.
When Case I above is in effect, the block in which space for pseudo-immediates is reserved will be stored in an internal register
in the processor.
Subsequent blocks can contain operate instructions with
pseudo-immediate operands even if no space for pseudo-immediates is
reserved in those blocks. In that case, the retained copy of the last
block encountered in which pseudo-immediates were reserved shall be referenced instead.
I think these changes will improve code density... or, at least, they
will make it appear that no space is obviously forced to be wasted,
even if no real improvement in code density results.
John Savard
I suggest it is time for Concertina III.......
Why not a whole cache line ??
I can tease out a couple of extra bits, so that I have a 22-bit
starting word, but 26 bits in each following one, by replacing the
three-bit "position" field with a field that just contains 0 in every instruction slot but the last one, indicated with a 1.
With 26 bits, to get 33 bits - all I need for a nice expansion of the instruction set to its "full" form - I need to add seven bits to each
one, so that now does allow one starting word to prefix three
instructions.
Still not great, but adequate. And the first word doesn't really need
a length field either, it just needs to indicate it's the first one.
Which is how I had worked something like this before.
Address arithmetic is ADD only and does not care about signs or
overflow. There is no concept of a negative base register or a
negative index register (or, for that matter, a negative displace-
ment), overflow, underflow, carry, ...
But fully half the opcode space is allocated to 16-bit instructions.
Even though that half doesn't really play nice with other things, it's
too tempting a target to ignore. But the price would be losing the
fully parallel nature of decoding.
On Sun, 21 Apr 2024 00:43:21 +0000, mitchalsup@aol.com (MitchAlsup1)
wrote:
Address arithmetic is ADD only and does not care about signs or
overflow. There is no concept of a negative base register or a
negative index register (or, for that matter, a negative displace-
ment), overflow, underflow, carry, ...
Stack frame pointers often point to the middle of the frame and need
to access data using both positive and negative displacements.
Some GC schemes use negative displacements to access object headers.
On Mon, 22 Apr 2024 20:22:12 -0600, John Savard <quadibloc@servername.invalid> wrote:
But fully half the opcode space is allocated to 16-bit instructions.
Even though that half doesn't really play nice with other things, it's
too tempting a target to ignore. But the price would be losing the
fully parallel nature of decoding.
After heading out to buy groceries, my head cleared enough to discard
the various complicated and bizarre schemes I was considering to deal
with the issue, and instead to drastically reduce the overhead for the instructions longer than 32 bits, now that this had become a major
concern due to also using this format for prefixed instructions as
well, in a simple and straightforward manner.
John Savard
On 4/23/2024 1:54 AM, John Savard wrote:
On Mon, 22 Apr 2024 20:22:12 -0600, John Savard
<quadibloc@servername.invalid> wrote:
But fully half the opcode space is allocated to 16-bit instructions.
Even though that half doesn't really play nice with other things, it's
too tempting a target to ignore. But the price would be losing the
fully parallel nature of decoding.
After heading out to buy groceries, my head cleared enough to discard
the various complicated and bizarre schemes I was considering to deal
with the issue, and instead to drastically reduce the overhead for the
instructions longer than 32 bits, now that this had become a major
concern due to also using this format for prefixed instructions as
well, in a simple and straightforward manner.
You know, one could just be like, say:
xxxx-xxxx-xxxx-xxx0 //16-bit op
xxxx-xxxx-xxxx-xxxx xxxx-xxxx-xxxx-xx01 //32-bit op
xxxx-xxxx-xxxx-xxxx xxxx-xxxx-xxxx-x011 //32-bit op
xxxx-xxxx-xxxx-xxxx xxxx-xxxx-xxxx-0111 //32-bit op
xxxx-xxxx-xxxx-xxxx xxxx-xxxx-xxxx-1111 //jumbo prefix (64+)
And call it "good enough"...
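The quoted trailing-bit rule reduces to a three-way test on the low bits of the first 16-bit parcel; a minimal sketch (the function name is mine):

```python
# Length decoding for the scheme above: bit 0 clear means a 16-bit op,
# a low nibble of 1111 means a jumbo prefix (64 bits or more), and any
# other pattern ending in 1 (...01, ...011, ...0111) is a 32-bit op.

def op_length(parcel16: int) -> str:
    if (parcel16 & 0x1) == 0:
        return "16-bit"
    if (parcel16 & 0xF) == 0xF:
        return "jumbo-prefix"
    return "32-bit"
```

The appeal of this shape is that the decoder looks at only four bits of the first parcel to find instruction boundaries.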
Then, say (6b registers):
zzzz-mmmm-nnnn-zzz0 //16-bit op (2R)
zzzz-tttt-ttss-ssss nnnn-nnpp-zzzz-xxx1 //32-bit op (3R)
iiii-iiii-iiss-ssss nnnn-nnpp-zzzz-xxx1 //32-bit op (3RI, Imm10)
iiii-iiii-iiii-iiii nnnn-nnpp-zzzz-xxx1 //32-bit op (2RI, Imm16)
iiii-iiii-iiii-iiii iiii-iipp-zzzz-xxx1 //32-bit op (Branch)
Or (5b registers):
zzzz-mmmm-nnnn-zzz0 //16-bit op (2R)
zzzz-zttt-ttzs-ssss nnnn-nzpp-zzzz-xxx1 //32-bit op (3R)
iiii-iiii-iiis-ssss nnnn-nzpp-zzzz-xxx1 //32-bit op (3RI, Imm11)
iiii-iiii-iiii-iiii nnnn-nzpp-zzzz-xxx1 //32-bit op (2RI, Imm16)
iiii-iiii-iiii-iiii iiii-iipp-zzzz-xxx1 //32-bit op (Branch)
....
John Savard
Since there was only one set of arithmetic instructions, that meant that
when you wrote code to operate on unsigned values, you had to remember
that the normal names of the condition code values were oriented around signed arithmetic.
On Sat, 20 Apr 2024 18:06:22 -0600, John Savard wrote:
Since there was only one set of arithmetic instructions, that meant that
when you wrote code to operate on unsigned values, you had to remember
that the normal names of the condition code values were oriented around
signed arithmetic.
I thought architectures typically had separate condition codes for “carry”
versus “overflow”. That way, you didn’t need signed versus unsigned versions of add, subtract and compare; it was just a matter of looking at the right condition codes on the result.
Lawrence D'Oliveiro wrote:
On Sat, 20 Apr 2024 18:06:22 -0600, John Savard wrote:
Since there was only one set of arithmetic instructions, that meant
that when you wrote code to operate on unsigned values, you had to
remember that the normal names of the condition code values were
oriented around signed arithmetic.
I thought architectures typically had separate condition codes for
“carry” versus “overflow”. That way, you didn’t need signed versus unsigned versions of add, subtract and compare; it was just a matter of
looking at the right condition codes on the result.
Maybe now with 4-or-5-bit condition codes yes,
But the early machines (360) with 2-bit codes were already constricted.
On Sat, 20 Apr 2024 18:06:22 -0600, John Savard wrote:
Since there was only one set of arithmetic instructions, that meant that
when you wrote code to operate on unsigned values, you had to remember
that the normal names of the condition code values were oriented around
signed arithmetic.
I thought architectures typically had separate condition codes for “carry” versus “overflow”. That way, you didn’t need signed versus unsigned
versions of add, subtract and compare; it was just a matter of looking at the right condition codes on the result.
George Neuner wrote:
On Sun, 21 Apr 2024 00:43:21 +0000, mitchalsup@aol.com (MitchAlsup1)
wrote:
Address arithmetic is ADD only and does not care about signs or
overflow. There is no concept of a negative base register or a
negative index register (or, for that matter, a negative displace-
ment), overflow, underflow, carry, ...
Stack frame pointers often point to the middle of the frame and need
to access data using both positive and negative displacements.
Yes, one accesses callee saved registers with positive displacements
and local variables with negative accesses. One simply needs to know
where the former stops and the latter begins. ENTER and EXIT know this
by the register count and by the stack allocation size.
Some GC schemes use negative displacements to access object headers.
Those are negative displacements not negative bases or indexes.
On Tue, 23 Apr 2024 17:58:41 +0000, mitchalsup@aol.com (MitchAlsup1)
wrote:
George Neuner wrote:
On Sun, 21 Apr 2024 00:43:21 +0000, mitchalsup@aol.com (MitchAlsup1)
wrote:
Address arithmetic is ADD only and does not care about signs or
overflow. There is no concept of a negative base register or a
negative index register (or, for that matter, a negative displace-
ment), overflow, underflow, carry, ...
Stack frame pointers often point to the middle of the frame and need
to access data using both positive and negative displacements.
Yes, one accesses callee saved registers with positive displacements
and local variables with negative accesses. One simply needs to know
where the former stops and the latter begins. ENTER and EXIT know this
by the register count and by the stack allocation size.
Some GC schemes use negative displacements to access object headers.
Those are negative displacements not negative bases or indexes.
I was reacting to your message (quoted fully above) which,
paraphrased, says "address arithmetic is add only and there is no
concept of a negative displacement".
In one sense you are correct: the result of the calculation has to be considered as unsigned in the range 0..max_memory ... ie. there is no
concept of negative *address*.
However, the components being added to form the address, I believe are
a different matter.
I agree that negative base is meaningless.
However, negative index and negative displacement both do have
meaning. The inclusion of specialized index registers is debatable
[I'm in the GPR camp], but I do believe that index and displacement
*values* both always should be considered as signed.
YMMV.
On 4/25/2024 4:01 PM, George Neuner wrote:
On Tue, 23 Apr 2024 17:58:41 +0000, mitchalsup@aol.com (MitchAlsup1)
wrote:
Agreed in the sense that negative displacements exist.
However, can note that positive displacements tend to be significantly
more common than negative ones. Whether or not it makes sense to have a negative displacement depends mostly on the probability of greater
than half of the missed displacements being negative.
From what I can tell, this seems to be:
~ 10 bits, scaled.
~ 13 bits, unscaled.
So, say, an ISA like RISC-V might have had a slightly higher hit rate with
unsigned displacements than with signed displacements, but if one added
1 or 2 bits, signed would have still been a clear winner (or, with 1 or
2 fewer bits, unsigned a clear winner).
I ended up going with signed displacements for XG2, but it was pretty
close to break-even in this case (when expanding from the 9-bit unsigned displacements in Baseline).
Granted, all signed or all-unsigned might be better from an ISA design consistency POV.
If one had 16-bit displacements, then unscaled displacements would make sense; otherwise scaled displacements seem like a win (misaligned displacements being much less common than aligned displacements).
But, admittedly, main reason I went with unscaled for GBR-rel and PC-rel Load/Store, was because using scaled displacements here would have
required more relocation types (nevermind if the hit rate for unscaled
9-bit displacements is "pretty weak").
Though, did end up later adding specialized Scaled GBR-Rel Load/Store
ops (to improve code density), so it might have been better in
retrospect had I instead just went the "keep it scaled and add more
reloc types to compensate" option.
....
YMMV.
What we need is ~16-bit displacements where 82½%-91¼% are positive.
How does one use a frame pointer without negative displacements ??
BGB wrote:
On 4/25/2024 4:01 PM, George Neuner wrote:
On Tue, 23 Apr 2024 17:58:41 +0000, mitchalsup@aol.com (MitchAlsup1)
wrote:
Agreed in the sense that negative displacements exist.
However, can note that positive displacements tend to be significantly
more common than negative ones. Whether or not it makes sense to have
a negative displacement depends mostly on the probability of
greater than half of the missed displacements being negative.
From what I can tell, this seems to be:
~ 10 bits, scaled.
~ 13 bits, unscaled.
So, say, an ISA like RISC-V might have had a slightly higher hit rate with
unsigned displacements than with signed displacements, but if one
added 1 or 2 bits, signed would have still been a clear winner (or,
with 1 or 2 fewer bits, unsigned a clear winner).
I ended up going with signed displacements for XG2, but it was pretty
close to break-even in this case (when expanding from the 9-bit
unsigned displacements in Baseline).
Granted, all signed or all-unsigned might be better from an ISA design
consistency POV.
If one had 16-bit displacements, then unscaled displacements would
make sense; otherwise scaled displacements seem like a win (misaligned
displacements being much less common than aligned displacements).
What we need is ~16-bit displacements where 82½%-91¼% are positive.
How does one use a frame pointer without negative displacements ??
[FP+disp] accesses callee save registers
[FP-disp] accesses local stack variables and descriptors
[SP+disp] accesses argument and result values
But, admittedly, main reason I went with unscaled for GBR-rel and
PC-rel Load/Store, was because using scaled displacements here would
have required more relocation types (nevermind if the hit rate for
unscaled 9-bit displacements is "pretty weak").
Though, did end up later adding specialized Scaled GBR-Rel Load/Store
ops (to improve code density), so it might have been better in
retrospect had I instead just went the "keep it scaled and add more
reloc types to compensate" option.
....
YMMV.
BGB wrote:
If one had 16-bit displacements, then unscaled displacements would
make sense; otherwise scaled displacements seem like a win (misaligned
displacements being much less common than aligned displacements).
What we need is ~16-bit displacements where 82½%-91¼% are positive.
How does one use a frame pointer without negative displacements ??
[FP+disp] accesses callee save registers
[FP-disp] accesses local stack variables and descriptors
[SP+disp] accesses argument and result values
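The three-way split above can be modelled concretely. The register names and offsets below are invented for illustration; they are not taken from My 66000 or any shipping ABI.

```python
# Toy model of the frame discipline above: FP sits between the
# callee-save area (positive offsets) and the locals (negative offsets),
# while SP addresses the outgoing argument/result space.

CALLEE_SAVE = {"r14": +8, "r13": +0}       # [FP+disp]
LOCALS      = {"buf": -16, "tmp": -24}     # [FP-disp]
ARGS        = {"arg0": +0, "arg1": +8}     # [SP+disp]

def ea(base: int, disp: int) -> int:
    """Effective address: a plain add of a signed displacement."""
    return base + disp

fp, sp = 0x7FFF0000, 0x7FFEFF80
# Locals lie below FP, saved registers at or above it.
assert ea(fp, LOCALS["buf"]) < fp <= ea(fp, CALLEE_SAVE["r13"])
```

The point of the model is simply that with FP in the middle of the frame, signed displacements are unavoidable.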
mitchalsup@aol.com (MitchAlsup1) writes:
What we need is ~16-bit displacements where 82½%-91¼% are positive.
What are these funny numbers about?
Do you mean that you want number ranges like -11468..54067 (82.5%
positive) or -5734..59801 (91.25% positive)? Which one of those? And
why not, say -8192..57343 (87.5% positive)?
How does one use a frame pointer without negative displacements ??
You let it point to the lowest address you want to access. That moves
the problem to unwinding frame pointer chains where the unwinder does
not know the frame-specific difference between the frame pointer and
the pointer of the next frame.
An alternative is to have a frame-independent difference that leaves
enough room that, say 90% (or 99%, or whatever) of the frames don't
need negative offsets from that frame.
Likewise, if you have signed displacements, and are unhappy about the
skewed usage, you can let the frame pointer point at an offset from
the pointer to the next frame such that the usage is less skewed.
- anton
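Anton's question can be made concrete with a little arithmetic: choosing what fraction of a 16-bit field's 65536 encodings goes to non-negative offsets fixes the range. A sketch (the truncation choice and the function name are mine):

```python
# For a displacement field of a given width, hand a chosen fraction of
# the encodings to non-negative offsets; the remainder encode negative
# offsets, excess-K style.

def skewed_range(bits: int, positive_fraction: float):
    total = 1 << bits
    neg = int(total * (1.0 - positive_fraction))   # count of negative values
    return (-neg, total - neg - 1)                 # inclusive offset range

assert skewed_range(16, 0.825)  == (-11468, 54067)   # 82.5% positive
assert skewed_range(16, 0.9125) == (-5734, 59801)    # 91.25% positive
```

These are exactly the ranges Anton names, which suggests the percentages correspond to specific bias values rather than anything fundamental.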
On 4/26/2024 8:25 AM, MitchAlsup1 wrote:
BGB wrote:
On 4/25/2024 4:01 PM, George Neuner wrote:
On Tue, 23 Apr 2024 17:58:41 +0000, mitchalsup@aol.com (MitchAlsup1)
wrote:
Agreed in the sense that negative displacements exist.
However, can note that positive displacements tend to be significantly
more common than negative ones. Whether or not it makes sense to have
a negative displacement, depending mostly on the probability of
greater than half of the missed displacements being negative.
From what I can tell, this seems to be:
~ 10 bits, scaled.
~ 13 bits, unscaled.
So, say, an ISA like RISC-V might have had a slightly higher hit rate with
unsigned displacements than with signed displacements, but if one
added 1 or 2 bits, signed would have still been a clear winner (or,
with 1 or 2 fewer bits, unsigned a clear winner).
I ended up going with signed displacements for XG2, but it was pretty
close to break-even in this case (when expanding from the 9-bit
unsigned displacements in Baseline).
Granted, all signed or all-unsigned might be better from an ISA design
consistency POV.
If one had 16-bit displacements, then unscaled displacements would
make sense; otherwise scaled displacements seem like a win (misaligned
displacements being much less common than aligned displacements).
What we need is ~16-bit displacements where 82½%-91¼% are positive.
I was seeing stats more like 99.8% positive, 0.2% negative.
There was enough of a bias that, below 10 bits, if one takes all the remaining cases, zero extending would always win, until reaching 10
bits, when the number of misses reaches 50% negative (along with
positive displacements larger than 512).
So, one can make a choice: -512..511, or 0..1023, ...
In XG2, I ended up with -512..511, for pros or cons (for some programs,
this choice is optimal, for others it is not).
Where, when scaled for QWORD, this is +/- 4K.
If one had a 16-bit displacement, it would be a choice between +/- 32K,
or (scaled) +/- 256K, or 0..512K, ...
For the special purpose "LEA.Q (GBR, Disp16), Rn" instruction, I ended
up going unsigned, where for a lot of the programs I am dealing with,
this is big enough to cover ".data" and part of ".bss", generally used
for arrays which need the larger displacements (the compiler lays things
out so that most of the commonly used variables are closer to the start
of ".data", so can use smaller displacements).
How does one use a frame pointer without negative displacements ??
[FP+disp] accesses callee save registers
[FP-disp] accesses local stack variables and descriptors
[SP+disp] accesses argument and result values
In my case, all of these are [SP+Disp], granted, there is no frame
pointer and stack frames are fixed-size in BGBCC.
This is typically with a frame layout like:
Argument/Spill space
-- Frame Top
Register Save
(Stack Canary)
Local arrays/structs
Local variables
Argument/Spill Space
-- Frame Bottom
Contrast with traditional x86 layout, which puts saved registers and
local variables near the frame-pointer, which points near the top of the stack frame.
Though, in a majority of functions, the MOV.L and MOV.Q functions have a
big enough displacement to cover the whole frame (excludes functions
which have a lot of local arrays or similar, though overly large local arrays are auto-folded to using heap allocation, but at present this
logic is based on the size of individual arrays rather than on the total combined size of the stack frame).
Adding a frame pointer (with negative displacements) wouldn't make a big difference in XG2 Mode, but would be more of an issue for (pure)
Baseline, where options are either to load the displacement into a
register, or use a jumbo prefix.
But, admittedly, main reason I went with unscaled for GBR-rel and
PC-rel Load/Store, was because using scaled displacements here would
have required more relocation types (nevermind if the hit rate for
unscaled 9-bit displacements is "pretty weak").
Though, did end up later adding specialized Scaled GBR-Rel Load/Store
ops (to improve code density), so it might have been better in
retrospect had I instead just went the "keep it scaled and add more
reloc types to compensate" option.
....
YMMV.
On 4/26/2024 8:25 AM, MitchAlsup1 wrote:
How does one use a frame pointer without negative displacements ??
[FP+disp] accesses callee save registers
[FP-disp] accesses local stack variables and descriptors
[SP+disp] accesses argument and result values
In my case, all of these are [SP+Disp], granted, there is no frame
pointer and stack frames are fixed-size in BGBCC.
This is typically with a frame layout like:
Argument/Spill space
-- Frame Top
Register Save
(Stack Canary)
Local arrays/structs
Local variables
Argument/Spill Space
-- Frame Bottom
Local Descriptors -------------------\
Local Variables                      |
My Argument/Result space
Contrast with traditional x86 layout, which puts saved registers and
local variables near the frame-pointer, which points near the top of the stack frame.
Though, in a majority of functions, the MOV.L and MOV.Q functions have a
big enough displacement to cover the whole frame (excludes functions
which have a lot of local arrays or similar, though overly large local arrays are auto-folded to using heap allocation, but at present this
logic is based on the size of individual arrays rather than on the total combined size of the stack frame).
BGB wrote:
On 4/26/2024 8:25 AM, MitchAlsup1 wrote:
BGB wrote:
On 4/25/2024 4:01 PM, George Neuner wrote:
On Tue, 23 Apr 2024 17:58:41 +0000, mitchalsup@aol.com (MitchAlsup1) wrote:
Agreed in the sense that negative displacements exist.
However, can note that positive displacements tend to be
significantly more common than negative ones. Whether or not it
makes sense to have a negative displacement, depending mostly on the
probability of greater than half of the missed displacements being
negative.
From what I can tell, this seems to be:
~ 10 bits, scaled.
~ 13 bits, unscaled.
So, say, an ISA like RISC-V might have had a slightly higher hit rate with
unsigned displacements than with signed displacements, but if one
added 1 or 2 bits, signed would have still been a clear winner (or,
with 1 or 2 fewer bits, unsigned a clear winner).
I ended up going with signed displacements for XG2, but it was
pretty close to break-even in this case (when expanding from the
9-bit unsigned displacements in Baseline).
Granted, all signed or all-unsigned might be better from an ISA
design consistency POV.
If one had 16-bit displacements, then unscaled displacements would
make sense; otherwise scaled displacements seem like a win
(misaligned displacements being much less common than aligned
displacements).
What we need is ~16-bit displacements where 82½%-91¼% are positive.
I was seeing stats more like 99.8% positive, 0.2% negative.
After pulling out the calculator and thinking about the frames, My 66000 needs no more than 18 DW of negative addressing. This is just
over 0.2% as you indicate.
There was enough of a bias that, below 10 bits, if one takes all the
remaining cases, zero extending would always win, until reaching 10
bits, when the number of misses reaches 50% negative (along with
positive displacements larger than 512).
So, one can make a choice: -512..511, or 0..1023, ...
In XG2, I ended up with -512..511, for pros or cons (for some
programs, this choice is optimal, for others it is not).
Where, when scaled for QWORD, this is +/- 4K.
If one had a 16-bit displacement, it would be a choice between +/-
32K, or (scaled) +/- 256K, or 0..512K, ...
We looked at this in Mc88100 (scaling of the displacement). The drawback
was that the ISA and linker were slightly mismatched: The linker wanted
to use a single upper 16-bit LUI <if it were> over several LD/STs of potentially different sizes, and scaling of the displacement failed in
those regards; so we dropped scaled displacements.
For the special purpose "LEA.Q (GBR, Disp16), Rn" instruction, I ended
up going unsigned, where for a lot of the programs I am dealing with,
this is big enough to cover ".data" and part of ".bss", generally used
for arrays which need the larger displacements (the compiler lays
things out so that most of the commonly used variables are closer to
the start of ".data", so can use smaller displacements).
Not even an issue when one has universal constants.
How does one use a frame pointer without negative displacements ??
[FP+disp] accesses callee save registers
[FP-disp] accesses local stack variables and descriptors
[SP+disp] accesses argument and result values
In my case, all of these are [SP+Disp], granted, there is no frame
pointer and stack frames are fixed-size in BGBCC.
This is typically with a frame layout like:
Argument/Spill space
-- Frame Top
Register Save
(Stack Canary)
Local arrays/structs
Local variables
Argument/Spill Space
-- Frame Bottom
Contrast with traditional x86 layout, which puts saved registers and
local variables near the frame-pointer, which points near the top of
the stack frame.
Though, in a majority of functions, the MOV.L and MOV.Q functions have
a big enough displacement to cover the whole frame (excludes functions
which have a lot of local arrays or similar, though overly large local
arrays are auto-folded to using heap allocation, but at present this
logic is based on the size of individual arrays rather than on the
total combined size of the stack frame).
Adding a frame pointer (with negative displacements) wouldn't make a
big difference in XG2 Mode, but would be more of an issue for (pure)
Baseline, where options are either to load the displacement into a
register, or use a jumbo prefix.
But, admittedly, main reason I went with unscaled for GBR-rel and
PC-rel Load/Store, was because using scaled displacements here would
have required more relocation types (nevermind if the hit rate for
unscaled 9-bit displacements is "pretty weak").
Though, did end up later adding specialized Scaled GBR-Rel
Load/Store ops (to improve code density), so it might have been
better in retrospect had I instead just went the "keep it scaled and
add more reloc types to compensate" option.
....
YMMV.
MitchAlsup1 wrote:
BGB wrote:
If one had 16-bit displacements, then unscaled displacements would
make sense; otherwise scaled displacements seem like a win
(misaligned displacements being much less common than aligned
displacements).
What we need is ~16-bit displacements where 82½%-91¼% are positive.
How does one use a frame pointer without negative displacements ??
[FP+disp] accesses callee save registers
[FP-disp] accesses local stack variables and descriptors
[SP+disp] accesses argument and result values
A sign-extended 16-bit offset would cover almost all such access needs,
so I really don't see the need for funny business.
But if you really want a skewed range offset it could use something like excess-256 encoding which zero extends the immediate then subtract 256
(or whatever) from it, to give offsets in the range -256..+65535-256.
So an immediate value of 0 equals an offset of -256.
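The excess-256 idea is easy to sketch: the hardware zero-extends the immediate and subtracts a fixed bias. A minimal model (the bias constant and function names are mine):

```python
# Excess-256 offset encoding as described above: zero-extend the 16-bit
# immediate, then subtract a fixed bias, skewing the usable range toward
# positive offsets without spending a sign bit.

BIAS = 256

def decode_offset(imm16: int) -> int:
    return imm16 - BIAS              # range: -256 .. 65535-256

def encode_offset(offset: int) -> int:
    imm = offset + BIAS
    assert 0 <= imm <= 0xFFFF, "offset out of encodable range"
    return imm

assert decode_offset(0) == -256
assert decode_offset(encode_offset(1000)) == 1000
```

The decode side costs nothing extra in hardware, since the adder that forms the effective address can absorb the constant bias.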
On 4/26/2024 1:59 PM, EricP wrote:
MitchAlsup1 wrote:
BGB wrote:
If one had 16-bit displacements, then unscaled displacements would
make sense; otherwise scaled displacements seem like a win
(misaligned displacements being much less common than aligned
displacements).
What we need is ~16-bit displacements where 82½%-91¼% are positive.
How does one use a frame pointer without negative displacements ??
[FP+disp] accesses callee save registers
[FP-disp] accesses local stack variables and descriptors
[SP+disp] accesses argument and result values
A sign-extended 16-bit offset would cover almost all such access needs,
so I really don't see the need for funny business.
But if you really want a skewed range offset it could use something like
excess-256 encoding which zero extends the immediate then subtract 256
(or whatever) from it, to give offsets in the range -256..+65535-256.
So an immediate value of 0 equals an offset of -256.
Yeah, my thinking was that by the time one has 16 bits for Load/Store displacements, they could almost just go +/- 32K and call it done.
But, much smaller than this, there is an advantage to scaling the displacements.
In other news, got around to getting the RISC-V code to build in PIE
mode for Doom (by using "riscv64-unknown-linux-gnu-*").
Can note that RV64 code density takes a hit in this case:
RV64: 299K (.text)
XG2 : 284K (.text)
So, apparently using this version of GCC and using "-fPIE" works in my
favor regarding code density...
I guess a question is what FDPIC would do if GCC supported it, since
this would be the closest direct analog to my own ABI.
I guess some people are dragging their feet on FDPIC, as there is some debate as to whether or not NOMMU makes sense for RISC-V, along with its associated performance impact if used.
In my case, if I wanted to go over to simple base-relocatable images,
this would technically eliminate the need for GBR reloading.
Checks:
Simple base-relocatable case actually currently generates bigger
binaries, I suspect because in this case it is less space-efficient to
use PC-rel vs GBR-rel.
Went and added a "pbostatic" option, which sidesteps saving and
restoring GBR (making the simplifying assumption that functions will
never be called from outside the current binary).
This saves roughly 4K (Doom's ".text" shrinks to 280K).
....
BGB wrote:
On 4/26/2024 1:59 PM, EricP wrote:
MitchAlsup1 wrote:
BGB wrote:
If one had 16-bit displacements, then unscaled displacements would
make sense; otherwise scaled displacements seem like a win
(misaligned displacements being much less common than aligned
displacements).
What we need is ~16-bit displacements where 82½%-91¼% are positive.
How does one use a frame pointer without negative displacements ??
[FP+disp] accesses callee save registers
[FP-disp] accesses local stack variables and descriptors
[SP+disp] accesses argument and result values
A sign extended 16-bit offsets would cover almost all such access needs
so I really don't see the need for funny business.
But if you really want a skewed range offset it could use something like
excess-256 encoding which zero extends the immediate then subtract 256
(or whatever) from it, to give offsets in the range -256..+65535-256.
So an immediate value of 0 equals an offset of -256.
Yeah, my thinking was that by the time one has 16 bits for Load/Store
displacements, they could almost just go +/- 32K and call it done.
But, much smaller than this, there is an advantage to scaling the
displacements.
In other news, got around to getting the RISC-V code to build in PIE
mode for Doom (by using "riscv64-unknown-linux-gnu-*").
Can note that RV64 code density takes a hit in this case:
RV64: 299K (.text)
XG2 : 284K (.text)
Is this indicative that your ISA and RISC-V are within spitting distance
of each other in terms of the number of instructions in .text ?? or not ??
So, apparently using this version of GCC and using "-fPIE" works in my
favor regarding code density...
I guess a question is what FDPIC would do if GCC supported it, since
this would be the closest direct analog to my own ABI.
What is FDPIC ?? Federal Deposit Processor Insurance Corporation ??
Final Dopey Position Independent Code ??
I guess some people are dragging their feet on FDPIC, as there is some
debate as to whether or not NOMMU makes sense for RISC-V, along with
its associated performance impact if used.
In my case, if I wanted to go over to simple base-relocatable images,
this would technically eliminate the need for GBR reloading.
Checks:
Simple base-relocatable case actually currently generates bigger
binaries, I suspect because in this case it is less space-efficient to
use PC-rel vs GBR-rel.
Went and added a "pbostatic" option, which sidesteps saving and
restoring GBR (making the simplifying assumption that functions will
never be called from outside the current binary).
This saves roughly 4K (Doom's ".text" shrinks to 280K).
Would you be willing to compile DOOM with Brian's LLVM compiler and
show the results ??
....
On 4/27/2024 3:37 PM, MitchAlsup1 wrote:
BGB wrote:
On 4/26/2024 1:59 PM, EricP wrote:
MitchAlsup1 wrote:
BGB wrote:
If one had 16-bit displacements, then unscaled displacements would
make sense; otherwise scaled displacements seem like a win
(misaligned displacements being much less common than aligned
displacements).
What we need is ~16-bit displacements where 82½%-91¼% are positive.
How does one use a frame pointer without negative displacements ??
[FP+disp] accesses callee save registers
[FP-disp] accesses local stack variables and descriptors
[SP+disp] accesses argument and result values
A sign extended 16-bit offsets would cover almost all such access needs
so I really don't see the need for funny business.
But if you really want a skewed range offset it could use something like
excess-256 encoding which zero extends the immediate then subtract 256
(or whatever) from it, to give offsets in the range -256..+65535-256.
So an immediate value of 0 equals an offset of -256.
Yeah, my thinking was that by the time one has 16 bits for Load/Store
displacements, they could almost just go +/- 32K and call it done.
But, much smaller than this, there is an advantage to scaling the
displacements.
In other news, got around to getting the RISC-V code to build in PIE
mode for Doom (by using "riscv64-unknown-linux-gnu-*").
Can note that RV64 code density takes a hit in this case:
RV64: 299K (.text)
XG2 : 284K (.text)
Is this indicative that your ISA and RISC-V are within spitting distance
of each other in terms of the number of instructions in .text ?? or not ??
It would appear that, with my current compiler output, both BJX2-XG2 and RISC-V RV64G are within a few percent of each other...
If adjusting for Jumbo prefixes (with the version that omits GBR reloads):
XG2: 270K (-10K of Jumbo Prefixes)
Implying RISC-V now has around 11% more instructions in this scenario.
It also has an additional 20K of ".rodata" that is likely constants,
which likely overlap significantly with the jumbo prefixes.
So, apparently using this version of GCC and using "-fPIE" works in my
favor regarding code density...
I guess a question is what FDPIC would do if GCC supported it, since
this would be the closest direct analog to my own ABI.
What is FDPIC ?? Federal Deposit Processor Insurance Corporation ??
Final Dopey Position Independent Code ??
Required a little digging: "Function Descriptor Position Independent Code".
But, I think the main difference is that normal PIC does calls like:
LD Rt, [GOT+Disp]
BSR Rt
Whereas, FDPIC was typically more like (pseudo ASM):
MOV SavedGOT, GOT
LEA Rt, [GOT+Disp]
MOV GOT, [Rt+8]
MOV Rt, [Rt+0]
BSR Rt
MOV GOT, SavedGOT
But, in my case, noting that function calls tend to be more common than
the functions themselves, and functions will know whether or not they
need to access global variables or call other functions, ... it made
more sense to move this logic into the callee.
No official RISC-V FDPIC ABI that I am aware of, though some proposals
did seem vaguely similar in some areas to what I was doing with PBO.
Where, they were accessing globals like:
LUI Xt, DispHi
ADD Xt, Xt, DispLo
ADD Xt, Xt, GP
LD Xd, Xt, 0
Granted, this is less efficient than, say:
MOV.Q (GBR, Disp33s), Rd
Though, people didn't really detail the call sequence or prolog/epilog sequences, so less sure how this would work.
Likely guess, something like:
MV Xs, GP
LUI Xt, DispHi
ADD Xt, Xt, DispLo
ADD Xt, Xt, GP
LD GP, Xt, 8
LD Xt, Xt, 0
JALR LR, Xt, 0
MV GP, Xs
Well, unless they have a better way to pull this off...
But, yeah, as far as I saw it, my "better solution" was to put this part into the callee.
Main tradeoff with my design is:
From any GBR, one needs to be able to get to every other GBR;
We need to have a way to know which table entry to reload (not
statically known at compile time).
In my PBO ABI, this was accomplished by using base relocs (but, this is
N/A for ELF, where PE/COFF style base relocs are not a thing).
One other option might be to use a PC-relative load to load the index.
Say:
AUIPC Xs, DispHi //"__global_pbo_offset$" ?
LD Xs, DispLo
LD Xt, GP, 0 //get table of offsets
ADD Xt, Xt, Xs
LD GP, Xt, 0
In this case, "__global_pbo_offset$" would be a magic constant variable
that gets fixed up by the ELF loader.
I guess some people are dragging their feet on FDPIC, as there is some
debate as to whether or not NOMMU makes sense for RISC-V, along with
its associated performance impact if used.
In my case, if I wanted to go over to simple base-relocatable images,
this would technically eliminate the need for GBR reloading.
Checks:
Simple base-relocatable case actually currently generates bigger
binaries, I suspect because in this case it is less space-efficient to
use PC-rel vs GBR-rel.
Went and added a "pbostatic" option, which sidesteps saving and
restoring GBR (making the simplifying assumption that functions will
never be called from outside the current binary).
This saves roughly 4K (Doom's ".text" shrinks to 280K).
Would you be willing to compile DOOM with Brian's LLVM compiler and
show the results ??
Will need to download and build this compiler...
Might need to look into this.
But, yeah, current standing for this is:
XG2 : 280K (static linked, Modified PDPCLIB + TestKern)
RV64G : 299K (static linked, Modified PDPCLIB + TestKern)
X86-64: 288K ("gcc -O3", dynamically linked GLIBC)
X64 : 1083K (VS2022, static linked MSVCRT)
But, MSVC is an outlier here for just how bad it is on this front.
To get more reference points, would need to install more compilers.
Could have provided an ARM reference point, except that the compiler
isn't compiling stuff at the moment (would need to beat on stuff a bit
more to try to get it to build; appears to be trying to build with static-linked Newlib but is missing symbols, ...).
But, yeah, for good comparison, one needs to have everything build with
the same C library, etc.
I am thinking it may be possible to save a little more space by folding
some of the stuff for "va_start()" into an ASM blob (currently, a lot of stuff is folded off into the function prolog, but probably doesn't need
to be done inline for every varargs function).
Mostly this would be the logic for spilling all of the argument
registers to a location on the stack and similar.
....
BGB wrote:
On 4/27/2024 3:37 PM, MitchAlsup1 wrote:
BGB wrote:
On 4/26/2024 1:59 PM, EricP wrote:
MitchAlsup1 wrote:
BGB wrote:
If one had 16-bit displacements, then unscaled displacements
would make sense; otherwise scaled displacements seem like a win
(misaligned displacements being much less common than aligned
displacements).
What we need is ~16-bit displacements where 82½%-91¼% are positive.
How does one use a frame pointer without negative displacements ??
[FP+disp] accesses callee save registers
[FP-disp] accesses local stack variables and descriptors
[SP+disp] accesses argument and result values
A sign extended 16-bit offsets would cover almost all such access needs
so I really don't see the need for funny business.
But if you really want a skewed range offset it could use something like
excess-256 encoding which zero extends the immediate then subtract 256
(or whatever) from it, to give offsets in the range -256..+65535-256.
So an immediate value of 0 equals an offset of -256.
Yeah, my thinking was that by the time one has 16 bits for
Load/Store displacements, they could almost just go +/- 32K and call
it done.
But, much smaller than this, there is an advantage to scaling the
displacements.
In other news, got around to getting the RISC-V code to build in PIE
mode for Doom (by using "riscv64-unknown-linux-gnu-*").
Can note that RV64 code density takes a hit in this case:
RV64: 299K (.text)
XG2 : 284K (.text)
Is this indicative that your ISA and RISC-V are within spitting
distance of each other in terms of the number of instructions in
.text ?? or not ??
It would appear that, with my current compiler output, both BJX2-XG2
and RISC-V RV64G are within a few percent of each other...
If adjusting for Jumbo prefixes (with the version that omits GBR
reloads):
XG2: 270K (-10K of Jumbo Prefixes)
Implying RISC-V now has around 11% more instructions in this scenario.
Based on Brian's LLVM compiler; RISC-V has about 40% more instructions
than My 66000, or My 66000 has 70% the number of instructions that
RISC-V has (same compilation flags, same source code).
It also has an additional 20K of ".rodata" that is likely constants,
which likely overlap significantly with the jumbo prefixes.
My 66000 has vastly smaller .rodata because constants are part of .text
So, apparently using this version of GCC and using "-fPIE" works in
my favor regarding code density...
I guess a question is what FDPIC would do if GCC supported it, since
this would be the closest direct analog to my own ABI.
What is FDPIC ?? Federal Deposit Processor Insurance Corporation ??
Final Dopey Position Independent Code ??
Required a little digging: "Function Descriptor Position Independent
Code".
But, I think the main difference is that normal PIC does calls like:
LD Rt, [GOT+Disp]
BSR Rt
CALX [IP,,#GOT+#disp-.]
It is unlikely that %GOT can be represented with 16-bit offset from IP
so the 32-bit displacement form (,,) is used.
Whereas, FDPIC was typically more like (pseudo ASM):
MOV SavedGOT, GOT
LEA Rt, [GOT+Disp]
MOV GOT, [Rt+8]
MOV Rt, [Rt+0]
BSR Rt
MOV GOT, SavedGOT
Since GOT is not in a register but is an address constant this is also::
CALX [IP,,#GOT+#disp-.]
But, in my case, noting that function calls tend to be more common
than the functions themselves, and functions will know whether or not
they need to access global variables or call other functions, ... it
made more sense to move this logic into the callee.
No official RISC-V FDPIC ABI that I am aware of, though some proposals
did seem vaguely similar in some areas to what I was doing with PBO.
Where, they were accessing globals like:
LUI Xt, DispHi
ADD Xt, Xt, DispLo
ADD Xt, Xt, GP
LD Xd, Xt, 0
Granted, this is less efficient than, say:
MOV.Q (GBR, Disp33s), Rd
LDD Rd,[IP,,#GOT+#disp-.]
Though, people didn't really detail the call sequence or prolog/epilog
sequences, so less sure how this would work.
Likely guess, something like:
MV Xs, GP
LUI Xt, DispHi
ADD Xt, Xt, DispLo
ADD Xt, Xt, GP
LD GP, Xt, 8
LD Xt, Xt, 0
JALR LR, Xt, 0
MV GP, Xs
Well, unless they have a better way to pull this off...
CALX [IP,,#GOT+#disp-.]
But, yeah, as far as I saw it, my "better solution" was to put this
part into the callee.
Main tradeoff with my design is:
From any GBR, one needs to be able to get to every other GBR;
We need to have a way to know which table entry to reload (not
statically known at compile time).
Resolved by linker or accessed through GOT in mine. Each dynamic
module gets its own GOT.
In my PBO ABI, this was accomplished by using base relocs (but, this
is N/A for ELF, where PE/COFF style base relocs are not a thing).
One other option might be to use a PC-relative load to load the index.
Say:
AUIPC Xs, DispHi //"__global_pbo_offset$" ?
LD Xs, DispLo
LD Xt, GP, 0 //get table of offsets
ADD Xt, Xt, Xs
LD GP, Xt, 0
In this case, "__global_pbo_offset$" would be a magic constant
variable that gets fixed up by the ELF loader.
LDD Rd,[IP,,#GOT+#disp-.]
I guess some people are dragging their feet on FDPIC, as there is
some debate as to whether or not NOMMU makes sense for RISC-V, along
with its associated performance impact if used.
In my case, if I wanted to go over to simple base-relocatable
images, this would technically eliminate the need for GBR reloading.
Checks:
Simple base-relocatable case actually currently generates bigger
binaries, I suspect because in this case it is less space-efficient
to use PC-rel vs GBR-rel.
Went and added a "pbostatic" option, which sidesteps saving and
restoring GBR (making the simplifying assumption that functions will
never be called from outside the current binary).
This saves roughly 4K (Doom's ".text" shrinks to 280K).
Would you be willing to compile DOOM with Brian's LLVM compiler and
show the results ??
Will need to download and build this compiler...
Might need to look into this.
Please do.
But, yeah, current standing for this is:
XG2 : 280K (static linked, Modified PDPCLIB + TestKern)
RV64G : 299K (static linked, Modified PDPCLIB + TestKern)
X86-64: 288K ("gcc -O3", dynamically linked GLIBC)
X64 : 1083K (VS2022, static linked MSVCRT)
But, MSVC is an outlier here for just how bad it is on this front.
To get more reference points, would need to install more compilers.
Could have provided an ARM reference point, except that the compiler
isn't compiling stuff at the moment (would need to beat on stuff a bit
more to try to get it to build; appears to be trying to build with
static-linked Newlib but is missing symbols, ...).
But, yeah, for good comparison, one needs to have everything build
with the same C library, etc.
I am thinking it may be possible to save a little more space by
folding some of the stuff for "va_start()" into an ASM blob
(currently, a lot of stuff is folded off into the function prolog, but
probably doesn't need to be done inline for every varargs function).
Mostly this would be the logic for spilling all of the argument
registers to a location on the stack and similar.
Part of ENTER already does this: A typical subroutine will use::
ENTER R27,R0,#local_stack_size
Where the varargs subroutine will use::
ENTER R27,R8,#local_stack_size
ADD Rva_ptr,SP,#local_stack_size+64
notice all we had to do was to specify 8 more registers to be stored;
and exit with::
EXIT R27,R0,#local_stack_size+64
Here we skip over the 8 register variable arguments without reloading
them.
....
On 4/27/2024 8:45 PM, MitchAlsup1 wrote:
BGB wrote:
I guess some people are dragging their feet on FDPIC, as there is
some debate as to whether or not NOMMU makes sense for RISC-V,
along with its associated performance impact if used.
In my case, if I wanted to go over to simple base-relocatable
images, this would technically eliminate the need for GBR reloading.
Checks:
Simple base-relocatable case actually currently generates bigger
binaries, I suspect because in this case it is less space-efficient
to use PC-rel vs GBR-rel.
Went and added a "pbostatic" option, which sidesteps saving and
restoring GBR (making the simplifying assumption that functions
will never be called from outside the current binary).
This saves roughly 4K (Doom's ".text" shrinks to 280K).
Would you be willing to compile DOOM with Brian's LLVM compiler and
show the results ??
Will need to download and build this compiler...
Might need to look into this.
Please do.
Extracting the ZIP file and "git clone llvm-project" etc, have thus far taken hours...
Well, and then the commands to CMake were not working, tried invoking
cmake more minimally, and it gives a message complaining about the
version being too old, ...
Seems I have to build it with a different / newer WSL instance (well, I guess it was either this or try to rebuild CMake from source).
Checks, download for compiler (+ git cloned LLVM) is a little over 6GB.
Well, OK, now LLVM is building... I guess, will see if it compiles and doesn't explode in the process. Probably going to be a while it seems.
On 4/28/2024 12:56 AM, BGB wrote:
On 4/27/2024 8:45 PM, MitchAlsup1 wrote:
BGB wrote:
...
I guess some people are dragging their feet on FDPIC, as there is
some debate as to whether or not NOMMU makes sense for RISC-V,
along with its associated performance impact if used.
In my case, if I wanted to go over to simple base-relocatable
images, this would technically eliminate the need for GBR reloading.
Checks:
Simple base-relocatable case actually currently generates bigger
binaries, I suspect because in this case it is less
space-efficient to use PC-rel vs GBR-rel.
Went and added a "pbostatic" option, which sidesteps saving and
restoring GBR (making the simplifying assumption that functions
will never be called from outside the current binary).
This saves roughly 4K (Doom's ".text" shrinks to 280K).
Would you be willing to compile DOOM with Brian's LLVM compiler and
show the results ??
Will need to download and build this compiler...
Might need to look into this.
Please do.
Extracting the ZIP file and "git clone llvm-project" etc, have thus
far taken hours...
Well, and then the commands to CMake were not working, tried invoking
cmake more minimally, and it gives a message complaining about the
version being too old, ...
Seems I have to build it with a different / newer WSL instance (well,
I guess it was either this or try to rebuild CMake from source).
Checks, download for compiler (+ git cloned LLVM) is a little over 6GB.
Well, OK, now LLVM is building... I guess, will see if it compiles and
doesn't explode in the process. Probably going to be a while it seems.
A little over an hour later and it still hasn't broken 50% yet...
I think LLVM rebuilds may have actually gotten slower than in the past...
Well, at least my 112GB of RAM means it isn't swapping too much...
Computer is a little sluggish and the "System" process seems kinda
pegged out though...
I guess I will know sometime later whether or not all of this builds...
Still watching LLVM build (several hours later), kind of an interesting meta aspect in its behaviors.
BGB <cr88192@gmail.com> schrieb:
Still watching LLVM build (several hours later), kind of an interesting
meta aspect in its behaviors.
Don't build it in debug mode.
On 4/28/2024 4:05 AM, Thomas Koenig wrote:
BGB <cr88192@gmail.com> schrieb:
Still watching LLVM build (several hours later), kind of an interesting
meta aspect in its behaviors.
Don't build it in debug mode.
I was building it in MinSizeRel mode...
But, yeah, need to go to sleep... May poke with it tomorrow if all goes well...
On 4/27/2024 8:45 PM, MitchAlsup1 wrote:
But, I think the main difference is that normal PIC does calls like:
LD Rt, [GOT+Disp]
BSR Rt
CALX [IP,,#GOT+#disp-.]
It is unlikely that %GOT can be represented with 16-bit offset from IP
so the 32-bit displacement form (,,) is used.
Whereas, FDPIC was typically more like (pseudo ASM):
MOV SavedGOT, GOT
LEA Rt, [GOT+Disp]
MOV GOT, [Rt+8]
MOV Rt, [Rt+0]
BSR Rt
MOV GOT, SavedGOT
Since GOT is not in a register but is an address constant this is also::
CALX [IP,,#GOT+#disp-.]
So... Would this also cause GOT to point to a new address on the callee
side (that is dependent on the GOT on the caller side, and *not* on the
PC address at the destination) ?...
In effect, the context dependent GOT daisy-chaining is a fundamental
aspect of FDPIC that is different from conventional PIC.
But, in my case, noting that function calls tend to be more common
than the functions themselves, and functions will know whether or not
they need to access global variables or call other functions, ... it
made more sense to move this logic into the callee.
No official RISC-V FDPIC ABI that I am aware of, though some proposals
did seem vaguely similar in some areas to what I was doing with PBO.
Where, they were accessing globals like:
LUI Xt, DispHi
ADD Xt, Xt, DispLo
ADD Xt, Xt, GP
LD Xd, Xt, 0
Granted, this is less efficient than, say:
MOV.Q (GBR, Disp33s), Rd
LDD Rd,[IP,,#GOT+#disp-.]
As noted, BJX2 can handle this in a single 64-bit instruction, vs 4 instructions.
Though, people didn't really detail the call sequence or prolog/epilog
sequences, so less sure how this would work.
Likely guess, something like:
MV Xs, GP
LUI Xt, DispHi
ADD Xt, Xt, DispLo
ADD Xt, Xt, GP
LD GP, Xt, 8
LD Xt, Xt, 0
JALR LR, Xt, 0
MV GP, Xs
Well, unless they have a better way to pull this off...
CALX [IP,,#GOT+#disp-.]
Well, can you explain the semantics of this one...
But, yeah, as far as I saw it, my "better solution" was to put this
part into the callee.
Main tradeoff with my design is:
From any GBR, one needs to be able to get to every other GBR;
We need to have a way to know which table entry to reload (not
statically known at compile time).
Resolved by linker or accessed through GOT in mine. Each dynamic
module gets its own GOT.
The important thing is not associating a GOT with an ELF module, but
with an instance of said module.
So, say, one copy of an ELF image, can have N separate GOTs and data sections (each associated with a program instance).
In my PBO ABI, this was accomplished by using base relocs (but, this
is N/A for ELF, where PE/COFF style base relocs are not a thing).
One other option might be to use a PC-relative load to load the index.
Say:
AUIPC Xs, DispHi //"__global_pbo_offset$" ?
LD Xs, DispLo
LD Xt, GP, 0 //get table of offsets
ADD Xt, Xt, Xs
LD GP, Xt, 0
In this case, "__global_pbo_offset$" would be a magic constant
variable that gets fixed up by the ELF loader.
LDD Rd,[IP,,#GOT+#disp-.]
Still going to need to explain the semantics here...
"--target my66000-none-elf" or similar just gets it to complain about an unknown triple, not sure how to query for known targets/triples with clang.
BGB <cr88192@gmail.com> schrieb:
"--target my66000-none-elf" or similar just gets it to complain about an
unknown triple, not sure how to query for known targets/triples with clang.
Grepping around the CMakeCache.txt file in my build directory, I find
//Semicolon-separated list of experimental targets to build.
LLVM_EXPERIMENTAL_TARGETS_TO_BUILD:STRING=My66000
This is documented in llvm/lib/Target/My66000/README .
BGB wrote:
On 4/27/2024 8:45 PM, MitchAlsup1 wrote:
But, I think the main difference is that normal PIC does calls like:
LD Rt, [GOT+Disp]
BSR Rt
CALX [IP,,#GOT+#disp-.]
It is unlikely that %GOT can be represented with 16-bit offset from IP
so the 32-bit displacement form (,,) is used.
Whereas, FDPIC was typically more like (pseudo ASM):
MOV SavedGOT, GOT
LEA Rt, [GOT+Disp]
MOV GOT, [Rt+8]
MOV Rt, [Rt+0]
BSR Rt
MOV GOT, SavedGOT
Since GOT is not in a register but is an address constant this is also::
CALX [IP,,#GOT+#disp-.]
So... Would this also cause GOT to point to a new address on the
callee side (that is dependent on the GOT on the caller side, and
*not* on the PC address at the destination) ?...
The module on the calling side has its GOT and the module on the called
side has its own GOT, where offsets to/in GOT are determined by the
linker making the module. There may be cases where multiple link edits
on a final module leave some of the functions in this module accessed
via the GOT in this module, and in these cases one uses
CALA [IP,,#GOT+#disp-.] // LDD ip changes to LDA ip
In effect, the context dependent GOT daisy-chaining is a fundamental
aspect of FDPIC that is different from conventional PIC.
Yes, understood, and it happens.
But, in my case, noting that function calls tend to be more common
than the functions themselves, and functions will know whether or
not they need to access global variables or call other functions,
... it made more sense to move this logic into the callee.
No official RISC-V FDPIC ABI that I am aware of, though some
proposals did seem vaguely similar in some areas to what I was doing
with PBO.
Where, they were accessing globals like:
LUI Xt, DispHi
ADD Xt, Xt, DispLo
ADD Xt, Xt, GP
LD Xd, Xt, 0
Granted, this is less efficient than, say:
MOV.Q (GBR, Disp33s), Rd
LDD Rd,[IP,,#GOT+#disp-.]
As noted, BJX2 can handle this in a single 64-bit instruction, vs 4
instructions.
Though, people didn't really detail the call sequence or
prolog/epilog sequences, so less sure how this would work.
Likely guess, something like:
MV Xs, GP
LUI Xt, DispHi
ADD Xt, Xt, DispLo
ADD Xt, Xt, GP
LD GP, Xt, 8
LD Xt, Xt, 0
JALR LR, Xt, 0
MV GP, Xs
Well, unless they have a better way to pull this off...
CALX [IP,,#GOT+#disp-.]
Well, can you explain the semantics of this one...
But, yeah, as far as I saw it, my "better solution" was to put this
part into the callee.
Main tradeoff with my design is:
From any GBR, one needs to be able to get to every other GBR;
We need to have a way to know which table entry to reload (not
statically known at compile time).
Resolved by linker or accessed through GOT in mine. Each dynamic
module gets its own GOT.
The important thing is not associating a GOT with an ELF module, but
with an instance of said module.
Yes.
So, say, one copy of an ELF image, can have N separate GOTs and data
sections (each associated with a program instance).
In my PBO ABI, this was accomplished by using base relocs (but, this
is N/A for ELF, where PE/COFF style base relocs are not a thing).
One other option might be to use a PC-relative load to load the index.
Say:
AUIPC Xs, DispHi //"__global_pbo_offset$" ?
LD Xs, DispLo
LD Xt, GP, 0 //get table of offsets
ADD Xt, Xt, Xs
LD GP, Xt, 0
In this case, "__global_pbo_offset$" would be a magic constant
variable that gets fixed up by the ELF loader.
LDD Rd,[IP,,#GOT+#disp-.]
Still going to need to explain the semantics here...
IP+&GOT+disp-IP is a 64-bit pointer into GOT where the external linkage pointer resides.
John Savard wrote:
On Sat, 20 Apr 2024 22:03:21 +0000, mitchalsup@aol.com (MitchAlsup1)
wrote:
BGB wrote:
Sign-extend signed values, zero-extend unsigned values.
Another mistake I made in Mc 88100.
As that is a mistake the IBM 360 made, I make it too. But I make it
the way the 360 did: there are no signed and unsigned values, in the
sense of a Burroughs machine, there are just Load, Load Unsigned - and
Insert - instructions.
Index and base register values are assumed to be unsigned.
I would use the term signless as opposed to unsigned.
Address arithmetic is ADD only and does not care about signs or
overflow. There is no concept of a negative base register or a
negative index register (or, for that matter, a negative displace-
ment), overflow, underflow, carry, ...
On 4/28/2024 2:24 PM, MitchAlsup1 wrote:
Still going to need to explain the semantics here...
IP+&GOT+disp-IP is a 64-bit pointer into GOT where the external linkage
pointer resides.
OK.
Not sure I follow here what exactly is going on...
As noted, if I did a similar thing to the RISC-V example, but with my
own ISA (with the MOV.C extension):
MOV.Q (PC, Disp33), R0 // What data does this access ?
MOV.Q (GBR, 0), R18
MOV.C (R18, R0), GBR
Differing mostly in that it doesn't require base relocs.
The normal version in my case avoids the extra memory load, but uses a
base reloc for the table index.
....
Though, the reloc format is at least semi-dense, eg, for a block of relocs:
{ DWORD rvaPage; //address of page (4K)
DWORD szRelocs; //size of relocs in block
}
With each reloc encoded as a 16-bit entry:
(15:12): Reloc Type
(11: 0): Address within Page (4K)
One downside is this format is less efficient for sparse relocs (current situation), where often there are only 1 or 2 relocs per page (typically
the PBO index fixups and similar).
One situation could be to have a modified format that partially omits
the block structuring, say:
0ddd: Advance current page position by ddd pages (4K);
0000: Effectively a NOP (as before)
1ddd..Cddd: Apply the given reloc.
These represent typical relocs, target dependent.
HI16, LO16, DIR32, HI32ADJ, ...
8ddd: Was assigned for PBO fixups;
Addd: Fixup for a 64-bit address, also semi common.
Dzzz/Ezzz: Extended Relocs
These ones are configurable from a larger set of reloc types.
Fzzz: Command-Escape
...
Where, say, rather than needing 1 block per 4K page, it is 1 block per
PE section.
Though, base relocs are a relatively small part of the size of the binary.
To some extent, the PBO reloc is magic in that it works by
pattern-matching the instruction that it finds at the given address. So,
in effect, is only defined for a limited range of instructions.
Contrast with, say, the 1/2/3/4/A relocs, which expect raw 16/32/64 bit
values. Though, a lot of these are not currently used for BJX2 (does not
use 16-bit addressing modes, ...).
Here:
5/6/7/8/9/B/C, ended up used for BJX2 relocs in BJX2 mode.
For other targets, they would have other meanings.
D/E/F were reserved as expanded/escape-case relocs, in case I need to
add more. These would differ partly in that the reloc sub-type would be assigned as a sort of state-machine.
BGB wrote:
On 4/28/2024 2:24 PM, MitchAlsup1 wrote:
Still going to need to explain the semantics here...
IP+&GOT+disp-IP is a 64-bit pointer into GOT where the external linkage
pointer resides.
OK.
Not sure I follow here what exactly is going on...
While I am sure I don't understand what is going on....
As noted, if I did a similar thing to the RISC-V example, but with my
own ISA (with the MOV.C extension):
MOV.Q (PC, Disp33), R0 // What data does this access ?
MOV.Q (GBR, 0), R18
MOV.C (R18, R0), GBR
It appears to me that you are placing an array of GOT pointers at the
first entry of any particular GOT ?!?
Whereas My 66000 uses IP relative access to the GOT the linker (or
LD.so) setup avoiding the indirection.
Then My 66000 does not have or need a pointer to GOT since it can
synthesize such a pointer at link time and then just use a IP relative
plus DISP32 to access said GOT.
So, say we have some external variables::
extern uint64_t fred, wilma, barney, betty;
AND we postulate that the linker found all 4 externs in the same module
so that it can access them all via 1 pointer. The linker assigns an
index into GOT and setups a relocation to that memory segment and when
LD.so runs, it stores a proper pointer in that index of GOT, call this
index fred_index.
And we access one of these::
if( fred_at_work )
The compiler will obtain the pointer to the area fred is positioned via:
LDD Rfp,[IP,,#GOT+fred_index<<3] // *
and from here one can access barney, betty and wilma using the pointer
to fred and standard offsetting.
LDD Rfred,[Rfp,#0] // fred
LDD Rbarn,[Rfp,#16] // barney
LDD Rbett,[Rfp,#24] // betty
LDD Rwilm,[Rfp,#8] // wilma
These offsets are known at link time and possibly not at compile time.
(*) If the LDD through GOT takes a page fault, we have a procedure set up
so LD.so can run, figure out which entry is missing, look up where it is
(possibly loading and resolving it), and insert the required data into GOT.
When control returns to the LDD, the entry is now present, and we now have
access to fred, wilma, barney and betty.
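The access pattern above can be sketched in C. This is a minimal illustration, not the real mechanism: the slot number `FRED_INDEX`, the `got` array, and the helper names are all assumptions; in reality the linker assigns the index and the offsets, and LD.so fills the slot.

```c
#include <stdint.h>

/* Hypothetical GOT: an array of pointers filled in by LD.so.
 * FRED_INDEX is an assumed slot number; the linker picks the real one. */
#define FRED_INDEX 7
static void *got[16];

/* Link-time offsets within the target area, in declaration order:
 * fred, wilma, barney, betty. */
enum { OFF_FRED = 0, OFF_WILMA = 8, OFF_BARNEY = 16, OFF_BETTY = 24 };

static uint64_t load_fred(void)
{
    /* One load through the GOT (the starred LDD)... */
    char *fp = (char *)got[FRED_INDEX];

    /* ...then plain offsetting reaches any of the four externs. */
    return *(uint64_t *)(fp + OFF_FRED);
}
```

The point of grouping the four externs under one GOT slot is visible here: only the first load goes through the GOT; barney, betty and wilma cost just an add of a known offset.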
Differing mostly in that it doesn't require base relocs.
The normal version in my case avoids the extra memory load, but uses a
base reloc for the table index.
....
{{ // this looks like stuff that should be accessible to LD.so
Though, the reloc format is at least semi-dense, eg, for a block of
relocs:
  { DWORD rvaPage;   // address of page (4K)
    DWORD szRelocs;  // size of relocs in block
  }
With each reloc encoded as a 16-bit entry:
(15:12): Reloc Type
(11: 0): Address within Page (4K)
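Packing and unpacking such an entry is just a pair of shifts and masks; a quick sketch (the helper names here are made up for illustration):

```c
#include <stdint.h>

/* One 16-bit reloc entry: type in bits (15:12), page offset in (11:0). */
static inline uint16_t reloc_pack(unsigned type, unsigned off)
{
    return (uint16_t)(((type & 0xF) << 12) | (off & 0xFFF));
}

static inline unsigned reloc_type(uint16_t e) { return e >> 12; }
static inline unsigned reloc_off(uint16_t e)  { return e & 0xFFF; }
```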
One downside is this format is less efficient for sparse relocs
(current situation), where often there are only 1 or 2 relocs per page
(typically the PBO index fixups and similar).
One option could be a modified format that partially omits
the block structuring, say:
0ddd: Advance current page position by ddd pages (4K);
0000: Effectively a NOP (as before)
1ddd..Cddd: Apply the given reloc.
These represent typical relocs, target dependent.
HI16, LO16, DIR32, HI32ADJ, ...
8ddd: Was assigned for PBO fixups;
Addd: Fixup for a 64-bit address, also semi common.
Dzzz/Ezzz: Extended Relocs
These ones are configurable from a larger set of reloc types.
Fzzz: Command-Escape
...
Where, say, rather than needing 1 block per 4K page, it is 1 block per
PE section.
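A decoder for such a flat per-section stream might look like the following sketch. The interface (writing decoded fixups to output arrays) and the function name are assumptions for illustration; actually applying a fixup is target-dependent and omitted here.

```c
#include <stdint.h>
#include <stddef.h>

/* Walk a flat reloc stream: entries with type 0 advance the current
 * 4K page position by ddd (0x0000 is a NOP); any other type is a
 * fixup at page*4096 + offset. Decoded fixups are written to
 * types[]/addrs[]; returns the number of fixups found. */
static size_t walk_relocs(const uint16_t *ent, size_t n,
                          unsigned *types, uint32_t *addrs)
{
    uint32_t page = 0;
    size_t k = 0;
    for (size_t i = 0; i < n; i++) {
        unsigned type = ent[i] >> 12;
        unsigned d    = ent[i] & 0xFFF;
        if (type == 0) {
            page += d;                   /* 0ddd: advance page position */
        } else {
            types[k] = type;             /* target-dependent reloc type */
            addrs[k] = page * 4096 + d;  /* address of the fixup */
            k++;
        }
    }
    return k;
}
```

For the sparse case this wins: a page with no relocs costs at most one 16-bit advance entry instead of an 8-byte block header.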
Though, base relocs are a relatively small part of the size of the
binary.
To some extent, the PBO reloc is magic in that it works by
pattern-matching the instruction that it finds at the given address.
So, in effect, it is only defined for a limited range of instructions.
Contrast with, say, the 1/2/3/4/A relocs, which expect raw 16/32/64
bit values. Though, a lot of these are not currently used for BJX2
(does not use 16-bit addressing modes, ...).
Here:
5/6/7/8/9/B/C, ended up used for BJX2 relocs in BJX2 mode.
For other targets, they would have other meanings.
D/E/F were reserved as expanded/escape-case relocs, in case I need to
add more. These would differ partly in that the reloc sub-type would
be assigned as a sort of state-machine.
but not the program itself}}
Meanwhile, got the My66000 LLVM/Clang compiler built, at least so far
as that it seems to try to build something (and seems to know that the
target exists).
But, it also tends to die in a storm of error messages, eg:
/tmp/m_swap-822054.s:6: Error: no such instruction: `bitr r1,r1,<8:48>'
Lawrence D'Oliveiro wrote:
On Sat, 20 Apr 2024 18:06:22 -0600, John Savard wrote:
Since there was only one set of arithmetic instructions, that meant that
when you wrote code to operate on unsigned values, you had to remember
that the normal names of the condition code values were oriented around
signed arithmetic.
I thought architectures typically had separate condition codes for “carry”
versus “overflow”. That way, you didn’t need signed versus unsigned
versions of add, subtract and compare; it was just a matter of looking at
the right condition codes on the result.
Maybe now, with 4-or-5-bit condition codes, yes.
But the early machines (360) with 2-bit codes were already constricted.