On 9/19/2025 4:50 AM, Anton Ertl wrote:
BGB <cr88192@gmail.com> writes:
And, for many uses, performance is "good enough";
In that case, better buy a cheaper AMD64 CPU rather than a
particularly fast CPU with a different architecture X and then run a
dynamic AMD64->X translator on it.
Possibly, it depends.
The question is what could Intel or AMD do if the wind blew in that direction.
Likewise, x86 tends to need a lot of the "big CPU" stuff to perform
well, whereas something like a RISC style ISA can get better performance on a comparably smaller and cheaper core, and with a somewhat better
"performance per watt" metric.
Evidence?
No hard numbers, but experience here:
ASUS Eee (with an in-order Intel Atom) vs original RasPi (with 700MHz
ARM11 cores).
The RasPi basically runs circles around the Eee...
I see the difference between CISC and RISC as in the micro-architecture,
changing from a single sequential state machine view to multiple concurrent machines view, and from Clocks Per Instruction to Instructions Per Clock.
The monolithic microcoded machine, which covers 360, 370, PDP-11, VAX,
386, 486 and Pentium, is like a single threaded program which
operates sequentially on a single global set of state variables.
While there is some variation and fuzziness around the edges,
the heart of each of these are single sequential execution engines.
One can take an Alpha ISA and implement it with a microcoded sequencer
but that should not be called RISC.
RISC changes that design to one like a multi-threaded program with
messages passing between them called uOps, where the dynamic state
of each instruction is mostly carried with the uOp message,
and each thread does something very simple and passes the uOp on.
Where global resources are required, they are temporarily dynamically allocated to the uOp by the various threads, carried with the uOp,
and returned later when the uOp message is passed to the Retire thread.
The Retire thread is the only one which updates the visible global state.
The RISC design guidelines described by various papers, rather than
go/no-go decisions, are mostly engineering compromises for consideration
of things which would make an MST-MPA more expensive to implement or otherwise interfere with maximizing the active concurrency of all threads.
This is why I think it would have been possible to build a risc-style
PDP-11 in 1975 TTL, or a VAX if they had just left the instructions of
the same complexity as PDP-11 ISA (53 opcodes, max one immediate,
max one mem op per instruction).
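As a toy illustration of the "uOp as message" view described above (a sketch only, not any shipping design; all the names here are made up):

/* Each pipeline stage is a little state machine that reads a uOp,
 * does one simple thing, and passes it on; only Retire touches the
 * architecturally visible state. */
#include <stdio.h>

typedef struct {
    int  dest_reg;   /* architectural destination register          */
    int  phys_reg;   /* dynamically allocated (rename) resource     */
    long value;      /* dynamic state carried along with the uOp    */
} UOp;

static long arch_regs[32];   /* the visible global state            */
static int  next_phys = 0;   /* trivial stand-in for a free list    */

static void rename_stage (UOp *u) { u->phys_reg = next_phys++; }
static void execute_stage(UOp *u) { u->value = 40 + 2; /* one simple thing */ }
static void retire_stage (UOp *u) {
    /* Retire is the only stage that updates visible global state,
     * and it returns the temporarily allocated resource. */
    arch_regs[u->dest_reg] = u->value;
    next_phys--;
}

int main(void) {
    UOp u = { .dest_reg = 3 };
    rename_stage(&u); execute_stage(&u); retire_stage(&u);
    printf("r3 = %ld\n", arch_regs[3]);
    return 0;
}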
On 9/19/2025 9:33 AM, Anton Ertl wrote:
BGB <cr88192@gmail.com> writes:
Like, most of the ARM chips don't exactly have like a 150W TDP or similar...
And most Intel and AMD chips don't have a 150W TDP, either, although the
shenanigans they play with TDP are not nice. The usual TDP for
Desktop chips is 65W (with the power limits temporarily or permanently
higher). The Zen5 laptop chips (Strix Point, Krackan Point) have a
configurable TDP of 15-54W. Lunar Lake (4 P-cores, 4 LP-E-cores) has
a configurable TDP of 8-37W.
Seems so...
Seems the CPU I am running has a 105W TDP, I had thought I remembered
150W, oh well...
Seems 150-200W is more Threadripper territory, and not the generic
desktop CPUs.
Like, if an ARM chip uses 1/30th the power, unless it is more than 30x
slower, it may still win in Perf/W and similar...
No TDP numbers are given for Oryon. For Apple's M4, the numbers are
M4 4P 6E 22W
M4 Pro 8P 4E 38W
M4 Pro 10P 4E 46W
M4 Max 10P 4E 62W
M4 Max 12P 4E 70W
Not quite 1/30th of the power, although I think that Apple does not
play the same shenanigans as Intel and AMD.
A lot of the ARM SoCs I had seen had lower TDPs, though more often with Cortex A53 or A55/A78 cores or similar:
Say (MediaTek MT6752):
https://unite4buy.com/cpu/MediaTek-MT6752/
Has a claimed TDP here of 7W and has 8x A53.
Or, for a slightly newer chip (2020):
https://www.cpu-monkey.com/en/cpu-mediatek_mt8188j
TDP 5W, has A55 and A78 cores.
Some amount of the HiSilicon numbers look similar...
But, yeah, I guess if using these as data-points:
A55: ~ 5/8W, or ~ 0.625W (very crude)
Zen+: ~ 105/16W, ~ 6.56W
So, more like 10x here, but ...
Then, I guess it becomes a question of the relative performance
difference, say, between a 2.0 GHz A55 vs a 3.7 GHz Zen+ core...
Judging based on my cellphone (with A53 cores), and previously running
my emulator in Termux, there is a performance difference, but nowhere
near 10x.
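Spelling out that arithmetic (the ~10x speed factor is an assumption carried over from the surrounding discussion, not a measurement):

#include <stdio.h>

int main(void) {
    double a55_w = 5.0 / 8.0;     /* ~0.625 W per A55 core (very crude) */
    double zen_w = 105.0 / 16.0;  /* ~6.56 W per Zen+ core (very crude) */
    double ratio = zen_w / a55_w; /* ~10.5x more power per core         */
    printf("Zen+ draws %.1fx the power of an A55 core\n", ratio);
    /* Break-even: the Zen+ core must be more than ~10.5x faster,
     * or the A55 wins on perf/W. */
    printf("break-even speedup for perf/W: %.1fx\n", ratio);
    return 0;
}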
EricP <ThatWouldBeTelling@thevillage.com> writes:
I see the difference between CISC and RISC as in the micro-architecture,
But the microarchitecture is not an architectural criterion.
changing from a single sequential state machine view to multiple concurrent machines view, and from Clocks Per Instruction to Instructions Per Clock.
People changed from talking CPI to IPC when CPI started to go below 1.
That's mainly a distinction between single-issue and superscalar CPUs.
The monolithic microcoded machine, which covers 360, 370, PDP-11, VAX,
386, 486 and Pentium, is like a single threaded program which
operates sequentially on a single global set of state variables.
While there is some variation and fuzziness around the edges,
the heart of each of these are single sequential execution engines.
The same holds true for the MIPS R2000, the ARM1/2 (and probably many successors), probably early SPARCs and early HPPA CPUs, all of which
are considered as RISCs. Documents about them also talk about CPI.
And the 486 is already pipelined and can perform straight-line code at
1 CPI; the Pentium is superscalar, and can have up to 2 IPC (in
straight-line code).
Anton Ertl wrote:
And the 486 is already pipelined and can perform straight-line code at
1 CPI; the Pentium is superscalar, and can have up to 2 IPC (in
straight-line code).
Maybe relevant:
Performance optimizers writing asm regularly hit that 1 IPC on the 486
and (with more difficulty) 2 IPC on the Pentium.
When we did get there, the final performance was typically 3X compiled C code.
That 3X gap almost went away (maybe 1.2 to 1.5X for many algorithms) on
the PPro and later OoO CPUs.
Yes, organizing the interconnect in a hierarchical way can help reduce
the increase in interconnect cost, but I expect that there is a reason
why Intel did not do that for its server CPUs with P-Cores, by e.g.,
forming clusters of 4, and then continuing with the ring; instead,
they opted for a grid interconnect.
- anton
Terje Mathisen <terje.mathisen@tmsw.no> schrieb:
That 3X gap almost went away (maybe 1.2 to 1.5X for many algorithms) on
the PPro and later OoO CPUs.
And then came back with SIMD, I presume? :-)
BGB <cr88192@gmail.com> wrote:
Then, I guess it becomes a question of the relative performance
difference, say, between a 2.0 GHz A55 vs a 3.7 GHz Zen+ core...
Judging based on my cellphone (with A53 cores), and previously running
my emulator in Termux, there is a performance difference, but nowhere
near 10x.
Single core in Orange Pi Zero 3 (Allwinner H618 at about 1.2 GHz) benchmarks to 4453.45 DMIPS (Dhrystone MIPS). Single core in my desktop benchmarks to about 50000 DMIPS. Dhrystone contains string operations which benefit from SSE/AVX, but I would expect that on media loads the speed ratio would be even more favourable to the desktop core. On jumpy code the ratio is probably lower. The 1GHz RISC-V in the Milk-V Duo benchmarks to 1472 DMIPS.
It is hard to compare performance per watt: the Orange Pi Zero 3 has low power draw (of order 100 mA from a 5V USB charger with one core active) and it is not clear how it is distributed between the CPUs and the Ethernet interface. The RISC-V in the Milk-V Duo has even lower power draw. OTOH desktop cores normally seem to run at a fraction of rated power too (but I have no way to directly measure CPU power draw).
Of course, there is a catch: the desktop CPU is made on a more advanced process than the small processors. So it is hard to separate effects of the architecture from those of the process.
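The ratios implied by those DMIPS figures, as a quick check (the desktop's clock speed is not given above, so no per-GHz normalization is attempted for it):

#include <stdio.h>

int main(void) {
    double h618 = 4453.45, desktop = 50000.0, milkv = 1472.0;
    printf("desktop / H618 core   : %.1fx\n", desktop / h618);  /* ~11.2x */
    printf("desktop / Milk-V Duo  : %.1fx\n", desktop / milkv); /* ~34.0x */
    printf("H618 per GHz (1.2 GHz): %.0f DMIPS/GHz\n", h618 / 1.2);
    return 0;
}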
BGB <cr88192@gmail.com> writes:
The question is what could Intel or AMD do if the wind blew in that
direction.
What direction?
Likewise, x86 tends to need a lot of the "big CPU" stuff to perform
well, whereas something like a RISC style ISA can get better performance on a comparably smaller and cheaper core, and with a somewhat better
"performance per watt" metric.
Evidence?
No hard numbers, but experience here:
ASUS Eee (with an in-order Intel Atom) vs original RasPi (with 700MHz
ARM11 cores).
The RasPi basically runs circles around the Eee...
That's probably a software problem. Different Eee PC models have different CPUs: Celeron M @ 571MHz, 900MHz, or 630MHz, Atoms at 1330-1860MHz, or AMD C-50 or E350. All of them are quite a bit faster than the 700MHz ARM11. While I don't have a Raspi 1 result on https://www.complang.tuwien.ac.at/franz/latex-bench, I have a Raspi 3 result (and the Raspi 3 with its 1200MHz 2-wide core is quite a bit faster than the 700MHz ARM11), and also some CPUs similar to those used in the Eee PC; numbers are times in seconds:
- Raspberry Pi 3, Cortex-A53 1.2GHz, Raspbian 8: 5.46
- Celeron 800, PC133 SDRAM, RedHat 7.1 (expi2): 2.89
- Intel Atom 330, 1.6GHz, 512K L2, Zotac ION A, Knoppix 6.1 32bit: 2.323
- AMD E-450 1650MHz (Lenovo Thinkpad X121e), Ubuntu 11.10 64-bit: 1.216
So all of these CPUs clearly beat the one in the Raspi3, which I
expect to be clearly faster than the ARM11.
Now imagine running the software that made the Eee PC so slow with
dynamic translation on a Raspi1. How slow would that be?
- anton
On 9/20/2025 8:10 AM, Waldek Hebisch wrote:
Single core in Orange Pi Zero 3 (Allwinner H618 at about 1.2 GHz) benchmarks to 4453.45 DMIPS (Dhrystone MIPS). Single core in my desktop benchmarks to about 50000 DMIPS. Dhrystone contains string operations which benefit from SSE/AVX, but I would expect that on media loads the speed ratio would be even more favourable to the desktop core. On jumpy code the ratio is probably lower. The 1GHz RISC-V in the Milk-V Duo benchmarks to 1472 DMIPS.
It is hard to compare performance per watt: the Orange Pi Zero 3 has low power draw (of order 100 mA from a 5V USB charger with one core active) and it is not clear how it is distributed between the CPUs and the Ethernet interface. The RISC-V in the Milk-V Duo has even lower power draw. OTOH desktop cores normally seem to run at a fraction of rated power too (but I have no way to directly measure CPU power draw).
Of course, there is a catch: the desktop CPU is made on a more advanced process than the small processors. So it is hard to separate effects of the architecture from those of the process.
I had noted before that when I compiled Dhrystone on my Ryzen using MSVC, it is around 10M, or 5691 DMIPS, or around 1.53 DMIPS/MHz.
Curiously, the score is around 4x higher (around 40M) if Dhrystone is compiled with GCC (and around 2.5x with Clang).
For most other things, the performance scores seem closer.
I don't really trust GCC's and Clang's Dhrystone scores as they seem basically out-of-line with most other things I can measure.
Noting my BJX2 core seems to perform at 90K at 50MHz, or 1.02 DMIPS/MHz.
If assuming MSVC as the reference, this would imply (after normalizing for clock-speeds) that the Ryzen only gets around 50% more IPC.
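For reference, the DMIPS arithmetic behind those numbers (DMIPS is Dhrystones/second divided by the VAX 11/780's 1757):

#include <stdio.h>

int main(void) {
    double ryzen = 10.0e6 / 1757.0;   /* ~5691 DMIPS from 10M Dhrystones/s */
    printf("Ryzen (MSVC): %.0f DMIPS, %.2f DMIPS/MHz at 3700 MHz\n",
           ryzen, ryzen / 3700.0);    /* ~1.54 DMIPS/MHz */
    double bjx2 = 90.0e3 / 1757.0;    /* ~51 DMIPS from 90K Dhrystones/s */
    printf("BJX2: %.0f DMIPS, %.2f DMIPS/MHz at 50 MHz\n",
           bjx2, bjx2 / 50.0);        /* ~1.02 DMIPS/MHz */
    return 0;
}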
I noted when compiling my BJX2 emulator:
My Ryzen can emulate it at roughly 70MHz;
My cell-phone can manage it at roughly 30MHz.
This isn't *that* much larger than the difference in CPU clock speeds.
It is like, I seemingly live in a world where a lot of my own benchmark attempts tend to be largely correlated with the relative difference in clock speeds and similar.
Well, except for my old laptop (from 2003), and an ASUS Eee, which seem
to perform somewhat below that curve.
Though, in the case of the laptop, it may be a case of not getting all that much memory bandwidth from a 100MHz DDR1 SO-DIMM (a lot of the performance on some tests seems highly correlated with "memcpy()" speeds, and on that laptop, its memcpy speeds are kinda crap compared with its CPU clock-speed).
Well, and the Eee has, IIRC, an Intel Atom N270 down-clocked to 630 MHz.
Thing ran Quake and Quake 2 pretty OK, but not much else.
Though, if running my emulator on the laptop, it is more back on the curve of relative clock-speed, rather than on the relative-memory-bandwidth curve.
It seems both my neural-net stuff and most of my data compression stuff follow the memory bandwidth curve more (though, for the laptop, it seems the NN stuff can get a big boost here by using BFloat16 and getting a little clever with the repacking).
Well, and then my BJX2 core seems to punch slightly outside its weight
class (MHz wise) by having disproportionately high memory bandwidth.
...
On Tue, 16 Sep 2025 00:03:51 -0000 (UTC), John Savard <quadibloc@invalid.invalid> wrote:
On Mon, 15 Sep 2025 23:54:12 +0000, John Savard wrote:
Although it's called "inverse hyperthreading", this technique could be
combined with SMT - put the chunks into different threads on the same
core, rather than on different cores, and then one wouldn't need to add
extra connections between cores to make it work.
On further reflection, this may be equivalent to re-inventing out-of-order execution.
John Savard
Sounds more like dynamic micro-threading.
Over the years I've seen a handful of papers about compile-time micro-threading: that is, the compiler itself identifies separable dependency chains in serial code and rewrites them into deliberately threaded code to be executed simultaneously.
It is not easy to do under the best of circumstances and I've never
seen anything about doing it dynamically at run time.
To make a thread worth rehosting to another core, it would need to be (at least) many 10s of instructions in length. To figure this out dynamically at run time, it seems like you'd need the decode window to be 1000s of instructions and a LOT of "figure-it-out" circuitry.
YMMV, but to me it doesn't seem worth the effort.
But, AFAIK the ARM cores tend to use significantly less power when
emulating x86 than a typical Intel or AMD CPU, even if slower.
AFAIK datacenters still use a lot of x86 CPUs, even though most of them
run software that's just as easily available for ARM. And many
datacenters care more about "perf per watt" than raw performance.
So, I think the difference in power consumption does not favor ARM
nearly as significantly as you think.
On 22/09/2025 17:28, Stefan Monnier wrote:
But, AFAIK the ARM cores tend to use significantly less power when
emulating x86 than a typical Intel or AMD CPU, even if slower.
AFAIK datacenters still use a lot of x86 CPUs, even though most of them
run software that's just as easily available for ARM. And many
datacenters care more about "perf per watt" than raw performance.
So, I think the difference in power consumption does not favor ARM
nearly as significantly as you think.
Yes, I think that is correct.
A lot of it, as far as I have read, comes down to the type of
calculation you are doing. ARM cores can often be a lot more efficient
at general integer work and other common actions, as a result of a
better designed instruction set and register set. But once you are
using slightly more specific hardware features - vector processing,
floating point, acceleration for cryptography, etc., it's all much the
same. It takes roughly the same energy to do these things regardless of
the instruction set. Cache memory takes about the same power, as do PCI interfaces, memory interfaces, and everything else that takes up power
on a chip.
So when you have a relatively small device - such as what you need for a mobile phone - the instruction set and architecture makes a significant difference and ARM is a lot more power-efficient than x86. (If you go smaller - small embedded systems - x86 is totally non-existent because
an x86 microcontroller would be an order of magnitude bigger, more
expensive and power-consuming than an ARM core.) But when you have big processors for servers, and are using a significant fraction of the processor's computing power, the details of the core matter a lot less.
David Brown <david.brown@hesbynett.no> posted:
But when you have big processors for servers, and are using a significant fraction of the processor's computing power, the details of the core matter a lot less.
Big servers have rather equal power in the peripherals {DISKs, SSDs, and NICs} and DRAM {plus power supplies and cooling} as in the cores.
On Mon, 22 Sep 2025 19:36:05 GMT
MitchAlsup <user5857@newsgrouper.org.invalid> wrote:
Big servers have rather equal power in the peripherals {DISKs, SSDs, and NICs} and DRAM {plus power supplies and cooling} as in the cores.
Still, CPU power often matters.
On Mon, 22 Sep 2025 19:36:05 GMT
MitchAlsup <user5857@newsgrouper.org.invalid> wrote:
Big servers have rather equal power in the peripherals {DISKs, SSDs, and NICs} and DRAM {plus power supplies and cooling} as in the cores.
Still, CPU power often matters.
Spec.org has a special benchmark for that, called SPECpower_ssj2008.
It is old and Java-oriented, but I don't think it is useless.
Right now the benchmark clearly shows that AMD offerings dominate Intel's.
The best AMD score is 44168 ssj_ops/watt:
https://www.spec.org/power_ssj2008/results/res2025q2/power_ssj2008-20250407-01522.html
The best Intel scores are 25526 ssj_ops/watt (Sierra Forest) and 25374 ssj_ops/watt (Granite Rapids). Both lag behind ~100 AMD scores; they barely beat some old EPYC3 scores from 2021.
https://www.spec.org/power_ssj2008/results/res2025q3/power_ssj2008-20250811-01533.html
https://www.spec.org/power_ssj2008/results/res2025q1/power_ssj2008-20250310-01505.html
There are very few non-x86 submissions. The only one that I found in the last 5 years was using the Nvidia Grace CPU Superchip, based on Arm Neoverse V2 cores. It scored 13218 ssj_ops/watt:
https://www.spec.org/power_ssj2008/results/res2024q3/power_ssj2008-20240515-01413.html
Michael S <already5chosen@yahoo.com> posted:
A quick survey of the result database indicates only Oracle is sending results to the database.
Would be interesting to see the Apple/ARM comparisons.
On Wed, 24 Sep 2025 21:08:10 +0300, Michael S
<already5chosen@yahoo.com> wrote:
On Mon, 22 Sep 2025 19:36:05 GMT
MitchAlsup <user5857@newsgrouper.org.invalid> wrote:
Big servers have rather equal power in the peripherals {DISKs,
SSDs, and NICs} and DRAM {plus power supplies and cooling} than in
the cores.
Still, CPU power often matters.
Yes ... and no.
80+% of the power used by datacenters is devoted to cooling the
computers - not to running them.
At the same time, most of the heat
generated by typical systems is due to the RAM - not the CPU(s).
On Wed, 24 Sep 2025 15:56:37 -0400
George Neuner <gneuner2@comcast.net> wrote:
On Wed, 24 Sep 2025 21:08:10 +0300, Michael S
<already5chosen@yahoo.com> wrote:
On Mon, 22 Sep 2025 19:36:05 GMT
MitchAlsup <user5857@newsgrouper.org.invalid> wrote:
Big servers have rather equal power in the peripherals {DISKs,
SSDs, and NICs} and DRAM {plus power supplies and cooling} than in
the cores.
Still, CPU power often matters.
Yes ... and no.
80+% of the power used by datacenters is devoted to cooling the
computers - not to running them.
I think that it's less than 80%. But it does not matter and does not change anything - power spent on cooling is approximately proportional to power spent on running.
At the same time, most of the heat
generated by typical systems is due to the RAM - not the CPU(s).
Michael S <already5chosen@yahoo.com> writes:
On Wed, 24 Sep 2025 15:56:37 -0400
George Neuner <gneuner2@comcast.net> wrote:
80+% of the power used by datacenters is devoted to cooling the
computers - not to running them.
At the same time, most of the heat
generated by typical systems is due to the RAM - not the CPU(s).
A typical 16GB DIMM module will dissipate 3-5 watts, so 128GB will draw in the vicinity of 32 watts.
On Wed, 24 Sep 2025 21:04:03 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
<...>
Scott,
When you answer George Neuner's point, can you please reply to George Neuner's post rather than to mine?
Once I've read an article and restarted my newsreader, I don't have
access to read articles (at least not easily).
On Thu, 25 Sep 2025 14:23:04 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
Once I've read an article and restarted my newsreader, I don't have
access to read articles (at least not easily).
Doesn't it suck?
scott@slp53.sl.home (Scott Lurndal) writes:
Once I've read an article and restarted my newsreader, I don't have access to read articles (at least not easily).
I press the "Goto parent" button, and I think that already existed in xrn-9.03.
George Neuner wrote:
On Wed, 24 Sep 2025 21:08:10 +0300, Michael S
<already5chosen@yahoo.com> wrote:
On Mon, 22 Sep 2025 19:36:05 GMT
MitchAlsup <user5857@newsgrouper.org.invalid> wrote:
Big servers have rather equal power in the peripherals {DISKs, SSDs,
and NICs} and DRAM {plus power supplies and cooling} than in the
cores.
Still, CPU power often matters.
Yes ... and no.
80+% of the power used by datacenters is devoted to cooling the
computers - not to running them. At the same time, most of the heat generated by typical systems is due to the RAM - not the CPU(s).
I am quite sure that number is simply bogus: the power factors we were quoted when building the largest new datacenter in Norway 10+ years ago were more like 6-10% of total power for cooling, AFAIR.
A quick google:
https://engineering.fb.com/2011/04/14/core-infra/designing-a-very-efficient-data-center/
This one claims a 1.07 Power Usage Effectiveness.
Terje
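For concreteness: PUE is total facility power divided by IT equipment power, so the two claims are easy to compare. A minimal sketch:

#include <stdio.h>

int main(void) {
    double pue_fb = 1.07;   /* figure claimed in the FB article above */
    printf("PUE 1.07 => %.0f%% overhead beyond the IT load\n",
           (pue_fb - 1.0) * 100.0);
    /* If 80% of facility power went to cooling, IT equipment would get
     * at most 20% of it, implying PUE >= 1/0.2 = 5. */
    printf("\"80%% cooling\" => PUE >= %.1f\n", 1.0 / (1.0 - 0.80));
    return 0;
}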
Terje Mathisen <terje.mathisen@tmsw.no> posted:
I am quite sure that number is simply bogus: the power factors we were quoted when building the largest new datacenter in Norway 10+ years ago were more like 6-10% of total power for cooling, AFAIR.
A quick google:
https://engineering.fb.com/2011/04/14/core-infra/designing-a-very-efficient-data-center/
This one claims a 1.07 Power Usage Effectiveness.
All of this depends on where the "cold sink" is !! and how cold it is.
Pumping 6ºC sea water through water to air heat exchangers is a lot
more power efficient than using FREON and dumping the heat into 37ºC
air.
I still suspect that rectifying and delivering clean (low-noise) DC to the chassis takes a lot more energy than taking the resulting heat away.
MitchAlsup <user5857@newsgrouper.org.invalid> writes:
Terje Mathisen <terje.mathisen@tmsw.no> posted:
I am quite sure that number is simply bogus: the power factors we were quoted when building the largest new datacenter in Norway 10+ years ago were more like 6-10% of total power for cooling, AFAIR.
A quick google:
https://engineering.fb.com/2011/04/14/core-infra/designing-a-very-efficient-data-center/
This one claims a 1.07 Power Usage Effectiveness.
All of this depends on where the "cold sink" is !! and how cold it is.
Pumping 6ºC sea water through water to air heat exchangers is a lot
more power efficient than using FREON and dumping the heat into 37ºC
air.
I still suspect that rectifying and delivering clean (low-noise) DC to the chassis takes a lot more energy than taking the resulting heat away.
The FB article above describes how they reduced the
losses due to voltage changes as well as rectification.
Consider that there are losses converting from the
primary (e.g. 22kv) to 480v (2%), and additional losses
converting to 208v (3%) to the UPS. That's before any
rectification losses (6% to 12%). With various optimizations,
they reduced total losses to 7.5%, including rectification
and transformation from the primary voltage.
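Compounding those loss figures (losses multiply as efficiencies rather than simply adding):

#include <stdio.h>

int main(void) {
    double e1 = 1.0 - 0.02;   /* primary (22kV) -> 480V: 2% loss  */
    double e2 = 1.0 - 0.03;   /* 480V -> 208V: 3% loss            */
    double r_lo = 1.0 - 0.06; /* rectification, best case         */
    double r_hi = 1.0 - 0.12; /* rectification, worst case        */
    printf("conventional chain: %.1f%% to %.1f%% total loss\n",
           (1.0 - e1 * e2 * r_lo) * 100.0,   /* ~10.7% */
           (1.0 - e1 * e2 * r_hi) * 100.0);  /* ~16.3% */
    /* versus the 7.5% total reported above after optimization */
    return 0;
}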
Terje Mathisen wrote:
https://engineering.fb.com/2011/04/14/core-infra/designing-a-very-efficient-data-center/
This one claims a 1.07 Power Usage Effectiveness.
Terje
Brings up a thought: 960VDC is a semi-common voltage in industrial applications IIRC.
What if, opposed to each computer using its own power-supply (from 120
or 240 VAC), it uses a buck converter, say, 960VDC -> 12VDC.
Or, 2-stage, say:
960V -> 192V (with 960V to each rack).
192V -> 12V (with 192V to each server).
Where the second stage drop could use slightly cheaper transistors.
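For an ideal buck converter the duty cycle is just D = Vout/Vin, so the two proposed stages (a sketch of the idea above, ignoring losses) work out to:

#include <stdio.h>

int main(void) {
    double d1 = 192.0 / 960.0;  /* rack stage:   D = 0.200  */
    double d2 = 12.0 / 192.0;   /* server stage: D = 0.0625 */
    printf("960V -> 192V duty cycle: %.4f\n", d1);
    printf("192V ->  12V duty cycle: %.4f\n", d2);
    /* The second stage only ever sees 192V, so its switching FETs
     * need a far lower voltage rating than a single 960V->12V stage. */
    return 0;
}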
BGB <cr88192@gmail.com> schrieb:
Brings up a thought: 960VDC is a semi-common voltage in industrial applications IIRC.
I've never encountered that voltage. Direct current motors are
also mostly being phased out (pun intended) by asynchronous motors
with frequency inverters.
What if, opposed to each computer using its own power-supply (from
120 or 240 VAC), it uses a buck converter, say, 960VDC -> 12VDC.
That makes little sense. If you're going to distribute power,
distribute it as AC so you save one transformer.
Or, 2-stage, say:
960V -> 192V (with 960V to each rack).
192V -> 12V (with 192V to each server).
Where the second stage drop could use slightly cheaper transistors,
Transistors?
On 9/25/2025 9:03 PM, Scott Lurndal wrote:
MitchAlsup <user5857@newsgrouper.org.invalid> writes:
Consider that there are losses converting from the
primary (e.g. 22kv) to 480v (2%), and additional losses
converting to 208v (3%) to the UPS. That's before any
rectification losses (6% to 12%). With various optimizations,
they reduced total losses to 7.5%, including rectification
and transformation from the primary voltage.
Hmm...
Brings up a thought: 960VDC is a semi-common voltage in industrial applications IIRC.
What if, opposed to each computer using its own power-supply (from 120
or 240 VAC), it uses a buck converter, say, 960VDC -> 12VDC.
In those datacenters, the UPS distributes 48VDC to the rack components (computers, network switches, storage devices, etc).
On 9/26/25 7:28 AM, Scott Lurndal wrote:
In those datacenters, the UPS distributes 48VDC to the rack components
(computers, network switches, storage devices, etc).
Is it still -48V?
Historically, Bell System plant voltage, supplied by batteries.
On Fri, 26 Sep 2025 12:10:41 -0000 (UTC)
Thomas Koenig <tkoenig@netcologne.de> wrote:
BGB <cr88192@gmail.com> schrieb:
Brings up a thought: 960VDC is a semi-common voltage in industrial
applications IIRC.
I've never encountered that voltage. Direct current motors are
also mostly being phased out (pun intended) by asynchronous motors
with frequency inverters.
Are you sure?
Indeed, in industry, outside of transportation, asynchronous AC motors were the most widespread motors by far up to 25-30 years ago. But my impression is that today various types of electric motors (DC, esp. brushless, AC sync, AC async) enjoy similar popularity.
What if, opposed to each computer using its own power-supply (from
120 or 240 VAC), it uses a buck converter, say, 960VDC -> 12VDC.
That makes little sense. If you're going to distribute power,
distribute it as AC so you save one transformer.
I was never in a big datacenter, but I have heard that they prefer DC.
Or, 2-stage, say:
960V -> 192V (with 960V to each rack).
192V -> 12V (with 192V to each server).
Where the second stage drop could use slightly cheaper transistors,
Transistors?
Yes, transistors. DC-to-DC converters are made of FETs. FETs are transistors.
Higher voltage would be needed with DC vs AC, as DC is more subject to resistive losses. Though, more efficiency on the AC side would be
possible by increasing line frequency, say, using 240Hz rather than
60Hz; but don't want to push the frequency too high as then the wires
would start working like antennas and radiating the power into space.
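The voltage argument in I^2*R terms, with illustrative numbers (the load and feeder resistance are assumptions, not from the thread):

#include <stdio.h>

int main(void) {
    double load_w = 9600.0;  /* assumed: one rack's worth of load   */
    double r_ohm = 0.05;     /* assumed feeder resistance           */
    double volts[] = { 48.0, 192.0, 960.0 };
    for (int i = 0; i < 3; i++) {
        double amps = load_w / volts[i];
        printf("%5.0f V: %6.1f A, I^2*R loss = %6.1f W\n",
               volts[i], amps, amps * amps * r_ohm);
    }
    /* Same power, 20x the voltage => 1/20 the current => 1/400 the loss. */
    return 0;
}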
BGB <cr88192@gmail.com> posted: --------------------snip----------------------------------
Higher voltage would be needed with DC vs AC, as DC is more subject to
resistive losses. Though, more efficiency on the AC side would be
possible by increasing line frequency, say, using 240Hz rather than
60Hz; but don't want to push the frequency too high as then the wires
would start working like antennas and radiating the power into space.
The military routinely uses 400 Hz to reduce the weight of transformers.
Michael S <already5chosen@yahoo.com> schrieb:
Are you sure?
Indeed, in industry, outside of transportation, asynchronous AC motors were the most widespread motors by far up to 25-30 years ago. But my impression is that today various types of electric motors (DC, esp. brushless, AC sync, AC async) enjoy similar popularity.
I can only speak from personal experience about the industry I work in (chemical). People used to use DC motors when they needed variable motor speed, but have now switched to asynchronous (AC) motors with frequency inverters, which usually have a 1:10 ratio of speed. There are no DC networks in chemical plants.
If you have high-voltage DC system (like in an electric car) then
using DC motors makes more sense.
Or, 2-stage, say:
960V -> 192V (with 960V to each rack).
192V -> 12V (with 192V to each server).
Where the second stage drop could use slightly cheaper transistors,
Transistors?
Yes, transistors. DC-to-DC convertors are made of FETs. FETs are
transistors.
I'm more used to thyristors in that role.
David Brown <david.brown@hesbynett.no> schrieb:
And whenever you have a frequency inverter, the input to the frequency inverter is first rectified to DC, then new AC waveforms are generated using PWM-controlled semiconductor switches.
If you have three phases (required for high-power industrial motors)
I believe people use the three phases directly to convert from three
phases to three phases.
The resulting waveforms are not pretty, and contribute to the difficulty of measuring power input.
On 9/26/2025 9:28 AM, Scott Lurndal wrote:
In those datacenters, the UPS distributes 48VDC to the rack components
(computers, network switches, storage devices, etc).
48VDC also makes sense, as it is common in other contexts. I sorta
figured a higher voltage would have been used to reduce the wire
thickness needed.
I did realize after posting that, if the main power rails were organized as a grid, the whole building could probably be done with 1.25" aluminum bars.
Could power the grid of bars at each of the 4 corners, with maybe some central diagonal bars (which cross and intersect with the central part of the grid, and an additional square around the perimeter). Each corner supply could drive 512A, and with this layout, no bar or segment should exceed 128A.
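A rough sanity check on that layout (the bar length and a square 1.25" cross-section are assumptions for illustration):

#include <stdio.h>

int main(void) {
    double rho = 2.82e-8;            /* resistivity of aluminum, ohm*m  */
    double side = 0.03175;           /* 1.25 inch square bar, in meters */
    double len = 10.0;               /* assumed segment length, meters  */
    double r = rho * len / (side * side);  /* ~0.28 milliohm            */
    double i = 128.0;                /* stated worst-case segment load  */
    printf("R = %.3f mohm, drop = %.3f V, loss = %.1f W at %.0f A\n",
           r * 1e3, i * r, i * i * r, i);
    return 0;
}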
BGB <cr88192@gmail.com> posted: --------------------snip----------------------------------
Higher voltage would be needed with DC vs AC, as DC is more subject to
resistive losses. Though, more efficiency on the AC side would be
possible by increasing line frequency, say, using 240Hz rather than
60Hz; but don't want to push the frequency too high as then the wires
would start working like antennas and radiating the power into space.
The military routinely uses 400 Hz to reduce the weight of transformers.
Something like 400 or 480Hz should also work.
On 27/09/2025 10:14, Thomas Koenig wrote:
I can only speak from personal experience about the industry I work in (chemical). People used to use DC motors when they needed variable motor speed, but have now switched to asynchronous (AC) motors with frequency inverters, which usually have a 1:10 ratio of speed. There are no DC networks in chemical plants.
If you have a high-voltage DC system (like in an electric car) then using DC motors makes more sense.
These are not "DC motors" in the traditional sense, like brushed DC motors. The motors you use in a car have (roughly) sine wave drive signals, generally 3 phases (but sometimes more). Even motors referred
to as "Brushless DC motors" - "BLDC" - use AC inputs, though the
waveforms are more trapezoidal than sinusoidal.
And whenever you have a frequency inverter, the input to the frequency inverter is first rectified to DC, then new AC waveforms are generated using PWM-controlled semiconductor switches.
Really, the distinction between "DC motor" and "AC motor" is mostly meaningless, other than for the smallest and cheapest (or oldest)
brushed DC motors.
Bigger brushed DC motors, as you say, used to be used in situations
where you needed speed control and the alternative was AC motors driven
at fixed or geared speeds directly from the 50 Hz or 60 Hz supplies. And
as you say, these were replaced by AC motors driven from frequency inverters. Asynchronous motors (or "induction motors") were popular at first, but are not common choices now for most use-cases because
synchronous AC motors give better control and efficiencies. (There are,
of course, many factors to consider - and sometimes asynchronous motors
are still the best choice.)
Or, 2-stage, say:
960V -> 192V (with 960V to each rack).
192V -> 12V (with 192V to each server).
Where the second stage drop could use slightly cheaper transistors,
Transistors?
Yes, transistors. DC-to-DC convertors are made of FETs. FETs are
transistors.
I'm more used to thyristors in that role.
It's better, perhaps, to refer to "semiconductor switches" as a more
general term.
Thyristors are mostly outdated, and are only used now in very high power situations. Even then, they are not your granddad's thyristors, but
have more control for switching off as well as switching on - perhaps
even using light for the switching rather than electrical signals.
(Those are particularly nice for megavolt DC lines.)
You can happily switch multiple MW of power with a single IGBT module for a couple of thousand dollars. Or you can use SiC FETs for up to a
few hundred kW but with much faster PWM frequencies and thus better
control.
On 9/27/2025 6:52 AM, David Brown wrote:
These are not "DC motors" in the traditional sense, like brushed DC
motors. The motors you use in a car have (roughly) sine wave drive
signals, generally 3 phases (but sometimes more). Even motors
referred to as "Brushless DC motors" - "BLDC" - use AC inputs, though
the waveforms are more trapezoidal than sinusoidal.
Yes.
Typically one needs to generate a 3-phase waveform at the speed they
want to spin the motor at.
I had noted some things from experience when writing code to spin motors (typically on an MSP430, mostly experimentally):
Sine waves give low noise, but less power;
Square waves are noisier and only work well at low RPM,
but have higher torque.
Sawtooth waves seem to work well at higher RPMs.
Well, sorta, more like sawtooth with alternating sign.
Square-Root Sine: intermediate between sine and square.
Gives torque more like a square wave, but quieter.
Trapezoid waves are similar to this, but noisier.
Seemingly, one "better" option might be to mutate the wave-shape between Square-Root-Sine and sawtooth depending on the target RPM, also dropping the wave amplitude at lower RPMs (at low RPMs motors pull more amperage and thus generate a lot of heat otherwise).
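A minimal sketch of generating those three-phase drive waveforms (sine here, with the "square-root sine" shaping applied per sample; the PWM output stage itself is omitted):

#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* "square-root sine": keep the sign, take the square root of the
 * magnitude; fatter waveform, more torque, quieter than a square. */
static double sqrt_sine(double s) {
    return (s < 0 ? -1.0 : 1.0) * sqrt(fabs(s));
}

int main(void) {
    double freq = 50.0, fs = 1000.0;  /* electrical Hz, sample rate */
    for (int n = 0; n < 20; n++) {    /* one 20 ms electrical cycle */
        double t = n / fs;
        double a = sin(2.0 * M_PI * freq * t);
        double b = sin(2.0 * M_PI * freq * t - 2.0 * M_PI / 3.0);
        double c = sin(2.0 * M_PI * freq * t + 2.0 * M_PI / 3.0);
        printf("%6.3f %6.3f %6.3f | shaped A: %6.3f\n",
               a, b, c, sqrt_sine(a));
    }
    return 0;
}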
And whenever you have a frequency inverter, the input to the frequency inverter is first rectified to DC, then new AC waveforms are generated using PWM-controlled semiconductor switches.
Yes:
Dual-phase: may use a "Dual H-Bridge" configuration,
where the H-bridge is built using power transistors;
Three-phase: "Triple Half-Bridge",
which needs fewer transistors than dual-phase.
It is slightly easier to build these drivers with BJTs or Darlington transistors, but these tend to handle less power and generate more heat, though they are more fault-tolerant.
MOSFETs can handle more power, but one needs to be very careful not to exceed the gate-source voltage limit, otherwise they are insta-dead (and will behave as if they are shorted).