On Sun, 26 Apr 2026 12:53:42 -0000 (UTC), Borax Man wrote:
I've done a conversion in the past, and this is what I experienced. I
got a bit of a speed up, but nowhere near what you'd get with a new EXT4
system. I think it is because a lot of the metadata structures are
still as they were with EXT3.
That's what I suspected as well. The conversion is not really complete.
I was very surprised at how fast the conversion took place. I expected
the process to require a long time while the metadata was restructured,
but it was finished in just a minute or so.
The websites and other documents that provide the conversion information should at least mention that the conversion is only partial.
I had some disk partitions that were formatted as EXT3.
These partitions were converted to EXT4 according to the
instructions found here (and elsewhere):
<https://linuxconfig.org/how-to-convert-an-ext3-filesystem-partition-to-ext4>
Doing a filesystem check with e2fsck on these converted
partitions requires considerably more time than with
partitions that were directly formatted as EXT4 (i.e. no
conversion), even though the partitions are roughly the
same size.
Is this normal behavior or has the EXT3 --> EXT4 conversion
been done incorrectly?
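For reference, the conversion those instructions describe boils down to a couple of e2fsprogs commands. Here is a sketch you can try safely against a scratch image file instead of a real partition (the /tmp path and 64 MB size are just for illustration):

```shell
# Create a scratch EXT3 filesystem in a plain file (no root needed).
dd if=/dev/zero of=/tmp/ext3.img bs=1M count=64 status=none
mkfs.ext3 -q -F /tmp/ext3.img

# The usual EXT3 -> EXT4 conversion: flip on the main EXT4 features.
tune2fs -O extents,uninit_bg,dir_index /tmp/ext3.img

# After enabling uninit_bg the filesystem must be checked; e2fsck
# exits 1 when it has fixed things up, which is expected here.
e2fsck -fp /tmp/ext3.img || [ $? -le 2 ]

# Confirm the features took.
tune2fs -l /tmp/ext3.img | grep 'Filesystem features'
```

Note that tune2fs only flips feature flags: files written before the conversion keep their old indirect-block maps, which is consistent with the partial-conversion observations in this thread.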
Leroy H <lh@somewhere.net> wrote:
[...]
While there have been several messages now indicating that a converted
FS may very well take longer to check, there is one more aspect that
has not been covered.
Are the converted EXT3 to EXT4 filesystems, and the native EXT4
filesystems that check faster, present on the same disk, or on
different disks?
If they are on physically different disks, then you may also be
noticing the fact that older drives were slower performing than newer
drives (a drive which was originally formatted as EXT3 is /likely/ an
older, and thereby possibly slower, disk).
A slower disk will check more slowly than a faster disk will, even if
the file systems on top of each were identical.
On 2026-04-27, Rich <rich@example.invalid> wrote:
[...]
I've had a filesystem converted to EXT4 from EXT3, and then later backed
up, formatted and restored. So I've seen both a conversion and a new
format, on the same disk, with the same filetree. fsck took about half
as long after the conversion, and a fraction of the original time after
the fresh format.
fsck is mostly held back by seek time, rather than throughput.
Borax Man <boraxman@geidiprime.invalid> wrote:
[...]
Ok, so the 'converted' ext4 does, indeed, take much longer to fsck.
fsck is mostly held back by seek time, rather than throughput.
Yes, which is contained within the broad terms "slower" and "faster".
Newer disks are not only "faster" in data read/write but are also often "faster" in seek time (although seek time speed has not increased at
the same rate as bulk data read/write speed has increased).
And a "faster seek time" will make a "seek-constrained" task end sooner
than the same task performed with a "slower seek time".
On 4/29/26 11:57, Rich wrote:
[...]
Converted partitions DO take longer, that's my
personal experience. So, either put up with it
or re-do yer system entirely in EXT4 or some
other 'new' file system. Time, alas, does not
stand still.
IF yer checks aren't like Every Day - more like
every week or month - then the extra wait is not
really a biggie.
The time it would take to backup-format-reinstall would, I think,
greatly exceed the time saved waiting for that fsck to complete the few
times a year you might need to fsck it.
I find it amusing that there are people who would still do that.
In article <slrn10v6j7b.acg.rotflol2@geidiprime.bvh>,
Borax Man <rotflol2@hotmail.com> wrote:
[...]
I get what you are saying, and it is no doubt correct, but I also
understand the feeling that I think we've all experienced at several
points in our time with computers. The feeling that something just
isn't right; that it is not quite kosher and will probably develop
other problems as time goes by. And that starting over -
reformatting/whatever - is a good thing to do, to get things back to
pristine.
And it does look, based on this thread, that a disk partition
(filesystem) converted from EXT3 to EXT4 (without being reformatted,
etc) is not quite a real/kosher EXT4 filesystem. And thus that re-doing
it from scratch might be a good idea.
On 30/04/2026 15:32, Kenny McCormack wrote:
[...]
It seems it's less 'not kosher' and more 'not defragged'.
The great thing about taking years of accumulated data that won't ever
change in future and putting it on a clean partition is that it all goes
on in one low-seek-time block.
Of course, with SSDs it's almost irrelevant.
On Thu, 30 Apr 2026 12:36:59 -0000 (UTC), Borax Man wrote:
I find it amusing that there are people who would still do that.
fsck, you mean? It’s not something I do routinely, only if/when I have
to.
On 4/30/26 20:15, Lawrence D’Oliveiro wrote:
fsck, you mean? It’s not something I do routinely, only if/when I have
to.
Well ... can't really hurt ......
Mystery problems CAN arise almost invisibly.
On 2026-04-30, Kenny McCormack wrote:
[...]
I'd guess there's probably a lot of room for variability between the
placement of things and the default options in a brand-new ext4
filesystem, and what merely counts as a valid ext filesystem (even if
suboptimal). Does the conversion try to get it close to, e.g., the
current defaults for mkfs, or does it merely upgrade as little as
possible to make it ext4-compatible?
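One way to check this empirically is to compare the superblock feature lists of a converted image against a freshly formatted one. A rough sketch (paths are illustrative, and mkfs.ext4's defaults come from /etc/mke2fs.conf, so the exact feature set varies by distro):

```shell
# Build one converted and one natively formatted EXT4 image.
dd if=/dev/zero of=/tmp/conv.img bs=1M count=64 status=none
dd if=/dev/zero of=/tmp/nat.img  bs=1M count=64 status=none

# Converted: mkfs.ext3 followed by the usual tune2fs recipe + fsck.
mkfs.ext3 -q -F /tmp/conv.img
tune2fs -O extents,uninit_bg,dir_index /tmp/conv.img
e2fsck -fp /tmp/conv.img || [ $? -le 2 ]

# Native: straight mkfs.ext4 with whatever your distro defaults to.
mkfs.ext4 -q -F /tmp/nat.img

# Compare the feature lines; they will generally differ.
for img in /tmp/conv.img /tmp/nat.img; do
    echo "== $img"
    tune2fs -l "$img" | grep 'Filesystem features'
done
```

In a typical run the native filesystem carries features (e.g. flex_bg, and on newer e2fsprogs 64bit and metadata_csum) that the conversion recipe never enables, which suggests the answer is "as little as possible".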
Come to think of it, it's quite late in the game to be still running
EXT3! EXT4 is already nearly two decades old. Personally, I'd
recommend BTRFS or XFS, especially for large storage/archival.
On 2026-05-03 14:29, Borax Man wrote:
[...]
I don't trust btrfs, and xfs poses problems when used for root; for data
it is perfect.
I have a raid 6 with 8 active disks, LUKS encrypted and using btrfs with
compression. It has developed an error that is impossible to correct (I
don't know how; I got advice about what to do, but it took days and
aborted. Eventually I will redo it with XFS instead).
Fun fact, Oracle includes btrfs with its “Unbreakable” Linux.
You know, this is the company that owns ZFS, which some people swear
by. But it won’t offer that with its own Linux distro. Vote of
confidence in your own technology, much?
And I've never heard of issues with XFS on root. I use XFS on Linux all
the time. It is in-kernel and has been around for a long time. SGI's
stuff, I think? As long as your kernel supports it, you should be fine.
On 2026-05-03, Carlos E.R. <robin_listas@es.invalid> wrote:
[...]
I don't trust btrfs, and xfs poses problems when used for root; for data
it is perfect.
What issues are there with XFS for root?
I have had an issue with BTRFS in the past: an error which couldn't be
corrected. The error didn't have any impact except failing to read one
extended attribute from a file, which didn't matter that much, but it
did mean that one subvolume couldn't be sent properly (a fact I became
aware of too late). The filesystem seemed to work fine regardless.
What made me lose a little trust is that I backed up and then ran fsck,
and fsck made a complete dog's breakfast of the filesystem. This might
have been in 2015, thereabouts.
But that was over 10 years ago, and it's been fine ever since. As I
mentioned before, the checksumming picked up issues that would have
otherwise gone unnoticed and led to lost data, so I do trust it, but
still, I'd only use it if I really needed the features. If not, XFS.
Lawrence D’Oliveiro <ldo@nz.invalid> writes:
You know, this is the company that owns ZFS, which some people
swear by. But it won’t offer that with its own Linux distro. Vote
of confidence in your own technology, much?
ZFS is the default in Oracle Solaris and is used in illumos. I do
like it, but unfortunately on Linux it's tripped up by licensing or
whatever, so you need to use special magic to create your own
installer while doing back-flips through flaming hoops if you want
it on root (which is the entire purpose of having it). No one is
dealing with that.
On 2026-05-04 14:41, Borax Man wrote:
What issues are there with XFS for root?
I don't remember well, and I feel lazy about writing it up, so I
asked chatgpt.
Why XFS makes it worse
XFS does not support embedding GRUB core files inside the filesystem
itself (unlike something like ext4 with certain GRUB setups). So
GRUB has fewer fallback options.
On Mon, 4 May 2026 12:41:35 -0000 (UTC), Borax Man wrote:
I have had an issue with BTRFS in the past, an error which couldn't be
corrected.
My first exposure to btrfs was SuSE 13.2. GRUB was not amused. I'm not
sure if it is even now. The Fedora box is btrfs except for /boot which is ext4.
On Mon, 4 May 2026 20:04:04 +0200, Carlos E.R. wrote:
[...]
*waves robot arms* Danger, danger!
Why XFS makes it worse
XFS does not support embedding GRUB core files inside the filesystem
itself (unlike something like ext4 with certain GRUB setups). So
GRUB has fewer fallback options.
That statement doesn’t make any sense. There is no “embedding GRUB
core files inside the filesystem itself”. Those “GRUB core files” are
not “embedded” in the filesystem in any special way; they are just
files like any other files, copied as necessary by the bootloader
installation system to put the necessary stuff into the actual boot
partition. Which has its own layout, independent of, and nothing to do
with, any filesystem volume format.
On 5/4/26 21:22, rbowman wrote:
[...]
In THEORY the 'btrfs' is very good. Alas I keep hearing
accounts of un-fixable errors that trash everything.
Fedora now defaults to btrfs and it's a bit of a
pain to convince it otherwise.
There's not a damned thing wrong with EXT4 and at this
point it's ultra-reliable and very fixable.
With the sheer volume and frequent access to data these
days, as opposed to the old PC days, hyper-reliable is
absolutely necessary. Even yer humble laptop may read/write
a petabyte a year depending. It'd better do it RIGHT.
IMHO, use EXT4, or maybe XFS if you're moving Big Data.
ZFS is theoretically good, has some cool features, but
it's very obese.
On 2026-05-05 02:05, Lawrence D’Oliveiro wrote:
[...]
No. GRUB 2 writes code directly to the start of the partition. This is
known.
It is not files when you look at the partition; it is written outside
of the filesystem, but it corresponds to a file or files in the /boot
directory, from which the install process reads the files and writes
them directly to the partition start.
On Tue, 5 May 2026 12:07:31 +0200, Carlos E.R. wrote:
[...]
There is no “start of the partition”. There is an *entire* “boot partition” devoted to this purpose. This is entirely separate from any regular filesystem partition, whether ext4, XFS or whatever.
I am talking of the case when grub is installed in the "/"
partition, instead of in the MBR.
On 2026-05-05 06:52, c186282 wrote:
[...]
The issue is that support thinks you have btrfs, and they tell you to
undo to a previous point in the filesystem: you can very easily undo a
failed update.
Then you tell them "no, I am on ext4" and they grumble.
(Thinking of openSUSE.)
On 5/4/26 20:05, Lawrence D’Oliveiro wrote:
[...]
Most installs DO still let you install GRUB inside
a random partition rather than the MBR. It works, but
there can be complications. For almost all users, the
MBR is the best place for GRUB.
As for XFS ... it is optimized for storing/moving rather
large files and does that very well. If you have a zillion
tiny files though it's not really your best choice.
There's no all-around panacea when it comes to file systems.
You have to match them with your real-world needs. Even
EXT4 ... do you need 2k/4k/8k blocks ? Again, "it depends".
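On the block-size point: for ext4 the choice is made at format time with mke2fs's -b flag (1024, 2048 or 4096 bytes on typical x86 systems; 8k blocks would need a larger page size). A quick sketch on scratch image files (paths and sizes illustrative):

```shell
# Two scratch images, formatted with different block sizes.
dd if=/dev/zero of=/tmp/small.img bs=1M count=16 status=none
dd if=/dev/zero of=/tmp/large.img bs=1M count=16 status=none

# -b fixes the filesystem block size at mkfs time; it cannot be
# changed later without reformatting.
mkfs.ext4 -q -F -b 1024 /tmp/small.img   # less slack for many tiny files
mkfs.ext4 -q -F -b 4096 /tmp/large.img   # fewer metadata lookups for big files

tune2fs -l /tmp/small.img | grep 'Block size'   # Block size: 1024
tune2fs -l /tmp/large.img | grep 'Block size'   # Block size: 4096
```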
On 5/5/26 06:11, Carlos E.R. wrote:
On 2026-05-05 06:52, c186282 wrote:
On 5/4/26 21:22, rbowman wrote:
On Mon, 4 May 2026 12:41:35 -0000 (UTC), Borax Man wrote:
I have had an issue with BTRFS in the past, an error which couldn't be
corrected.
My first exposure to btrfs was SuSE 13.2. GRUB was not amused. I'm not
sure if it is even now. The Fedora box is btrfs except for /boot,
which is ext4.
In THEORY the 'btrfs' is very good. Alas I keep hearing
accounts of un-fixable errors that trash everything.
Fedora now defaults to btrfs and it's a bit of a
pain to convince it otherwise.
There's not a damned thing wrong with EXT4 and at this
point it's ultra-reliable and very fixable.
The issue is that support thinks you have btrfs, and they tell you to
undo to a previous point in the filesystem. You can undo very easily a
failed update.
"Very easily" ??? :-)
Then you tell them "no, I am on ext4" and they grumble.
(thinking of openSUSE).
I've tried 'em all ... and am gonna stick to EXT4
for a very long time. Does absolutely everything
I want and need quick and easy and reliably. BTRFS
is not a panacea, not at all.
On 2026-05-05 23:56, Lawrence D’Oliveiro wrote:
On Tue, 5 May 2026 12:07:31 +0200, Carlos E.R. wrote:
On 2026-05-05 02:05, Lawrence D’Oliveiro wrote:
On Mon, 4 May 2026 20:04:04 +0200, Carlos E.R. wrote:
Why XFS makes it worse
XFS does not support embedding GRUB core files inside the
filesystem itself (unlike something like ext4 with certain GRUB
setups). So GRUB has fewer fallback options.
That statement doesn’t make any sense. There is no “embedding GRUB
core files inside the filesystem itself”. Those “GRUB core files”
are not “embedded” in the filesystem in any special way; they are
just files like any other files, copied as necessary by the
bootloader installation system to put the necessary stuff into the
actual boot partition, which has its own layout independent of
any filesystem volume format, and nothing to do with any filesystem
volume format.
No. Grub 2 writes code directly to the start of the partition. This
is known.
There is no “start of the partition”. There is an *entire* “boot
partition” devoted to this purpose. This is entirely separate from any
regular filesystem partition, whether ext4, XFS or whatever.
Man, focus. I am talking of the case when grub is installed in the "/"
partition, instead of in the MBR. Sector zero has grub part 1. Then
grub 1.5 goes in the initial sectors.
If you do not understand this, I suggest you study. I am out.
"Carlos E.R." <robin_listas@es.invalid> writes:
On 2026-05-05 23:56, Lawrence D’Oliveiro wrote:
On Tue, 5 May 2026 12:07:31 +0200, Carlos E.R. wrote:
On 2026-05-05 02:05, Lawrence D’Oliveiro wrote:
On Mon, 4 May 2026 20:04:04 +0200, Carlos E.R. wrote:
Why XFS makes it worse
XFS does not support embedding GRUB core files inside the
filesystem itself (unlike something like ext4 with certain GRUB
setups). So GRUB has fewer fallback options.
“Embedding” is a confusing word to use here, since Grub uses it to refer to storing diskboot.img+core.img in the embedding area, i.e. after the
MBR but before the first partition.
That statement doesn’t make any sense. There is no “embedding GRUB
core files inside the filesystem itself”. Those “GRUB core files”
are not “embedded” in the filesystem in any special way; they are
just files like any other files, copied as necessary by the
bootloader installation system to put the necessary stuff into the
actual boot partition, which has its own layout independent of
any filesystem volume format, and nothing to do with any filesystem
volume format.
No. Grub 2 writes code directly to the start of the partition. This
is known.
There is no “start of the partition”. There is an *entire* “boot
partition” devoted to this purpose. This is entirely separate from any
regular filesystem partition, whether ext4, XFS or whatever.
Carlos is correct that (in certain configurations) some code is written
to the start of the boot partition. However:
Man, focus. I am talking of the case when grub is installed in the "/"
partition, instead of in the MBR. Sector zero has grub part 1. Then
grub 1.5 goes in the initial sectors.
If you do not understand this, I suggest you study. I am out.
There is no such thing as a ‘stage 1.5’ in Grub 2. As best I can tell from studying the documentation, the only code written into the boot partition (if any) is the single-sector diskboot.img. In such a configuration, core.img is an ordinary file, as Lawrence says.
AFAICT the way it works on an MBR system is:
1) boot.img is always in the MBR, with the disk address of diskboot.img
copied into it. It contains enough code to load diskboot.img and run
it.
2) diskboot.img can be anywhere, and contains the list of disk addresses
for core.img plus enough code to load core.img (using that list) and
run it.
3) core.img contains all the code for reading filesystems natively,
interacting with the user, loading a kernel etc. No more blocklists
are necessary at this point.
This is realized in two documented configurations:
1) boot.img in the MBR, diskboot.img+core.img in the embedding area,
   i.e. after the MBR but before the first partition.
2) boot.img in the MBR, diskboot.img in the first sector of a partition
containing a real filesystem, and core.img as an ordinary file in
that filesystem.
This is not particularly well documented, but it depends on the
filesystem reserving the first sector for use by boot loaders. This
appears to correspond to the reserved_first_sector flag in the
implementation, which is zero for Grub’s XFS implementation,
consistent with XFS not being usable in this mode of operation.
See
https://www.gnu.org/software/grub/manual/grub/grub.html#BIOS-installation
for the documentation.
Fortunately none of this nonsense is necessary on modern platforms.
On Wed, 6 May 2026 02:41:56 +0200, Carlos E.R. wrote:
I am talking of the case when grub is installed in the "/"
partition, instead of in the MBR.
There is no “/” partition. That’s the name of a filesystem directory.
The bootloader has to be installed in a place where the BIOS or UEFI
can find it. BIOS/UEFI code knows nothing of ext4 or XFS or any other Linux-specific filesystem.
On Wed, 6 May 2026 00:46:23 -0400, c186282 wrote:
On 5/5/26 06:11, Carlos E.R. wrote:
On 2026-05-05 06:52, c186282 wrote:
"Very easily" ???
On 5/4/26 21:22, rbowman wrote:
On Mon, 4 May 2026 12:41:35 -0000 (UTC), Borax Man wrote:
I have had an issue with BTRFS in the past, an error which couldn't
be corrected.
My first exposure to btrfs was SuSE 13.2. GRUB was not amused. I'm
not sure if it is even now. The Fedora box is btrfs except for /boot,
which is ext4.
In THEORY the 'btrfs' is very good. Alas I keep hearing accounts
of un-fixable errors that trash everything.
Fedora now defaults to btrfs and it's a bit of a pain to convince
it otherwise.
There's not a damned thing wrong with EXT4 and at this point it's
ultra-reliable and very fixable.
The issue is that support thinks you have btrfs, and they tell you to
undo to a previous point in the filesystem. You can undo very easily a
failed update.
TimeShift on Ubuntu family distros with btrfs is relatively easy to set up and to roll back. However it is tied in with Ubuntu's way of setting up
btrfs and is painful on Fedora or other distros using btrfs. I haven't bothered trying to set it up. Knock on wood I haven't had a box that
bricked after an update and I don't fuck around with the OS. Many problems
are self-inflicted.
I've tried 'em all ... and am gonna stick to EXT4
for a very long time. Does absolutely everything
I want and need quick and easy and reliably. BTRFS
is not a panacea, not at all.
Me too.
On 06/05/2026 09:51, Carlos E.R. wrote:
I've tried 'em all ... and am gonna stick to EXT4
for a very long time. Does absolutely everything
I want and need quick and easy and reliably. BTRFS
is not a panacea, not at all.
Me too.
These days I am more of a user than an admin or programmer, and I stick
with the herd.
EXT4
TimeShift and what you get from btrfs/zfs ... they're really just a
backup on the box you need to back up and theoretically double yer
drive usage.
AI Overview
^^^^^^
GRUB Stage 1.5 is an intermediate bootloader stage in GRUB Legacy (0.9x)
On Wed, 6 May 2026 11:05:14 -0400, c186282 wrote:
TimeShift and what you get from btrfs/zfs ... they're really just a
backup on the box you need to back up and theoretically double yer
drive usage.
Timeshift in fact uses rsync in the absence of btrfs. However when using btrfs it requires Ubuntu style subvolumes. I haven't bothered to figure
out if there is a workaround on the Fedora btrfs box.
My LM laptop is ext4 so Timeshift is using rsync to write to /run/
timeshift. Supposedly btrfs takes less space and is faster with the
drawback of not being able to specify an external drive.
Supposedly on Fedora Btrfs Assistant and Snapper can be set up but as I
said I haven't had enough problems in the last 25+ years to bother.
On Wed, 6 May 2026 13:36:37 +0200, Carlos E.R. wrote:
AI Overview
^^^^^^
GRUB Stage 1.5 is an intermediate bootloader stage in GRUB Legacy (0.9x)
Note that information is out of date.
On 2026-05-06, Lawrence D’Oliveiro wrote:
On Wed, 6 May 2026 13:36:37 +0200, Carlos E.R. wrote:
AI Overview
^^^^^^
GRUB Stage 1.5 is an intermediate bootloader stage in GRUB Legacy (0.9x)
Note that information is out of date.
It's not out of date in that it applies to so-called "GRUB Legacy". Or
has that changed in newer releases of "grub1"?
Tried to install OSuse on VBox today. It defaults to btrfs. However,
unlike Fedora, it's fairly easy to change that to EXT4 without
screwing up all the other partitions/settings.
Alas, it WOULDN'T RUN. From the error messages it seems WAYLAND was
the problem, couldn't start some of its stuff. Things would just get
SO far and then hang forever.
The 'new and improved' installer only has Wayland versions of the
three major desktops.
Nuno Silva <nunojsilva@invalid.invalid> wrote:
On 2026-05-06, Lawrence D’Oliveiro wrote:
On Wed, 6 May 2026 13:36:37 +0200, Carlos E.R. wrote:
AI Overview
^^^^^^
GRUB Stage 1.5 is an intermediate bootloader stage in GRUB Legacy (0.9x)
Note that information is out of date.
It's not out of date in that it applies to so-called "GRUB Legacy". Or
has that changed in newer releases of "grub1"?
The last release of grub 1 was 21 years ago, no one is still using it,
and giving that information as a reply to a question about grub in 2026
is at least misleading.
On 2026-05-07, Marc Haber wrote:
Nuno Silva <nunojsilva@invalid.invalid> wrote:
On 2026-05-06, Lawrence D’Oliveiro wrote:
Carlos E.R. wrote:
AI Overview
^^^^^^
GRUB Stage 1.5 is an intermediate bootloader stage in GRUB Legacy (0.9x)
Note that information is out of date.
It's not out of date in that it applies to so-called "GRUB Legacy". Or
has that changed in newer releases of "grub1"?
The last release of grub 1 was 21 years ago, no one is still using it,
and giving that information as a reply to a question about grub in 2026
is at least misleading.
Well, I'd say calling GRUB 2 "GRUB" also kind of counts as misleading, especially given I don't think it was much better several years ago.
On 5/4/26 21:22, rbowman wrote:
On Mon, 4 May 2026 12:41:35 -0000 (UTC), Borax Man wrote:
I have had an issue with BTRFS in the past, an error which couldn't be
corrected.
My first exposure to btrfs was SuSE 13.2. GRUB was not amused. I'm not
sure if it is even now. The Fedora box is btrfs except for /boot which is
ext4.
In THEORY the 'btrfs' is very good. Alas I keep hearing
accounts of un-fixable errors that trash everything.
Fedora now defaults to btrfs and it's a bit of a
pain to convince it otherwise.
There's not a damned thing wrong with EXT4 and at this
point it's ultra-reliable and very fixable.
With the sheer volume and frequent access to data these
days, as opposed to the old PC days, hyper-reliable is
absolutely necessary. Even yer humble laptop may read/write
a petabyte a year depending. It'd better do it RIGHT.
IMHO, use EXT4, or maybe XFS if you're moving Big Data.
ZFS is theoretically good, has some cool features, but
it's very obese.
On 2026-05-04 14:41, Borax Man wrote:
On 2026-05-03, Carlos E.R. <robin_listas@es.invalid> wrote:
On 2026-05-03 14:29, Borax Man wrote:
Come to think of it, its quite late in the game to be still running
EXT3! EXT4 is already nearly two decades old. Personally, I'd
recommend BTRFS or XFS, especially for large storage/archival.
I don't trust btrfs, and xfs poses problems when used for root; for data
it is perfect.
I have a raid 6 with 8 active disks, LUKS encrypted and using btrfs with
compression. It has developed an error, impossible to correct (I don't
know how; got advice about what to do, but it took days and it aborted.
Eventually I will redo it with XFS instead).
What issues are there with XFS for root?
I don't remember well, and I feel lazy about writing it up, so I asked chatgpt.
The core problem
On BIOS systems (not UEFI), GRUB traditionally embeds part of its code (“core.img”) in the post-MBR gap—the space between the MBR and the first
partition.
This works fine if that gap exists (common with older partition layouts).
But modern partitioning tools or certain layouts don’t leave enough
space there.
Now add XFS:
Why XFS makes it worse
XFS does not support embedding GRUB core files inside the filesystem
itself (unlike something like ext4 with certain GRUB setups). So GRUB
has fewer fallback options.
That leads to this situation:
If there’s no post-MBR gap available
And your /boot is on XFS
→ GRUB has nowhere safe to put core.img
Resulting error
You’ll often see something like:
embedding is not possible, but this is required for cross-disk install
The usual fixes
1. Use a BIOS boot partition (recommended for GPT + BIOS)
Create a small (~1–2 MB) partition with type:
bios_grub (GPT flag)
GRUB will store its core image there safely.
2. Use a different filesystem for /boot
Instead of XFS:
ext4 is the most common safe choice
3. Switch to UEFI (if possible)
With UEFI:
GRUB lives in the EFI System Partition (FAT32)
No need for embedding tricks at all
I have had an issue with BTRFS in the past, an error which couldn't be
corrected. The error didn't have any impact except failing to read one
extended attribute from a file, which didn't matter that much, but it
did mean that one subvolume couldn't be sent properly (a fact I became
aware of too late). The filesystem seemed to work fine regardless.
What made me lose a little trust is that I backed up, then ran FSCK, and
FSCK made a complete dog's breakfast of the filesystem. This might have
been in 2015, thereabouts.
Well, I hit "something" less than a year ago.
But that was over 10 years ago, and it's been fine ever since. As I
mentioned before, the checksumming picked up issues that would have
otherwise gone unnoticed and led to lost data, so I do trust it, but
still, I'd only use it if I really needed the features. If not, XFS.
Silent undetectable data loss is very real.
On 2026-05-05, c186282 <c186282@nnada.net> wrote:
On 5/4/26 21:22, rbowman wrote:
On Mon, 4 May 2026 12:41:35 -0000 (UTC), Borax Man wrote:
I have had an issue with BTRFS in the past, an error which couldn't be
corrected.
My first exposure to btrfs was SuSE 13.2. GRUB was not amused. I'm not
sure if it is even now. The Fedora box is btrfs except for /boot which is
ext4.
In THEORY the 'btrfs' is very good. Alas I keep hearing
accounts of un-fixable errors that trash everything.
Fedora now defaults to btrfs and it's a bit of a
pain to convince it otherwise.
There's not a damned thing wrong with EXT4 and at this
point it's ultra-reliable and very fixable.
With the sheer volume and frequent access to data these
days, as opposed to the old PC days, hyper-reliable is
absolutely necessary. Even yer humble laptop may read/write
a petabyte a year depending. It'd better do it RIGHT.
IMHO, use EXT4, or maybe XFS if you're moving Big Data.
ZFS is theoretically good, has some cool features, but
it's very obese.
The thing is, silent undetectable data loss is very real. If I'm moving
precious data, I need to know it moved correctly. Unfortunately, EXT4
and XFS, while they ARE reliable, cannot prevent the issues I had. BTRFS
did.
You can add a checksum or some data integrity to files on EXT4 or BTRFS, using mtree or a hash, or a tool I wrote "checkit" to add a CRC as an extended attribute, so you can detect if the data has changed. I would
never store important data, like personal photos without at least this
CRC. At least with "checkit", I can copy the data, and verify at the
other end without having to do a byte-by-byte comparison against the
original. The CRC gets copied with the file. You can archive with it
as well, so when you extract years later, you can still verify.
Checkit works quite well, but BTRFS automates this process.
On 07/05/2026 14:14, Borax Man wrote:
Silent undetectable data loss is very real.
As are invisible pink elephants, fairies at the bottom of the garden,
and unicorns generating renewable energy.
If it is undetectable, the notion of its factuality is ipso facto
metaphysics, and a matter of faith only.
On 2026-05-07, The Natural Philosopher <tnp@invalid.invalid> wrote:
On 07/05/2026 14:14, Borax Man wrote:
Silent undetectable data loss is very real.
As are invisible pink elephants, fairies at the bottom of the garden,
and unicorns generating renewable energy.
If it is undetectable the notion of its factuality is ipso facto
metaphysics and a matter of faith only
"As far as we know, our system has never had an undetected error."
On Wed, 6 May 2026 22:04:29 -0400, c186282 wrote:
Tried to install OSuse on VBox today. It defaults to btrfs. However,
unlike Fedora, it's fairly easy to change that to EXT4 without
screwing up all the other partitions/settings.
Alas, it WOULDN'T RUN. From the error messages it seems WAYLAND was
the problem, couldn't start some of its stuff. Things would just get
SO far and then hang forever.
The 'new and improved' installer only has Wayland versions of the
three major desktops.
I have the leap-16.0-offline-installer iso that I used to create the VM
on the Fedora box. It does use btrfs; however, it is KWin X11 and
everything functions normally.
Note that the Fedora box is also KDE but it uses KWin Wayland.
I'm assuming OSuse is openSUSE. Also I'm using kvm/QEMU.
On 2026-05-07 06:41, Marc Haber wrote:
Nuno Silva <nunojsilva@invalid.invalid> wrote:
On 2026-05-06, Lawrence D'Oliveiro wrote:
On Wed, 6 May 2026 13:36:37 +0200, Carlos E.R. wrote:
AI Overview
^^^^^^
GRUB Stage 1.5 is an intermediate bootloader stage in GRUB Legacy (0.9x)
Note that information is out of date.
It's not out of date in that it applies to so-called "GRUB Legacy". Or
has that changed in newer releases of "grub1"?
The last release of grub 1 was 21 years ago, no one is still using it,
and giving that information as a reply to a question about grub in 2026
is at least misleading.
I know people using lilo today, which is even older :-)
However, grub 2 has to store parts of itself at the same location
that stage 1.5 was. Just has a different name.
The CRC gets copied with the file. You can archive with it as well,
so when you extract years later, you can still verify.
I was using the big fat offline installer.
DOES offer KVM, but I didn't want KVM for some of the reasons we've
discussed recently.
Anyway, the boot-up errors did indicate that Wayland was the
underlying issue. I'm sure it will work ok in a bare metal install,
but VBox didn't like it.
On Thu, 7 May 2026 17:37:07 -0400, c186282 wrote:
I was using the big fat offline installer.
That's the one. The Tumbleweed installer has just enough to phone home for the rest of the gang.
DOES offer KVM, but I didn't want KVM for some of the reasons we've
discussed recently.
One more time -- I am not running a VM in openSUSE; I'm running openSUSE
in a Fedora VM.
Anyway, the boot-up errors did indicate that Wayland was the
underlying issue. I'm sure it will work ok in a bare metal install,
but VBox didn't like it.
What can I tell you? the Leap16 VM is using X11. The host Fedora box is
using Wayland. The KWin manager in the Leap VM has nothing to do with Wayland.
If you want to fight VBox, have at it.
Carlos E.R. <robin_listas@es.invalid> wrote:
On 2026-05-07 06:41, Marc Haber wrote:
Nuno Silva <nunojsilva@invalid.invalid> wrote:
On 2026-05-06, Lawrence D'Oliveiro wrote:
On Wed, 6 May 2026 13:36:37 +0200, Carlos E.R. wrote:
AI Overview
^^^^^^
GRUB Stage 1.5 is an intermediate bootloader stage in GRUB Legacy (0.9x)
Note that information is out of date.
It's not out of date in that it applies to so-called "GRUB Legacy". Or
has that changed in newer releases of "grub1"?
The last release of grub 1 was 21 years ago, no one is still using it,
and giving that information as a reply to a question about grub in 2026
is at least misleading.
I know people using lilo today, which is even older :-)
I just booted the PC I'm posting from with "Grub Legacy", running
an old Linux. But for some reason I couldn't make it boot newer
Linux kernels so I switched to using Syslinux/Extlinux with them
because the design of Grub 2 is insane.
On Thu, 7 May 2026 10:58:54 +0200, Carlos E.R. wrote:
However, grub 2 has to store parts of itself at the same location
that stage 1.5 was. Just has a different name.
There only needs to be enough of it in the boot partition to be able
to read the OS partition where the config and the rest of it are kept.
On 2026-05-08 02:12, Lawrence D’Oliveiro wrote:
On Thu, 7 May 2026 10:58:54 +0200, Carlos E.R. wrote:
However, grub 2 has to store parts of itself at the same location
that stage 1.5 was. Just has a different name.
There only needs to be enough of it in the boot partition to be
able to read the OS partition where the config and the rest of it
are kept.
Which is not possible. The MBR code is about two hundred bytes.
On Fri, 8 May 2026 10:57:23 +0200, Carlos E.R. wrote:
On 2026-05-08 02:12, Lawrence D’Oliveiro wrote:
On Thu, 7 May 2026 10:58:54 +0200, Carlos E.R. wrote:
However, grub 2 has to store parts of itself at the same location
that stage 1.5 was. Just has a different name.
There only needs to be enough of it in the boot partition to be
able to read the OS partition where the config and the rest of it
are kept.
Which is not possible. The MBR code is about two hundred bytes.
Notice I said “boot partition”, not “MBR”.
On Thu, 7 May 2026 13:14:43 -0000 (UTC), Borax Man wrote:
The CRC gets copied with the file. You can archive with it as well,
so when you extract years later, you can still verify.
What do you do if it fails to verify? This only helps if you have
at least two archive copies.
Rather than doing a single checksum for the whole file, do it for file
sections, because typically the corruption will only occur in one
part, not across the whole file. And nowadays we use cryptographic
hashes, which are a bit more thorough as an error check than old-style
CRCs.
To do it really robustly, you start to get into error-correcting codes
and erasure codes. I think also Merkle trees could get involved at
some point ...
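The per-section idea can be sketched with coreutils alone; the chunk
size and file names here are arbitrary choices for the example:

```shell
# One SHA-256 per 256 KiB chunk: later corruption condemns a single
# chunk rather than the whole file.
head -c 1M /dev/urandom > demo.bin
split -b 262144 -d demo.bin chunk.     # writes chunk.00 .. chunk.03
sha256sum chunk.?? > demo.bin.sums
sha256sum -c demo.bin.sums             # each chunk reports OK or FAILED
```

With a second archive copy on hand, a failed chunk can then be replaced
from the sibling copy instead of re-fetching the entire file.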
On 2026-05-07 15:14, Borax Man wrote:
On 2026-05-05, c186282 <c186282@nnada.net> wrote:
On 5/4/26 21:22, rbowman wrote:
On Mon, 4 May 2026 12:41:35 -0000 (UTC), Borax Man wrote:
I have had an issue with BTRFS in the past, an error which couldn't be
corrected.
My first exposure to btrfs was SuSE 13.2. GRUB was not amused. I'm not
sure if it is even now. The Fedora box is btrfs except for /boot which is
ext4.
In THEORY the 'btrfs' is very good. Alas I keep hearing
accounts of un-fixable errors that trash everything.
Fedora now defaults to btrfs and it's a bit of a
pain to convince it otherwise.
There's not a damned thing wrong with EXT4 and at this
point it's ultra-reliable and very fixable.
With the sheer volume and frequent access to data these
days, as opposed to the old PC days, hyper-reliable is
absolutely necessary. Even yer humble laptop may read/write
a petabyte a year depending. It'd better do it RIGHT.
IMHO, use EXT4, or maybe XFS if you're moving Big Data.
ZFS is theoretically good, has some cool features, but
it's very obese.
The thing is, silent undetectable data loss is very real. If I'm moving
precious data, I need to know it moved correctly. Unfortunately, EXT4
and XFS while they ARE reliable, cannot prevent the issues I had. BTRFS
did.
You can add a checksum or some data integrity to files on EXT4 or BTRFS,
using mtree or a hash, or a tool I wrote "checkit" to add a CRC as an
extended attribute, so you can detect if the data has changed. I would
never store important data, like personal photos without at least this
CRC. At least with "checkit", I can copy the data, and verify at the
other end without having to do a byte-byte comparison against the
original. The CRC gets copied with the file. You can archive with it
as well, so when you extract years later, you can still verify.
Checkit works quite well, but BTRFS automates this process.
I understand that new XFS can store data checksums. I don't know details.
On 07/05/2026 14:14, Borax Man wrote:
Silent undetectable data loss is very real.
As are invisible pink elephants, fairies at the bottom of the garden,
and unicorns generating renewable energy.
If it is undetectable the notion of its factuality is ipso facto
metaphysics and a matter of faith only
I'm not sure if you are being facetious, sarcastic, or are not
understanding, but it is real. You may very well have bad data and not
know.
On 2026-05-07, The Natural Philosopher <tnp@invalid.invalid> wrote:
On 07/05/2026 14:14, Borax Man wrote:
Silent undetectable data loss is very real.
As are invisible pink elephants, fairies at the bottom of the garden,
and unicorns generating renewable energy.
If it is undetectable the notion of its factuality is ipso facto
metaphysics and a matter of faith only
I'm not sure if you are being facetious, sarcastic or are not
understanding but it is real.
You may very well have bad data and not know. Your hard drive could
be returning junk, and neither the drive, nor the filesystem, picks
it up. I only found a faulty hard drive because I decided to run a
checksum against an ISO I downloaded, to verify it downloaded
correctly. The checksum failed, so I downloaded it again, and it
failed again. Then it passed, but then failed when checked a second
time.
If you want a robust system which can repair without a backup, you
can always use parchive, which can store parity data allowing you to
check AND repair up to a certain amount of damage.
However it doesn't work as well for large directory trees, and if
you do decide to 'parchive' a whole folder, and you need to delete
or update a file, you have to compute it all again.
On 2026-05-08, Lawrence D’Oliveiro wrote:
On Fri, 8 May 2026 10:57:23 +0200, Carlos E.R. wrote:
On 2026-05-08 02:12, Lawrence D’Oliveiro wrote:
On Thu, 7 May 2026 10:58:54 +0200, Carlos E.R. wrote:
However, grub 2 has to store parts of itself at the same location
that stage 1.5 was. Just has a different name.
There only needs to be enough of it in the boot partition to be
able to read the OS partition where the config and the rest of it
are kept.
Which is not possible. The MBR code is about two hundred bytes.
Notice I said “boot partition”, not “MBR”.
So you're saying GRUB 2 cannot be installed to an MBR?
On 2026-05-08 11:26, Lawrence D’Oliveiro wrote:
On Fri, 8 May 2026 10:57:23 +0200, Carlos E.R. wrote:
On 2026-05-08 02:12, Lawrence D’Oliveiro wrote:
On Thu, 7 May 2026 10:58:54 +0200, Carlos E.R. wrote:
However, grub 2 has to store parts of itself at the same
location that stage 1.5 was. Just has a different name.
There only needs to be enough of it in the boot partition to be
able to read the OS partition where the config and the rest of it
are kept.
Which is not possible. The MBR code is about two hundred bytes.
Notice I said “boot partition”, not “MBR”.
Irrelevant.
On 2026-05-08, Computer Nerd Kev wrote:
Carlos E.R. <robin_listas@es.invalid> wrote:
On 2026-05-07 06:41, Marc Haber wrote:
Nuno Silva <nunojsilva@invalid.invalid> wrote:
On 2026-05-06, Lawrence D'Oliveiro wrote:
On Wed, 6 May 2026 13:36:37 +0200, Carlos E.R. wrote:
AI Overview
^^^^^^
GRUB Stage 1.5 is an intermediate bootloader stage in GRUB Legacy (0.9x)
Note that information is out of date.
It's not out of date in that it applies to so-called "GRUB Legacy". Or
has that changed in newer releases of "grub1"?
The last release of grub 1 was 21 years ago, noone is still using it,
and giving that information as reply to a question about grub in 2026
is at least misleading.
I know people using lilo today, which is even older :-)
I just booted the PC I'm posting from with "Grub Legacy", running
an old Linux. But for some reason I couldn't make it boot newer
Linux kernels so I switched to using Syslinux/Extlinux with them
because the design of Grub 2 is insane.
That even sounds weird, Grub "1" shouldn't need much stuff to boot a
kernel, you point to it, you provide the command line. So I wonder what
was failing. Something else? Something related to a possible initrd?
Syslinux/Extlinux looked just as
good, so I didn't want to turn it into a day-long project wading
through mailing list archives and looking at source code to figure
out what changed.
I may be jumping in here a little late, but I use syslinux/extlinux
to boot all my GNU/Linux systems. Nothing could be simpler. It's
a great system.
I am a bit concerned, however, about the developmental status. The
latest git commit was in 2019. I do hope that this project won't
become another piece of abandonware.
Nuno Silva <nunojsilva@invalid.invalid> writes:
On 2026-05-08, Lawrence D’Oliveiro wrote:
On Fri, 8 May 2026 10:57:23 +0200, Carlos E.R. wrote:
On 2026-05-08 02:12, Lawrence D’Oliveiro wrote:
On Thu, 7 May 2026 10:58:54 +0200, Carlos E.R. wrote:
However, grub 2 has to store parts of itself at the same location
that stage 1.5 was. Just has a different name.
There only needs to be enough of it in the boot partition to be
able to read the OS partition where the config and the rest of it
are kept.
Which is not possible. The MBR code is about two hundred bytes.
Notice I said “boot partition”, not “MBR”.
So you're saying GRUB 2 cannot be installed to an MBR?
He didn’t mention the MBR at all, so I don’t know why either of you is inserting it into the discussion.
He’s obviously talking about the later
parts of the boot process, specifically he’s referring to the
possibility of core.img being in one filesystem (“the boot partition”) and prefix referencing another filesystem (“the OS partition”). My
system works like that (with core.img in grubx64.efi in the ESP and accompanied by just enough configuration to point it at the full
grub.cfg in my root filesystem).
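The split described here can be sketched as the small stub configuration that lives alongside core.img, whose only job is to find and hand over to the full grub.cfg on the OS partition. The UUID and paths below are placeholders, not taken from anyone's actual system:

```
# Stub grub.cfg next to core.img / grubx64.efi (hypothetical UUID):
search --no-floppy --fs-uuid --set=root 1234-ABCD
set prefix=($root)/boot/grub
configfile $prefix/grub.cfg
```

Everything beyond this stub (modules, themes, the generated menu) is read from the filesystem that `prefix` points at.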
So you're saying GRUB 2 cannot be installed to an MBR?
On 2026-05-08, Richard Kettlewell wrote:
Nuno Silva <nunojsilva@invalid.invalid> writes:
On 2026-05-08, Lawrence D’Oliveiro wrote:
On Fri, 8 May 2026 10:57:23 +0200, Carlos E.R. wrote:
On 2026-05-08 02:12, Lawrence D’Oliveiro wrote:
On Thu, 7 May 2026 10:58:54 +0200, Carlos E.R. wrote:
However, grub 2 has to store parts of itself at the same location that stage 1.5 was. Just has a different name.
There only needs to be enough of it in the boot partition to be
able to read the OS partition where the config and the rest of it
are kept.
Which is not possible. The MBR code is about two hundred bytes.
Notice I said “boot partition”, not “MBR”.
So you're saying GRUB 2 cannot be installed to an MBR?
He didn’t mention the MBR at all, so I don’t know why either of you is
inserting it into the discussion.
If you want to boot on a PC BIOS system, I think the usual way is to install the bootloader to the MBR.
He’s obviously talking about the later parts of the boot process,
specifically he’s referring to the
But to get to these parts, you need to go through the initial
parts.
Carlos said it needed more storage than the MBR can provide to
load the code needed to read from the boot partition; Lawrence said
it's not necessary, so that gets me wondering how the limitation
Carlos mentioned is handled.
possibility of core.img being in one filesystem (“the boot partition”) and prefix referencing another filesystem (“the OS partition”). My
system works like that (with core.img in grubx64.efi in the ESP and
accompanied by just enough configuration to point it at the full
grub.cfg in my root filesystem).
.efi and ESP suggests you're talking about a different bootstrapping
process than the one involving the MBR, and EFI firmwares hopefully are
more capable in this regard?
On Fri, 8 May 2026 12:38:44 -0000 (UTC), Borax Man wrote:
If you want a robust system which can repair without a backup, you
can always use parchive, which can store parity data allowing you to
check AND repair up to a certain amount of damage.
parchive is the sort of thing I’m thinking of. Because it doesn’t
matter how many redundant copies of a file you have, if an error
happens in the same block on all of them, you’re stuffed. Using the
PAR2 format (erasure code) gets around this.
However it doesn't work as well for large directory trees, and if
you do decide to 'parchive' a whole folder, and you need to delete
or update a file, you have to compute it all again.
Surely you don’t update your backup snapshots in-place, you create new ones. Once a backup snapshot is made, you shouldn’t go around fiddling
with it. Either keep it or, if it’s been obsoleted by a newer one,
throw it away.
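The create/corrupt/repair cycle being discussed can be sketched with par2cmdline. This is a minimal demo on a throwaway file (paths and the 10% redundancy figure are examples, not from the thread), and it skips gracefully if par2 isn't installed:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
# A stand-in for a file worth protecting.
head -c 65536 /dev/urandom > photo.dat
if command -v par2 >/dev/null 2>&1; then
    # Create recovery data: -r10 = enough parity to repair ~10% damage.
    par2 create -r10 -q photo.dat.par2 photo.dat
    # Flip a few bytes to simulate silent corruption...
    printf 'XXXX' | dd of=photo.dat bs=1 seek=100 conv=notrunc 2>/dev/null
    # ...then detect and repair it from the parity files.
    par2 repair -q photo.dat.par2
else
    echo "par2 not installed; the commands above are the sketch"
fi
touch done
```

Because PAR2 is an erasure code over blocks of the file, it repairs damage wherever it lands, which is what makes it stronger than keeping plain redundant copies.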
Borax Man <boraxman@geidiprime.invalid> wrote:
On 2026-05-07, The Natural Philosopher <tnp@invalid.invalid> wrote:
On 07/05/2026 14:14, Borax Man wrote:
Silent undetectable data loss is very real.
As are invisible pink elephants, fairies at the bottom of the garden,
and unicorns generating renewable energy.
If it is undetectable, the notion of its factuality is ipso facto
metaphysics and a matter of faith only.
I'm not sure if you are being facetious, sarcastic or are not
understanding but it is real.
TNP was being facetious in pointing out that, by definition, an "undetectable data loss" is "**undetectable**". I.e., you can not
possibly know it happened if it is "undetectable".
What you meant, and the term used for this purpose, is "silent data corruption". The byte stream returned by the disk differs from the
byte stream that was saved, but the hardware is silent about the fact
that it is different.
You may very well have bad data and not know. Your hard drive could
be returning junk, and neither the drive, nor the filesystem, picks
it up. I only found a faulty hard drive because I decided to run a
checksum against an ISO I downloaded, to verify it downloaded
correctly. The checksum failed, so I downloaded it again, and it
failed again. Then it passed, but then failed when checked a second
time.
I.e., "silent data corruption".
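The checksum test described above is easy to script rather than run by hand. A minimal sketch (the file here is a throwaway stand-in for a downloaded ISO): read the same file twice and compare sums; on healthy hardware they must match, and a mismatch between two reads is exactly the "drive returning junk" symptom.

```shell
set -e
f=$(mktemp)
head -c 131072 /dev/urandom > "$f"
# Checksum the same data twice; any difference means a read returned junk.
sum1=$(sha256sum "$f" | awk '{print $1}')
sum2=$(sha256sum "$f" | awk '{print $1}')
if [ "$sum1" = "$sum2" ]; then
    echo "checksums match"
else
    echo "checksums differ: possible silent corruption" >&2
fi
```

Note that both reads may be served from the page cache; to truly exercise the disk you would need to drop caches or read enough data to evict it, as a large ISO naturally does.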
On 2026-05-08, Lawrence D’Oliveiro <ldo@nz.invalid> wrote:
On Fri, 8 May 2026 12:38:44 -0000 (UTC), Borax Man wrote:
If you want a robust system which can repair without a backup, you
can always use parchive, which can store parity data allowing you to
check AND repair up to a certain amount of damage.
parchive is the sort of thing I’m thinking of. Because it doesn’t
matter how many redundant copies of a file you have, if an error
happens in the same block on all of them, you’re stuffed. Using the
PAR2 format (erasure code) gets around this.
However it doesn't work as well for large directory trees, and if
you do decide to 'parchive' a whole folder, and you need to delete
or update a file, you have to compute it all again.
Surely you don’t update your backup snapshots in-place, you create new
ones. Once a backup snapshot is made, you shouldn’t go around fiddling
with it. Either keep it or, if it’s been obsoleted by a newer one,
throw it away.
I use DAR for backups, and it has an option to run parchive over the
backup file. Backups are incremental.
I also use parchive for my home photo collection. Once I fill a folder,
I run parchive and make it all read-only.
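A hedged sketch of the DAR-plus-parchive setup described here: dar's -E hook runs a command after each slice is written, and per dar(1) the %p/%b/%n/%e macros expand to the slice's path, basename, number and extension (macro names have varied between versions, so check your man page). All paths are examples, and the block skips gracefully when dar or par2 is absent:

```shell
work=$(mktemp -d)
mkdir -p "$work/src" "$work/bak"
echo "sample data" > "$work/src/file.txt"
if command -v dar >/dev/null 2>&1 && command -v par2 >/dev/null 2>&1; then
    # Create the archive; par2 runs over each slice as it is completed,
    # so every backup slice gets its own recovery data.
    dar -Q -c "$work/bak/demo" -R "$work/src" \
        -E "par2 create -r10 -q %p/%b.%n.%e.par2 %p/%b.%n.%e"
else
    echo "dar/par2 not installed; the commands above are the sketch"
fi
touch "$work/done"
```

The dar package also ships a ready-made par2 recipe (dar_par.dcf) that wires this up for you, which may be simpler than writing the hook by hand.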
Borax Man <boraxman@geidiprime.invalid> wrote:
On 2026-05-08, Lawrence D’Oliveiro <ldo@nz.invalid> wrote:
On Fri, 8 May 2026 12:38:44 -0000 (UTC), Borax Man wrote:
If you want a robust system which can repair without a backup, you
can always use parchive, which can store parity data allowing you to
check AND repair up to a certain amount of damage.
parchive is the sort of thing I’m thinking of. Because it doesn’t
matter how many redundant copies of a file you have, if an error
happens in the same block on all of them, you’re stuffed. Using the
PAR2 format (erasure code) gets around this.
However it doesn't work as well for large directory trees, and if
you do decide to 'parchive' a whole folder, and you need to delete
or update a file, you have to compute it all again.
Surely you don’t update your backup snapshots in-place, you create new
ones. Once a backup snapshot is made, you shouldn’t go around fiddling
with it. Either keep it or, if it’s been obsoleted by a newer one,
throw it away.
I use DAR for backups, and it has an option to run parchive over the
backup file. Backups are incremental.
I also use parchive for my home photo collection. Once I fill a folder,
I run parchive and make it all read-only.
Which works very well for "bit rot" situations (the "silent data
corruption" events).
But does not help at all if the files that have been parchived and the parchives themselves all reside on one disk, and that disk itself fails
to the point it is inaccessible. If you can't read the disk you can't
read the files, even to recover with parchive.
You'll still want actual backups on separate "media" to cover for the
"whole disk failed" case.