Ranges longer than size_t.max
Era Scarecrow
rtcvb32 at yahoo.com
Mon Dec 31 11:20:58 PST 2012
On Monday, 31 December 2012 at 18:55:04 UTC, Stewart Gordon wrote:
> Sector? Do you mean cluster?
Probably. They mean the same thing to me.
> I would have thought it used the whole 32 bits for cluster
> number, with magic values for "unused", "end of chain" and
> "bad". In each case you don't need to point to the next
> cluster as well. Unless it supports something like marking an
> in-use cluster as bad but leaving until another day the task of
> moving the data from it into a good cluster.
Wouldn't have been practical. Historically the FAT table was a flat
layout of all the clusters, where each entry pointed to the next
cluster in the chain, and the largest values denoted EOF (there were
8 or so reserved codes, I don't remember exactly).
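A rough sketch of that scheme in D (the names and the exact EOF
threshold are assumptions for illustration, loosely modeled on
FAT16-sized entries):

    // Walk a FAT-style cluster chain: each table entry holds the number
    // of the next cluster, and values at the top of the range mean
    // "end of chain". No cycle detection; just a sketch.
    ushort[] readChain(const ushort[] fat, ushort firstCluster)
    {
        enum ushort eofThreshold = 0xFFF8;  // assumed FAT16-style EOF markers
        ushort[] chain;
        for (ushort c = firstCluster; c < eofThreshold; c = fat[c])
            chain ~= c;
        return chain;
    }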
Doing the math, if you were to lay it all out via FAT32 using the
same scheme with full 32-bit entries, you'd end up with 2^34 bytes
for a single table. FAT by default had 2 tables (one being a backup),
meaning 2^35 bytes would be needed, and that is just overhead
(assuming you needed it all). Back then you had 8Gig drives at the
most and space was scarce, so 28 bits makes more sense (1-2Gigs of
tables vs 16-32Gigs). Obviously the table(s) wouldn't actually be
bigger if the drive didn't support above X clusters.
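As a back-of-the-envelope check of those numbers (assuming 4 bytes
per table entry, which is how FAT32 entries are stored):

    import std.stdio;

    void main()
    {
        enum ulong entryBytes = 4;  // each FAT32 table entry is 32 bits (4 bytes)
        ulong full32 = (1UL << 32) * entryBytes;  // table covering all 2^32 cluster numbers
        ulong only28 = (1UL << 28) * entryBytes;  // table covering 2^28 cluster numbers
        writefln("32-bit clusters: %s GiB per table, %s GiB for both",
                 full32 >> 30, (full32 * 2) >> 30);   // 16 and 32
        writefln("28-bit clusters: %s GiB per table, %s GiB for both",
                 only28 >> 30, (only28 * 2) >> 30);   // 1 and 2
    }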
> But looking through
>> http://en.wikipedia.org/wiki/Fat32
>
> there are indeed a handful of magic values for things like this.
>
> Anyway, that states that it uses 28 bits for the cluster
> number, but nothing about what the other 4 bits are for.
>
> But a possibility I can see is that these 4 bits were reserved
> for bit flags that may be added in the future.
I don't remember where I read it, but I was certain they were used
for overhead/flags. Of course, I was also reading up on Long File
Name (LFN) entries and directory structures at the time, so it's
likely lost somewhere in those texts.
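For illustration, a minimal sketch of how a raw FAT32 entry is
usually interpreted, assuming the top 4 bits are simply reserved and
masked off (the names here are made up):

    enum uint clusterMask = 0x0FFF_FFFF;  // low 28 bits hold the cluster number
    enum uint eocMin      = 0x0FFF_FFF8;  // end-of-chain markers sit at the top of the range

    uint nextCluster(const uint[] fat, uint cluster)
    {
        return fat[cluster] & clusterMask;  // drop the 4 reserved bits
    }

    bool isEndOfChain(uint value)
    {
        return value >= eocMin;  // assumes value was already masked
    }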
FAT32 could have supported files over 4Gig; however, the file size
field in the directory entry was 32-bit, and anything over that would
likely have been more complex to incorporate, along with programs
depending on file sizes being 32-bit.
Guess it was easier to make it a hard limit rather than extend it to
larger sizes. Plus programmers are going to be lazy and prefer int
whenever possible. Hmm, actually back then long longs weren't
supported (except maybe by gcc), so I don't think that was much of an
option.