parallel unzip in progress
Jay Norwood
jayn at prismnet.com
Sat Apr 7 01:22:48 PDT 2012
On Wednesday, 4 April 2012 at 19:41:21 UTC, Jay Norwood wrote:
> The work-around was to convert all the file operations to use
> std.stream equivalents, and that worked well, but I see in the
> bug reports that even that was only working correctly on
> Windows. So I'm on Windows, and that's OK for me, but it would
> be too bad to limit use to Windows.
>
> Seems like stdio runtime support for File operations above 2GB
> would be a basic expectation for a "system" language these days.
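For anyone curious, the substitution was along these lines (a minimal sketch; the function and file names here are made up for illustration, not taken from the actual unzip code):

import std.stream;

// Copy a large file in chunks. std.stream tracks the file
// position as a ulong, so offsets past 2GB work on Windows,
// unlike the std.stdio.File calls this replaced.
void copyBig(string src, string dst)
{
    auto fin = new BufferedFile(src, FileMode.In);
    auto fout = new BufferedFile(dst, FileMode.OutNew);
    scope (exit) { fin.close(); fout.close(); }

    auto buf = new ubyte[64 * 1024];
    while (!fin.eof())
    {
        auto n = fin.read(buf);   // reads up to buf.length bytes
        fout.write(buf[0 .. n]);
    }
}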
Btw, I posted a fix to setTimes that enables it to update the
timestamp on directories as well as regular files, along with the
source code of this example.
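Usage is just the normal std.file interface; e.g., with the patched setTimes (the directory name here is only an example):

import std.datetime : Clock;
import std.file : setTimes;

void main()
{
    // With the patched setTimes this works on a directory too;
    // before the fix it only worked on regular files.
    auto now = Clock.currTime();
    setTimes("unzipped_dir", now, now);  // access time, modification time
}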
I also did some research on why NTFS is such a dog when doing
delete operations on hard drives. After spending several hours
looking at procmon logs, I've decided that the problem is
primarily multiple accesses to the master file table (MFT) for
the larger files. There is much discussion about the MFT getting
fragmented on these larger drives, and a couple of interesting
tweaks are proposed in the second link.
http://ixbtlabs.com/articles/ntfs/index3.html
http://www.gilsmethod.com/speed-up-vista-with-these-simple-ntfs-tweaks
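If anyone wants to reproduce the delete timings I was looking at, this is roughly what I measured (a sketch; the directory name is a placeholder for whatever tree was just unzipped):

import std.datetime : StopWatch;
import std.file : rmdirRecurse;
import std.stdio : writefln;

void main(string[] args)
{
    auto dir = args.length > 1 ? args[1] : "unzipped_dir";
    StopWatch sw;
    sw.start();
    rmdirRecurse(dir);  // one delete per entry; each touches the MFT
    sw.stop();
    writefln("deleted %s in %s ms", dir, sw.peek().msecs);
}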
The second link shows you how to reserve a larger area for the
MFT, and the link below looks like it might be able to clean out
any files from the reserved MFT space.
http://www.mydefrag.com/index.html
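(As an aside, and going from memory of what those tweaks amount to: on Vista and later the MFT zone reservation can be bumped from an elevated prompt with

fsutil behavior set mftzone 2

which corresponds to the NtfsMftZoneReservation DWORD under HKLM\SYSTEM\CurrentControlSet\Control\FileSystem; valid values are 1 through 4, each step reserving a larger zone.)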