This thread on Hacker News terrifies me

Jonathan M Davis newsgroup.d at jmdavisprog.com
Sun Sep 2 04:21:44 UTC 2018


On Saturday, September 1, 2018 9:18:17 PM MDT Nick Sabalausky (Abscissa) via 
Digitalmars-d wrote:
> On 08/31/2018 03:50 PM, Walter Bright wrote:
> [From your comment in that thread]
>
> > fill up your system disk to near capacity, then try to run various
> > apps and system utilities.
>
> I've had that happen on accident once or twice recently. KDE does NOT
> handle it well: *Everything* immediately either hangs or dies as soon as
> it gains focus. Well, I guess it could be worse, but it still really irks
> me: "Seriously, KDE? You can't even DO NOTHING without trying to write
> to the disk? And you, you other app specifically designed for dealing
> with large numbers of large files, why in the world would you attempt to
> write GB+ files without ever checking available space?"

I suspect that if KDE is choking, it's due to issues with files in /tmp,
since they like to use temp files for stuff, and I _think_ that some of it
goes over unix sockets, in which case they're using the socket API to talk
between components. I wouldn't ever expect anyone to check disk space with
that - though I _would_ expect them to check for failed calls and handle
them appropriately, even if the best that they can do is close the program
with a pop-up.

I think that what it ultimately comes down to though is that a lot of
applications treat disk space like they treat memory. You don't usually
check whether you have enough memory. At best, you check whether a
particular memory allocation succeeded and then try to handle it sanely if
it failed. With D, we usually outright kill the program if we fail to
allocate memory - and really, if you're using std.stdio and std.file for all
of your file operations, you'll probably get the same thing, since an
exception would be thrown on write failure, and if you don't catch it, it
will kill your program (though if you do catch it, what happens can
obviously vary considerably). The C APIs, on the other hand, require that
you check the return value, and some of the C++ APIs require the same. So,
if you're not doing that right, you can quickly get your program into a
weird state when functions that you expect to always succeed start failing.
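
As a quick sketch of what I mean (the file name here is just made up for
illustration), this is roughly how it plays out with std.file - the
FileException either kills the program, or you catch it and decide what
sane handling means for your app:

import std.file;
import std.stdio;

void main()
{
    try
    {
        // std.file.write throws a FileException if the write fails
        // (disk full, unwritable path, etc.) instead of returning an
        // error code that's easy to ignore.
        std.file.write("/tmp/example-output.dat", new ubyte[](1024));
        writeln("Write succeeded.");
    }
    catch (FileException e)
    {
        // Minimal handling: report it and rethrow so that the program
        // dies, which still beats silently continuing in a weird state.
        stderr.writeln("Failed to write file: ", e.msg);
        throw e;
    }
}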

So honestly, I don't find it at all surprising when an application can't
handle not being able to write to disk. Ideally, it _would_ handle it (even
if it's simply by shutting down, because it can't handle not having enough
disk space), but for most applications, it really is thought of like running
out of memory. So, it isn't tested for, and no attempt is made to make it
sane.

I would have hoped that something like KDE would have sorted it out by now,
given that it's been around long enough for more than one person to have
run into the problem and complained about it, but given that it's a suite
of applications developed in people's free time, it wouldn't surprise me at
all if the response was just to get more disk space.

Honestly, for some of this stuff, I think that the only way that it's ever
going to work sanely is if extreme failure conditions result in Errors or
Exceptions being thrown, and the program being killed. Most code simply
isn't ever going to be written to handle such situations, and for a _lot_
of programs, they really can't continue without those resources - which is
presumably why D's GC throws an OutOfMemoryError when it can't allocate
anything. Anything C-based (and plenty of C++-based programs too) is going
to have serious problems though, thanks to the fact that C/C++ programs
often use APIs where you have to check a return code, and if it's a
function that never fails under normal conditions, most programs aren't
going to check it. Even diligent programmers are bound to miss some of them.
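
To make the contrast concrete, here's a small sketch (D code calling the C
I/O functions via core.stdc.stdio; the file name is made up) of what doing
it "right" in the C style actually requires - and it's pretty obvious why
the checks get skipped when these calls essentially never fail under normal
conditions:

import core.stdc.stdio;
import std.stdio : stderr, writeln;

void main()
{
    auto fp = fopen("/tmp/example-c-style.dat", "wb");
    if (fp is null)
    {
        stderr.writeln("fopen failed");
        return;
    }

    ubyte[4096] buf;
    // fwrite reports failure only via its return value (a short count),
    // and nothing forces the caller to look at it.
    if (fwrite(buf.ptr, 1, buf.length, fp) != buf.length)
        stderr.writeln("fwrite failed (disk full?)");

    // With buffered I/O, a write error may only show up when the buffer
    // is flushed, so even fclose's return value needs to be checked.
    if (fclose(fp) != 0)
        stderr.writeln("fclose failed - the data may not be on disk");
    else
        writeln("Wrote the file.");
}

Miss any one of those checks, and a full disk turns into silent data loss
or a program in a weird state.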

- Jonathan M Davis




