randomIO, std.file, core.stdc.stdio
Charles Hixson via Digitalmars-d-learn
digitalmars-d-learn at puremagic.com
Tue Jul 26 09:35:26 PDT 2016
On 07/25/2016 09:22 PM, ketmar via Digitalmars-d-learn wrote:
> On Tuesday, 26 July 2016 at 04:05:22 UTC, Charles Hixson wrote:
>> Yes, but I really despise the syntax they came up with. It's
>> probably good if most of your I/O is ranges, but mine never has
>> been. (Combining ranges with random I/O?)
>
> that's why i wrote iv.stream, and then iv.vfs, with convenient things
> like `readNum!T`, for example. you absolutely don't need to
> reimplement the whole std.stdio.File if all you need is a better API.
> thanks to UFCS, you can write your new API as free functions accepting
> std.stdio.File as the first arg, or even a generic stream, like i did
> in iv.stream:
>
>
> enum isReadableStream(T) = is(typeof((inout int=0) {
>   auto t = T.init;
>   ubyte[1] b;
>   auto v = cast(void[])b;
>   t.rawRead(v);
> }));
>
> enum isWriteableStream(T) = is(typeof((inout int=0) {
>   auto t = T.init;
>   ubyte[1] b;
>   t.rawWrite(cast(void[])b);
> }));
>
> T readInt(T : ulong, ST) (auto ref ST st) if (isReadableStream!ST) {
>   T res;
>   ubyte* b = cast(ubyte*)&res;
>   foreach (immutable idx; 0..T.sizeof) {
>     if (st.rawRead(b[idx..idx+1]).length != 1) throw new Exception("read error");
>   }
>   return res;
> }
>
>
> and then:
> auto fl = File("myfile");
> auto i = fl.readInt!uint;
>
> something like that.
>
That's sort of what I have in mind, but I want to do what in Fortran
would be (would have been?) called record I/O, except that I want a file
header that specifies a few things like magic number, records allocated,
head of free list, etc. In practice I don't see any need for a record
size that isn't known at compile time...except that if there are
different versions of the program, they might include different things,
so, e.g., the size of the file header might need to be variable.
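
Just to make the shape concrete, the header I'm picturing is something
like this (the struct and field names here are only placeholders,
nothing settled):

    struct FileHeader
    {
        uint  magic;            // magic number identifying the file format
        uint  headerSize;       // size of this header, so other versions can skip past it
        ulong recordsAllocated; // number of records currently allocated in the file
        ulong freeListHead;     // record number of the first free record (0 = none)
    }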
This is a design problem I'm still trying to wrap my head around.
Efficiency seems to say "you need to know the size at compile time", but
flexibility says "you can't depend on the size at compile time". The
only compromise position seems to compromise safety (by depending on
void* and record-size parameters that aren't guaranteed safe). I'll
probably eventually decide in favor of "size fixed at compile time",
but I'm still dithering. But clearly efficiency dictates that the unit
of a read shouldn't be a basic type. I'm currently thinking of a struct
that's about 1
KB in size. As far as the I/O routines are concerned this will probably
all be uninterpreted bytes, unless I throw in some sequencing for error
recovery...but that's probably making things too complex, and should be
left for a higher level.
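
To make that concrete, the wrapper I have in mind over std.stdio.File
would look roughly like this (a sketch only, assuming a fixed 1 KB
record; readRecord/writeRecord and the headerSize parameter are just
illustrative names):

    import std.stdio : File;

    enum recordSize = 1024;                      // fixed at compile time

    struct Record { ubyte[recordSize] bytes; }   // uninterpreted bytes at this level

    void readRecord(File f, size_t recNum, ref Record rec, size_t headerSize)
    {
        // records start right after the header, packed back to back
        f.seek(cast(long)(headerSize + recNum * recordSize));
        if (f.rawRead(rec.bytes[]).length != recordSize)
            throw new Exception("short read");
    }

    void writeRecord(File f, size_t recNum, ref const Record rec, size_t headerSize)
    {
        f.seek(cast(long)(headerSize + recNum * recordSize));
        f.rawWrite(rec.bytes[]);
    }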
Clearly this is a bit of a specialized case, so I wouldn't be
considering implementing all of stdio, only the relevant bits, and those
wrapped with an interpretation based around record number.
The thing is, I'd probably be writing this wrapper anyway; what I was
wondering originally was whether there was any reason to use std.file
as the underlying library rather than going directly to core.stdc.stdio.
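
For comparison with the std.stdio.File sketch above, the same record
read straight through core.stdc.stdio would be roughly (again only a
sketch, with no real error handling):

    import core.stdc.config : c_long;
    import core.stdc.stdio : FILE, SEEK_SET, fread, fseek;

    bool readRecordC(FILE* f, size_t recNum, ubyte[] rec, size_t headerSize)
    {
        // position at the start of record recNum, just past the header
        if (fseek(f, cast(c_long)(headerSize + recNum * rec.length), SEEK_SET) != 0)
            return false;
        return fread(rec.ptr, 1, rec.length, f) == rec.length;
    }

Either way it's only a handful of lines; the std.stdio.File version
mostly adds the automatic close and exceptions on I/O errors.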