randomIO, std.file, core.stdc.stdio

Charles Hixson via Digitalmars-d-learn digitalmars-d-learn at puremagic.com
Tue Jul 26 09:58:59 PDT 2016


On 07/26/2016 05:31 AM, Steven Schveighoffer via Digitalmars-d-learn wrote:
> On 7/25/16 9:19 PM, Charles Hixson via Digitalmars-d-learn wrote:
>> On 07/25/2016 05:18 PM, ketmar via Digitalmars-d-learn wrote:
>>> On Monday, 25 July 2016 at 18:54:27 UTC, Charles Hixson wrote:
>>>> Are there reasons why one would use rawRead and rawWrite rather than
>>>> fread and fwrite when doing binary random I/O?  What are the 
>>>> advantages?
>>>>
>>>> In particular, if one is reading and writing structs rather than
>>>> arrays or ranges, are there any advantages?
>>>
>>> yes: keeping API consistent. ;-)
>>>
>>> for example, my stream i/o modules work with anything that has
>>> `rawRead`/`rawWrite` methods, but don't bother to check for anything else.
>>>
>>> besides, `rawRead` just looks cleaner, even with all the `(&a)[0 .. 1]`
>>> noise.
>>>
>>> so, a question of style.
>>>
>> OK.  If it's just a question of "looking cleaner" and "style", then I
>> will prefer the core.stdc.stdio approach.  I find its appearance far
>> cleaner, and even that is understating things.  I'll probably wrap
>> those routines in a struct to ensure that files are properly closed,
>> and so that explicit pointers don't persist over large areas of code.
>
> It's more than just that. Having a bounded array is safer than separate 
> pointer/length parameters. Literally, rawRead and rawWrite are inferred 
> @safe, whereas fread and fwrite are not.
>
> But D is so nice with UFCS, you don't have to live with APIs you don't 
> like. Allow me to suggest adding a helper function to your code:
>
> void rawReadItem(T)(File f, ref T item) @trusted
> {
>    f.rawRead((&item)[0 .. 1]);
> }
>
> -Steve
>
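For concreteness, a complete call site for that helper might look like the
sketch below (the Rec type, the file name, and main() are assumptions for
illustration only):

import std.stdio : File;

// The helper from above, with the slice expression parenthesized.
void rawReadItem(T)(File f, ref T item) @trusted
{
    f.rawRead((&item)[0 .. 1]);
}

struct Rec { int id; double value; }     // hypothetical record type

void main()
{
    auto f = File("records.bin", "rb");  // assumed file name
    Rec r;
    f.rawReadItem(r);                    // UFCS call; reads Rec.sizeof bytes into r
}
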
That *does* make the syntax a lot nicer, and I understand the safety 
advantage of not passing separate pointer/length parameters.  But I'm 
going to be wrapping the I/O anyway, and the external interface is going 
to be more like:
struct RF (T, long magic)
{
    ...
    void read (size_t recNo, ref T val) {...}
    size_t read (ref T val) {...}
    ...
}
where a sequential read returns the record number, or you specify the 
record number and get an indexed read.  So the length will be T.sizeof, 
and will be fixed at the time the file is opened.  To me this seems to 
eliminate the advantage of stdfile, and stdfile seems to add a level of 
indirection.
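
For illustration, a minimal sketch of such a wrapper over std.stdio.File 
might look like the following (the field names, open mode, and seek 
arithmetic are assumptions, and handling of the magic-number header is 
elided):

import std.stdio : File;

// Sketch only: a fixed-length record file keyed by record number.
struct RF (T, long magic)
{
    private File f;

    this(string path)
    {
        f = File(path, "r+b");       // assumed open mode; header handling elided
    }

    // Indexed read: fetch the record at recNo.
    void read(size_t recNo, ref T val)
    {
        f.seek(recNo * T.sizeof);
        f.rawRead((&val)[0 .. 1]);
    }

    // Sequential read: read the next record and return its record number.
    size_t read(ref T val)
    {
        auto recNo = cast(size_t)(f.tell / T.sizeof);
        f.rawRead((&val)[0 .. 1]);
        return recNo;
    }
}

The same shape would work over core.stdc.stdio (fopen/fseek/fread); only 
the method bodies would change.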

Ranges aren't free, are they? If they are, then I should probably use 
stdfile, because that is probably less likely to change than 
core.stdc.stdio.  
When I see "f.rawRead((&item)[0 .. 1])" it looks to me as if unneeded 
code is being generated explicitly to be thrown away.  (I don't like 
using pointer/length either, but it's actually easier to understand than 
this kind of thing, and this LOOKS like it's generating extra code.)
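
(For reference, the expression only builds a one-element slice over the 
existing struct, a pointer plus a length, with no allocation and no copy.  
A tiny sketch, with a hypothetical Rec type:

struct Rec { int a; double b; }

void main()
{
    Rec r;
    Rec[] one = (&r)[0 .. 1];   // pointer/length pair over r; nothing is copied
    assert(one.ptr == &r);
    assert(one.length == 1);
}
)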

That said, perhaps I should use stdio anyway.  When doing I/O, disk 
speed is the really slow part, and it so dominates things that worrying 
about trivialities is foolish.  And since it's going to be wrapped 
anyway, the ugliness will be confined to a very small routine.

