Another new io library
Steven Schveighoffer via Digitalmars-d
digitalmars-d at puremagic.com
Thu Feb 18 08:27:02 PST 2016
On 2/17/16 5:47 PM, deadalnix wrote:
> First, I'm very happy to see that. Sounds like a good project. Some
> remarks:
> - You seem to be using classes. These are good to compose at runtime,
I have one class, the IODevice. As I said in the announcement, this
isn't a focus of the library, just a way to play with the other pieces
:) Its utility isn't very important. One thing it does do (a relic from
when I was thinking of trying to replace stdio.File's innards) is take
over a FILE * and close it on destruction.
But I'm steadfastly against using classes for the meat of the library
(i.e. the range-like pipeline types). I do happen to think classes work
well for raw i/o, since the OS treats i/o items that way (e.g. a network
socket is a file descriptor, not some other type), but it would be nice
if you could have class features with non-GC lifetimes. As it stands,
classes are bad for correctly deallocating I/O resources.
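To illustrate what I mean (a toy sketch, not iopipe code — the names here are made up): a struct gives you the deterministic cleanup that a GC-finalized class can't promise.

```d
// Hypothetical sketch: deterministic cleanup via a struct, something a
// GC-finalized class can't guarantee.
import core.sys.posix.unistd : close;

struct FdOwner
{
    int fd = -1;

    @disable this(this); // exactly one owner, so exactly one close()

    ~this()
    {
        if (fd >= 0)
            close(fd); // runs deterministically when the owner leaves scope
    }
}

// usage:
// {
//     auto f = FdOwner(someFd);
//     // ... read/write through f.fd ...
// } // fd closed here, whether or not the GC ever runs
```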
> - Being able to read/write from an io device in a generator like
> manner is I think important if we are rolling out something new.
I'm not quite sure what this means.
> Literally the only thing that can explain the success of Node.js is this
> (everything else is crap). See async/await in C#
I was hoping async I/O could be handled the way vibe.d does it (i.e.
under the hood with fibers).
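By "under the hood" I mean roughly this (a bare-bones sketch with core.thread.Fiber; vibe.d's real scheduler is of course far more involved, and none of these names are from either library):

```d
// Sketch of the fiber idea: a blocking-looking read suspends its fiber,
// and the event loop resumes it when data arrives. All names hypothetical.
import core.thread : Fiber;

string pendingData; // stands in for an OS readiness notification

string blockingLookingRead()
{
    while (pendingData is null)
        Fiber.yield(); // suspend; the caller (event loop) regains control
    return pendingData;
}

void main()
{
    auto f = new Fiber({
        auto s = blockingLookingRead(); // looks synchronous to this code
        assert(s == "hello");
    });
    f.call();              // runs until the read yields
    pendingData = "hello"; // "I/O completed"
    f.call();              // resumes past the read
    assert(f.state == Fiber.State.TERM);
}
```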
> - Please explain valves more.
Valves allow all the types that process buffered input to also process
buffered output without changing much of anything. They give me a "push"
mechanism by pulling from the other end automatically.
In essence, the problem of buffered input is very different from the
problem of buffered output. One pulls data a chunk at a time and
processes it in finer detail; the other processes data in finer detail
and then pushes out chunks that are ready.
The big difference is the end of the pipe that needs user intervention.
For input, the user is the consumer of data. With output, the user is
the provider of data.
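The asymmetry in miniature (plain illustrative D, no iopipe types):

```d
void main()
{
    auto input = [1, 2, 3, 4, 5, 6];

    // Pull (buffered input): the user is the consumer and drives the
    // loop, grabbing a buffered chunk at a time and examining it.
    int[] consumed;
    for (size_t i = 0; i < input.length; i += 2)
        consumed ~= input[i .. i + 2]; // "refill", then process
    assert(consumed == input);

    // Push (buffered output): the user is the producer, writing
    // fine-grained data; chunks that are ready get pushed downstream.
    int[][] flushed;
    int[] buffer;
    foreach (x; input)
    {
        buffer ~= x;
        if (buffer.length == 2) // a chunk is ready
        {
            flushed ~= buffer;
            buffer = null;
        }
    }
    assert(flushed == [[1, 2], [3, 4], [5, 6]]);
}
```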
The problem is, how do you construct such a pipeline? The iopipe
convention is to wrap the upstream data. For output, the upstream data
is what you need access to. A std.algorithm.map doesn't give you access
to the underlying range, right? So if you need access to the earlier
part of the pipeline, how do you get to it? And how do you know how FAR
down to reach (i.e. pipeline.subpipe.subpipe.subpipe...)?
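The same problem in plain range terms:

```d
import std.algorithm : map;
import std.array : array;

void main()
{
    auto source = [1, 2, 3];
    auto doubled = source.map!(a => a * 2);
    assert(doubled.array == [2, 4, 6]);
    // But there's no documented way from `doubled` back to `source` --
    // a wrapper that needs to expose its upstream has to do so on purpose.
}
```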
This is what the valve is for. The valve has 3 parts, the inlet, the
processed data, and the outlet. The inlet works like a normal iopipe,
but instead of releasing data upstream, it pushes the data to the
processed data area. The outlet can only pull data from the processed
data. So this really provides a way for the user to control the flow of
data. (note, a lot of this is documented in the concepts.txt document)
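A toy model of the idea (hypothetical types and names, not iopipe's actual implementation — see concepts.txt for the real API):

```d
// Sketch of a valve: the inlet behaves like a normal buffer the user
// writes into, but releasing data pushes it through a processing step
// into the "processed" area; the outlet can only pull from there.
struct Valve(alias process)
{
    ubyte[] inlet;      // data the user has written but not yet released
    ubyte[] processed;  // data released from the inlet, after processing

    // inlet side: works like a normal buffer the user appends to
    void put(const(ubyte)[] data) { inlet ~= data; }

    // releasing inlet data pushes it into the processed area
    void release(size_t n)
    {
        processed ~= process(inlet[0 .. n]);
        inlet = inlet[n .. $];
    }

    // outlet side: downstream can only pull from the processed area
    ubyte[] pull()
    {
        auto r = processed;
        processed = null;
        return r;
    }
}

void main()
{
    import std.algorithm : map;
    import std.array : array;

    // processing step: here, just add one to each byte
    auto v = Valve!(chunk => chunk.map!(b => cast(ubyte)(b + 1)).array)();

    ubyte[] data = [1, 2, 3];
    v.put(data);  // inlet: the user provides data
    v.release(3); // user controls when it moves to the processed area
    assert(v.pull() == cast(ubyte[])[2, 3, 4]); // outlet: downstream pulls
}
```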
The reason it's special is that every iopipe is required to provide
access to an upstream valve inlet if it exists. This makes the API of
accessing the upstream data MUCH easier to deal with. (i.e. pipeline.valve)
Then I have this wrapper called autoValve, which automatically flushes
the downstream data when more space is needed, and makes it look like
you are just dealing with the upstream end. This is exactly the model we
need for buffered output.
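A toy analogue of that behavior (again hypothetical, not iopipe's implementation): the user only ever touches the upstream end via put(), and finished data is flushed downstream whenever the buffer needs space.

```d
// Sketch of the autoValve idea: automatic flushing when space is needed,
// so the user only sees the upstream end. Names are made up.
struct AutoFlushed(alias sink)
{
    ubyte[4] buf; // tiny buffer so the flushes are visible
    size_t used;

    void put(ubyte b)
    {
        if (used == buf.length)
            flush();        // space needed: push downstream first
        buf[used++] = b;
    }

    void flush()
    {
        if (used)
        {
            sink(buf[0 .. used].dup);
            used = 0;
        }
    }
}

void main()
{
    ubyte[][] received;
    auto w = AutoFlushed!(chunk => received ~= chunk)();
    foreach (b; cast(ubyte[])[1, 2, 3, 4, 5])
        w.put(b);
    w.flush(); // the user's only manual step, at the very end
    assert(received.length == 2);
    assert(received[0] == cast(ubyte[])[1, 2, 3, 4]);
    assert(received[1] == cast(ubyte[])[5]);
}
```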
This way, I can have a push mechanism for output, and all the processing
pieces (for instance, byte swapping, converting to a different array
type, etc.) don't even need to care about providing a push mechanism.
> - Profit ?
Yes, absolutely :)
-Steve