deprecating std.stream, std.cstream, std.socketstream

Steven Schveighoffer schveiguy at yahoo.com
Wed May 16 08:14:52 PDT 2012


On Wed, 16 May 2012 10:03:42 -0400, Christophe Travert  
<travert at phare.normalesup.org> wrote:

> "Steven Schveighoffer" , dans le message (digitalmars.D:167548), a
>> My new design supports this.  I have a function called readUntil:
>>
>> https://github.com/schveiguy/phobos/blob/new-io2/std/io.d#L832
>>
>> Essentially, it reads into its buffer until the condition is satisfied.
>> Therefore, you are not double buffering.  The return value is a slice of
>> the buffer.
>>
>> There is a way to opt-out of reading any data if you determine you  
>> cannot
>> do a full read.  Just return 0 from the delegate.
>
> Maybe I already said this some time ago, but I am not very comfortable
> with this design. The process delegate has to maintain internal
> state if you want to avoid reading everything again. Such process
> delegates will be difficult to implement.

The delegate is told which portion of the buffer has already been  
"processed": that is the 'start' parameter.  If you can use this  
information, it's highly useful.
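
As a sketch of the shape described here (the exact signature lives in the  
linked io.d; names and details below are only illustrative), a condition  
delegate that stops at a newline could look like:

```d
// illustrative sketch -- see the linked io.d for the real signature
// data:  everything currently buffered
// start: offset of the first byte not examined by a previous call
// returns: bytes to consume when satisfied; 0 consumes nothing
//          (per the post, returning 0 opts out of the read)
size_t untilNewline(const(ubyte)[] data, size_t start)
{
    foreach (i; start .. data.length)   // scan only the unexamined tail
        if (data[i] == '\n')
            return i + 1;               // consume through the newline
    return 0;                           // nothing to consume yet
}
```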

If you need more context, yes, you have to store it elsewhere, but you do  
have a delegate which contains a context pointer.  In a few places (take a  
look at TextStream's readln  
https://github.com/schveiguy/phobos/blob/new-io2/std/io.d#L2149) I use  
inner functions that have access to the function call's frame pointer in  
order to configure or store data.
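
That idiom, stripped of the readln specifics, is just a D nested function  
capturing its enclosing frame (this is a generic sketch, not the readln  
code itself):

```d
void main()
{
    size_t calls;   // state lives in main's stack frame

    size_t condition(const(ubyte)[] data, size_t start)
    {
        ++calls;    // the inner function reads and writes main's locals
        return data.length - start;
    }

    auto dg = &condition;   // delegate: function pointer + frame pointer
    dg(cast(const(ubyte)[]) "abc", 1);
    assert(calls == 1);     // the captured state was updated
}
```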

> Do you have an example
> of a moderately complicated reading process, to show us it is not too
> complicated?

The most complicated I have so far is reading UTF data as a range of dchar:

https://github.com/schveiguy/phobos/blob/new-io2/std/io.d#L2209

Note that I hand-inlined all the decoding because going through std.utf or  
the runtime was too slow, so although it looks huge, it's pretty basic  
stuff and can largely be ignored for the purposes of this discussion.  The  
interesting part is how it specifies what to consume and what not to.

I realize it's a different way of thinking about how to do I/O, but it  
gives more control to the buffer, so it can reason about how best to  
buffer things.  I look at it as the buffered stream saying "I'll read some  
data, you tell me when you see something interesting, and I'll give you a  
slice of it".  The alternative is to double-buffer your data, since each  
call to read can invalidate the previously buffered data.  But readUntil  
guarantees the data is contiguous and consumed all at once, so there is no  
need to double-buffer.
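
Assuming a buffered input exposing readUntil as described (the names here  
are illustrative and the signature in the branch may differ), reading one  
line without copying might look like:

```d
// 'line' aliases the stream's internal buffer -- no copy is made.
// It remains valid only until a later read invalidates the buffer.
const(ubyte)[] line = input.readUntil((const(ubyte)[] data, size_t start)
{
    foreach (i; start .. data.length)   // scan only the unexamined tail
        if (data[i] == '\n')
            return i + 1;               // consume through the newline
    return 0;                           // nothing to consume yet
});
```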

>
> To avoid this issue, the design could be reversed: a method that wants
> to read a certain amount of data could take a delegate from
> the stream, which provides additional bytes of data.
>
> Example:
> // create a T by reading from stream. returns true if the T was
> // successfully created, and false otherwise.
> bool readFrom(const(ubyte)[] delegate(size_t consumed) stream, out T t);
>
> The stream delegate returns a buffer of data to read from when called
> with consumed == 0. It must return additional data when called
> repeatedly. When it is called with consumed != 0, the corresponding
> amount of consumed bytes can be discarded from the buffer.

I can see use cases for both your method and mine.

I think I can implement your idea in terms of mine.  I might just do  
that.  The only thing missing is a way to tell the delegate that it needs  
more data, probably by using size_t.max as an argument.
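
To make that concrete, a very rough adapter might look like the following.  
BufferedInput, fetchMore, discard, and window are all hypothetical names,  
and the size_t.max convention is only the speculation above, not settled  
API:

```d
// hypothetical sketch: expose a stream delegate in the reversed style
// on top of a buffered input. consumed == size_t.max means "I consumed
// nothing, but I need more data before I can decide".
const(ubyte)[] delegate(size_t) asStreamDelegate(BufferedInput input)
{
    return (size_t consumed)
    {
        if (consumed == size_t.max)
            input.fetchMore();        // hypothetical: read more into the buffer
        else if (consumed != 0)
            input.discard(consumed);  // hypothetical: release processed bytes
        return input.window();        // hypothetical: slice of buffered data
    };
}
```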

In fact, I need a peek function anyway, and your function would provide  
that ability as well.

-Steve
