Announcing Elembuf

H. S. Teoh hsteoh at quickfur.ath.cx
Wed Dec 19 17:54:03 UTC 2018


On Wed, Dec 19, 2018 at 11:56:44AM -0500, Steven Schveighoffer via Digitalmars-d-announce wrote:
> On 12/18/18 8:41 PM, H. S. Teoh wrote:
> > On Tue, Dec 18, 2018 at 01:56:18PM -0500, Steven Schveighoffer via Digitalmars-d-announce wrote:
[...]
> > > Although I haven't tested with network sockets, the circular
> > > buffer I implemented for iopipe
> > > (http://schveiguy.github.io/iopipe/iopipe/buffer/RingBuffer.html)
> > > didn't show any significant improvement over a buffer that moves
> > > the leftover data back to the front.
> > [...]
> > 
> > Interesting. I wonder why that is. Perhaps with today's CPU cache
> > hierarchies and read prediction, a lot of the cost of moving the data is
> > amortized away.
> 
> I had expected *some* improvement; I even wrote a "grep-like" example
> that tries to keep a lot of data in the buffer, so that moving the
> data would be an expensive copy. I got no measurable difference.
> 
> Based on that experience, I would suspect that any gains made by not
> copying would be dwarfed by the cost of network i/o relative to disk
> i/o.
[...]
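
Just to spell out the comparison for anyone else reading along: the two
refill strategies differ only in what happens to the leftover bytes.
The sketch below is my own illustration, not iopipe's actual code, and
all the names and signatures are made up (untested; just to make the
difference concrete):

import core.stdc.string : memmove;

// Strategy 1: "moving" buffer -- before refilling, slide the unconsumed
// bytes back to the front so the free space is contiguous.  The memmove
// is the copy whose cost is in question.
size_t refillMoving(ubyte[] buf, ref size_t start, ref size_t end,
                    scope size_t delegate(ubyte[]) readMore)
{
    immutable remaining = end - start;
    if (start > 0 && remaining > 0)
        memmove(buf.ptr, buf.ptr + start, remaining);
    start = 0;
    end = remaining;
    immutable got = readMore(buf[end .. $]);
    end += got;
    return got;
}

// Strategy 2: circular buffer -- never move data; the valid region is
// allowed to wrap around the end, so a refill just writes into the free
// segment.  (iopipe's RingBuffer goes further and maps the same memory
// twice so the data always *looks* contiguous, but the essential point
// is the same: no copying of leftover bytes.)
size_t refillCircular(ubyte[] buf, ref size_t start, ref size_t len,
                      scope size_t delegate(ubyte[]) readMore)
{
    if (len == buf.length)
        return 0;                       // buffer is full
    immutable writePos = (start + len) % buf.length;
    immutable freeContig = writePos >= start ? buf.length - writePos
                                             : start - writePos;
    immutable got = readMore(buf[writePos .. writePos + freeContig]);
    len += got;
    return got;
}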

Ahh, that makes sense.  Did you test async I/O?  Not that I expect any
difference there either if you're I/O-bound; but reducing CPU load in
that case frees the CPU up for other tasks.  I don't know how easy it
would be to test this, but I'm curious what results you'd get if you
ran a compute-intensive background task while waiting on async I/O,
then measured how much of that computation got through while the
grep-like part of the code runs against either the circular buffer or
the moving buffer as each async request comes back.
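
Roughly the kind of harness I'm picturing -- everything below is made
up for illustration: the "async" source is just simulated, and the
buffer handling is a trivial stand-in where the real test would plug in
the RingBuffer on one run and the moving buffer on the other.  So, a
sketch of the shape of the measurement rather than working benchmark
code:

import std.stdio : writefln;

enum size_t totalBytes = 16 * 1024 * 1024;  // how much "input" to push through

void main()
{
    auto buf = new ubyte[64 * 1024];
    size_t have;            // bytes currently sitting in the buffer
    size_t produced;        // bytes delivered by the pretend async source
    uint tick;
    ulong workUnits;        // units of background computation completed
    ulong sink;             // keeps the busy-loop from being optimized out
    size_t matches;

    while (produced < totalBytes || have > 0)
    {
        // Poll the pretend async source: most polls find nothing ready.
        bool ready = false;
        if (produced < totalBytes && ++tick % 64 == 0)
        {
            auto free = buf.length - have;
            auto got = free < 4096 ? free : 4096;
            buf[have .. have + got] = cast(ubyte)('a' + tick % 26);
            have += got;
            produced += got;
            ready = got > 0;
        }

        if (!ready && have < buf.length && produced < totalBytes)
        {
            // I/O not ready yet: squeeze in one unit of background work.
            foreach (i; 0 .. 1_000)
                sink += i * 2654435761UL;
            ++workUnits;
            continue;
        }

        // The "grep-like" pass: scan what we have, but consume only half
        // of it, so there is always leftover data -- the part a moving
        // buffer has to copy on the next refill and a ring buffer doesn't.
        foreach (b; buf[0 .. have])
            if (b == 'z') ++matches;
        have /= 2;
    }

    writefln("matches: %s, background work units: %s (sink: %s)",
             matches, workUnits, sink);
}

The number to compare between the two runs would be workUnits for the
same input: if the ring buffer really saves CPU, more background work
should get through.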

Though that seems like a rather contrived example, since normally you'd
just spawn a different thread and let the OS handle the async for you.
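
I.e., something like this usual shape instead (a minimal sketch;
reading stdin here is just a stand-in for whatever the real source is):

import std.concurrency : receive, send, spawn, thisTid, Tid;
import std.stdio : stdin, writefln;

// Plain blocking reads in their own thread; the OS overlaps the I/O
// with whatever the main thread is computing.
void reader(Tid owner)
{
    foreach (chunk; stdin.byChunk(64 * 1024))
        owner.send(chunk.idup);         // hand off an immutable copy
    owner.send(true);                   // EOF marker
}

void main()
{
    spawn(&reader, thisTid);

    size_t matches;
    bool done;
    while (!done)
    {
        receive(
            (immutable(ubyte)[] chunk) {
                // the "grep-like" work happens here, overlapped with
                // the reader's next blocking read
                foreach (b; chunk)
                    if (b == 'z') ++matches;
            },
            (bool _) { done = true; }
        );
    }
    writefln("matches: %s", matches);
}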


T

-- 
Once upon a time there lived a king, and with him there lived a flea.

