Second Round CURL Wrapper Review
Vladimir Panteleev
vladimir at thecybershadow.net
Sun Dec 4 03:05:32 PST 2011
On Sun, 04 Dec 2011 12:59:15 +0200, Jonas Drewsen <jdrewsen at nospam.com>
wrote:
> On 03-12-2011 21:58, Vladimir Panteleev wrote:
>> On Sat, 03 Dec 2011 21:17:25 +0200, Jonas Drewsen <jdrewsen at nospam.com>
>> wrote:
>>> The standard example is downloading some content and saving it at the
>>> same time.
>>>
>>> While your main thread saves a chunk to disk (or uploads it to
>>> another server), the async thread is busy buffering incoming chunks
>>> of data. This means that you effectively only wait for the slower of
>>> the two IO operations. If you did it synchronously, you would in the
>>> worst case have to wait for everything to be downloaded and then wait
>>> for everything to be saved or uploaded.
>>>
>>> foreach(chunk; byChunkAsync("www.abc.com/hugefile.bin"))
>>> {
>>>     // While writing to the file in this thread,
>>>     // new chunks are downloaded in the background thread.
>>>     file.write(chunk);
>>> }
>>>
>>> I hope this makes sense.
>>
>> Well, this makes sense from a theoretical / high-level perspective, but
>> OS write buffers greatly reduce the practicality of this. In common use
>> cases, disk and wire speeds will also differ by orders of magnitude.
>>
>
> Read/write buffers do indeed help a lot, and there have been quite a few
> discussions on this topic earlier in this newsgroup regarding the
> tradeoffs etc. Please have a look at those threads (I don't have any
> links at hand, unfortunately).
If you're referring to the discussion which took place in the context of
copying files, then I have read it. However, it does not apply to typical
use cases of curl. The question here is whether this example makes any
practical sense, and by the looks of it, it does not. Or do you disagree?
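
For what it's worth, this is easy to measure directly. Here is a minimal
sketch of such a test, assuming a recent Phobos where std.net.curl
provides byChunk/byChunkAsync and std.datetime.stopwatch provides
StopWatch; the URL and output file names are placeholders:

import std.net.curl : byChunk, byChunkAsync;
import std.datetime.stopwatch : AutoStart, StopWatch;
import std.stdio : File, writefln;

void main()
{
    // Placeholder URL; substitute a real large file to get a
    // meaningful measurement.
    enum url = "http://example.com/hugefile.bin";

    // Synchronous variant: downloading and disk writes happen strictly
    // in turn, so the two IO waits add up.
    auto sw = StopWatch(AutoStart.yes);
    auto syncFile = File("sync.bin", "wb");
    foreach (chunk; byChunk(url))
        syncFile.rawWrite(chunk);
    writefln("sync:  %s", sw.peek);

    // Asynchronous variant: a background thread keeps buffering
    // incoming chunks while this thread writes to disk, so the waits
    // overlap.
    sw.reset();
    auto asyncFile = File("async.bin", "wb");
    foreach (chunk; byChunkAsync(url))
        asyncFile.rawWrite(chunk);
    writefln("async: %s", sw.peek);
}

If the OS write buffer absorbs the writes, as I expect it would for
typical disk/wire speed ratios, the two timings should come out nearly
identical.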
--
Best regards,
Vladimir mailto:vladimir at thecybershadow.net