Pure dynamic casts?

language_fan foo at bar.com.invalid
Thu Sep 24 20:26:02 PDT 2009


Thu, 24 Sep 2009 20:46:13 -0400, Jeremie Pelletier thusly wrote:

> language_fan wrote:
>> Wed, 23 Sep 2009 10:43:53 -0400, Jeremie Pelletier thusly wrote:
>> 
>>> You're right about concurrency being a different concept than
>>> threading, but I wouldn't give threading away for a pure concurrent
>>> model either. I believe D is aiming at giving programmers a choice of
>>> the tools they wish to use. I could see uses of both a concurrent
>>> model with message passing and a threading model with shared data
>>> used at once in a program.
>> 
>> The danger in too much flexibility is that concurrency is not easy,
>> and it is getting increasingly complex. You need to be extraordinarily
>> good at manually managing all concurrent use of data. If I may predict
>> something that is going to happen, it is that there will be high-level
>> models that avoid many low-level pitfalls. These models will not
>> provide 100% efficiency, but they are getting faster and faster
>> without compromising the safety aspect. This already happened with
>> memory allocation (manual vs garbage collection - in common
>> applications, but not in special cases). Before that we gave some of
>> the error detection capabilities to the compiler (e.g. we do not write
>> array bounds checks ourselves anymore), and optimizations (e.g.
>> register allocation). You may disagree, but I find it much more
>> pleasant when the application never crashes even though it runs 15%
>> slower than optimal C++ code would.
> 
> 15% slower is an extreme performance hit. I agree that code safety is
> useful, and I use this model all the time for initialization and other
> code which isn't real time, but 15% takes away a lot of the
> application's responsiveness. If you have 50 such applications running
> on your system, you just spent $1000 more in hardware to get the
> performance of entry-level hardware running faster code.

The cost of e.g. doubling computing power depends on the domain. If you 
are building desktop end-user applications, they usually should scale 
from single-core Atoms to 8-core high-end enthusiast gaming machines, so 
the CPU requirements shouldn't be too demanding. Even most hardware from 
the previous one to three generations usually runs them just fine.

Now doubling the CPU power of a low-end current-generation PC does not 
cost $1000, but maybe $20-50. You can keep upgrading until the CPU costs 
about $400-500; by then you've achieved at least a tenfold speedup. On 
the GPU market the cheapest chips have very limited capabilities, but 
you can buy a graphics card 5 times faster for $50-60, and $150-160 will 
get you a GPU 25 times faster than the first one. 4-8 GB of RAM is also 
really cheap these days, and so is a 1.5 TB hard drive. Hardly any 
desktop program requires that much from the hardware. A 15% or even 50% 
slower execution speed seems like a rather small problem when you can 
avoid it by buying faster hardware. Hardly any program is CPU bound; not 
even the most demanding games are.

On the server side, many systems use PHP, which is both unsafe and slow. 
If you have decent load balancing and caching in front of it, that does 
not even matter, since the system may not be CPU bound either. Even 
top-10 sites like Wikipedia run on slow PHP.
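
To make the caching point concrete, here is a minimal sketch in D (not 
PHP; the names renderPage, serve and CacheEntry are made up for the 
example): only a cache miss pays for the slow page generation, so once 
the hit rate is high, the generator's speed stops being the bottleneck.

import std.datetime : Clock, SysTime, dur;
import std.stdio : writeln;

// Hypothetical page generator standing in for the "slow PHP" work
// (templating, database queries, and so on).
string renderPage(string path)
{
    return "<html>..." ~ path ~ "...</html>";
}

struct CacheEntry
{
    string html;
    SysTime expires;
}

CacheEntry[string] cache;

// Serve from the cache when possible; only a miss runs renderPage.
string serve(string path)
{
    auto now = Clock.currTime();
    if (auto entry = path in cache)
    {
        if (entry.expires > now)
            return entry.html;   // cache hit: no rendering cost at all
    }
    auto html = renderPage(path);
    cache[path] = CacheEntry(html, now + dur!"seconds"(60));
    return html;
}

void main()
{
    writeln(serve("/wiki/Main_Page")); // miss: renders the page
    writeln(serve("/wiki/Main_Page")); // hit: returns the cached copy
}

In a real deployment the cache would sit in a separate layer (reverse 
proxy, memcached, etc.) shared by the load-balanced backends, but the 
effect is the same: most requests never touch the slow code.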


