Pure dynamic casts?
Jeremie Pelletier
jeremiep at gmail.com
Thu Sep 24 21:23:36 PDT 2009
language_fan wrote:
> Thu, 24 Sep 2009 20:46:13 -0400, Jeremie Pelletier thusly wrote:
>
>> language_fan wrote:
>>> Wed, 23 Sep 2009 10:43:53 -0400, Jeremie Pelletier thusly wrote:
>>>
>>>> You're right that concurrency is a different concept from threading,
>>>> but I wouldn't give up threading for a pure concurrent model either.
>>>> I believe D is aiming at giving programmers a choice of the tools
>>>> they wish to use. I could see a concurrent model with message
>>>> passing and a threading model with shared data being used at once
>>>> in the same program.
>>> The danger in too much flexibility is that concurrency is not easy,
>>> and it is getting increasingly complex. You need to be
>>> extraordinarily good at manually managing all concurrent use of
>>> data. If I may predict something that is going to happen, it is that
>>> there will be high-level models that avoid many low-level pitfalls.
>>> These models will not provide 100% efficiency, but they are getting
>>> faster and faster without compromising the safety aspect. This
>>> already happened with memory allocation (manual vs. garbage
>>> collection - in common applications, but not in special cases).
>>> Before that we handed some error detection over to the compiler
>>> (e.g. we no longer write array bounds checks ourselves), and
>>> optimizations too (e.g. register allocation). You may disagree, but
>>> I find it much more pleasant when the application never crashes,
>>> even if it runs 15% slower than optimal C++ code would.
>> 15% slower is an extreme performance hit. I agree that code safety is
>> useful, and I use this model all the time for initialization and
>> other code that isn't real-time, but 15% takes away a lot of the
>> application's responsiveness. If you have 50 such applications
>> running on your system, you've just spent $1000 more on hardware to
>> get the performance that entry-level hardware would deliver with
>> faster code.
>
> The cost of e.g. doubling computing power depends on the domain. If
> you are building desktop end-user applications, they usually should
> scale from single-core Atoms to 8-core high-end enthusiast gaming
> machines, so the CPU requirements shouldn't usually be too demanding.
> Usually even hardware from the previous 1-3 generations runs them just
> fine.
>
> Now doubling the CPU power of a low-end current-generation PC does not
> cost $1000, but maybe $20-50. You can continue this until the CPU
> costs about $400-500; by then you've achieved at least a tenfold
> speedup. On the GPU market the cheapest chips have very limited
> capabilities. You can buy a 5 times faster graphics card for $50-60,
> and $150-160 will get you a GPU 25 times faster than the first one.
> 4-8 GB of RAM is also really cheap these days, and so is a 1.5 TB hard
> drive. Hardly any desktop program requires that much from the
> hardware. A 15% or even 50% slower execution speed seems a rather
> small problem when you can avoid it by buying faster hardware. Hardly
> any program is CPU-bound; not even the most demanding games are.
>
> On the server side many systems use PHP, which is both unsafe and
> slow. If you have decent load balancing and caching, it does not even
> matter, since the system may not be CPU-bound either. Even top-10
> popular sites like Wikipedia run on slow PHP.
While I agree with what you say, here in Canada our computer parts
often cost twice what they do in the US; we kinda get screwed over
transport and customs. A high-end CPU is not $500 but well above $1000
CAD alone.

And quite frankly, it's annoying to change computers every 6 months. I
buy the high-end, top-of-the-line parts every 4-6 years just to avoid
that, and rarely buy upgrades in between, since by then there's a new
CPU socket, a new DDR pinout, and a new GPU slot, and then I need a
more powerful PSU, and all of that requires a new mobo. Then Blu-ray is
out, so I change my optical drives, and there's a new SATA standard, so
I get new HDDs.
About PHP: it's a language that can generate the same webpage in either
0.002 seconds or 2 minutes. Load balancing helps when you can't
optimize the PHP any further, or when your database is eating all your
system resources. Wikipedia most likely has load balancing, but it
definitely has well-written PHP code running on it too.
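
Back to the earlier point about D offering both models at once, here is
a minimal sketch of what I mean, assuming D2's std.concurrency for
message passing and core.atomic for shared data (the worker/counter
names are just an example, not anything from the runtime):

    import std.concurrency;
    import std.stdio;
    import core.atomic;

    shared int counter; // shared-data style: visible to all threads

    void worker()
    {
        // message-passing style: wait for a value from the owner thread
        int n = receiveOnly!int();

        // shared-data style: update shared state atomically
        atomicOp!"+="(counter, 1);

        // send a reply back, again without touching shared state
        ownerTid.send(n * 2);
    }

    void main()
    {
        auto tid = spawn(&worker);
        tid.send(21); // no shared state involved
        writeln("reply: ", receiveOnly!int());
        writeln("counter: ", atomicLoad(counter));
    }

Both styles coexist in one program: shared marks the data that crosses
threads, while everything else stays thread-local and messages carry
the rest.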