RFC: naming for FrontTransversal and Transversal ranges
Georg Wrede
georg.wrede at iki.fi
Sat May 2 12:07:18 PDT 2009
Don wrote:
> Bill Baxter wrote:
>> On Fri, May 1, 2009 at 5:36 PM, bearophile <bearophileHUGS at lycos.com>
>> wrote:
>>> Bill Baxter:
>>>> Much more often the discussion on the numpy list takes the form of
>>>> "how do I make this loop faster" becuase loops are slow in Python so
>>>> you have to come up with clever transformations to turn your loop into
>>>> array ops. This is thankfully a problem that D array libs do not
>>>> have. If you think of it as a loop, go ahead and implement it as a
>>>> loop.
>>> Sigh! Already today, and even more tomorrow, this is often false for
>>> D too. In my computer I have a cheap GPU that is sleeping while my D
>>> code runs. Even my other core sleeps. And I am using only one core,
>>> in 32-bit mode.
>>> You will need data parallelism and other forms of parallel
>>> processing. So maybe normal loops will not cut it.
>>
>> Yeh. If you want to use multiple cores you've got a whole 'nother can
>> o' worms. But at least I find that today most apps seem to get by just
>> fine using a single core. Strange though, aren't you the guy always
>> telling us how being able to express your algorithm clearly is often
>> more important than raw performance?
>>
>> --bb
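(An aside from me, to make the loop-versus-array-ops point above tangible:
here is a minimal sketch in D, my own illustration rather than anything
from the quoted posts, of the same element-wise computation written both
as a plain loop and as an array operation.)

import std.stdio;

void main()
{
    auto a = new double[](1_000);
    auto b = new double[](1_000);
    auto c = new double[](1_000);
    a[] = 1.5;
    b[] = 2.0;

    // Explicit loop: perfectly respectable in D, no hand-vectorizing needed.
    foreach (i; 0 .. a.length)
        c[i] = a[i] * 2.0 + b[i];

    // The same computation expressed as a single array operation.
    c[] = a[] * 2.0 + b[];

    writeln(c[0]);
}

Either form expresses the intent; the point is that in D you don't have to
contort the loop into the second form just to get acceptable speed.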
>
> I confess to being mighty skeptical about the whole multi-threaded,
> multi-core thing. I think we're going to find that there are only two
> practical uses of multi-core:
> (1) embarrassingly-parallel operations; and
> (2) process-level concurrency.
> I just don't believe that apps have as much opportunity for parallelism
> as people seem to think. There are just too many dependencies.
> Sure, with a game (say) you can split your AI onto a separate core from
> your graphics stuff, but that's only applicable for 2-4 cores. It
> doesn't work for 100+ cores.
I had this bad dream about a language in which it's trivial to use
multiple CPUs. And I could see every Joe and John running their
trivial apps, each of which used all available CPUs. They had their
programs and programlets run two or four times as fast, but most of
them ran in less than a couple of seconds anyway, and the longer ones
spent most of their time waiting for external resources.
All it ended up with was a lot of work for the OS: the total throughput
of the computer decreased because now every CPU had to deal with every
process, not to mention the increase in electricity consumption and heat
because none of the CPUs could rest. And still nobody was using the GPU,
MMX, SSE, etc.
Most of these programs consisted of sequences, with the odd selection or
short iteration scattered here and there. And none of them used
parallelizable data.
> (Which is why I think that broadening the opportunity for case (1) is
> the most promising avenue for actually using a host of cores).
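For what it's worth, Don's case (1) in code is a loop whose iterations are
completely independent of each other. A minimal sketch in D, assuming a
task-pool style library along the lines of std.parallelism (my assumption
for illustration; none of the posts above depend on it):

import std.math : sqrt;
import std.parallelism : parallel;
import std.stdio;

void main()
{
    auto data = new double[](1_000_000);
    data[] = 42.0;

    // Every iteration touches only its own element, so the body can be
    // farmed out to a pool of worker threads without any locking.
    foreach (i, ref x; parallel(data))
        x = sqrt(x) * i;

    writeln(data[1]);
}

Loops like that are where extra cores pay off directly; the
dependency-ridden code that makes up most programs is a different story,
which is pretty much Don's point.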
The more I think about it, the more I'm starting to believe that the
average desktop or laptop won't see two dozen cores in the immediate
future. And definitely, by the time there are more cores than processes
on the average Windows PC, we're talking about gross wastage.
OTOH, Serious Computing is different, of course. Corporate machine rooms
would benefit from many cores. Virtual host servers, heavy-duty web
servers, and of course scientific and statistical computing come to mind.
It's interesting to note that in the old days, machine room computers
were totally different from PCs. Then they sort of converged, with
machine rooms all of a sudden filling up with regular PCs running Linux.
And now I see the trend reversing again, separating the PC from the
machine room computer. Software for the latter might be the target for
language features that utilize multiple CPUs.