Unofficial wish list status. (Jul 2008)

Me Here p9e883002 at sneakemail.com
Tue Jul 1 04:32:00 PDT 2008


Bill Baxter wrote:

> Don wrote:
> > Sean Kelly wrote:
> > > > so wut is that stuff u want in d. u haven't even tried d2 so it loox
> > > > like u r interested more in bitchin' than in making real
> > > suggestions. now seriously. speak up. what is it that u as a member of
> > > the community wanna say and walter doesn't listen.
> > > 
> > > I've tried D2.  I've read the spec as well.  I simply don't like it.  
> > 
> > By D2, I imagine you just mean "the const system"?
> > (Most of the other things in D2 seem to have been very popular; if D2-
> > without-const was released, I reckon most of the community would start
> > using it).
> > 
> > It seems to me that the view of the community is "We thought we wanted
> > const. But now that we've seen what const involves, we don't want it.".
> > 
> 
> I think there was a lot of hope on the part of the community that a const
> system designed from scratch with 20/20 hindsight could avoid some of the
> practical problems with the C++ const system.  But it seems the answer to
> that was "no".  The system we've got now seems to solve some theoretical
> problems with the C++ const system at the cost of making practical usage
> slightly more cumbersome.
> 
> But I don't hope for a more usable const any more.  I'd just like to see a
> reduction in the number of flavors of D code.  If we all moved to D2, I think
> we could pretty much just ignore invariant until it actually has some
> practical benefit.  What's left of D2 const is pretty much like what many are
> used to with C++.  Yeh, so you have to write some methods multiple times for
> different constnesses, etc... you get used to it.  I think I could get used
> to D2 const anyway.
> 
> > At the very least, it's a public relations disaster from the point of
> > view of the language designers. They are assuming that with more time
> > and education, the legitimate complaints about the first const system will
> > be forgotten, and the const system will be embraced by the community.
> > But there is a very big risk here -- what if it is NOT eventually
> > accepted? What if the community consensus remains that const is just too
> > complicated, without enough benefit? And the language designers remain
> > steadfastly devoted to const? That's a catastrophic scenario, and
> > unfortunately not unlikely.
> > 
> > The fact that someone as senior in the community as yourself is
> > expressing profound dissatisfaction indicates that the risk is very real.
> 
> It would be sad to see the D2 const swerve shake all the old D supporters
> off the D train.  But on the other hand, new folks do seem to keep popping up
> who would rather use D2 than D1.
> 
> --bb

The problem with D2 is that it is optimising (prematurely) for the lazy, at the
expense of the diligent.

The functional utopia of transparent threading through purity only succeeds in
dealing with the most trivial forms of shared access--and grossly inefficiently
at that.

Side-effects are par for the course in software, whether it's printing to or
reading from the screen, or reading or writing a file, port, socket or DB.
These things cannot be avoided in any *useful program*, and "purity" goes out
the window as soon as you do any one of them. Once you accept that fact, and
realise that shared access to memory is the least of the thread user's
problems, all the hyperbole about purity and referential transparency goes
right out the window.

Besides which, all that (unnecessary but unavoidable) replication of data goes
entirely against the grain of the original D ethos, and completely inhibits
efficient algorithms for dealing with thread-shared data. For example: the
most efficient way to use multiple cores or processors to manipulate large
arrays is to run multiple, identical threads, *each* processing its own,
unique partition of the gross data (see the sketch below). D2's penchant for
replicating whole data structures every time you look at them completely
inhibits this tried, tested and proven technique for making use of shared data
and threading.
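
To make that concrete, here's a minimal Perl sketch of the partition-per-thread
pattern. The workload (summing slices) and the thread count are invented purely
for illustration; the point is that every thread works in place on its own
disjoint slice of one shared array, with no copying and no locking:

	#! perl -slw
	use strict;
	use threads;
	use threads::shared;

	# One shared array; no thread ever takes a copy of it.
	my @data :shared = map { $_ % 97 } 1 .. 100_000;

	my $nthreads = 4;
	my $chunk    = int( @data / $nthreads );

	my @workers = map {
	    my $lo = $_ * $chunk;
	    my $hi = $_ == $nthreads - 1 ? $#data : $lo + $chunk - 1;
	    threads->create( sub {
	        # This thread only ever touches indices $lo .. $hi,
	        # so the disjoint partitions need no locks at all.
	        my $sum = 0;
	        $sum += $data[ $_ ] for $lo .. $hi;
	        return $sum;
	    } );
	} 0 .. $nthreads - 1;

	my $total = 0;
	$total += $_->join for @workers;
	print "total: $total";

The same shape works for in-place mutation: point each thread at its own slice
of one shared buffer, exactly as the alphabet demo below does per character.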

By way of proof: if you have a vanilla, threaded Perl installed, run this:

	#! perl -slw
	use strict;
	use Time::HiRes qw[ sleep ];
	use threads;
	use threads::shared;

	$|++;

	# Each thread owns one character position in the shared string
	# and rotates it (A->B, ..., Z->A) ten times a second.
	sub thread {
	   my( $ref, $pos ) = @_;
	   while( sleep 0.1 ) {
	       substr( $$ref, $pos, 1 ) =~ tr[A-Z][B-ZA];
	   }
	}

	# One shared string, and one thread per character position (78 = 3 x 26).
	my $s :shared = join( '', 'A' .. 'Z' ) x 3;
	my @threads = map{ threads->create( \&thread, \$s, $_ ) } 0 .. 77;
	sleep 1;
	printf "\r$s\t" while sleep 0.05;

For those that do not have Perl: what those that do will see is an
inline-scrolling (HTML marquee-style) line of text with three copies of the
alphabet rotating left.

Now, "that's trivial" I hear you all cry, but get this: each character on that
line is being manipulated by a different thread. Try replicating that using D2
(or Haskell, which seems to be the inspiration for the current direction of
D2).

Now imagine all the concurrent processing problems that can be efficiently
dealt with using the "add & multiply" concurrent processing technique: image
processing--whether JPEGs, X-ray, radar, MRI or CT scans; weather prediction;
finite element analysis for construction, automotive and boat design, etc.;
audio and video compression; cryptographic analysis; nuclear weapons analysis;
space flight calculations; genome analysis; weapons trajectories; .... The
list is endless.

In each of these fields, partitioning a single large dataset and running
separate, identical threads on each section is fundamental to efficient
processing. And replicating entire data structures each time any single
element of them is mutated is grossly, insanely, *stupidly* inefficient. Total
madness. Rather than getting 80-90% time-based gains from each extra
processor, you reduce that to 35-40%, with the rest spent thrashing the memory
allocator/GC to death *needlessly replicating mutated data*!
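
To put rough numbers on that, here is a back-of-envelope Amdahl's-law sketch.
The serial fraction and the per-core replication cost below are invented
purely for illustration, but they show the shape of the problem: a copying tax
that grows with core count quickly swallows the parallel gains:

	#! perl -slw
	use strict;

	# Amdahl's law plus a per-core overhead term (illustrative only):
	#   $serial   - fraction of the work that cannot be parallelised
	#   $overhead - per-core cost of replicating mutated data
	sub speedup {
	    my( $cores, $serial, $overhead ) = @_;
	    return 1 / ( $serial + ( 1 - $serial ) / $cores + $overhead * $cores );
	}

	for my $cores ( 1, 2, 4, 8 ) {
	    printf "%d cores: in-place %.2fx, replicating %.2fx\n",
	        $cores, speedup( $cores, 0.05, 0 ), speedup( $cores, 0.05, 0.05 );
	}

With those made-up numbers, eight cores gives roughly a 5.9x speedup working
in place, but under 1.8x once each core pays the replication tax--the gains
don't just shrink, they go backwards as you add cores.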

Walter: Please, please, please. Take a break. A holiday (far away from
Andrei), and apply your own rational, non-hyped thought processes to this
problem. (Stay off the purity Kool-Aid for a few days.) Do a little research
into the types of problems and algorithms that truly lend themselves to being
multi-threaded, and analyse the effects of all that damn, unwanted and
uncontrollable replication.

Web servers and other IO-bound processes are *not* the target of
multithreading. Take a close look at the sort of algorithms that the
IBM/Sony/etc. Cell processor is targeted at (it's the core behind the latest,
greatest teraflops super-computers), and there is a really good reason for
that. Running simple, compact, highly localised algorithms on vast quantities
of similar data is the bread and butter of efficient concurrency. All that
replication to achieve "referential transparency" doesn't mean a damn thing
for this type of algorithm. It simply slows everything to a crawl. If these
super-computers kept destroying their local L1 & L2 caches by replicating data
every time it mutated, those teraflops systems would be "total flop" systems.

For dog's sake: get a grip, smell the coffee, kick the purity habit, and send
Andrei packing back to the "design by committee" that produced the STL.

With respect, but with concrete, contrary knowledge,
b.

