howto count lines - fast

H. S. Teoh via Digitalmars-d-learn digitalmars-d-learn at puremagic.com
Fri Jun 2 23:32:15 PDT 2017


On Sat, Jun 03, 2017 at 07:00:47AM +0100, Russel Winder via Digitalmars-d-learn wrote:
[...]
> There are many different sorts of programming. Operating systems,
> compilers, GUIs, Web services, machine learning, etc., etc. all
> require different techniques. Also there are always new areas, where
> idioms and standard approaches are yet to be discovered. There will
> always be a place for "heroic", but to put it up on a pedestal as
> being a Good Thing For All™ is to do "heroic" an injustice.

Fair enough.  I can see how this would lead to unnecessarily ugly,
prematurely-optimized code.  It's probably the origin of the
premature-optimization culture especially prevalent in C circles, where
you just get into the habit of automatically thinking things like "i =
i + 1 is less efficient than ++i", which may have been true in some
bygone era but is no longer relevant with the machines and optimizing
compilers of today.  And also, constantly "optimizing" code that isn't
actually the bottleneck, because of some vague notion of wanting
"everything" to be fast, yet not being willing to use a profiler to
find out where the real bottleneck is.  As a result you spend
inordinate amounts of time writing the absolute fastest O(n^2)
algorithm rather than substituting a moderately unoptimized O(n)
algorithm that's far superior, thus actually introducing new
bottlenecks instead of fixing existing ones.
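To make the point concrete, here's a minimal sketch in C (hypothetical
example, not from the thread): a micro-tuned O(n^2) duplicate check
versus a naively written O(n) one.  The value bound of 1000 is an
assumption purely for illustration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* O(n^2): for each element, scan all earlier elements.  No amount of
 * micro-tuning of the inner loop changes the quadratic growth. */
bool has_duplicate_quadratic(const int *a, size_t n)
{
    for (size_t i = 1; i < n; ++i)
        for (size_t j = 0; j < i; ++j)
            if (a[j] == a[i])
                return true;
    return false;
}

/* O(n): one pass with a seen-table, assuming values lie in [0, 1000).
 * Even written without any cleverness, it wins for large n. */
bool has_duplicate_linear(const int *a, size_t n)
{
    bool seen[1000];
    memset(seen, 0, sizeof seen);
    for (size_t i = 0; i < n; ++i) {
        if (seen[a[i]])
            return true;
        seen[a[i]] = true;
    }
    return false;
}
```

A profiler would show the quadratic version dominating the runtime long
before any inner-loop tweak matters.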


> We should also note that in the Benchmark Game, the "heroic" solutions
> are targeted specifically at Isaac's execution machine, which often
> means they are crap programs on anyone else's computer.

Well, there is some value in targeting a specific execution environment,
but I agree that holding that up as being exemplary of how code should
be written would be rather misguided.


[...]
> The optimisations are, though, generally aimed at the current
> execution computer. Which is fine in the short term. However in the
> long term, the optimisations become the problem. When the execution
> context of optimised code changes, the optimisations should be backed
> out and new optimisations applied. Sadly this rarely happens, and you
> end up with new optimisations laid on old (redundant) optimisations,
> and hence incomprehensible code that people daren't amend as they have
> no idea what the #### is going on.

I've been thinking about this for a while now, actually.  It almost
seems as though there ought to be two distinct layers of abstraction in
a given piece of code: a high-level, logical layer that specifies the
desired results using some computation model, and a lower-level layer
that contains the implementation details and target-specific tweaks.
There should be an automatic translation from the upper layer to the
lower layer, but after the automatic translation you can go in and
tweak the lower layer *while keeping the upper layer intact*, and the
system (IDE or whatever) would keep track of *both*, with the
lower-layer customizations tracked as a set of diffs against the
automatically translated version.  When the upper layer changes, any
corresponding diffs in the lower layer get invalidated and either
produce a conflict the programmer must manually resolve, or else
default to the new automated translation.

Furthermore, there should be some system of tracking multiple diff sets
for the lower layer, so that you can specify diff A as applying to
target machine X, and diff B as applying to target machine Y. So you can
target the same logical piece of code to different target machines with
different implementations.


[...]
> > I know that the average general programmer doesn't (and shouldn't)
> > care.  But *somebody* has to, in order to implement the system in
> > the first place. *Somebody* had to implement the "heroic" version of
> > memchr so that others can use it as a primitive. Without that,
> > everyone would have to roll their own, and it's almost a certainty
> > that the results will be underwhelming.
> 
> It may be worth noting that far too few supposedly professional
> programmers actually know enough about the history of their subject to
> be deemed competent. 
[...]

Yes, and that is why the people who actually know what they're doing
need to be able to write the "hackish", optimized implementations of the
nicer APIs provided by the language / system, so that at the very least
the API calls would do something sane, even if the code above that is
crap.
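Tying this back to the subject line: a sketch of how a "hackish"
primitive pays off for the rest of us.  Counting lines by leaning on
memchr lets ordinary code inherit the heroically optimized (often SIMD)
implementation in the C library, without writing a single line of
platform-specific code.  (Illustrative example, not from the thread.)

```c
#include <stddef.h>
#include <string.h>

/* Count '\n' bytes in a buffer.  All the heavy lifting is delegated to
 * memchr, whose implementation someone else already optimized per
 * platform; the caller just gets a sane, fast primitive. */
size_t count_lines(const char *buf, size_t len)
{
    size_t count = 0;
    const char *p = buf;
    const char *end = buf + len;
    while (p < end && (p = memchr(p, '\n', (size_t)(end - p))) != NULL) {
        ++count;
        ++p;   /* step past the newline just found */
    }
    return count;
}
```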


T

-- 
Questions are the beginning of intelligence, but the fear of God is the beginning of wisdom.
