Potential of a compiler that creates the executable at once

H. S. Teoh hsteoh at quickfur.ath.cx
Fri Feb 11 22:08:57 UTC 2022


On Fri, Feb 11, 2022 at 08:23:10PM +0000, rempas via Digitalmars-d wrote:
> On Friday, 11 February 2022 at 18:13:34 UTC, H. S. Teoh wrote:
> > I'm skeptical of any LoC metric.
[...]
> This reminds me of what Walter said before! It is actually so simple
> that I don't understand what's so hard about it!
[...]

It's not that it's *hard*.  It's pretty straightforward, and everybody
knows what it means.

The problem is the mostly-unfounded *interpretations* that people put on
it.

In the bad ole days, LoC used to be a metric used by employers to
measure their programmers' productivity. (I *hope* they don't do that
anymore, but you never know...)  Which is completely ridiculous because
the amount of code you write has very little correlation with the amount
of effort you put into it. It's trivial to write 1000 lines of sloppy
boilerplate code that accomplishes little; it's a lot harder to condense
that into 50 lines of code that do the same thing 10x faster and with
10% of the memory requirements.
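A toy illustration of that point (a made-up Python sketch, not anything
from this thread): two solutions to the same task, where the shorter one
is also the asymptotically faster one.

```python
# Find the duplicated items in a list, two ways.

# Verbose, O(n^2): nested loops over every pair -- many lines, slow on
# large inputs.
def duplicates_verbose(items):
    result = []
    for i in range(len(items)):
        for j in range(len(items)):
            if i != j and items[i] == items[j]:
                if items[i] not in result:
                    result.append(items[i])
    return result

# Concise, O(n): one pass with sets -- fewer lines AND faster.
def duplicates_concise(items):
    seen, dups = set(), set()
    for x in items:
        (dups if x in seen else seen).add(x)
    return sorted(dups)

data = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
print(sorted(duplicates_verbose(data)))  # [1, 3, 5]
print(duplicates_concise(data))          # [1, 3, 5]
```

By a naive LoC count, the verbose version represents more "work"; by any
sane measure, the concise one is the better engineering.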

One of the hardest bug fixes I've done at my job involved a 1-line fix
for a subtle race condition that took 3+ months to track down and
identify.  I guess they should fire me for non-productivity, because by
the LoC metric I've done almost zero work in that time. Good luck with
the race condition, though; adding another 1000 LoC to the code ain't
getting rid of the race, it'd only obscure it even further and make it
just about impossible to find and fix.
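For flavor, here's the classic shape of such a bug (a hypothetical
Python sketch, not the actual bug from my job): an unsynchronized
read-modify-write, where the entire fix is one added line.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Without synchronization, `counter += 1` is a read-modify-write
        # race: two threads can read the same value, and one update is
        # silently lost.
        with lock:          # <-- the whole "1-line" fix
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock; often less without it
```

Months of diagnosis, one line of diff. By the LoC metric, nearly zero
work was done.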

And some of my best bug fixes involve *deleting* poorly-written
redundant code and writing a much shorter replacement. I guess they
should *really* fire me for that, because by the LoC metric I've not
only been unproductive, but *counter*productive. :-P

By the above, it should be clear that the assumption that LoC is a good
measure of complexity is an unfounded one.  If project A has 10000 LoC
and project B has 10000 LoC, does it mean they are of equal complexity?
Hardly. Project A could be mostly boilerplate, copy-pasta, redundant
code, and poorly-chosen, poorly-implemented O(n^2) algorithms; it has
10000 LoC simply because there's so much useless redundancy. Project B
could be a collection of fine-tuned, hand-optimized professional
algorithms that do a LOT under the hood; it has 10000 LoC because it
actually implements a large number of algorithms, each written to be
exactly as concise as needed to express the algorithm and no more.  In
terms of actual complexity, project A might as well be
kindergarten-level compared to project B's PhD sophistication.  What
does their respective LoC tell us about their complexity?  Basically
nothing.

And don't even get me started on code quality vs. LoC. An IOCCC entry
can easily fit an entire flight simulator into a single page of code,
for example. Don't expect anybody to be able to read it, though (not
even the author :-D).  A more properly-written flight simulator would
occupy a lot more than a single page of code, but in terms of
complexity, they'd be about the same, give or take.  But by the LoC
metric, the two ought to be so far apart they should be completely
unrelated to each other.  Again, the value of LoC as a metric here is
practically nil.


--T

