Fantastic exchange from DConf

Atila Neves via Digitalmars-d digitalmars-d at puremagic.com
Wed May 10 04:16:57 PDT 2017


On Wednesday, 10 May 2017 at 06:28:31 UTC, H. S. Teoh wrote:
> On Tue, May 09, 2017 at 09:19:08PM -0400, Nick Sabalausky 
> (Abscissa) via Digitalmars-d wrote:
>> On 05/09/2017 08:30 PM, H. S. Teoh via Digitalmars-d wrote:
>> > 
>> > In this sense I agree with Walter that warnings are 
>> > basically useless, because they're not enforced. Either 
>> > something is correct and compiles, or it should be an error 
>> > that stops compilation. Anything else, and you start having 
>> > people ignore warnings.
>> > 
>> 
>> Not 100% useless. I'd much rather risk a warning getting 
>> ignored than NOT be informed of something the compiler noticed 
>> but decided "Nah, some people ignore warnings so I'll just 
>> look the other way and keep my mouth shut".  (Hogan's Compiler 
>> Heroes: "I see NUH-TING!!")
>
> I'd much rather the compiler say "Hey, you! This piece of code 
> is probably wrong, so please fix it! If it was intentional, 
> please write it another way that makes that clear!" - and abort 
> with a compile error.
>
> This is actually one of the things I like about D. For example, 
> if you wrote:
>
> 	switch (e) {
> 		case 1: return "blah";
> 		case 2: return "bluh";
> 	}
>
> the compiler will refuse to compile the code until you either 
> add a default case, or make it a final switch (in which case 
> the compiler will refuse to compile the code unless every 
> possible case is in fact covered).
>
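
For concreteness, the two accepted forms would look something 
like this in D (a sketch; the enum and the function names are 
invented for illustration):

    string withDefault(int e) {
        switch (e) {
            case 1: return "blah";
            case 2: return "bluh";
            default: return "unknown";  // explicit catch-all satisfies the compiler
        }
    }

    enum E { one = 1, two = 2 }

    string withFinalSwitch(E e) {
        string s;
        final switch (e) {  // no default allowed; every member of E must be handled
            case E.one: s = "blah"; break;
            case E.two: s = "bluh"; break;
        }
        return s;
    }
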
> Now imagine if this was merely a warning that people could just 
> ignore.
>
> Yep, we're squarely back in good ole C/C++ land, where an 
> unexpected value of e causes the code to amble down an 
> unexpected path, with the consequent hilarity that ensues.
>
> IOW, it should not be possible to write tricky stuff by 
> default; you should need to ask for it explicitly so that 
> intent is clear.  Another switch example:
>
> 	switch (e) {
> 		case 1: x = 2;
> 		case 2: x = 3;
> 		default: x = 4;
> 	}
>
> In C, the compiler happily compiles the code for you.  In D, at 
> least the latest dmd will give you deprecation warnings (and 
> presumably, in the future, actual compile errors) for 
> forgetting to write `break;`. But if the fallthrough was 
> intentional, you document that with an explicit `goto case 
> ...`. IOW, the default behaviour is the safe one (no 
> fallthrough), and the non-default behaviour (fallthrough) has 
> to be explicitly asked for.  Much, much better.
>
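
Spelled out with the intent made explicit, that second snippet 
would read something like this (a sketch, same placeholder names):

    switch (e) {
        case 1:
            x = 2;
            goto case;     // deliberate fallthrough into case 2
        case 2:
            x = 3;
            goto default;  // deliberate fallthrough into the default case
        default:
            x = 4;
            break;
    }
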
>
>> And then the flip side is that some code smells are just too 
>> pedantic to justify breaking the build while the programmer is 
>> in the middle of some debugging or refactoring or some such.
>> 
>> That puts me strongly in the philosophy of "Code containing 
>> warnings: Allowed while compiling, disallowed when committing 
>> (with allowances for mitigating circumstances)."
>
> I'm on the fence about the former.  My current theory is that 
> being forced to write "proper" code even while refactoring 
> actually helps the quality of the resulting code.   But I 
> definitely agree that code with warnings should never make it 
> into the code repo.  The problem is that it's not enforced by 
> the compiler, so *somebody* somewhere will inevitably bypass it.
>
>
>> C/C++ doesn't demonstrate that warnings are doomed to be 
>> useless and "always" ignored. What it demonstrates is that 
>> warnings are NOT an appropriate strategy for fixing language 
>> problems.
>
> Point.  I suppose YMMV, but IME unless warnings are enforced 
> with -Werror or equivalent, after a while people just stop 
> paying attention to them, at least where I work.  It's entirely 
> possible that it's a bias specific to my job, but somehow I 
> have a suspicion that this isn't completely the case.  Humans 
> tend to be lazy, and ignoring compiler warnings is rather high 
> up on the list of things lazy people tend to do. The likelihood 
> increases with the presence of other factors like looming 
> deadlines, unreasonable customer requests, ambiguous feature 
> specs handed down from the PTBs, or just plain having too much 
> on your plate to be bothering with "trivialities" like fixing 
> compiler warnings.
>
> That's why my eventual conclusion is that anything short of 
> enforcement will ultimately fail. Unless there is no way you 
> can actually get an executable out of badly-written code, there 
> will always be *somebody* out there that will write bad code. 
> And by Murphy's Law, that somebody will eventually be someone 
> in your team, and chances are you'll be the one cleaning up the 
> mess afterwards.  Not something I envy doing (I've already had 
> to do too much of that).
>
>
> [...]
>> The moral of this story: Sometimes, breaking people's code is 
>> GOOD! ;)
>
> Tell that to Walter / Andrei. ;-)
>
>
> [...]
>> > (Nevermind the elephant in the room that 80-90% of the 
>> > "optimizations" C/C++ coders -- including myself -- have 
>> > programmed into their finger reflexes are actually 
>> > irrelevant at best, because either compilers already do 
>> > those optimizations for you, or the hot spot simply isn't 
>> > where we'd like to believe it is; or outright de-optimizing 
>> > at worst, because we've successfully defeated the compiler's 
>> > optimizer by writing inscrutable code.)
>> 
>> C++'s fundamental paradigm has always been 
>> "Premature-optimization oriented programming". C++ promotes 
>> POOP.
>
> LOL!!
>
> Perhaps I'm just being cynical, but my current unfounded 
> hypothesis is that the majority of C/C++ programmers don't use 
> a profiler, and don't *want* to use a profiler, because they're 
> either ignorant that such things exist (unlikely), or they're 
> too dang proud to admit that their painfully-accumulated 
> preconceptions about optimization might possibly be wrong.

The likelihood of a randomly picked C/C++ programmer not even 
knowing what a profiler is, much less having used one, is 
extremely high in my experience. I worked with a lot of embedded 
C programmers with several years of experience who knew nothing 
but embedded C. We're talking dozens of people here. Not one of 
them had ever used a profiler. In fact, a senior developer (now 
tech lead) doubted I could make our build system any faster. I 
did, by two orders of magnitude. When I presented the result to 
him, he said in disbelief: "But, how? I mean, if it's doing 
exactly the same thing, how can it be faster?" Big O? Profiler? What are 
those? I actually stood there for a few seconds with my mouth 
open because I didn't know what to say back to him.

These people are also likely to raise concerns about performance 
during code review despite having no idea what a cache line is. 
They still opine that one shouldn't add another function call for 
readability because that'll hurt performance. No need to measure 
anything, we all know calling functions is bad, even when they're 
in the same file and the callee is `static`.
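
For what it's worth, checking a claim like that takes only a few 
lines of D. Here is a minimal sketch (the function names are made 
up) using Phobos' `benchmark` (in `std.datetime.stopwatch` in 
recent releases) to compare the straight-line loop against the 
version that calls a helper:

    import std.array : array;
    import std.datetime.stopwatch : benchmark;
    import std.range : iota;
    import std.stdio : writeln;

    int sumSquaresInline(const int[] xs) {
        int total = 0;
        foreach (x; xs)
            total += x * x;        // work done directly in the loop body
        return total;
    }

    // Small helper; in C this would be a file-local `static` function.
    int square(int x) { return x * x; }

    int sumSquaresWithCall(const int[] xs) {
        int total = 0;
        foreach (x; xs)
            total += square(x);    // extra call added "for readability"
        return total;
    }

    void main() {
        auto data = iota(1_000_000).array;
        int sink;                  // keep the results so the work isn't optimised away
        auto times = benchmark!(
            () { sink += sumSquaresInline(data); },
            () { sink += sumSquaresWithCall(data); }
        )(100);
        writeln(times);            // with an optimising backend the two are a wash
    }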

I think a lot of us underestimate just how bad the "average" 
developer is. A lot of them write C code, which is like giving 
chainsaws to chimpanzees.

> (And meanwhile, the mere mention of the two letters "G C" and 
> they instantly recoil, and rattle off an interminable list of [...]

That's cognitive dissonance: there's not much anyone can do about 
that. Unfortunately, facts don't matter, feelings do.

Atila

