Why is `scope` planned for deprecation?
via Digitalmars-d
digitalmars-d at puremagic.com
Tue Nov 18 03:15:27 PST 2014
On Tuesday, 18 November 2014 at 02:35:41 UTC, Walter Bright wrote:
> C is a brilliant language. That doesn't mean it hasn't made
> serious mistakes in its design. The array decay and 0 strings
> have proven to be very costly to programmers over the decades.
I'd rather say that it is the industry that has misappropriated
C, which in my view basically was "typed portable assembly" with
very little builtin presumptions by design. This is important
when getting control over layout, and this transparency is a
quality that only C gives me. BCPL might be considered to have
more presumptions (such as string length), being a minimal
"bootstrapping subset" of CPL.
You always had the ability in C to implement arrays as a variable-sized struct with a length and a trailing data section, so I'd say that C provided type-safe variable-length arrays. Many people don't use it. Many people don't know how to use it. Ok, but then they don't understand that they are programming in a low-level language and are responsible for creating their own environment. I think C's standard lib mistakenly created an illusion of high-level programming that the language only partially supported.
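For the record, the pattern I mean is what C99 later standardized as a "flexible array member": a struct carrying its own length followed by a trailing data section, allocated in one block. A minimal sketch (names are mine, not from any particular library):

```c
#include <stdlib.h>
#include <string.h>

/* A length-prefixed array: the struct owns its length and a
   trailing data section in the same allocation (C99 flexible
   array member; pre-C99 code used data[1] and over-allocated). */
typedef struct {
    size_t len;
    int    data[];   /* flexible array member, sized at malloc time */
} IntArray;

IntArray *intarray_new(size_t len) {
    /* One allocation covers the header and the elements. */
    IntArray *a = malloc(sizeof(IntArray) + len * sizeof(int));
    if (a) {
        a->len = len;
        memset(a->data, 0, len * sizeof(int));
    }
    return a;
}
```

Because the length travels with the data, bounds checks are possible without any language support; the cost is that you build and enforce the convention yourself.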
Adding the ability to pass structs by value as parameters was probably not worth the implementation cost at the time… Having a "magic struct/tuple" that transfers a length or end pointer along with the head pointer does not fit the C design. If added, it should have been done as a struct, and to make that work you would have to add operator overloading. There's an avalanche effect of features and additional language design issues there.
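Such a "magic struct" is essentially a fat pointer, the thing D's slices build into the language. In C you can only approximate it as a plain struct, and without operator overloading it never looks like a first-class array. A hypothetical sketch:

```c
#include <stddef.h>

/* A fat pointer: head pointer plus length, passed by value.
   Hypothetical names; this is a convention, not a C feature. */
typedef struct {
    int    *ptr;
    size_t  len;
} IntSlice;

/* Take a sub-view of an existing buffer [start, end). */
IntSlice slice(int *base, size_t start, size_t end) {
    IntSlice s = { base + start, end - start };
    return s;
}
```

Note that indexing still goes through `s.ptr[i]` rather than `s[i]` -- exactly the ergonomic gap that would have demanded operator overloading, and with it the avalanche of features mentioned above.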
I think K&R deserve credit for being able to say no and stay minimal, and I think the Go team deserves the same credit. As you've experienced with D, saying no is hard because there are often good arguments for features being useful, and it is difficult to say in advance with certainty what kind of avalanche effect adding a feature will have (in terms of semantics, special casing, new needs for additional support/features, and time to complete implementation/debugging). So saying no until practice shows that a feature is sorely missed is a sign of good language design practice.
The industry wanted portability and high speed and insisted on moving as a flock after C, and BLINDLY after C++. Seriously, the media frenzy around C++ was hysterical despite C++ being a bad design from the start. The C++ media noise was worse than with Java, IIRC. Media are incredibly shallow when they are trying to sell mags/books based on the "next big thing", and they can accelerate adoption beyond merits. C++ and Java are two good examples of that.
There were alternatives such as Turbo Pascal, Modula-2/3, Simula, Beta, ML, Eiffel, Delphi and many more. Yet programmers thought C was cool because it was "portable assembly", an "industry standard", "fast" and a "safe bet". So they were happy with it, because C compilers emitted fast code. And fast was more important to them than safe. Well, they got what they deserved, right?
Not adding features is not a design mistake if you try hard to stay minimal and don't claim to support high-level programming. The mistake is in using a tool as if it supports something it does not.
You might be right that K&R set the bar too high for adding extra features. Yet others might be right that D has been too willing to add features. As you know, the perfect balance is difficult to find and depends on the use context, so it materializes only after the fact (after implementation). And C's use context has expanded way beyond the original one, where people were not afraid to write assembly.
(But the incomprehensible type notation for function pointers was a design mistake, since that was a feature of the language.)
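To illustrate that last point: a function returning a function pointer is nearly unreadable in the raw declarator syntax, which is why everyone reaches for a typedef. A small sketch (function names are made up for the example):

```c
/* Two candidate operations. */
int add(int a, int b) { return a + b; }
int sub(int a, int b) { return a - b; }

/* Raw declarator syntax: pick is a function taking an int and
   returning a pointer to a function (int, int) -> int.
   You read it inside-out, which is the complaint. */
int (*pick(int which))(int, int) {
    return which ? add : sub;
}

/* The same thing tamed by a typedef. */
typedef int (*binop_t)(int, int);

binop_t pick2(int which) {
    return which ? add : sub;
}
```

Both declarations mean exactly the same thing; only the typedef version can be read left to right.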