does D already have too many language features?
Nick Sabalausky (Abscissa)
SeeWebsiteToContactMe at semitwist.com
Sun Apr 14 06:01:23 UTC 2019
On 4/11/19 4:41 PM, H. S. Teoh wrote:
> On Thu, Apr 11, 2019 at 03:00:44PM -0400, Nick Sabalausky (Abscissa) via Digitalmars-d wrote:
>> On 4/11/19 2:45 PM, Nick Sabalausky (Abscissa) wrote:
>>> Put simply: The question itself is flawed.
>>>
>>> A language is a workshop.
>>>
>>
>> Another way to look at it:
>>
>> English has all sorts of words that are rarely, if ever, used. But
>> does anybody really complain that English has too many words? No.
>
> Second-language learners would object, but OK, I get your point. :-P
>
Ha ha, true, good point :). But I'd say that applies to ANY non-primary
human-to-human language.
There are other languages I've tried to learn - turns out, I'm *HORRIBLE*
at both memorization in general and at learning human-only languages.
With the one-two combo of both great personal effort and great personal
interest, I've managed to pick up a TINY bit of "nihongo" (which I take
FAR too much pride in, given the minuscule extent of my still sub-fluent
ability). Much as I like to pride myself on my ability to learn computer
languages, I'm completely convinced I would NEVER have been able to gain
any level of fluency in English if I hadn't been born into an
English-speaking culture.
Seriously, you don't even know how much respect and admiration I have
for the ESL crowd - those folks who actually manage to learn any
functional amount of this completely insane, nonsensical, absolutely
ridiculous language as a secondary language. To me, the biggest
real-world "superheroes"/"superpowers" are the bilingual/multilingual
folk, no doubt about it. Seriously, I feel like I outright *CHEATED* by
being born into an English-speaking culture!!!
>> Does the presence of all those extra unused words get in the way of
>> millions of people successfully using the language to communicate
>> ideas on a daily basis? No.
>
> Exactly, there's a core of ideas that need to be communicated, and a
> language needs to have at least that set of basic vocabulary in order to
> be useful. One could argue that that's all that's needed -- if you're
> going for language minimalism. OTOH, having a way to express more
> advanced concepts beyond that basic vocabulary comes in handy when you
> need to accomplish more advanced tasks.
Yup! Two basic truths are relevant here:
A: Possessing a tool doesn't mean you have to use it, or even know how
to use it. But lacking a tool GUARANTEES that you CAN'T use it.
B: Problems have inherent complexity. This complexity can either be
abstracted away by your language (or lib) or manifest in your code -
your choice.
> Sure, any old Turing-complete
> language is in theory sufficient to express any conceivable computation,
> but the question is how effectively it can be used to communicate the
> ideas pertaining to that computation. I wouldn't want to write a GUI
> app in lambda calculus, for example.
Actually, I find even that to give Turing-completeness FAR too much
credit...
A true Turing machine is incapable of O(1) random access - its random
access is inherently O(n). Worse, a Turing machine can't manage ANYTHING
better than O(n) for random access: just reaching cell n requires
walking the head across n cells, one at a time.
But real-world computing machines (arguably) ARE capable of O(1)
random-access. Or at the *very least*, accounting for cache effects and
such, real-world machines are *at least* capable of better-than-O(n)
random-access, which is clearly far beyond the capabilities of a pure
Turing machine.
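The head-walking cost is easy to see in a toy simulation. Here's a minimal sketch (in Python, since the point is about machines in general rather than D specifics) with a hypothetical `TuringTape` class where every random access must be paid for in single-cell head moves, contrasted with an ordinary array:

```python
class TuringTape:
    """A one-dimensional tape whose head moves one cell per step."""

    def __init__(self, cells):
        self.cells = list(cells)
        self.head = 0    # current head position
        self.steps = 0   # total single-cell moves performed so far

    def read(self, index):
        # The head must physically walk to `index`, one cell at a time,
        # so the cost of a "random" access is the distance traveled.
        self.steps += abs(index - self.head)
        self.head = index
        return self.cells[index]

n = 1_000_000
tape = TuringTape(range(n))
value = tape.read(n - 1)   # head walks n-1 cells to get there: O(n)
print(tape.steps)          # 999999

# A real machine's RAM (cache effects aside) reaches the same
# element directly, with no walk - the better-than-O(n) access a
# pure Turing machine simply cannot offer.
data = list(range(n))
value2 = data[n - 1]
```

The sketch deliberately ignores the tape's role in storing intermediate state; it only models the head-movement cost, which is the part a Turing machine cannot escape.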
Seriously, I'm convinced *all* CS students should be absolutely REQUIRED
to take a mandatory course of "CS nnn: Everything Turing-Completeness
Does **NOT** Imply". I've come across FAAAARRRR TOO MANY otherwise
well-educated sheeple who erroneously seem to equate
"Turing-completeness" with "Real-world computer capabilities", and that
is just...patently...NOT...TRUE!!!
The truth is that "Turing complete" (much like the incredibly misleading
LR-parsing literature) is *purely* focused on "Can this be computed AT
ALL?" and has ZERO relationship to anything else that actually matters
*IN REALITY*, such as algorithmic complexity (i.e., big-O) and anything
relating to the practical usefulness of the resulting data. But...no CS
student in the world seems to know *ANY* of this. Shame. For SHAME...
>
> But how much to include and what to exclude is a complex question that,
> AFAICT, has no simple answer. Therein lies the rub.
>
> Personally, I see a programming language as a kind of (highly)
> non-linear, vector-space-like thing. You have the vast space of
> computations, and you need to find the "basis vectors" that can span
> this space (primitives that can express any conceivable computation).
> There are many possible such basis sets, but some are easier for humans
> to work with than others. Theoretically, as long as the set is Turing
> complete, that's good enough. However, amid the vast space of all
> possible computations, some computations are more frequently needed than
> others, and thus, in the ideal case, your basis set should be optimized
> for this frequently-used subspace of computations (without compromising
> the ability to span the entire space). However, this is a highly
> non-trivial optimization problem, esp. because this is a highly
> non-linear space. And one that not everyone will agree on, because the
> subset of computations that each person may want to span will likely
> differ from person to person. Finding a suitable compromise that works
> for most people (ideally all, but I'm not holding my breath for that
> one) is an extremely hard problem.
>
These are interesting ideas. I'll have to give them more thought...
>
>> Certainly there ARE things that hinder effective communication through
>> English. Problems such as ambiguity, overly-weak definitions, or
>> LACKING sufficient words for an idea being expressed. But English's
>> toolbox (vocabulary) being too full isn't really such a big problem.
>
> Actually, natural language is surprisingly facile, expressive, and
> efficient at everyday conversations, because it's built upon the human
> brain's exceptional ability to extract patterns (sometimes even where
> there aren't any :-P) and infer meaning implied from context -- the
> latter being something programming languages are still very poor at.
> Even when a language doesn't have adequate words to express something,
> it's often possible to find paraphrases that can. Which, if it becomes
> widespread, can become incorporated into the language along with
> everything else.
Excellent points...Frankly, I'm going to have to re-read all of this
after some sleep...