The Case For Autodecode

Steven Schveighoffer via Digitalmars-d digitalmars-d at puremagic.com
Fri Jun 3 05:16:50 PDT 2016


On 6/3/16 7:24 AM, ag0aep6g wrote:
> This is mostly me trying to make sense of the discussion.
>
> So everyone hates autodecoding. But Andrei seems to hate it a good bit
> less than everyone else. As far as I could follow, he has one reason for
> that, which might not be clear to everyone:

I don't hate autodecoding. What I hate is that char[] autodecodes.

If strings were some auto-decoding type that wasn't immutable(char)[],
that would be absolutely fine with me. In fact, I see this as the only
way to fix the problem, since introducing a new type shouldn't break
any existing code.
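
Something along these lines, say (purely a sketch of the idea; the
name DecodedString is invented here and isn't something that exists
in Phobos):

import std.utf : byDchar;

// Sketch only: a distinct string type whose iteration decodes,
// leaving immutable(char)[] alone.
struct DecodedString
{
    immutable(char)[] raw;

    // Iterating the wrapper yields decoded code points (dchar),
    // while indexing raw still gives code units.
    auto opSlice() { return raw.byDchar; }
}

unittest
{
    import std.algorithm.searching : canFind;

    auto s = DecodedString("ö");
    assert(s[].canFind('ö')); // found, because the search sees code points
}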

> char converts implicitly to dchar, so the compiler lets you search for a
> dchar in a range of chars. But that gives nonsensical results. For
> example, you won't find 'ö' in  "ö".byChar, but you will find '¶' in
> there ('¶' is U+00B6, 'ö' is U+00F6, and 'ö' is encoded as 0xC3 0xB6 in
> UTF-8).
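
Concretely, that behavior looks something like this (byChar from
std.utf, canFind from std.algorithm.searching):

import std.algorithm.searching : canFind;
import std.utf : byChar;

void main()
{
    // "ö" is the code units 0xC3 0xB6; neither equals U+00F6.
    assert(!"ö".byChar.canFind('ö'));
    // But the trailing code unit 0xB6 compares equal to '¶' (U+00B6).
    assert("ö".byChar.canFind('¶'));
}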

Question: why couldn't the compiler emit (in non-release builds) a
runtime check to make sure you aren't converting non-ASCII code units
to dchars? That is, like array bounds checking, but for char -> dchar
conversions, or any other similarly invalid conversion?
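
In spirit, something like this at each conversion site (just a sketch
of the idea; checkedToDchar is an invented name, there's no such hook
in the compiler today):

// What the compiler could conceptually insert in non-release builds
// for every implicit char -> dchar conversion.
dchar checkedToDchar(char c)
{
    // A char above 0x7F is a fragment of a multi-byte UTF-8 sequence,
    // so treating it as a code point is almost certainly a bug.
    assert(c <= 0x7F, "converting non-ASCII code unit to dchar");
    return c;
}

unittest
{
    assert(checkedToDchar('a') == 'a');    // ASCII is fine
    // checkedToDchar(cast(char) 0xC3) would trip the assert.
}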

Yep, it's going to kill a lot of performance. But it's going to catch a 
lot of problems.

One thing to point out here is that autodecoding only happens on
arrays of char and wchar (narrow strings), and even then only in
certain cases.
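
For example, Phobos' range primitives decode narrow strings, while
indexing and .length still work on code units:

import std.range.primitives : front;

void main()
{
    string s = "ö";

    // Range primitives autodecode arrays of char: front is a dchar.
    static assert(is(typeof(s.front) == dchar));

    // Plain indexing still sees code units, no decoding happens.
    static assert(is(typeof(s[0]) == immutable(char)));
    assert(s.length == 2); // two UTF-8 code units, one code point
}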

-Steve

