The Case Against Autodecode
H. S. Teoh via Digitalmars-d
digitalmars-d at puremagic.com
Thu Jun 2 23:46:41 PDT 2016
On Thu, Jun 02, 2016 at 04:29:48PM -0400, Andrei Alexandrescu via Digitalmars-d wrote:
> On 06/02/2016 04:22 PM, cym13 wrote:
> >
> > A:“We should decode to code points”
> > B:“No, decoding to code points is a stupid idea.”
> > A:“No it's not!”
> > B:“Can you show a concrete example where it does something useful?”
> > A:“Sure, look at that!”
> > B:“This isn't working at all, look at all those counter-examples!”
> > A:“It may not work for your examples but look how easy it is to
> > find code points!”
>
> With autodecoding all of std.algorithm operates correctly on code points.
> Without it all it does for strings is gibberish. -- Andrei
With ASCII strings, all of std.algorithm operates correctly on ASCII
bytes. So let's standardize on ASCII strings.
What a vacuous argument! Basically, you're saying "I define operating on
code points to be correct. Therefore, I conclude that decoding to code
points is correct." Well, duh. Unfortunately, such circular conclusions
have no bearing on real-world Unicode handling, where the unit users
actually care about is the grapheme, not the code point.
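
To make that concrete, here's a minimal sketch of the kind of
counter-example at issue (assuming std.uni.byGrapheme and
std.algorithm.count from current Phobos; the string is just an
illustration): counting "characters" by code point still gets combining
sequences wrong, so autodecoding only buys correctness in the circular
sense above.

import std.algorithm : count;
import std.stdio : writeln;
import std.uni : byGrapheme;

void main()
{
    // "é" written as 'e' plus a combining acute accent (U+0301):
    // one user-perceived character, two code points, three UTF-8 code units.
    string s = "e\u0301";

    writeln(s.length);           // 3 UTF-8 code units
    writeln(s.count);            // 2 code points -- what autodecoding iterates
    writeln(s.byGrapheme.count); // 1 grapheme -- what a user calls "one character"
}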
T
--
I am Ohm of Borg. Resistance is voltage over current.