The Case Against Autodecode
Marc Schütz via Digitalmars-d
digitalmars-d at puremagic.com
Mon May 30 04:58:55 PDT 2016
On Saturday, 28 May 2016 at 12:04:20 UTC, Andrei Alexandrescu wrote:
> On 5/28/16 6:59 AM, Marc Schütz wrote:
>> The fundamental problem is choosing one of those possibilities
>> over the others without knowing what the user actually wants,
>> which is what both BEFORE and AFTER do.
>
> OK, that's a fair argument, thanks. So it seems there should be
> no "default" way to iterate a string, and furthermore iterating
> for each constituent of a string should be fairly rare. Strings
> and substrings yes, but not individual points/units/graphemes
> unless expressly asked. (Indeed some languages treat strings as
> first-class entities and individual characters are mere short
> substrings.)
>
> So it harkens back to the original mistake: strings should NOT
> be arrays with the respective primitives.
I think this is going too far. It's sufficient if they (= char
slices, not ranges) can't be iterated over directly, i.e. aren't
input ranges (and maybe don't work with foreach). That would
force the user to append .byCodeUnit etc. as needed.
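To illustrate the idea (a sketch of how such code might look once char slices stop being input ranges; byCodeUnit and byUTF already exist in std.utf, and byGrapheme in std.uni):

```d
import std.utf : byCodeUnit, byUTF;
import std.uni : byGrapheme;
import std.stdio : writeln;

void main()
{
    string s = "noël";

    // Raw UTF-8 code units, no decoding at all:
    foreach (c; s.byCodeUnit)
        writeln(cast(ubyte) c);

    // Decoded code points (what auto decoding does implicitly today):
    foreach (d; s.byUTF!dchar)
        writeln(d);

    // User-perceived characters:
    foreach (g; s.byGrapheme)
        writeln(g[]);
}
```

The point is that each loop states explicitly which unit of iteration the user wants, instead of the language silently picking code points for them.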
This also provides a very nice deprecation path; it's just not
clear whether it can be implemented with the way `deprecated`
currently works. I.e. deprecate/warn every time auto decoding
kicks in, print a helpful message to the user, and later remove
auto decoding and make isInputRange!string return false.
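As a rough sketch of what the first stage could look like (hypothetical, not actual Phobos code; the real auto-decoding front lives in std.range.primitives with a somewhat different signature):

```d
// Hypothetical stage-one deprecation: the auto-decoding range
// primitive for char slices is kept working but marked deprecated,
// so every implicit decode produces a compile-time warning.
deprecated("auto decoding will be removed; use .byCodeUnit, .byUTF or .byGrapheme to choose an iteration unit explicitly")
@property dchar front()(scope const(char)[] s)
{
    import std.utf : decode;
    size_t i = 0;
    return decode(s, i); // today's auto-decoding behaviour
}
```

A later stage would then delete the primitive entirely, at which point isInputRange!string becomes false and the remaining call sites fail to compile rather than warn.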