Random string samples & unicode - Reprise
Daniel Gibson
metalcaedes at gmail.com
Mon Sep 13 04:14:29 PDT 2010
On Mon, Sep 13, 2010 at 4:50 AM, bearophile <bearophileHUGS at lycos.com> wrote:
> Jonathan M Davis:
>
>> It's not necessarily a bad idea,
>
> I don't know if it's a good idea.
>
>
>> but I'm not sure that we want to encourage code
>> that assumes ASCII. It's far too easy for English-speaking programmers to
>> make that assumption in their code and then run into problems later when
>> Unicode characters unexpectedly turn up in their input, or when they have
>> to change their code to work with Unicode.
>
> On the other hand, there are situations where you know you are dealing only with digits, or with a few predetermined symbols like ()+-*/", or where you are processing very large biological strings composed of a restricted set of ASCII characters.
>
> Bye,
> bearophile
>
Can't you just use byte[] for that? If you're 100% sure your string
contains only ASCII characters, you can cast it to byte[], feed that
into your algorithms, and cast it back to char[] afterwards, I guess.
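That cast-based approach might look like the following minimal sketch in D. Note that idiomatic D uses ubyte[] rather than byte[] for this, since char is an unsigned type; the sample string and the counting step are illustrative assumptions, not part of the original post, and the cast is only safe if the string really is pure ASCII.

```d
import std.stdio;
import std.algorithm : count;

void main()
{
    string s = "2+3*4";  // assumed to contain only ASCII characters

    // Reinterpret the string as raw bytes; no copy is made, and no
    // UTF-8 decoding happens on access.
    immutable(ubyte)[] bytes = cast(immutable(ubyte)[]) s;

    // Byte-oriented processing, e.g. counting the '+' operators.
    auto plusses = bytes.count(cast(ubyte) '+');
    writeln(plusses);

    // Cast back once the byte-level processing is done.
    string t = cast(string) bytes;
    assert(t is s);
}
```

Because the cast only reinterprets the array, algorithms operating on the ubyte[] view see one element per byte, sidestepping the per-code-point decoding that range operations on char[] would otherwise do.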
Cheers,
- Daniel
More information about the Digitalmars-d mailing list