string to character code hex string

Moritz Maxeiner via Digitalmars-d-learn
digitalmars-d-learn at puremagic.com
Sat Sep  2 11:28:02 PDT 2017

On Saturday, 2 September 2017 at 18:07:51 UTC, bitwise wrote:
> On Saturday, 2 September 2017 at 17:45:30 UTC, Moritz Maxeiner 
> wrote:
>> 
>> If this (unnecessary waste) is of concern to you (and from the 
>> fact that you used ret.reserve I assume it is), then the easy 
>> fix is to use `sformat` instead of `format`:
>>
>
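For illustration, a minimal sketch of that `sformat` suggestion, writing into a preallocated buffer of two output chars per input byte (the function name and the per-character "%02x" formatting are assumptions for the sketch, not from the quoted post):

---
import std.format : sformat;
import std.exception : assumeUnique;

string toAsciiHexFormatted(string str)
{
    auto buf = new char[str.length * 2];
    size_t i = 0;
    foreach (c; str)
    {
        // sformat writes into the provided slice; no per-call allocation
        sformat(buf[i .. i + 2], "%02x", c);
        i += 2;
    }
    return buf.assumeUnique;
}
---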
> Yes, thanks. I'm going to go with a variation of your approach:
>
> private
> string toAsciiHex(string str)
> {
>     import std.ascii : lowerHexDigits;
>     import std.exception: assumeUnique;
>
>     // two hex digits per input byte
>     auto ret = new char[str.length * 2];
>     size_t i = 0;
>
>     foreach(c; str) {
>         ret[i++] = lowerHexDigits[(c >> 4) & 0xF];
>         ret[i++] = lowerHexDigits[c & 0xF];
>     }
>
>     // the buffer is never aliased elsewhere, so it can
>     // safely be treated as an immutable string
>     return ret.assumeUnique;
> }
If you never need the individual character function, that's 
probably the best in terms of readability, though with a decent 
compiler, this and the two-function version should result in the 
same machine code (except for the order of the bitshift and 
bitmask).
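A minimal sketch of what that two-function variant could look like (the helper name hexEncode is illustrative):

---
import std.ascii : lowerHexDigits;
import std.exception : assumeUnique;

// encode a single byte as two lowercase hex digits
private void hexEncode(char c, char[] dst)
{
    assert(dst.length >= 2);
    dst[0] = lowerHexDigits[(c >> 4) & 0xF];
    dst[1] = lowerHexDigits[c & 0xF];
}

private string toAsciiHex(string str)
{
    auto ret = new char[str.length * 2];
    foreach (i, c; str)
        hexEncode(c, ret[i * 2 .. i * 2 + 2]);
    return ret.assumeUnique;
}
---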
>
> I'm not sure how the compiler would mangle UTF8, but I intend 
> to use this on one specific function (actually the 100's of 
> instantiations of it).
In UTF-8:
--- utfmangle.d ---
void fun_ༀ() {}
pragma(msg, fun_ༀ.mangleof);
-------------------
---
$ dmd -c utfmangle.d
_D9utfmangle7fun_ༀFZv
---
Note, though, that only universal alphas (as defined in ISO/IEC 
9899:1999 Annex D) are allowed in identifiers, as per [1].

[1] https://dlang.org/spec/lex.html#identifiers
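
To tie this back to the hex-encoding question, a sketch of how the two parts could be combined (assuming the toAsciiHex function from the quoted code above is in scope):

---
void fun_ༀ() {}

void main()
{
    import std.stdio : writeln;
    // '_' is 0x5f and 'D' is 0x44, so the output begins with "5f44"
    writeln(fun_ༀ.mangleof.toAsciiHex);
}
---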