char array weirdness

Jack Stouffer via Digitalmars-d-learn digitalmars-d-learn at puremagic.com
Wed Mar 30 22:40:43 PDT 2016


On Wednesday, 30 March 2016 at 22:49:24 UTC, ag0aep6g wrote:
> When byCodeUnit takes no time at all, isn't 1µs infinite times 
> slower, instead of 100 times? And I think byCodeUnits's 1µs is 
> so low that noise is going to mess with any ratios you make.

It's not that it's taking no time at all; it's just that it's 
less than 1 hecto-nanosecond (hnsec, i.e. 100 ns), which is the 
smallest unit that benchmark works with.
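For reference, the hnsec is the tick size of core.time's Duration, so nothing finer can be reported. A quick check of the unit relationships:

```d
import core.time;

void main()
{
    // 1 hnsec ("hecto-nanosecond") is 100 ns, the smallest
    // representable Duration step in core.time
    assert(1.hnsecs == 100.nsecs);
    // 10 hnsecs make one microsecond
    assert(1.usecs == 10.hnsecs);
}
```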

Observe what happens when the times are no longer averaged. I 
also made some other changes to the script:

import std.datetime;
import std.stdio;
import std.array;
import std.utf;
import std.uni;

enum testCount = 1_000_000;

void test(char[] var)
{
     auto a = var.array;
}

void test2(char[] var)
{
     auto a = var.byCodeUnit.array;
}

void test3(char[] var)
{
     auto a = var.byGrapheme.array;
}

void main()
{
     import std.conv : to;
     import std.random : uniform;
     import std.utf : assumeUTF; // assumeUTF lives in std.utf, not std.string

     // random ASCII string (code points 33-125, all single code units)
     ubyte[] data;
     foreach (_; 0 .. 200)
     {
         data ~= cast(ubyte) uniform(33, 126);
     }

     auto result = to!Duration(benchmark!(() => test(data.assumeUTF))(testCount)[0]);
     auto result2 = to!Duration(benchmark!(() => test2(data.assumeUTF))(testCount)[0]);
     auto result3 = to!Duration(benchmark!(() => test3(data.assumeUTF))(testCount)[0]);

     writeln("auto-decoding", "\t\t", result);
     writeln("byCodeUnit", "\t\t", result2);
     writeln("byGrapheme", "\t\t", result3);
}

$ ldc2 -O3 -release -boundscheck=off test.d
$ ./test
auto-decoding		1 sec, 757 ms, and 946 μs
byCodeUnit		87 ms, 731 μs, and 8 hnsecs
byGrapheme		14 secs, 769 ms, 796 μs, and 6 hnsecs
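The element types explain the spread in those numbers. Auto-decoding decodes each char sequence to a dchar, byCodeUnit hands back the raw chars untouched, and byGrapheme allocates Grapheme clusters. A minimal sketch of the three levels (illustrative, not the benchmark code above):

```d
import std.range : ElementType;
import std.uni : byGrapheme, Grapheme;
import std.utf : byCodeUnit;

void main()
{
    char[] s = "hello".dup;

    // auto-decoding: ranging over a char[] yields decoded dchars
    static assert(is(ElementType!(char[]) == dchar));

    // byCodeUnit: yields the raw code units, no decoding work at all
    static assert(is(ElementType!(typeof(s.byCodeUnit)) == char));

    // byGrapheme: yields full grapheme clusters, the most expensive level
    static assert(is(ElementType!(typeof(s.byGrapheme)) == Grapheme));
}
```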

