Request for comments: std.d.lexer

FG home at fgda.pl
Wed Jan 30 02:38:39 PST 2013


On 2013-01-30 10:49, Brian Schott wrote:
> If my math is right, that means it's getting 4.9 million tokens/second now.
> According to Valgrind the only way to really improve things now is to require
> that the input to the lexer support slicing. (Remember the secret of Tango's XML
> parser...) The bottleneck is now on the calls to .idup to construct the token
> strings from slices of the buffer.

Yep. I'm eager to see what timings you get with the whole file kept in memory 
and sliced, without copying token strings.
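In sketch form, the idea is to keep each token's text as a slice into the
source buffer instead of .idup'ing it into a fresh allocation. The Token
struct and lexWords function below are just illustrative, not the actual
std.d.lexer API:

    import std.stdio;

    struct Token
    {
        string value;   // aliases the source buffer -- no copy made
        size_t offset;  // position of the token in the source
    }

    // Hypothetical word-splitting "lexer": every token is a slice of
    // `source`, so `source` must outlive all the tokens it produced.
    Token[] lexWords(string source)
    {
        Token[] tokens;
        size_t i = 0;
        while (i < source.length)
        {
            while (i < source.length && source[i] == ' ') i++;  // skip spaces
            size_t start = i;
            while (i < source.length && source[i] != ' ') i++;  // scan word
            if (i > start)
                tokens ~= Token(source[start .. i], start);  // slice, no .idup
        }
        return tokens;
    }

    void main()
    {
        auto src = "int x = 42 ;";
        foreach (t; lexWords(src))
            writefln("%s @ %s", t.value, t.offset);
    }

The trade-off is that every token pins the entire source buffer for as long
as any token is alive, which is exactly why the lexer would have to require
input that supports slicing in the first place.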

