Examples of DFA

Walter Bright newshound2 at digitalmars.com
Tue Sep 23 22:10:22 UTC 2025


On 9/23/2025 1:01 PM, Dennis wrote:
> On Tuesday, 23 September 2025 at 17:46:20 UTC, Walter Bright wrote:
>> Your algorithm is interesting, but I don't think it is fundamentally different 
>> from the one I sketched out.
> 
> I think both algorithms compute the same thing, but your algorithm represents 
> the graph with an adjacency matrix and does an inefficient search. The 
> algorithm should be O(V + E), with V = vertices = number of function 
> declarations, and E = edges = number of function calls.
> 
> That is, compile time should scale linearly with the amount of source code being 
> compiled.
> 
> Your algorithm does an O(V^2) traversal O(V) times, running in O(V^3) time. 
> That's a pathological worst-case scenario, but I'm not convinced the linear 
> algorithm's constant factor is so large that the O(V^3) approach ends up 
> more practical.

I don't think it is anywhere near that bad. It does more than O(V) work only 
when there are cycles.
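
To make the complexity argument concrete, here's a minimal sketch in D 
(hypothetical names, not dmd's actual code) of a depth-first walk over a call 
graph. Every function is entered once and every call edge followed once, so 
the walk is O(V + E); a cycle shows up as an edge back into a function that 
is still being processed.

enum Mark { unvisited, inProgress, done }

struct Func
{
    Func*[] callees;  // edges: the functions this one calls
    Mark mark;
    bool onCycle;     // set when a back edge into this function is found
}

void visit(Func* f)
{
    final switch (f.mark)
    {
        case Mark.done:
            return;              // already fully processed
        case Mark.inProgress:
            f.onCycle = true;    // back edge: f lies on a cycle
            return;
        case Mark.unvisited:
            f.mark = Mark.inProgress;
            foreach (c; f.callees)
                visit(c);
            f.mark = Mark.done;
            break;
    }
}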


> On a slight tangent, I'm not a fan of how dmd's performance culture seems to 
> be hyper-focused on local constant-time code optimizations that worsen code 
> quality (like bit-hacking and using global variables for reusable memory 
> buffers)

I benchmarked the reusable memory buffer approach long ago, and it was a big 
win. I don't know what you mean by bit-hacking.
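
For illustration, a rough sketch of the reusable-buffer pattern in D 
(hypothetical names, not dmd's actual code): one module-level buffer grows as 
needed and is reused across calls, so the common path allocates nothing.

private char[] tmpBuf;  // reused across calls; grows monotonically

const(char)[] decorate(const(char)[] name)
{
    if (tmpBuf.length < name.length + 1)
        tmpBuf.length = name.length + 1;    // grow only when needed
    tmpBuf[0] = '_';
    tmpBuf[1 .. name.length + 1] = name[];  // no allocation on the common path
    return tmpBuf[0 .. name.length + 1];    // valid only until the next call
}

The tradeoff is that the returned slice is invalidated by the next call, 
which is the sort of code-quality cost you're pointing at.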

My general approach is to make it work first, not worrying too much about 
algorithms. I've discovered that D makes it fairly easy to change algorithms, 
while C/C++ code tends to be a lot more brittle.

Recoding for performance later then has the advantage of a test suite already 
being in place.

> when 
> there are a number of glaring issues related to time complexity, which are 
> far more severe than a redundant allocation here and there could ever be.

They both matter.


> Whenever I investigate what slows down the compilation of real code, it's 
> usually because it triggers a bad case of one of the many quadratic or even 
> exponential algorithms in dmd's code. I'm talking:

I've discovered many such algorithms in my C++ compiler, and fixed them. I've 
been less diligent with D's source, mainly because I have too much on my plate. 
It's great that you're looking into this!

> - O(n^2) MSCoff section insertion (https://github.com/dlang/dmd/pull/16780)
> - O(n^2) Checking for missing/duplicated switch case statements (TODO)
> - O(2^n) Nested function sibling detection 
> (https://github.com/dlang/dmd/pull/14886)
> - O(2^n) Return type isolated from parameters check 
> (https://github.com/dlang/dmd/pull/12885)
> - O(n*m) CTFE execution because AA literals aren't memoized 
> (https://github.com/dlang/dmd/issues/20179)
> - O(2^n) Function inlining (https://github.com/dlang/dmd/issues/20315)
> - O(n^2) Loop optimizations (https://github.com/dlang/dmd/issues/20535)

This is all valuable stuff!
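
A common shape for these landmines, sketched generically in D below 
(hypothetical names, not the actual fixes from those PRs): building a sorted 
array by inserting into the middle costs O(n) per insertion and O(n^2) 
overall, while appending everything and sorting once at the end is O(n log n).

import std.algorithm.sorting : sort;

// Quadratic: each insertion shifts (copies) the tail of the array.
void insertSorted(ref int[] sections, int s)
{
    size_t i = 0;
    while (i < sections.length && sections[i] < s)
        ++i;
    sections = sections[0 .. i] ~ s ~ sections[i .. $];
}

// Linear appends (amortized O(1) each) plus one O(n log n) sort.
void appendThenSort(ref int[] sections, int[] incoming)
{
    sections ~= incoming;
    sections.sort();
}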


> A fast constant factor is nice, but let's not introduce more 'compilation time 
> landmines' into dmd's code base, and prefer linear algorithms over quadratic ones.

There's mathematical time, the O(whatever) thing, and there's the time spent 
in the detailed coding practice. When I worked on the Warp preprocessor, I 
spent a lot of time trying different algorithms and data structures, and 
benchmarking. It paid off not to prejudge which would be fastest.

Other things matter a lot, too, like cache-friendly data structures.
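
As a generic illustration (not dmd code): both loops below are O(n), but the 
array version walks contiguous memory while the list version chases a pointer 
per node, and on modern hardware that constant-factor difference can dominate.

struct Node { int value; Node* next; }

int sumList(const(Node)* head)  // O(n), but a potential cache miss per node
{
    int total = 0;
    for (auto p = head; p !is null; p = p.next)
        total += p.value;
    return total;
}

int sumArray(const(int)[] values)  // same O(n), but cache-friendly
{
    int total = 0;
    foreach (v; values)
        total += v;
    return total;
}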

I appreciate your efforts here; it is very valuable work you are doing!

