The Trouble with MonoTimeImpl (including at least one bug)

Forest forest at example.com
Tue Apr 2 18:35:24 UTC 2024


I'm working on code that needs to know not only how much time has 
elapsed between two events, but also the granularity of the 
timestamp counter. (It's networking code, but I think granularity 
can be important in multimedia stream synchronization and other 
areas as well. I would expect it to matter in many of the places 
where using MonoTimeImpl.ticks() makes sense.)

For clarity, I will use "units" to mean the increments of the 
counter's integer value, and "steps" to mean the occasions on 
which that value actually advances. (A clock might, for example, 
report values in nanosecond units while only stepping every few 
milliseconds.)

POSIX exposes counter granularity as nanoseconds-per-step via 
clock_getres(), and MonoTimeImpl exposes its reciprocal 
(steps-per-second) via ticksPerSecond(). I'm developing on 
Linux, so this appeared at first to be sufficient for my needs.
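
For example, on a Linux box the two views can be read side by 
side. Here is a minimal sketch; it assumes a Linux environment 
and that ClockType.normal is backed by CLOCK_MONOTONIC (which I 
believe is how druntime maps it there):

    // Compare the POSIX view of granularity with MonoTime's.
    import core.sys.posix.time : CLOCK_MONOTONIC, clock_getres,
                                 timespec;
    import core.time : MonoTime;
    import std.stdio : writeln;

    void main()
    {
        // POSIX: nanoseconds per step of the counter.
        timespec res;
        clock_getres(CLOCK_MONOTONIC, &res);
        immutable nsPerStep =
            res.tv_sec * 1_000_000_000L + res.tv_nsec;
        writeln("clock_getres: ", nsPerStep, " ns per step");

        // MonoTime: the reciprocal, as "ticks" per second.
        writeln("ticksPerSecond: ", MonoTime.ticksPerSecond);
    }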

However, I discovered later that ticksPerSecond() doesn't return 
counter granularity on Windows or Darwin. On those platforms, it 
returns units-per-second instead: the precision of one unit, 
rather than of one step. This is problematic because:

- The function returns conceptually different information 
depending on the platform.
- The API offers the needed granularity information on only one 
platform.
- The API is confusing, because it uses the word "ticks" for two 
different concepts.
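
A minimal sketch of the hazard, given the behavior described 
above: code like this reads naturally as "nanoseconds per step", 
and on Linux that is what it computes, but on Windows and Darwin 
it computes the (much smaller) duration of one unit instead.

    import core.time : MonoTime;
    import std.stdio : writeln;

    void main()
    {
        // Intended as the counter's step size; actually the
        // duration of one unit on Windows/Darwin.
        immutable nsPerTick =
            1_000_000_000L / MonoTime.ticksPerSecond;
        writeln("apparent granularity: ", nsPerTick, " ns");
    }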


I think this has gone unnoticed due to a combination of factors:

- Most programs do only simple timing calculations that don't 
include a granularity term.

- There happens to be a 1:1 unit:step ratio in some common cases, 
such as on my Linux box when using MonoTime's ClockType.normal.

- On Windows and Darwin, MonoTimeImpl uses the same fine-grained 
source clock regardless of what ClockType is selected. It's 
possible that these clocks have a 1:1 unit:step ratio as well. 
(Unconfirmed; I don't have a test environment for these 
platforms, and I haven't found a definitive statement in their 
docs.)

- On POSIX, selecting ClockType.coarse ought to reveal the 
problem, but it turns out that ticksPerSecond() has a special 
case for clock steps >= 1us that silently discards the 
platform's clock_getres() result and uses a hard-coded value 
instead. (Bug #24446.) That value happens to yield a 1:1 
unit:step ratio, hiding the problem. (See the sketch just after 
this list.)
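
Here is a Linux-only sketch of that last case. It assumes 
core.sys.linux.time exposes CLOCK_MONOTONIC_COARSE; it prints 
the kernel's reported step size for that clock next to what 
druntime reports for ClockType.coarse.

    import core.sys.linux.time : CLOCK_MONOTONIC_COARSE;
    import core.sys.posix.time : clock_getres, timespec;
    import core.time : ClockType, MonoTimeImpl;
    import std.stdio : writeln;

    void main()
    {
        // The kernel's view of the coarse clock's step size
        // (often a few milliseconds, depending on CONFIG_HZ).
        timespec res;
        clock_getres(CLOCK_MONOTONIC_COARSE, &res);
        immutable nsPerStep =
            res.tv_sec * 1_000_000_000L + res.tv_nsec;
        writeln("kernel: ", nsPerStep, " ns per step");

        // druntime's view of the same clock. Per bug #24446,
        // this comes from a hard-coded value rather than from
        // clock_getres() when steps are >= 1us.
        alias Coarse = MonoTimeImpl!(ClockType.coarse);
        writeln("druntime: ", Coarse.ticksPerSecond,
                " ticks per second");
    }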


Potential fixes/improvements:

1. Give MonoTimeImpl separate functions for reporting 
units-per-second and steps-per-second (or some other 
representation of counter granularity, like units-per-step) on 
all platforms. (A hypothetical sketch of such a function follows 
this list.)

2. Remove the special case described in bug #24446. I suspect the 
author used that hard-coded value not because clock_getres() ever 
returned wrong data, but because they misunderstood what 
clock_getres() does. (Or, if *I* have misunderstood it, please 
enlighten me.)

3. Implement ClockType.coarse with an actually-coarse clock on 
all platforms that have one. This wouldn't solve the above 
problems, but it would give programmers access to a presumably 
more efficient clock and would allow them to avoid Apple's extra 
scrutiny/hoops for use of a clock that can fingerprint devices.
https://developer.apple.com/documentation/kernel/1462446-mach_absolute_time
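
To make fix #1 concrete, here is a hypothetical, Linux-only 
sketch. The name stepsPerSecond and the ClockType-to-clockid 
mapping are my own assumptions, not existing druntime API; the 
point is only the shape such a function could take and how 
elapsed-time code could use it.

    import core.sys.linux.time : CLOCK_MONOTONIC_COARSE;
    import core.sys.posix.time : CLOCK_MONOTONIC, clock_getres,
                                 timespec;
    import core.time : ClockType, MonoTimeImpl;
    import std.stdio : writeln;

    /// Hypothetical: steps-per-second of the underlying clock,
    /// taken from clock_getres().
    long stepsPerSecond(ClockType type)()
    {
        static if (type == ClockType.coarse)
            enum clockId = CLOCK_MONOTONIC_COARSE;
        else
            enum clockId = CLOCK_MONOTONIC; // assumed mapping

        timespec res;
        clock_getres(clockId, &res);
        immutable nsPerStep =
            res.tv_sec * 1_000_000_000L + res.tv_nsec;
        return nsPerStep > 0
            ? 1_000_000_000L / nsPerStep
            : 1_000_000_000L;
    }

    void main()
    {
        alias Coarse = MonoTimeImpl!(ClockType.coarse);

        // Existing API: resolution of the stored value.
        writeln("units/s: ", Coarse.ticksPerSecond);

        // Proposed-style value: how often the value advances.
        immutable steps = stepsPerSecond!(ClockType.coarse)();
        writeln("steps/s: ", steps);

        // With the step rate known, a measurement can carry an
        // uncertainty of about one step.
        writeln("uncertainty: ", 1_000_000_000L / steps, " ns");
    }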


Open questions for people who use Win32 or Darwin:

Does Win32 have an API to get the granularity (units-per-step or 
steps-per-second) of QueryPerformanceCounter()?

Does Darwin have such an API for mach_absolute_time()?

If the unit:step ratios of the Win32 and Darwin clocks are always 
1:1, is that clearly documented somewhere official?

Does either of those platforms offer a coarse monotonic clock?



