Resolution of core.time.Duration...

Steven Schveighoffer schveiguy at yahoo.com
Tue May 17 06:25:39 PDT 2011


On Tue, 17 May 2011 06:15:54 -0400, Alexander <aldem+dmars at nk7.net> wrote:

> ...why is it in hnsecs? I know that this resolution is used in the Win32
> API (file time), but since TickDuration may have 1 ns resolution, wouldn't
> it be better to store Duration at the maximum (defined so far) resolution?

If you use hnsecs, you get a SysTime range of about -30k to +30k years.
That might seem like overkill, but consider that even going to 10-nanosecond
ticks reduces the range to -3k to +3k years.  The problem is that nobody is
likely to care about the extra factor of 10 in precision, but losing 27,000
years on each end is pretty significant.  It seems like a no-brainer to me.

Straight nanoseconds are not possible, because we couldn't even represent
the current date with them: a signed 64-bit nanosecond count only covers
about +/- 292 years from the epoch.
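As a quick sanity check, here's a small sketch (mine, not anything from
std.datetime) that works out the range a signed 64-bit tick count gives at
each resolution:

import std.stdio;

void main()
{
    // ticks per second at hnsec, 10ns, and 1ns resolution
    foreach (tps; [10_000_000L, 100_000_000L, 1_000_000_000L])
    {
        // 31_557_600 seconds in a Julian year (365.25 days)
        writefln("%s ticks/sec => +/- %s years",
                 tps, long.max / tps / 31_557_600);
    }
}

That prints roughly 29227, 2922, and 292 years, matching the numbers above.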

Plus, we have good precedent: both Microsoft (the Win32 file time you
mention) and Tango use that tick size.  It's a natural choice.

> Especially because Duration may not hold long intervals (> months) - so  
> there is no problem with overflow.

A Duration is the result of subtracting two SysTimes, each of which uses
hnsecs as its tick, so yes, there is a problem with overflow if you use a
finer resolution.
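For instance, something like this has to fit the difference of two hnsec
tick counts into a Duration:

import std.datetime;  // publicly imports core.time's Duration

void example()
{
    auto then = SysTime(DateTime(2011, 5, 17));
    auto now  = Clock.currTime();
    Duration elapsed = now - then;  // difference of two hnsec tick counts
}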

> Thread.sleep() accepts Duration (or hnsec) as an argument, while system  
> resolution is higher, and on some systems it is even possible that it  
> can sleep less than 100ns.

The minimum sleep time for a thread is one clock period.  If your OS is
context switching more than once per 100ns, it is going to be doing nothing
but context switching; processors just aren't fast enough to keep up with
that (and likely never will be).  100ns is a reasonable resolution for
sleep requests.
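In other words, with the current API you can ask for a 100ns sleep, but the
actual delay is quantized up to the scheduler's clock period:

import core.thread;
import core.time;

void example()
{
    // Requests one hnsec (100ns) of sleep; the OS rounds this up to
    // at least one clock period, typically on the order of milliseconds.
    Thread.sleep(dur!"hnsecs"(1));
}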

Real time applications may require more precise timing, but you would  
likely need a separate API for that.

> SysTime is also kept in hnsecs, while resolution of system time (on  
> Linux at least) is 1ns. Sure, in case of SysTime it is all bound to  
> overflow, but it depends how value is stored - if we split seconds and  
> nanoseconds, it will be fine.

Again, the resolution of the *structure* may be nsecs, but the actual tick
interval you have access to is about 4ms on Linux (see
http://en.wikipedia.org/wiki/Jiffy_(time) ).

If it makes you feel better to use higher-resolution timing, the facilities
are there; just use the C system calls.
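For example, on Linux you can go through druntime's POSIX bindings (a
sketch; the exact module that declares CLOCK_MONOTONIC may vary by druntime
version):

version (linux)
{
    import core.sys.posix.time;

    void example()
    {
        timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        // tv_sec/tv_nsec give a value with 1ns granularity, even
        // though the clock itself advances in coarser steps.
    }
}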

> Additionally, when accepting long values as an argument for duration it
> is more logical to use SI units :)

I agree that when accepting a long as an alternative to Duration, it makes
sense to use a more standard tick resolution.  The chances of someone
wanting their process to sleep for more than 300 years (roughly the cap at
nanosecond resolution) are pretty small.  This might be a worthwhile change.

I'm not sure how much code this might affect, though.  It would be plenty
disturbing if your code started sleeping for 100ms instead of the 10s you
thought you requested (the same long value, 100_000_000, read as
nanoseconds instead of hnsecs).  What might be a good path is to disable
the functions that accept a long for a few releases, then reinstate them
with the new meaning.
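Sketched out (hypothetical; "sleep" here stands in for whichever functions
are affected):

import core.time;

// Phase 1 (a few releases): the long overload is disabled, pushing
// callers to the Duration overload.
deprecated void sleep(long period);
void sleep(Duration period);

// Phase 2: reintroduce the long overload with the new unit.
// void sleep(long nanoseconds);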

-Steve

