Would it be possible to have something like this for D?
Max Haughton
maxhaton at gmail.com
Fri Aug 23 16:22:01 UTC 2019
On Friday, 23 August 2019 at 08:33:49 UTC, JN wrote:
> On Thursday, 22 August 2019 at 10:30:59 UTC, bauss wrote:
>> Rust has:
>> https://perf.rust-lang.org/
>>
>> I think it would be very beneficial to have something similar
>> for D.
>
> The charts look very pretty, but I am skeptical of their added
> value. Even if we had such graphs for D, someone would still have
> to act on them. Suppose someone commits a change that degrades
> performance: someone has to notice that the graph went down, and
> an issue has to be reported. The likely response would be "yes,
> but this change is very important and I am not sure why it
> affects performance this badly", causing performance regressions
> to stack up over time.
>
> At the place where I work we build a lot of performance
> dashboards like this one, and I know from experience that metrics
> like instruction counts are too vague to really guide people.
> Wall time would mostly work, but then you need a benchmark long
> enough that any performance drop is large enough to sound an
> alarm.
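To make that wall-time point concrete: the harness itself doesn't
need to be elaborate, it mostly needs repetition and a
noise-resistant statistic. A minimal sketch in D (the workload and
run count are placeholders, not a proposal for the actual suite):

    import std.algorithm.sorting : sort;
    import std.datetime.stopwatch : AutoStart, StopWatch;
    import std.stdio : writefln;

    long sink; // printed below so the optimiser can't drop the work

    long workload()
    {
        // Stand-in benchmark body; a real harness runs a real workload.
        long s = 0;
        foreach (i; 0L .. 20_000_000L)
            s += i * i;
        return s;
    }

    void main()
    {
        enum runs = 30;
        long[runs] msecs;
        foreach (ref t; msecs)
        {
            auto sw = StopWatch(AutoStart.yes);
            sink += workload();
            t = sw.peek.total!"msecs";
        }
        sort(msecs[]);
        // Median and min are far less noise-sensitive than the mean
        // on shared hardware.
        writefln("median %s ms, min %s ms (sink=%s)",
                 msecs[runs / 2], msecs[0], sink);
    }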
I proposed a SAOC project along these lines, and one of the things
I was envisaging was to test builds not only against master
(LDC/gdc/DMD, whatever) but also against different versions of the
compiler, so we can see very clearly if something regresses in
terms of performance (and fire an alert in some way; the check
itself can be trivial, as sketched below).
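A rough sketch of the kind of check I mean, in D, where the
compiler names, timings, and 5% tolerance are all invented for
illustration:

    import std.stdio : writefln;

    struct Sample { string compiler; string benchmark; double msecs; }

    bool regressed(Sample baseline, Sample current, double tolerance = 0.05)
    {
        // Only slowdowns beyond the noise margin count; a speedup
        // simply becomes the new baseline.
        return current.msecs > baseline.msecs * (1.0 + tolerance);
    }

    void main()
    {
        auto before = Sample("dmd-2.087", "fib", 120.0);
        auto after  = Sample("dmd-master", "fib", 131.0);
        if (regressed(before, after))
            writefln("regression: %s on %s: %.1f ms -> %.1f ms",
                     after.compiler, after.benchmark,
                     before.msecs, after.msecs);
    }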
As for runtime performance, it's slightly more problematic
because, AFAIK, performance in a cloud instance isn't very
consistent. My idea is to time the benchmarks (for a baseline) but
also to point various kinds of code analysers and profilers at
them, so we can hopefully find out why something regressed. This
would ideally go from the level of simple function profiling down
to more detailed heap and cache measurements.
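For the first two levels, DMD already ships the pieces: -profile
writes function timings to trace.log, and -profile=gc writes
per-call-site GC allocation profiles to profilegc.log. For the
cache level, one option would be for the harness to drive Linux
perf itself; a rough sketch, assuming perf is installed and
"./bench" stands in for the benchmark binary:

    import std.array : split;
    import std.process : executeShell;
    import std.stdio : writefln;
    import std.string : lineSplitter, strip;

    void main()
    {
        // "perf stat -x," prints machine-readable CSV lines of the
        // form value,unit,event,... on stderr, hence the 2>&1 redirect.
        auto r = executeShell(
            "perf stat -x, -e instructions,cache-misses ./bench 2>&1");
        foreach (line; r.output.lineSplitter)
        {
            auto f = line.strip.split(",");
            if (f.length >= 3 && f[2].length)
                writefln("%s = %s", f[2], f[0]);
        }
    }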