Iterators and Ranges: Comparing C++ to D to Rust
Petar
Tue Jun 15 16:21:07 UTC 2021
On Tuesday, 15 June 2021 at 13:25:23 UTC, Paulo Pinto wrote:
>>
>> The real problem is when the safe code is not fast enough and
>> people rewrite it in an unsafe language.
>
> Actually the real problem is that people think it is not fast
> enough, based on urban myths without touching any kind of
> profiling tools, or measuring it against the project hard
> deadlines.
I agree that's a factor. However, in one case you can prove the
myths wrong with real data, while in the other the data is given
as a reason to take the low-level/unsafe route.
> More often than not, it is already fast enough, unless we are
> talking about winning micro-benchmarks games.
Perhaps you feel the need to push back on some prevailing
Reddit/Hacker News misconceptions, but I'm referring to real-world
cases. For example, how is it possible that, on the same computer,
switching between two Slack channels takes 3-4 seconds, while
demanding AAA games from 2-3 years ago run just fine? Unless we're
living in different universes, you must have noticed that an
increasing number of bloatware apps are slower and jankier than
ever, without any corresponding increase in functionality. I'm not
saying that the use of a tracing GC is the problem or anything of
the sort. Oftentimes there are many small inefficiencies (each
small enough to be lost in the noise of a profiler trace) that,
taken as a whole, accumulate and degrade the perceived user
experience.
Also, I don't know about you, but since you often talk about .NET
and UWP: in the past I've worked full-time on WPF/SL/UWP control
libraries and MVVM apps. For a pure app developer, the profiler is
often not that helpful once most of the CPU time is spent inside
the framework, after they've async-ified their code and moved all
heavy computation off the UI thread. Back then (before .NET Core
was even a thing) I used to spend a lot of time using decompilers
and later browsing
[referencesource.microsoft.com](https://referencesource.microsoft.com/#PresentationFramework) to understand where the inefficiencies lay. In the end, the solution (as confirmed by both benchmarks and perceived application performance) was often to rewrite the code to avoid high-level constructs like `DependencyProperty`, and sometimes even to reach for big hammers like `ILEmitter` to speed up code that was otherwise forced to rely on runtime reflection.
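To make that point a bit more concrete in this forum's language, here is a
rough, purely hypothetical D analogy (not the actual WPF machinery): a
name-keyed, dynamically-typed property store pays a lookup plus
boxing/unboxing on every access, while a plain typed field compiles down to
a direct load.

```d
import std.variant : Variant;

// Hypothetical analogy of a DependencyProperty-style design: values live in
// a dynamically-typed table keyed by name, so each read/write pays an
// associative-array lookup plus a Variant boxing/unboxing step.
class PropertyBagControl
{
    private Variant[string] bag;

    this() { bag["Width"] = Variant(0.0); }

    double width() { return bag["Width"].get!double; }
    void width(double v) { bag["Width"] = Variant(v); }
}

// A plain typed field: no lookup, no boxing, trivially inlined.
class PlainControl
{
    double width = 0.0;
}
```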
In summary, when the framework dictates an inefficient API design
(one that's also not really type-safe, with everything relying on
dynamic casts and runtime reflection), the whole ecosystem (from
third-party library developers to user-facing app writers)
suffers. In the past several years MS has put a ton of effort
into optimizing .NET Core under the hood, but oftentimes the
highest gains come from more efficient APIs (otherwise, why would
they invest all this effort into value types, Spans, ref-returns,
etc.?).
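As a loose illustration of that last point (my own analogy, nothing from the
.NET sources): D's standard library shows the same trade-off between an
eager, allocating API and a lazy, slice-based one, which is roughly the
benefit Span-style APIs bring to .NET.

```d
import std.algorithm.iteration : map, splitter, sum;
import std.array : split;
import std.conv : to;

void main()
{
    auto line = "10,20,30,40";

    // Eager API: allocates an array holding one slice per field.
    auto parts = line.split(",");
    assert(parts.length == 4);

    // Lazy API: each field is a slice of `line`, produced on demand and
    // never stored -- similar in spirit to what Span-based APIs enable.
    auto total = line.splitter(",").map!(to!int).sum;
    assert(total == 100);
}
```

The micro-example itself doesn't matter; the point is that the shape of the
API decides whether the allocations exist at all, before any profiler enters
the picture.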