D GUI Framework (responsive grid teaser)

Ethan gooberman at gmail.com
Sat May 25 23:23:31 UTC 2019


On Sunday, 19 May 2019 at 21:01:33 UTC, Robert M. Münch wrote:
> Hi, we are currently build up our new technology stack and for 
> this create a 2D GUI framework.

This entire thread is an embarrassment, and a perfect example of 
the kind of interaction that keeps professionals away from online 
communities such as this one.

It's been little more than an echo chamber of people being wrong, 
congratulating each other on being wrong, encouraging people to 
continue being wrong and shooting down anyone speaking sense with 
wrong facts and wrong opinions.

The amount of misinformation flying around in here would make 
<insert political regime of your own taste here> proud.

Let's just start by being blunt: Congratulations, you've 
announced a GUI framework that can render a grid of squares 
less efficiently than Microsoft Excel.

So from there, I'm only going to highlight points that need to be 
thoroughly shot down.

> So this gives us 36 FPS which is IMO pretty good for a desktop 
> app target

Wrong. A 144Hz monitor, for example, gives you less than 7 
milliseconds to provide a new frame. Break that down further. On 
Windows, the thread scheduler will give you 4 milliseconds before 
your thread is put to sleep. That's if you're a foreground 
process. Background processes only get 1 millisecond. So from 
that, you can assume that even on a standard 60Hz monitor, your 
worst case is that you need to provide a new frame in 1 
millisecond.
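
To put actual numbers on those budgets, here's a quick D sketch 
(mine, a back-of-the-envelope illustration, not from the thread):

    import std.stdio;

    void main()
    {
        // Frame budget in milliseconds for common refresh rates.
        foreach (hz; [60, 120, 144, 240])
            writefln("%3d Hz -> %.2f ms per frame", hz, 1000.0 / hz);
        // 144Hz leaves ~6.94 ms; if the scheduler only grants you a
        // 1 ms slice before preemption, that slice is the real budget.
    }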

I currently have 15 programs and 60 browser tabs open. On a 
laptop. WPF can keep up. You can't.

> But you shouldn't design a UI framework like a game engine.

Wrong. Game engines excel at laying out high-fidelity data in 
sync with a monitor's default refresh rate. You're insane if you 
think a 2D interface shouldn't be done in a similar manner. 
Notice that Unity and Unreal each implement their own WIMP 
framework across multiple platforms, design it like a game 
engine, and keep it responsive.

And just like a UI framework, whatever the client is doing 
separate to the layout and rendering is *not* its responsibility.

> Write game-engine-like code if you care about *battery life*??

The core of a game engine will aim to do everything as quickly as 
possible and go to sleep as quickly as possible. Everyone here is 
assuming a false equivalence between a game engine and the game 
systems and massive volumes of data that just plain take time to 
process.

> A game engine is designed for full redraw on every frame.

Wrong. A game engine is designed to render new frames when the 
viewpoint is dirty. Any engine that decouples simulation frame 
from monitor frame won't do a full redraw every simulation frame. 
A game engine will often include effects that get rendered at 
half of the target framerate to save time.
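
To make "render when dirty" concrete, here's a minimal sketch in 
D (my illustration, not any particular engine's code):

    struct View
    {
        bool dirty = true;

        void markDirty() { dirty = true; }

        // Render only when something visible actually changed.
        void presentIfDirty()
        {
            if (!dirty) return;   // clean frame: skip the redraw entirely
            // ... issue draw calls here ...
            dirty = false;
        }
    }

    void main()
    {
        View v;
        v.presentIfDirty();  // draws once
        v.presentIfDirty();  // no-op: nothing is dirty
        v.markDirty();       // e.g. the user moved a slider
        v.presentIfDirty();  // draws again
    }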

Your definition for "full redraw" is flawed and wrong.

> cos when I think of game engines, I think of framerate 
> maximization, which equals maximum battery drain because you're 
> trying to do as much as possible in any given time interval.

Source: I've released a mobile game that lets you select battery 
options that basically result in 60Hz/30Hz/20Hz. You know all I 
did? Decoupled the renderer, ran the simulation 1/2/3 times, and 
rendered once. That suits burst processing, which is known to be 
very good for the battery.
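
The shape of that loop, roughly (a sketch of the idea; the names 
are mine, this is not the shipped code):

    import std.stdio;

    void simulate() { /* advance game state one fixed tick */ }
    void render()   { writeln("present frame"); }

    void main()
    {
        // The simulation always ticks at the same fixed rate; presenting
        // every Nth tick gives effective 60/30/20 Hz output.
        int ticksPerFrame = 3;  // battery saver: 20Hz presentation

        foreach (tick; 0 .. 9)
        {
            simulate();
            if ((tick + 1) % ticksPerFrame == 0)
                render();  // burst the work, then sleep until the next tick
        }
    }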

If you find a game engine that renders its UI every frame despite 
having no dirty element, you've found baby's first game UI.

> for good practice of stability, threading and error reporting, 
> people should look at high-availability, long-lived server 
> software. A single memory leak will be a problem there, a 
> single deadlock.

Many games *already have* this requirement. There's plenty of 
knowledge within the industry of reducing server costs with 
optimisations.

> For instance, there is no spatial datatructure that is 
> inherently better or more efficient than all other spatial 
> datastructures.

Wrong. Three- and four-dimensional vectors. We have hardware 
registers to take advantage of them. Represent your object's 
transformation with an object comprising a translation, a 
quaternion rotation, and, if you're feeling nice to your users, 
a scale vector.

WPF does exactly this. In a roundabout way. But it's there.
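
In D, that representation might look something like this (a 
minimal sketch; the exact layout is illustrative, not WPF's):

    // SIMD-friendly transform: translation + rotation + scale instead
    // of a free-form 4x4 matrix. Each field fits a hardware register.
    struct Transform
    {
        float[4] translation = [0, 0, 0, 1];  // xyz + pad, 16-byte friendly
        float[4] rotation    = [0, 0, 0, 1];  // quaternion (x, y, z, w)
        float[4] scale       = [1, 1, 1, 0];  // per-axis scale
    }

    void main()
    {
        Transform t;
        t.translation[0] = 42;  // move 42 units along x
    }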

> Well, what I meant by "cutting corners" it that games reach 
> efficiency by narrowing down what they allow you to do.

Really. Do tell me more. Actually, don't, because whatever you 
say is going to be wrong and I'm not going to reply to it anyway. 
Hint: We provide more flexibility than your out-of-the-box 
WPF/GTK/etc for whatever systems we provide.

> Browsers are actually doing quite well with simple 2D graphics 
> today.

Browsers have been rendering that on GPU for years.

Which starts getting us into this point.

> I think CPU rendering has its merits and is underestimated a 
> lot.

> In the 2D realm I don't see so much gain using a GPU over using 
> a CPU.

So. On a 4K or higher desktop (Apple ships 5K monitors). Let's 
say you need to redraw every one of those 3840x2160 pixels at 
60Hz. Let's just assume that by some miracle you've managed to 
get a pixel filled down to 2 cycles. But that's still 8,294,400 
pixels, or about 16.6 million cycles for one frame. Almost a 
full GHz to keep it responsive at 60 frames per second. 2.4GHz 
for a 144Hz display.
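
Check the arithmetic yourself (trivial D, using the same 
assumptions as above):

    import std.stdio;

    void main()
    {
        enum pixels = 3840UL * 2160;    // 8,294,400 pixels on a 4K panel
        enum cyclesPerPixel = 2;        // the miracle fill rate from above
        enum cyclesPerFrame = pixels * cyclesPerPixel;

        writefln("per frame: %s cycles", cyclesPerFrame);             // ~16.6M
        writefln("at  60 Hz: %.2f GHz", cyclesPerFrame *  60 / 1e9);  // ~1.0
        writefln("at 144 Hz: %.2f GHz", cyclesPerFrame * 144 / 1e9);  // ~2.4
    }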

So you're going to get one thread doing all that? Maybe vectorise 
it? And hope there's plenty of blank space so you can run the 
same algorithm on four contiguous pixels at a time. Hmmm. Oh, I 
know, multithread it! Parallel for each! Oh, well, now there's an 
L2 cache to worry about: we'll have to work on different chunks 
at different times and hope each chunk is roughly equal in cost, 
since any attempt to redistribute the load into a cache area 
another thread is working on will result in constant cache 
flushes.
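
And sure, the multithreaded version is easy enough to write with 
std.parallelism; it's the cache behaviour described above that 
bites you. A sketch, with an arbitrary work-unit size:

    import std.parallelism : parallel;
    import std.range : iota;

    void main()
    {
        enum width = 3840, height = 2160;
        auto framebuffer = new uint[](width * height);

        // Hand each worker a contiguous band of 64 rows so threads stay
        // in separate cache regions as much as possible.
        foreach (y; parallel(iota(0, height), 64))
        {
            foreach (x; 0 .. width)
                framebuffer[y * width + x] = 0xFF202020;  // flat fill
        }
    }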

OOOOORRRRRRRRRR. Hey. Here's this hardware that executes tiny 
programs simultaneously. How many shader units does your hardware 
have? That many tiny programs. And its cache is set up to accept 
the results of those programs without massive flush penalties. 
And they're natively SIMD and can handle, say, multi-component 
RGB colours without breaking a sweat. You don't even have to 
worry about complicated sorting logic and pixel overwrites, the 
Z-buffer can handle it if you assign the depth of your UI element 
to the Z value.
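
Assigning that depth is as simple as it sounds. A sketch of the 
vertex setup in D (illustrative, not tied to any particular 
graphics API):

    // Give every UI element a depth from its stacking order and let
    // the hardware depth test resolve overlap -- no painter's sort.
    struct UIVertex
    {
        float x, y;   // screen position
        float z;      // element depth: deeper in the stack = larger z
        uint  rgba;   // packed colour
    }

    UIVertex[4] quad(float x, float y, float w, float h,
                     int layer, uint colour)
    {
        float z = layer * 0.001f;  // stacking order mapped to depth range
        return [UIVertex(x,     y,     z, colour),
                UIVertex(x + w, y,     z, colour),
                UIVertex(x,     y + h, z, colour),
                UIVertex(x + w, y + h, z, colour)];
    }

    void main()
    {
        auto button = quad(10, 10, 200, 48, /*layer*/ 2, 0xFF3366FF);
    }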

And if you *really* want to avoid driver issues with the pixel 
and vertex pipeline - just write compute shaders for everything 
for hardware-independent results.

Oh, hey, wait a minute, Nick's dcompute could be exactly what 
you want if you're only doing this to show a UI framework can 
be written in D. Problem solved by doing what Manu suggested and 
*WORKING WITH COMMUNITY MEMBERS WHO ALREADY INTIMATELY UNDERSTAND 
THE PROBLEMS INVOLVED*.

---

Right. I'm done. This thread reeks of a "Year of the Linux 
desktop" mentality, and I will likely never read it again, just 
for my sanity. I expect better from this community if it 
actually wants to see D used and not have the forums turn into 
Stack Overflow Lite.

