What's holding ~100% D GUI back?
Gregor Mückl
gregormueckl at gmx.de
Fri Nov 29 02:42:28 UTC 2019
On Thursday, 28 November 2019 at 15:29:21 UTC, Ethan wrote:
> On Wednesday, 27 November 2019 at 16:06:38 UTC, Gregor Mückl
> wrote:
>> On Tuesday, 26 November 2019 at 12:15:57 UTC, rikki cattermole
>> wrote:
>>> - Render pipeline is from AAA games, the only person I trust
>>> with designing it is Manu
>>
>> I don't understand what you mean by this. A game rendering
>> pipeline and a desktop UI rendering pipeline are fundamentally
>> very, very different beasts. One shouldn't be used to emulate
>> the other as the use cases are far too dissimilar.
>
> Stop saying this. It's thoroughly incorrect. Cripes.
>
> If you think the desktop layout engine introduced in Windows
> Vista, or even the layout engines used in mobile browsers and
> current desktop browsers, doesn't have a ton in common with a
> game rendering pipeline then your knowledge is well outdated.
I don't want to belabor that point too much, but I can say a few
things in response to that:
Yes, compositors are implemented using 3D rendering APIs these
days because they slap together textured quads on screen. They
don't concern themselves with how the contents of these quads
came to be.
And rendering the window contents is where things start to
diverge a lot. A game engine is a fundamentally different beast
from a renderer for the kind of graphics a UI draws. The graphics
primitives that GUI code wants to deal with map awkwardly to the
GPU rendering pipeline. Sure, there are ways to make them fit
(some of them quite impressive), but it's a pain. There's no
explicit scene graph. You can construct a sort of implied scene
graph from the draw calls the widgets make in their paint event
handlers and go from there. But UI code sometimes changes state
like crazy: it switches primitive types, enables and disables
blending, depends a lot on clipping, etc., and you can't simply
go and reorder most of that. As a result, a renderer for UI
graphics is quite different from a renderer for a 3D scene, and
hard to get right in its own right. They can use the same GPU
rendering API, but the algorithms on top of it are quite
different. If you don't believe me, you can go and read some
code: ImGui, cairo-gl, Qt, WPF...
As for browsers: an HTML page is essentially a pretty static
scene graph with quite simple constituent elements, with the
exception of a few outliers like canvas. The range of styles
possible through CSS is limited in such a way that an HTML
rendering engine can do a lot of reasoning about the page up
front. That's a luxury of not having paint event handlers
executing arbitrary code. And typical engines spend quite some
time on that - hundreds of ms on a page load aren't uncommon, as
far as I know. DOM changes through JS can also be surprisingly
slow for the same reason. All that processing is too heavy for an
application that has paint event handlers and wants to refresh
quickly.