The State of the GUI

Adam Wilson flyboynw at gmail.com
Thu Oct 25 00:24:42 UTC 2018


On 10/24/18 12:39 PM, drug wrote:
> On 24.10.2018 22:27, luckoverthere wrote:
>> On Wednesday, 24 October 2018 at 13:13:53 UTC, drug wrote:
>>>> 24.10.2018 16:00, Guillaume Piolat wrote:
>>>> - OpenGL does not "work everywhere". It's deprecated on macOS. In 
>>>> general, portable APIs don't make the giants any money: the trend is 
>>>> fragmentation, which is why abstraction over specific APIs is a 
>>>> must. That's where Unity was better than anyone else.
>>>>
>>> So, nothing specific to OpenGL (the same could apply to any other 
>>> technology). But in general I agree with you again - the renderer 
>>> shouldn't be fixed. I recently started using `nuklear`, an 
>>> immediate-mode GUI C library. It's totally platform and renderer 
>>> agnostic. We could take this approach.
>>
>> I'd be against using those APIs for a GUI. Things like ImGui and 
>> Nuklear are made specifically with games in mind, for rapid 
>> prototyping and debugging. The GUI can be constructed and can 
>> manipulate the logic in place. What it does not do well is 
>> efficiency. The GUI is reconstructed from scratch every frame. This 
>> is not needed for most GUIs, and it is incredibly inefficient. It 
>> suits games because you are already rendering a new frame every 16ms 
>> or so. Qt uses OpenGL I think, but it doesn't render every frame. It 
>> optimizes to reduce how many frames it has to create, and it doesn't 
>> re-render anything it doesn't have to.
> 
> You are a little bit wrong about immediate mode. Yes, some people use 
> it like you described above. But nothing prevents you from building an 
> efficient application. You can even build a retained GUI on top of an 
> immediate one. Moreover, immediate mode is more efficient than 
> retained mode in some cases.

You are both technically correct, but the word "efficiency" can be used 
in two different ways here. Immediate mode can be incredibly efficient 
from a rendering performance standpoint. But in general it is much 
less efficient than retained mode from a developer standpoint. So the 
question is, what kind of efficiency is more important?
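
To make that concrete, here is a minimal sketch in D of the two styles. 
It is purely hypothetical - there is no real GUI library behind it, and 
the names (button, Button, immediateFrame) are made up for illustration:

    import std.stdio;

    // Immediate mode: the UI is re-declared every single frame.
    bool button(string label)
    {
        // A real library would hit-test and draw here, every frame.
        writeln("draw button: ", label);
        return false; // pretend it was not clicked
    }

    void immediateFrame(ref int clicks)
    {
        // Called ~60 times per second, whether anything changed or not.
        if (button("Click me"))
            clicks++;
    }

    // Retained mode: widgets persist across frames and track their state.
    class Button
    {
        string label;
        void delegate() onClick;
        bool dirty = true; // only redraw when something actually changed

        this(string label, void delegate() onClick)
        {
            this.label = label;
            this.onClick = onClick;
        }

        void draw()
        {
            if (!dirty) return; // an idle UI costs almost nothing
            writeln("draw button: ", label);
            dirty = false;
        }
    }

    void main()
    {
        int clicks;
        immediateFrame(clicks); // immediate: runs every frame

        auto ok = new Button("Click me", () { clicks++; });
        ok.draw(); // retained: draws once...
        ok.draw(); // ...then skips until marked dirty again
    }

The immediate version is trivial to write per frame but always runs; 
the retained version does the bookkeeping up front so it can skip work 
when nothing has changed.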

My vote is for developer efficiency. Most UXs need a tiny fraction of 
what even an Intel GMA 3000 can pump out, much less what a Vega 64 or 
GeForce RTX 2080 Ti can. I only have a Radeon Pro WX4100 because it 
has four DP1.2 outputs in a half-height form factor. I certainly never 
get within a parsec of actually stressing the capacity of the 
rendering silicon.

But if I can do something in 10 lines of code instead of 1000, I will 
take that option every single time. My time is WAY more expensive than 
some GPU time.

-- 
Adam Wilson
IRC: EllipticBit
import quiet.dlang.dev;
