D GUI Framework (responsive grid teaser)

Ola Fosheim Grøstad ola.fosheim.grostad at gmail.com
Thu May 23 06:05:06 UTC 2019

On Thursday, 23 May 2019 at 00:23:50 UTC, Manu wrote:
> it's really just a style
> of software design that lends to efficiency.
> Our servers don't draw anything!

Then it isn't specific to games, or particularly relevant to 
rendering. Might as well talk about people writing search engines 
or machine learning code.

> Minimising wasted calculation is always relevant. If you don't 
> change part of an image, then you'd better have the tech to 
> skip rendering it (or skip transmitting it in this scenario), 
> otherwise you're wasting resources like a boss ;)

Well, it all depends on your priorities. The core difference is 
that (at least for the desktop) a game rendering engine can focus 
on 0% overhead for the most demanding scenes, while 40% overhead 
on light scenes has no impact on the game experience. Granted, 
for mobile engines battery life might change that equation, 
though I am not sure gamers would notice a 20% difference in 
battery life...

For a desktop application you might instead decide to favour 50% 
GPU overhead across the board as a trade-off for a more flexible 
API that saves application-programmer hours and frees up CPU 
time for processing application data. (If your application only 
uses 10% of the GPU, then going to 15% is a low price to pay.)
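The "skip rendering what didn't change" idea is commonly implemented as dirty-rectangle (damage) tracking. A minimal sketch of the concept, with all names hypothetical and assuming axis-aligned integer rectangles:

```cpp
#include <vector>

// A hypothetical axis-aligned damage rectangle.
struct Rect { int x, y, w, h; };

// Accumulates regions that changed since the last frame; the renderer
// redraws (or retransmits) only these regions and skips the rest.
class DamageTracker {
    std::vector<Rect> dirty_;
public:
    void markDirty(const Rect& r) { dirty_.push_back(r); }

    // Nothing dirty means the whole frame can be skipped.
    bool frameUnchanged() const { return dirty_.empty(); }

    // Hand the damaged regions to the renderer, then reset for the
    // next frame.
    std::vector<Rect> takeDamage() {
        std::vector<Rect> out;
        out.swap(dirty_);
        return out;
    }
};
```

Real compositors additionally merge overlapping rectangles so that overdraw stays bounded, but the interface is the same shape.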

> I don't think you know what you're talking about.

Let's avoid the ad hominems… I know what I am talking about, but 
perhaps I don't know what you are talking about? I thought you 
were talking about the rendering engines used in games, not 
software engineering as a discipline.

> I don't think we 'cut corners' (I'm not sure what that even 
> means)...

What it means is that in a game you have a negotiation between 
the application design requirements and the technology 
requirements. You can change the game design to take advantage of 
the technology and change the technology to accommodate the game 
design. Visual quality only matters as seen from the particular 
vantage points that the gamer will take in that particular game 
or type of game.

When creating a generic GUI API you cannot really assume too 
much. Let's say you added ray-traced widgets. It would make 
little sense to say that you can only have 10 ray-traced widgets 
on display at the same time for a GUI API. In a game that is 
completely acceptable. You'd rather have the ability to put some 
extra impressive visuals on screen in a limited fashion where it 
matters the most.

So the priorities are more like those in film production. You can pay a 
price in terms of technological special casing to create a more 
intense emotional experience. You can limit your focus to what 
the user is supposed to do (both end user and application 
programmer) and give priority to "emotional impact". And you also 
have the ability to train a limited set of workers (programmers) 
to make good use of the novelty of your technology.

When dealing with unknown application programmers writing unknown 
applications you have to be more conservative.

> patterns. You won't tend to have OO hierarchies and sparsely 
> allocated
> graphs, and you will naturally tend to arrange data in tables 
> destined
> for batch processing. These are key to software efficiency in 
> general.

If you are talking about something that isn't available to the 
application programmer then that is fine. For a GUI framework the 
most important thing after providing a decent UI experience is to 
make the application programmer's life easier and more intuitive. 
Basically, your goal is to save programmer hours and make it easy 
to change direction due to changing requirements. If OO 
hierarchies are more intuitive to the typical application 
programmer, then that is what you should use at the API level.
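The "tables destined for batch processing" that Manu describes can coexist with an OO-flavoured API: the public widget object is a thin handle into flat internal arrays. A sketch of that split, with all names hypothetical:

```cpp
#include <cstddef>
#include <vector>

// Internal storage: widget state kept in flat, densely packed arrays
// (structure-of-arrays), so per-frame passes are simple linear loops.
struct WidgetTable {
    std::vector<float> x, y;     // positions
    std::vector<float> opacity;  // per-widget opacity

    std::size_t add(float px, float py) {
        x.push_back(px);
        y.push_back(py);
        opacity.push_back(1.0f);
        return x.size() - 1;
    }

    // Batch pass: fade every widget in one cache-friendly sweep.
    void fadeAll(float factor) {
        for (float& o : opacity) o *= factor;
    }
};

// Public API: the object the application programmer sees is just an
// index into the table, so the familiar OO surface costs little.
class Widget {
    WidgetTable* table_;
    std::size_t idx_;
public:
    Widget(WidgetTable& t, float px, float py)
        : table_(&t), idx_(t.add(px, py)) {}
    float opacity() const { return table_->opacity[idx_]; }
};
```

The point is that the API-level design question (objects vs. something else) can be decided separately from the storage-level one.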

If you write your own internal GUI framework then you have a 
different trade-off: you might put more of a burden on the 
application developer in order to make better overall use of your 
workforce. Or you might limit the scope of the GUI framework to 
get better end-user results.

> 'Object hierarchy' is precisely where it tends to go wrong. 
> There are a million ways to approach this problem space; some 
> are naturally much more efficient, some rather follow design 
> pattern books and propagate ideas taught in university to kids.

You presume that efficiency is a problem. That's not necessarily 
the case. If your framework is for embedded LCDs then you are 
perhaps limited to under 500 objects on screen anyway.

I also know that Open Inventor (from SGI) and VRML made people 
more productive. It allowed people to create experiences that 
they otherwise would not have been able to, both in industrial 
prototypes and artistic works.

Overhead isn't necessarily bad. A design with some overhead might 
cut the costs enough for the application developer to make a 
project feasible. Or even make it accessible for tinkering. You 
see the same thing with the Processing language.

> Sure, maybe that's a reasonable design. Maybe you can go a step 
> further and transform your arrangement a 'hierarchy'? Data 
> structures are everything.

In the early stages it is most important to have the freedom to 
change things, but with an idea of where you could insert spatial 
data structures. Having a plan for where you can place 
accelerating data structures and algorithms does matter, of course.

But you don't need to start there. So I think he is doing well by 
keeping rendering simple in the first iterations.
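Deferring the accelerating structure is easy if the renderer only depends on a "give me the visible items" query: a brute-force list can later be swapped for a quadtree or grid behind the same interface. A sketch (names hypothetical):

```cpp
#include <cstddef>
#include <vector>

// A hypothetical 2D bounding box.
struct Box { float x0, y0, x1, y1; };

inline bool overlaps(const Box& a, const Box& b) {
    return a.x0 < b.x1 && b.x0 < a.x1 &&
           a.y0 < b.y1 && b.y0 < a.y1;
}

// First iteration: a flat list with an O(n) visibility query. A
// quadtree or uniform grid can replace the internals later without
// touching any caller.
class SceneIndex {
    std::vector<Box> items_;
public:
    void insert(const Box& b) { items_.push_back(b); }

    // Returns the indices of items overlapping the viewport.
    std::vector<std::size_t> queryVisible(const Box& viewport) const {
        std::vector<std::size_t> hits;
        for (std::size_t i = 0; i < items_.size(); ++i)
            if (overlaps(items_[i], viewport)) hits.push_back(i);
        return hits;
    }
};
```

As long as callers go through the query, the simple version is a perfectly good first iteration.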

> Right. I only advocate good software engineering!
> But when I look around, the only field I can see that's doing a 
> really good job at scale is gamedev. Some libs here and there 
> enclose some tight worker code, but nothing much at the 
> systemic level.

It is a bit problematic for generic libraries to use worker code 
(I assume you mean actors running on separate threads), as you put 
some serious requirements on the architecture of the application. 
More actor-oriented languages and run-times could make it 
pleasant, though, so maybe it is an infrastructure issue where 
programming languages need to evolve. But you could do it for a 
GUI framework, sure.

Although I think the rendering structure used in browser 
graphical backends is closer to what people would want for a UI 
than a typical game rendering engine. Especially the styling.

More information about the Digitalmars-d-announce mailing list