What's holding ~100% D GUI back?

Ethan gooberman at gmail.com
Sat Nov 30 10:12:42 UTC 2019


On Friday, 29 November 2019 at 13:27:17 UTC, Gregor Mückl wrote:
> A complete wall of text that missed the point entirely.

Wow.

Well. I said it would need to be thorough; I didn't say it would 
need to be padded with lots of irrelevant facts to hide the fact 
that you couldn't give a thorough answer to most things.

1 and 2 can both be answered with "a method of hidden surface 
removal." A more detailed explanation of 1 is "a method of hidden 
surface removal using a scalar buffer representing the distance 
of an object from the viewpoint", whereas 2 is "a method of 
hidden surface removal using a set of planes or a matrix to 
discard non-visible objects". Orthographic is a projectionless 
frustum, i.e. nothing is distorted based on distance and there is 
no field of view. Given your ranting about how hard clipping 2D 
surfaces is, the fact that you didn't tie these questions 
together speaks volumes.
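
(For anyone following along, here is roughly what both of those 
boil down to, sketched in D. The names and the six-plane frustum 
layout are mine, not lifted from any particular engine:)

struct Plane { float a, b, c, d; } // plane: a*x + b*y + c*z + d = 0

// Depth buffering: keep a fragment only if it is closer than
// whatever has already been drawn at that pixel.
bool depthTest(float[] depthBuffer, size_t index, float depth)
{
    if (depth < depthBuffer[index])
    {
        depthBuffer[index] = depth;
        return true;  // visible, write the colour
    }
    return false;     // hidden behind something already drawn
}

// Frustum culling: discard a bounding sphere that lies entirely
// on the outside of any one of the six frustum planes.
bool insideFrustum(const Plane[6] planes,
                   float x, float y, float z, float radius)
{
    foreach (p; planes)
    {
        if (p.a * x + p.b * y + p.c * z + p.d < -radius)
            return false; // completely outside this plane
    }
    return true;          // potentially visible, submit for drawing
}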

3, it's a simplistic understanding at best. Paint calls are no 
longer based on whether a region of the screen buffer needs to be 
filled; they're issued for each control the compositor handles 
whenever that control is dirty.
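
(A rough sketch in D of the shape of that loop, assuming a 
hypothetical Control type with a dirty flag; this isn't any real 
toolkit's API:)

class Control
{
    bool dirty = true;   // set whenever the control's state changes
    Control[] children;

    void paint()
    {
        // draw this control into its own cached surface
    }
}

// The compositor walks the tree and repaints only dirty controls;
// everything else is composited from its cached surface.
void paintPass(Control root)
{
    if (root.dirty)
    {
        root.paint();
        root.dirty = false;
    }
    foreach (child; root.children)
        paintPass(child);
}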

4 entirely misses the point. Entirely. ImGui retains state behind 
the scenes, and *then* decides how best to batch that up for 
rendering. The advantage of using the API is that you don't need 
to keep state yourself, and zero data is required from disk to 
lay out your UI.
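
(If that's not clear, here's the classic minimal immediate-mode 
button, sketched in D. UIState is the state the library retains 
for you behind the scenes; the actual drawing is only hinted at 
in a comment:)

// Hidden state the library keeps between frames.
struct UIState
{
    int hotItem;      // widget the mouse is currently over
    int activeItem;   // widget currently being pressed
    int mouseX, mouseY;
    bool mouseDown;
}

bool inside(int x, int y, int w, int h, int mx, int my)
{
    return mx >= x && mx < x + w && my >= y && my < y + h;
}

// An immediate-mode button: called every frame, queues its own
// drawing, and returns true on the frame it was clicked. The
// caller never constructs or stores a widget object.
bool button(ref UIState ui, int id, int x, int y, int w, int h)
{
    if (inside(x, y, w, h, ui.mouseX, ui.mouseY))
    {
        ui.hotItem = id;
        if (ui.mouseDown && ui.activeItem == 0)
            ui.activeItem = id;
    }
    // a rectangle for this button would be appended to the draw
    // batch here, to be submitted in one go at the end of the frame
    return !ui.mouseDown && ui.hotItem == id && ui.activeItem == id;
}

// Called once per frame after all widgets; releases the active
// widget once the mouse button goes up.
void endFrame(ref UIState ui)
{
    if (!ui.mouseDown)
        ui.activeItem = 0;
    ui.hotItem = 0;
}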

5, pathetic. The thorough answer is "determine the distance of 
your output pixel from the line and emit a colour accordingly." 
Which, consequently, is exactly how you'd handle filling regions: 
your line has a direction, from which you can derive positive and 
negative space. No specific curve was asked for. But especially 
rich is that the article you linked provides an example of how to 
render text on the GPU.
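
(Concretely, for a straight segment, this is the per-pixel code 
you'd run in a shader, written here as plain D. The function 
names are mine, and it assumes a non-degenerate segment:)

import std.algorithm : clamp;
import std.math : sqrt;

// Signed distance from point (px, py) to segment (ax, ay)-(bx, by).
// The sign comes from which side of the segment's direction the
// point lies on: that's the positive/negative space used for fills.
float signedDistanceToSegment(float px, float py,
                              float ax, float ay,
                              float bx, float by)
{
    immutable dx = bx - ax, dy = by - ay;
    immutable t = clamp(((px - ax) * dx + (py - ay) * dy)
                        / (dx * dx + dy * dy), 0.0f, 1.0f);
    immutable cx = ax + t * dx, cy = ay + t * dy;
    immutable dist = sqrt((px - cx) * (px - cx) + (py - cy) * (py - cy));
    immutable side = dx * (py - ay) - dy * (px - ax); // 2D cross product
    return side < 0 ? -dist : dist;
}

// Emit a colour: coverage of an anti-aliased stroke of the given
// half-width, roughly 1 inside, 0 outside, with a soft edge.
float strokeCoverage(float signedDist, float halfWidth)
{
    immutable d = (signedDist < 0 ? -signedDist : signedDist) - halfWidth;
    return clamp(0.5f - d, 0.0f, 1.0f);
}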

(Anyone actually reading: You'd use this methodology these days 
to build a distance field atlas of glyphs that you'd then use to 
render strings of text. Any game you see with fantastic quality 
text uses this. Its application in the desktop space is that you 
don't necessarily need to re-render your glyph atlas for zooming 
text or different font sizes. But as others point out: Each 
operating system has its own text rendering engine that gives 
distinctive output even with the same typefaces, so while you 
could homebrew it like this, you'd ideally want to let the OS 
render your text and carry on from there.)
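
(The per-pixel side of that is just a threshold with a soft edge 
around the 0.5 mark of the stored distance; dist below stands for 
the value you'd read back from the single-channel atlas:)

import std.algorithm : clamp;

// 0.5 in the atlas marks the glyph outline. The softness controls
// the anti-aliasing width and keeps working when the text is
// zoomed, which is the point of storing distances, not coverage.
float glyphAlpha(float dist, float softness = 0.02f)
{
    return clamp((dist - 0.5f + softness) / (2.0f * softness),
                 0.0f, 1.0f);
}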

So, short story: if I wanted a bunch of barely relevant facts, 
I'd read Wikipedia. If I wanted someone with a thorough 
understanding of rendering technology and how to apply that to a 
desktop environment, you'd be well down the list.

