Is there a way to tell D to rearrange struct members or to not rearrange class members?
WraithGlade
wraithglade at protonmail.com
Sat Jun 28 15:20:44 UTC 2025
On Friday, 27 June 2025 at 20:08:31 UTC, Lance Bachmeier wrote:
> ...
>
> I think you're referring to [this description in the
> spec](https://dlang.org/spec/class.html#fields):
>
>> The D compiler is free to rearrange the order of fields in a
>> class to optimally pack them. Consider the fields much like
>> the local variables in a function - the compiler assigns some
>> to registers and shuffles others around all to get the optimal
>> stack frame layout. This frees the code designer to organize
>> the fields in a manner that makes the code more readable
>> rather than being forced to organize it according to machine
>> optimization rules. Explicit control of field layout is
>> provided by struct/union types, not classes.
On Saturday, 28 June 2025 at 03:53:38 UTC, Jonathan M Davis wrote:
>
> I don't know how much D does that with classes, but the
> implementation is allowed to. It may do it all the time or not
> at all. I'd have to check.
>
> As for structs, the compiler is not going to muck with their
> layout on its own. You'll get padding based on the relative
> size of the fields, but the D compiler is going to do what C
> does with the layout. The language doesn't provide any
> mechanism to automatically rearrange the fields.
>
> ...
>
> - Jonathan M Davis
Thank you for linking to the spec, Lance, and for your insight, Jonathan.
In addition to the [class
spec](https://dlang.org/spec/class.html#fields), I also looked at
the [struct
spec](https://dlang.org/spec/struct.html#struct_layout) and found
that especially informative. The `struct` spec goes into precise
detail about exactly how low-level `struct` layout works and about
C compatibility.
I was hoping there was some simple `pragma`, such as a
hypothetical `pragma(memory_rearrange)` or `pragma(memory_fixed)`,
to change the memory layout behavior of a `class` or `struct`
explicitly. In reality the optimal layout is entirely independent
of the other properties that `class` and `struct` have, so in
principle it would be better for there to be a way to control it
explicitly.
This is especially the case for `struct`, since there are lots of
good reasons to avoid OOP and its overhead and conceptual muck
(and indeed most of the most popular new languages seem to be
aggressively moving away from OOP), but only very specific reasons
(such as interfacing with C, hardware, or network protocols) not
to want the memory layout of a `struct` rearranged to be
space-optimal.
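Since D lays out `struct` fields in declaration order (as C does), the workaround is to reorder the fields by hand, or to use `align` when a fixed packed layout is needed. A minimal sketch (field names are arbitrary, and the sizes assume a typical 64-bit target where `long` is 8-byte aligned):

```d
struct Padded
{
    byte a; // offset 0; 7 bytes of padding follow so b is 8-aligned
    long b; // offset 8
    byte c; // offset 16; 7 bytes of tail padding
}
static assert(Padded.sizeof == 24);

// Hand-reordered, largest fields first: same data, less padding.
struct Reordered
{
    long b; // offset 0
    byte a; // offset 8
    byte c; // offset 9; 6 bytes of tail padding
}
static assert(Reordered.sizeof == 16);

// align(1) on the members removes the padding entirely,
// at the cost of potentially unaligned access:
struct Packed
{
    align(1):
    byte a;
    long b;
    byte c;
}
static assert(Packed.sizeof == 10);
```

So the "rearrange for me" pragma doesn't exist, but the effect is easy to get manually by sorting fields from largest to smallest alignment.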
Indeed, my hope is to not use OOP at all unless forced to at
interface points by 3rd party libraries. The more I've used OOP
the more I've found it ever increasingly awkward and stilted and
fundamentally a misguided basis for structuring code both
conceptually and practically.
Oh well though. It is not that hard to work around such things,
especially with a language as nice as D (or similar languages),
just occasionally a bit tedious.
-----
A big freewheeling philosophical side-tangent:
This does line up with my experience: language design features
where an ideological "best practice" is codified into the
language's structure itself very often pan out in the long term to
be design mistakes (although often, as in this case, relatively
small ones). It is extremely easy to think that a supposed "best
practice" covers all cases well and will improve code when in
reality it does nothing of the sort.
Conceptually orthogonal concepts should be kept orthogonal in
languages instead of being conflated or having unnatural
arbitrary ideological constraints imposed upon them preemptively,
in most if not almost all cases. Keeping orthogonal concepts
orthogonal both simplifies implementations and reduces how often
users are restricted and stifled for no real reason.
There is an oft-recurring pattern here: languages that impose such
restrictions end up benefiting from relaxing those constraints
over time as they evolve, and/or newer languages cast aside those
kinds of unnatural assumptions entirely and benefit greatly from
doing so.
D is a great language and well designed, but as I've been
learning it I have noticed that nearly always when there is
something I dislike about the language (which is relatively
seldom) it is because some form of "best practice" or structural
assumption (rooted in a clearly apparent lack of imagination
regarding the essentially infinite diversity of possible use
cases) has been imposed upon the language.
There are very few imposed structural "best practices" that are
genuinely universal in my experience. Good naming of variables
would be a rare example of a universally good practice, in my
opinion, but in contrast even pervasive and nearly universally
advocated systems (such as OOP for many years in the past few
decades) have turned out (from a deeper language design
perspective) to ultimately be riddled with false assumptions and
thus the structure imposed by such systems when reified and
scaled up to real software systems has often caused more harm
than good.
Anyway, that's just me rambling from an idealistic philosophical
perspective about languages.
From a pragmatic perspective, D (and many other languages) is
obviously great overall and very worthy of praise and wider use in
that regard.
There are still many places where future languages could improve
things though, and most of the low-hanging fruit in that regard
hinges upon vigorously questioning many very arbitrary assumptions
and much needless rigidity (both in syntax and semantics) in
language design.
One of my favorite examples of that syntactically is *mixfix
notation* (unrelated to mixins, though the words look similar at
a careless glance), i.e. allowing function definitions to be
*arbitrary phrases* instead of
`alwaysTryingToCramAFullSentenceOfSemanticsAwkwardlyIntoOneSymbol`.
There is a pervasive false assumption that since COBOL is bad and
uses mixfix, mixfix itself must be bad, but that is entirely
wrong: the two issues are almost 100% *orthogonal* conceptually.
Look, for example, at rare modern mixfix (i.e. phrase-based)
languages such as Jinx (a C++ embedded scripting language on
GitHub) and LDPL for syntactic demonstrations of the falseness of
the assumption that C-like syntax is some kind of natural endpoint
of language syntax evolution. Mixfix enables a level of
readability that is a whole level above practically all the modern
popular languages, especially if it were refined and polished up
to be nice and homoiconic and concise enough. It lets
you define functions as nearly natural language phrases such as
"render (some_object) at location (some_vector) using (some_enum)
rendering method" (improving readability even to the point
non-programmers can read and understand it) and enables each
argument to have its own separate vararg list in theory (not sure
if Jinx or LDPL do that though) and so on.
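D obviously has no true mixfix, but for what it's worth, UFCS chaining can approximate the phrase reading. A hedged sketch where every name (`render`, `at`, `using`, `Mesh`, `Vec3`, `RenderMethod`) is invented purely for illustration:

```d
import std.stdio;

enum RenderMethod { wireframe, shaded }
struct Vec3 { float x = 0, y = 0, z = 0; }
struct Mesh { string name; }

// A small builder that accumulates the "phrase" arguments.
struct RenderCall { Mesh obj; Vec3 loc; }

RenderCall render(Mesh m) { return RenderCall(m); }
RenderCall at(RenderCall c, Vec3 v) { c.loc = v; return c; }
string using(RenderCall c, RenderMethod m)
{
    import std.format : format;
    return format("render %s at (%s, %s, %s) using %s",
                  c.obj.name, c.loc.x, c.loc.y, c.loc.z, m);
}

void main()
{
    // Via UFCS this reads close to the phrase
    // "render cube at origin using wireframe rendering method":
    writeln(render(Mesh("cube")).at(Vec3(0, 0, 0))
            .using(RenderMethod.wireframe));
}
```

It's still a far cry from real mixfix (the words must be valid identifiers and the call shape is fixed), but it shows how much of the readability gap is a syntax choice rather than anything fundamental.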
Likewise, s-expression based languages such as Lisp and Scheme
family languages (e.g. Racket and Steel Bank Common Lisp) and
concatenative languages such as Forth and Factor also provide
great demonstrations of how much more elegant and conceptually
natural and unburdensome and freely expressive syntax can
actually be. Factor identifiers can be literally anything except
whitespace, for example, which is wonderfully liberating.
A hybrid of mixfix syntax and concatenative semantics with C-like
performance, for example, would have fantastic potential for both
readability and expressiveness. But the C family of languages has
people syntactically chasing their own tails, and also
semantically working within a set of contrived classical OOP
assumptions that have no genuinely logical basis in what
computing, or the modeling of conceptual program designs, is
actually capable of.
In some ways OOP modeling is even literally *conceptually
backwards*: it posits that "subtypes" should be "extensions" with
more capabilities than their parent classes, when in reality, from
a logical/mathematical perspective, subtypes should be *more
constrained* (hence *less* capable) than their parent types in
order to be universally substitutable for them.
For example, a square is a subtype (a subset member) of a
rectangle from a logical and mathematical perspective, but OOP
models that literally backwards (as something capable of more, and
hence as a *superset*, contradicting the actual logical type
relation). This results in many conceptually incorrect
consequences, such as the notorious "Circle-Ellipse" problem,
which demonstrates how ill-conceived OOP methodology actually is
and what a conceptual dead end it is to try to force programs to
fit any preconceived ontology of unnatural categories embodied by
a class hierarchy.
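The square/rectangle conflict can be made concrete in a few lines of D (hypothetical classes, not from any real library): a mutable `Square` cannot honor the contract of a mutable `Rectangle`.

```d
class Rectangle
{
    protected int w, h;
    this(int w, int h) { this.w = w; this.h = h; }
    void setWidth(int x)  { w = x; }  // implied contract: height untouched
    void setHeight(int x) { h = x; }  // implied contract: width untouched
    int area() const { return w * h; }
}

class Square : Rectangle
{
    this(int side) { super(side, side); }
    // To stay square, it must break the parent's contract:
    override void setWidth(int x)  { w = x; h = x; }
    override void setHeight(int x) { w = x; h = x; }
}

void main()
{
    Rectangle r = new Square(2);
    r.setWidth(5);
    r.setHeight(3);
    // A caller reasoning about Rectangle expects area == 15,
    // but the Square silently gives 9.
    assert(r.area() == 9);
}
```

Whichever way you point the inheritance arrow, one of the two types ends up violating the other's contract, which is the whole point: the conflict is in the model, not the implementation.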
That's precisely why so many OOP hierarchies pretty much never
achieve conceptual perfection: they literally *can't*, it's
*logically impossible* because OOP inherently embodies
self-contradictory criteria in its philosophy of what types
"should" be. Logical subtypes must be more constrained but
extension implies a superset of capabilities. Those properties
are fundamentally logically incompatible, hence the mess OOP
usually creates if you look closely enough and question it
alertly enough. Code reuse via OOP inheritance, in contrast, is
essentially just a poorer form of reuse than more genuinely
modular choices such as functions, fixed data formats (e.g. image
file formats), agent-based message passing ("true OOP" a la
Smalltalk/Pharo), MVC separation, templates, or whatever other
more well-considered model one could use.
Subtyping and extension are like oil and water from the
standpoint of actual logical reasoning. A square is a rectangle
but rectangles require independently modifiable sides. Likewise,
a square permits aggressive optimizations not possible for
rectangles. Trying to put them in a hierarchy is thus a microcosm
of how *even for trivially simple and common cases* OOP based
models are doomed to fail conceptually (even if they "work") as
an optimal model regardless of how much effort one ever puts into
them. That's why compile-time typing, composition, functional
programming, and so on so often end up better: they don't try to
fulfill a self-contradictory, and hence counterproductively
misleading, criterion for how code "should be" structured, such as
that embodied by classical OOP.
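As a sketch of that compile-time alternative: keep `Square` and `Rectangle` as unrelated value types and write generic code against the one operation they share, with no hierarchy at all (all names here are hypothetical).

```d
struct Rectangle { int w, h;  int area() const { return w * h; } }
struct Square    { int side;  int area() const { return side * side; } }

// Duck-typed at compile time: accepts any mix of types with area(),
// with no common base class required.
int totalArea(Shapes...)(Shapes shapes)
{
    int sum = 0;
    foreach (s; shapes)
        sum += s.area();
    return sum;
}

void main()
{
    assert(totalArea(Rectangle(3, 5), Square(2)) == 19);
}
```

Each type keeps its own honest invariants (a `Square` has one side, period), and the substitutability question simply never arises.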
That's just me pontificating about the ever elusive hypothetical
"best language" and such though. Make of that what you will.
Short and medium term pragmatism is a different matter,
unfortunately. One is forced to contort one's own mind to fit
these kinds of pre-existing assumptions because that is what the
body of most software already uses, which also creates a "chicken
and egg" problem that perpetuates the lack of real conceptual
diversity in the most popular (and hence viable in production)
languages writ large.
That lack of real diversity also instills false confidence in the
existing body of assumptions, and it means that few people ever
end up experiencing a full project made under any alternative
design paradigm...
This tangent really got longer than I intended, but language
design is a soapbox issue for me and so it was fun to let that
tangent unroll wherever it went.
On the other hand, my years of obsession with finding better
languages have done much harm to my actual software output, and I
have wished for many years now that I were not so uptight about
these kinds of things, because it really is quite a distraction
from making more real software.
I often really wish I were the kind of person who didn't care
about these kinds of things so much and didn't have such a hard
time resisting the urge to build abstractions papering over these
problems. That urge derails too much of my time, and it has
historically led me to constantly switch languages and libraries,
literally for a whole decade, while hardly producing any of the
personal project software I've wanted to make for so long. That
has become a real problem for me unfortunately, and I really wish
I could lighten up on it.
Anyone else had similar experiences in that regard and have some
insight on fighting against one's own language idealist impulses
enough to be productive?
I've tried many times to tell myself that it is the end result
(what the end-user sees, etc.) that really matters, and I've
written countless other notes to myself in that regard, but here
we are anyway.
Language design is fascinating but can be quite a distraction.
Perhaps I should go do a bit of art again for a while. At least in
art production one is not beset by these kinds of confounding
factors: feeling forced to fit other people's minds, as we
experience too often in software, while wanting to express *our
own minds* instead, and being frustrated by the lack of viability
of that in the ecosystem and by the non-existence of the elusive
"perfect language" and such. You know what I mean?
Anyway, have a great day/night/etc all and see you around!
More information about the Digitalmars-d-learn
mailing list