Artur Skawina
art.08.09 at gmail.com
Sat Feb 11 09:47:48 PST 2012
On 02/11/12 02:46, Era Scarecrow wrote:
>>
>> There is no way you get a D application into 64K. The language is not powerful enough. Only C can achieve that.
>
> I'll need to agree. Porting D to a smaller memory space, with its
> features cramped to fit, is not going to be good no matter how you
> look at it. I'm sure it's similar to using Perl on something with
> only 64k of memory: one must ask where to put the interpreter, how
> to decode and work with the source text, and many other things, not
> to mention the speed penalty even if you pulled it off.
>
> With only 64k, you aren't going to need anything extremely complex or elaborate.
> You MIGHT get away with compiling D code down to C symbols, but
> you'd likely be stuck working with structs, no library support, no
> heap, no memory management, and fixed-size arrays. I doubt you'd
> need templates or any of the higher-order features. All structures
> and types would have to be basic or known statically at compile
> time. Lambdas are unlikely to be usable, along with a score of
> other features.
>
> This is all just speculation, but I think you get the picture. If you make a subset of D, it would most likely be named Mini-D. But at that point you've got an enhanced C without going C++.
>
I assumed the poster you're replying to was not being serious.
Compiler issues aside, there's absolutely no difference between the
code generated from equivalent C and D, unless you use a few D-only
concepts.
Having several levels of modules and templates that in the end emit
single CPU instructions is not only possible, it lets you write
completely generic code that's both safer and no less efficient than
the equivalent written in assembler.
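For example, a minimal sketch of that kind of zero-cost wrapper (the
register address is made up, and a real memory-mapped store would want
a volatile intrinsic or inline asm, which this glosses over):

void writeReg(T, size_t addr)(T value)
{
    // The cast and the store compile down to a single mov;
    // all the template machinery disappears at compile time.
    *cast(T*)addr = value;
}

void ledOn()
{
    writeReg!(ubyte, 0x4000_0000)(0x01);  // hypothetical LED register
}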
I can easily see D being acceptable for 8K projects; 64K would
probably allow for a mostly complete runtime.
The answer isn't to butcher the language; it's to fix the shortcomings.
So, for example: all array ops need to be lowered and mapped to D code
so that they can be intercepted. Then you can either disallow
everything that isn't supported, or allow some subset and fail for
every other operation.
Slicing probably counts too, as you may want to disallow or limit its
use.
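To sketch what intercepting an array op could look like (the hook name
and signature here are illustrative, not the actual druntime symbols):

// The compiler lowers  a[] = b[] + c[]  to a runtime call; a minimal
// runtime implements only the hooks it wants to support, so anything
// else fails at link time.
extern (C) int[] _d_arraySliceAddSlice(int[] a, int[] b, int[] c)
{
    assert(a.length == b.length && a.length == c.length);
    foreach (i; 0 .. a.length)
        a[i] = b[i] + c[i];
    return a;
}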
(Intercepted) pseudo-GC allocation might even work for some cases,
even if I'm not sure that it would be very useful without things like
scoped args and more compiler support.
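Something like this bump-allocator sketch, assuming the runtime's
allocation entry point can be swapped out (the name mirrors druntime's
_d_allocmemory, but the exact hooks and signatures vary by version):

enum poolSize = 4096;
__gshared ubyte[poolSize] pool;
__gshared size_t poolTop;

// Never frees: a fixed pool backing allocations on a 64K machine.
extern (C) void* _d_allocmemory(size_t size)
{
    // round up to pointer alignment
    size = (size + size_t.sizeof - 1) & ~(size_t.sizeof - 1);
    assert(poolTop + size <= poolSize, "pool exhausted");
    void* p = pool.ptr + poolTop;
    poolTop += size;
    return p;
}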
"synchronized" needs to accept anything that implements opLock() and
opUnlock(). Not only does that let you change the locking primitives,
it also lets you use it for things like CLI/STI-type UP exclusion,
since I doubt the 64K machine would be MP. "synchronized" is better
than RAII/scope because it gives the compiler more information; once
it knows that the code inside the critical region is much more expensive
to run, it can more aggressively move certain parts outside, keeping
the critical region as small as possible (think "immutable" accesses,
that are hidden under a layer of functions/templates).
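A sketch of how that could look (hypothetical: today's "synchronized"
doesn't accept this, and the DMD-style cli/sti asm assumes x86; a real
version would save and restore the flags rather than unconditionally
re-enable):

struct IrqLock
{
    void opLock()   { asm { cli; } }  // mask interrupts
    void opUnlock() { asm { sti; } }  // unmask interrupts
}

__gshared IrqLock irq;

void tick()
{
    synchronized (irq)  // would lower to irq.opLock()/irq.opUnlock()
    {
        // critical section, safe from interrupt handlers on UP
    }
}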
Classes are probably not a good idea in such a small system, yes.
A compiler option to turn off the default TLS model would help too.
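Module-level variables are thread-local by default, which drags in TLS
runtime support that a single-threaded 64K target doesn't need; right
now the only escape hatch is per-variable annotation:

int counter;              // thread-local by default, needs TLS support
__gshared int counter2;   // ordinary global, no TLS machinery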
What did I miss?
artur