D vs Java

Georg Wrede georg.wrede at nospam.org
Thu Mar 23 06:50:21 PST 2006


Disclaimer:

I'M JUST BLURTING OFF-HAND HERE; ANYONE IN A HURRY, OR LOOKING FOR
WORTHWHILE CONTENT, OUGHT TO SKIP THIS POST.


pragma wrote:
> In article <dvs41e$1ae5$1 at digitaldaemon.com>, Sean Kelly says...
> 
>> pragma wrote:
>> 
>>> In article <dvrrca$103i$1 at digitaldaemon.com>, Don Clugston 
>>> says...
>>> 
>>>> In digitalmars.com digitalmars.D:35128, Walter said of the 
>>>> difference in reals between Linux and Windows:
>>>> 
>>>>>> pragma's DDL lets you (to some extent) mix Linux and 
>>>>>> Windows .objs. Eventually, we may need some way to deal 
>>>>>> with the different padding.

If we defined a new object-file spec, we could have library files that 
are usable on several architectures. The compiler could even generate 
them automatically as multi-platform binaries. As to the main 
executable, that might be harder.

>>>> I think it's a pipe dream to expect to be able to mix obj files
>>>> between operating systems. The 96 bit thing is far from the 
>>>> only difference.

The .obj files the compiler makes don't necessarily need to have 
anything to do with the OS or OS calls. If all system calls go through 
the runtime library, then it can sort out any OS-specific stuff. And, 
especially now that Win, Lin, and Mac all run on the same processors, 
large parts of the .obj files should look the same already.

Having the end result runnable on all three OSes may be unnecessary. 
But if we accept that _targeting_ stays per OS, then the .obj files 
could theoretically be the same for all of them. At the end of the day, 
an app's .objs get linked together, and with the OS-specific runtime 
library and startup code.

This would be neat -- if every processor were IA. But since we want to 
write D for other processors (and hopefully for embedded processors 
too), one-obj-for-all-architectures doesn't buy us that much.

And of course, if the calling conventions are different, that may 
present an obstacle. But then again, as long as all calls to non-D code 
go through the runtime library, we're safe. Except for efficiency 
concerns.
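
To make that concrete, here's a minimal sketch of the idea -- the 
wrapper name sysSleepMillis is my own invention, nothing official. The 
application .objs only ever see the portable wrapper, and the 
OS-specific call stays confined to the runtime library linked in per 
target:

version (Windows)
{
    // Windows-specific call, declared and used only inside the
    // runtime library (kernel32 is linked by default).
    extern (Windows) void Sleep(uint milliseconds);

    void sysSleepMillis(uint ms) { Sleep(ms); }
}
else version (Posix)
{
    // The POSIX equivalent from libc, behind the same portable signature.
    extern (C) int usleep(uint microseconds);

    void sysSleepMillis(uint ms) { usleep(ms * 1000); }
}

void main()
{
    sysSleepMillis(100);   // identical call on every target
}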

>>> I read Walter's remark, and it came to me like a shot from the 
>>> blue.

Having 80-bit floats stored as 96 bits is handled in Linux -- after 
all, it's Linux itself that "wants" it. The math unit returns 80-bit 
entities, which are then stored in 96 bits. No problem.

For program portability, there are mainly two issues here:

  - internal representation
  - user defined data structures

The internal-representation issue is already taken care of in Linux 
(how else could one stay oblivious of the fact?).

Storing these floats in user-defined data structures may be another 
thing. Ideally this could be solved with a convention (or standard) 
where they are stored as 80 bits or 96 bits, whatever is commonly 
agreed upon. (For example, D for Windows could simply decide to store 
80-bit floats as 96 bits, period.) Code that makes system calls or 
calls libraries could take care of the alignment.
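
For illustration only -- the struct and its name PortableReal are 
invented here, not part of any spec -- the compiler's own padding can 
be queried, and a struct can pin such a 96-bit storage convention down 
explicitly:

import std.stdio;

// Hypothetical convention: always reserve 12 bytes (96 bits) per
// extended float, whatever the platform itself uses for real.
struct PortableReal
{
    ubyte[12] storage;   // 80 significant bits plus explicit padding
}

void main()
{
    writefln("real.sizeof  = %s bytes", real.sizeof);
    writefln("real.alignof = %s bytes", real.alignof);
    writefln("PortableReal.sizeof = %s bytes", PortableReal.sizeof); // always 12
}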

>>>> Now, he's quite knowledgeable, but I'd love to prove him wrong 
>>>> on this one. I find it hard to believe that it would be 
>>>> impossible. I guess the question is, will the subset of 
>>>> functionality that works be sufficient to be useful? I guess we
>>>>  won't know until the ELF side is working.
>>>> 
>>>> "Compile once, run everywhere that matters"? (Win, Linux, Intel
>>>>  Mac).

Truly multi-OS applications would be cool, of course. But then again, 
wouldn't software vendors like to sell two boxes instead of one? "You 
want to run this on both x and y, right!" Besides, there are all the 
GUI-related differences too. If binaries were multi-OS by default, then 
the programmer would (at least) have to have a general idea of what not 
to do.

>>> Pipe dream or not, I think its worth looking into.  And you're 
>>> right: the portable subset of features may be just barely usable.
>>> Until we get some people really pounding away on this, we'll 
>>> never quite know.

I think it's doable. But as to whether it is worth the effort, I'd need 
some persuasion.

If nothing else, the resulting app could by default become a small 
(OS-dependent) "loader" plus the binary. So you'd always get 4 files: 
the three (Mac, Lin, Win) loaders, and the "real program".

>> For what it's worth, there was a thread on comp.std.c++ recently 
>> about a standard shared library format, and someone said that 
>> library formats have recently become sufficiently similar that this
>> is a possibility.
> 
> Ahh, thanks for the info Sean (and for reminding me what a godsend 
> *this* particular NG is).  Its funny looking at the posts that go 
> back a year or two to see folks lobbying around a "Module format" or 
> "module include" (read: import) operator.  Overall, I think we're on 
> the right track with D and DDL.  If any ABI for cross-platform 
> binaries/libraries/modules is going to come about, it'll likely come 
> up later out of necessity - so far I've been *anticipating* need 
> rather than satisfying it.
> 
> On an unrelated note, I also stumbled into Bjarne's proposal on XTI 
> which is just flat-out scary:
> 
> http://lcgapp.cern.ch/project/architecture/XTI_accu.pdf

XTI, Extended Type Information (possibly classes or trees)
XPR, External Program Representation (Human + computer readable)

Incidentally, XPR looks Pascal-like to me.

An idea I got the other year was to have the IDE not use program source 
code at all. "Source" files would be binary representations of the 
source code (probably as trees), and the only place where "conventional 
source code" would exist is on the screen of the program editor.

(Yes, I do know this has already been implemented (to various extents) 
elsewhere, way back -- no problem. But now I'm thinking D.)

As an aside, this would let one have comments, drawings and the like 
directly "in the source code". It would also enable faster compilation, 
since tokenizing, parsing and semantic analysis would already be done. 
(And of course, there'd be an export to "normal source code", for 
long-term backups and such.)

Since any modern editor needs all kinds of IntelliSense, bells, and 
whistles, keeping the code in an IPR (internal program representation) 
would make all of this much easier. If a standard file format existed, 
then every programmer could have his own program layout (meaning curly 
brace placement, indentation, comment style and formatting, etc.) on 
his screen, even if he collaborates with others.

Processing binary source would be much easier than processing textual 
source, as even Bjarne points out in the PDF. One could do all kinds of 
automated processing, testing and measurement, derivation and analysis, 
or convenient refactoring. And conversion of source code to/from XML 
would be trivial.
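
To make the idea concrete, here's a toy sketch -- all the names 
(NodeKind, Node, toSource) are made up by me and have nothing to do 
with XTI or XPR. The stored form is a tree, and "normal source code" is 
just one possible rendering of it:

import std.stdio;

enum NodeKind { funcDecl, call, identifier }

// One node of the stored "source": structure instead of text.
class Node
{
    NodeKind kind;
    string   name;       // e.g. the function or identifier name
    Node[]   children;

    this(NodeKind kind, string name, Node[] children = null)
    {
        this.kind     = kind;
        this.name     = name;
        this.children = children;
    }
}

// "Export to normal source code": brace placement and indentation are
// properties of this printer, not of the stored program, so every
// programmer could plug in his own layout.
string toSource(Node n)
{
    switch (n.kind)
    {
        case NodeKind.funcDecl:
        {
            string bodyText;
            foreach (c; n.children)
                bodyText ~= "    " ~ toSource(c) ~ ";\n";
            return "void " ~ n.name ~ "()\n{\n" ~ bodyText ~ "}\n";
        }
        case NodeKind.call:
            return n.name ~ "()";
        default:
            return n.name;
    }
}

void main()
{
    auto tree = new Node(NodeKind.funcDecl, "greet",
                         [new Node(NodeKind.call, "sayHello")]);
    write(toSource(tree));   // prints an ordinary-looking D function
}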

Search and replace and such would then be more intelligent, too! You 
could globally replace a word used in one sense without touching the 
same word used in others.

D can already exist in HTML, enabling in-source pictures and a rich 
"source experience", theoretically with sound and videos embedded in 
the source code. (The usefulness of these remains to be seen.) One big 
problem with this is debugging and editing: they become more work for 
the programmer, so currently the HTML-D format mainly suits education.


