On processors for D (Was: Re: std.date proposal)

Georg Wrede georg.wrede at nospam.org
Tue Apr 4 17:13:16 PDT 2006


Walter Bright wrote:
> Georg Wrede wrote:
>> Walter Bright wrote:
>> 
>>> Double has another problem when used as a date - there are
>>> embedded processors in wide use that don't have floating point
>>> hardware. This means that double shouldn't be used in core
>>> routines that are not implicitly related to doing floating point
>>> calculations.
>> 
>> 
>> Ignoring the issue of date, I have a comment on processors:
>> 
>> IIRC, D will never be found on a processor less than 32 bits.
>> Further, it may take some time before D actually gets used in
>> something embedded.
>> 
>> By that time, IMHO, it is unlikely that a 32b processor would not 
>> contain a math unit.
>> 
>> ---
>> 
>> Of course this may warrant a discussion here, which is good,
>> because then we might end up with a more clear set of goals, both
>> for library development and for D itself.
> 
> 
> While it was decided at the start, for very good reasons, that D
> wasn't going to accommodate 16 bit processors, there are 32 bit
> processors in wide use in the embedded market that do not have
> hardware floating point. There is no reason to gratuitously not run
> on those systems.

Ok, that was exactly the answer I thought I'd get.

Currently, this issue is not entirely foreign to me. I'm delivering a HW 
+ SW solution to a manufacturer of plastics processing machines, where 
my solution will supervise the process and alert an operator whenever 
the machine "wants hand-holding".

For that purpose, the choice is between an 8-bit and a 16-bit processor, 
very probably a PIC. (So no D here. :-) I'll end up doing it in C.)

Now, considering Moore's law, and the fact that the 80387 math 
coprocessor didn't have all that many transistors, the marginal price of 
math is plummeting, especially compared with the minimum number of 
transistors needed for a (general purpose) 32-bit CPU.

Also, since the purveyors of 32-bit processors are keen on showing off 
the ease of use and versatility of their processors, it is likely that 
even if floating-point math is not on the chip, they will at least 
deliver suitable libraries to emulate it in software.

---

As I see it, there are mainly two use cases for D on embedded 
processors (correct me if I'm wrong). In the first (and probably more 
popular) scenario, there either exists a rudimentary (possibly even 
real-time) OS for the processor or application domain, delivered for 
free by the HW manufacturer, or the manufacturer delivers the necessary 
libraries for use either with their own compiler or for GCC cross 
compiling.

The second use case is that one develops the entire SW for an 
application "from scratch".

Now, in the former case, math is either on-chip, or included in the 
libraries. In the latter, either we don't use math, or we make (or 
acquire) the necessary functions from other sources.

---

The second use case worries me. (Possibly unduly?) D not being entirely 
decoupled from Phobos at least creates the impression of potential 
problems for "from-scratch" SW development on embedded HW.

---

We do have to remember the reasons for choosing a 32-bit processor in 
the first place: one should pick a 32-bit CPU only if the process to be 
controlled is too complicated, or otherwise needs more power than a 
16-bit CPU can deliver. At that point, the requirements for RAM, address 
space, speed, and other things are likely big enough that the inclusion 
of math (in HW or as a library) becomes a minor issue. (Oh, and some of 
the current 16-bit, and even some 8-bit, processors do actually deliver 
astonishing horsepower already.)

So, assuming that D has access to math on _all_ of the processors and HW 
it will ever run on suddenly doesn't seem so arbitrary.


