Non-null objects, the Null Object pattern, and T.init

Paolo Invernizzi paolo.invernizzi at gmail.com
Sun Jan 19 06:23:43 PST 2014


On Sunday, 19 January 2014 at 12:20:42 UTC, Ola Fosheim Grøstad 
wrote:
> On Sunday, 19 January 2014 at 07:40:09 UTC, Walter Bright wrote:
>> On 1/18/2014 6:33 PM, Walter Bright wrote:
>>> You elided the qualification "If it is a critical system". 
>>> dmd is not a safety-critical application.
>>
>> And I still practice what I preach with DMD. DMD never 
>> attempts to continue running after it detects that it has 
>> entered an invalid state - it ceases immediately. Furthermore, 
>> when it detects any error in the source code being compiled, 
>> it does not generate an object file.
>
> I think the whole "critical system" definition is rather vague. 
> For safety-critical applications you want proven implementation 
> technology, proper tooling, and a methodology to go with it. And 
> it is very domain-specific. Simple algorithms can be proven 
> correct, some types of signal processing can be proven 
> correct/stable, and some types of implementations (like an FPGA) 
> afford exhaustive testing (testing all combinations of input). In 
> the case of D, I find that a somewhat theoretical argument: D 
> is not a proven technology, and D does not have tooling with a 
> methodology to go with it. But yes, you want backups against 
> hardware failure even for programs that are proven correct. In 
> a telephone exchange you might want a backup system to 
> handle emergency calls.
>
> If you take a theoretical position (which I think you do), then 
> I also think you should accept a theoretical argument. And the 
> argument is that there is no theoretical difference between 
> allowing programs with known bugs to run and allowing programs 
> with anticipated bugs to run (e.g. catching "bottom" in a 
> subsystem). There is also no theoretical difference between 
> allowing DMD to generate code that does not follow the spec 
> 100% and allowing DMD to generate code when an anticipated 
> "bottom" occurs. It all depends on what degree of deviation from 
> the specified model you accept. It is quite acceptable to catch 
> "bottom" in an optimizer and generate less optimized code for 
> that function, or to turn off that optimizer setting. However, 
> in a compiler you can defer to "the pilot" (the compiler user), 
> so that is generally easier. In a server you can't.

I'm trying to understand your motivations, but why can't you do 
that in a server? I still can't grasp that point.
--
Paolo

