Any way to override base type with derived in derived type
IntegratedDimensions
IntegratedDimensions at gmail.com
Fri May 25 03:19:59 UTC 2018
On Friday, 25 May 2018 at 01:42:48 UTC, Basile B. wrote:
> On Friday, 25 May 2018 at 01:17:45 UTC, IntegratedDimensions
> wrote:
>> On Friday, 25 May 2018 at 01:02:00 UTC, Basile B. wrote:
>>> On Friday, 25 May 2018 at 00:15:39 UTC, IntegratedDimensions
>>> wrote:
>>>> On Thursday, 24 May 2018 at 23:31:50 UTC, Alex wrote:
>>>>> On Thursday, 24 May 2018 at 20:24:32 UTC,
>>>>> IntegratedDimensions wrote:
>>>>>> class T;
>>>>>> class TT : T;
>>>>>>
>>>>>> interface I
>>>>>> {
>>>>>> @property T t();
>>>>>> }
>>>>>>
>>>>>> abstract class A
>>>>>> {
>>>>>> T _t;
>>>>>> @property T t() { return _t; }
>>>>>>
>>>>>> }
>>>>>>
>>>>>> class C : A
>>>>>> {
>>>>>>
>>>>>> // Stuff below uses t as TT, but the compiler, of course,
>>>>>> // treats t as T
>>>>>> ...
>>>>>> }
>>>>>>
>>>>>>
>>>>>> The issue is that I programmed the class C with a variable
>>>>>> that was directly based off TT; I later subderived T from
>>>>>> TT and exposed it in I. (TT was refactored into T and
>>>>>> not-T.)
>>>>>>
>>>>>
>>>>> As a side note:
>>>>> I can hardly follow this, as you don't show where you use
>>>>> the interface I. However, especially if TT was refactored in
>>>>> such a way that it is the set difference of T and not-T, why
>>>>> did you choose to derive from T instead of containing T?
>>>>>
>>>>
>>>> It really should be obvious that A was meant to derive from
>>>> I. This is just standard oop. Simply leaving off : I should
>>>> not be a deal breaker because it would not change the whole
>>>> problem from black to white or vice versa.
>>>>
>>>> T is a member to be included. You can only derive from one
>>>> class. C can't derive from both A and T and even if it did,
>>>> it would mean something else.
>>>>
>>>>
>>>>
>>>>> https://en.wikipedia.org/wiki/Composition_over_inheritance
>>>>> http://wiki.c2.com/?CompositionInsteadOfInheritance
>>>>>
>>>>> Well, I can imagine useful cases, though...
>>>>>
>>>>
>>>> This is not a composition pattern.
>>>>
>>>> This is a parallel inheritance pattern.
>>>>
>>>> TT : T = T
>>>> | | |
>>>> v v v
>>>> C : A : I
>>>>
>>>> TT is used with C and T with I.
>>>>
>>>> When C changes to C', TT : T changes to TT' : T
>>>>
>>>> All functions that use TT in C are forced to use it as if it
>>>> were of type T rather than TT which requires a bunch of
>>>> casts.
>>>>
>>>> This is generally a violation of type logic. There is
>>>> nothing that prevents t from being something like TTT, which
>>>> has no direct relation to TT.
>>>>
>>>> But the programming logic of the code enforces t to be of
>>>> type TT in C *always*. So I don't know why I would have to
>>>> use casting all the time. It would be nice if there were a
>>>> simple, logical way to enforce a design pattern in the type
>>>> system, knowing that it is enforced at runtime. This makes
>>>> for cleaner code, nothing else.
>>>>
>>>>
>>>>
>>>>>>
>>>>>> But all the code in C assumes t is of type TT but now due
>>>>>> to the interface it looks like a T, even though internally
>>>>>> it is actually a TT.
>>>>>>
>>>>>> What I'd like to do is
>>>>>>
>>>>>> class C : A
>>>>>> {
>>>>>> private override @property TT t() { return
>>>>>> cast(TT)(_t); } // null check if necessary
>>>>>> // Stuff below uses t which is now a TT
>>>>>> ...
>>>>>> }
>>>>>>
>>>>>> or whatever.
>>>>>>
>>>>>> This is simply so I don't have to rename or cast all my
>>>>>> uses of t in C to type TT.
>>>>>>
>>>>>> I'm pretty much guaranteed that in C, t will be of type TT
>>>>>> due to the design (C goes with TT like bread with butter).
>>>>>>
>>>>>> So, it would be nice if somehow I could inform the type
>>>>>> system that in C, t is always of type TT and so treat it
>>>>>> as such rather than forcing me to explicitly cast for
>>>>>> every use. Again, I could rename things to avoid the same
>>>>>> name usage but in this case it is not necessary because of
>>>>>> the design.
>>>>>>
>>>>>> Is there any semantics that can get me around having to
>>>>>> rename?
>>>>>
>>>>> Maybe, you are looking for Curiously Recurring Template
>>>>> Pattern?
>>>>>
>>>>> ```
>>>>> interface I(P)
>>>>> {
>>>>> @property P t();
>>>>> }
>>>>>
>>>>> abstract class T(P) : I!P
>>>>> {
>>>>> P _p;
>>>>> @property P t() { return _p; }
>>>>> }
>>>>>
>>>>> class TT : T!TT
>>>>> {
>>>>>
>>>>> }
>>>>>
>>>>> void main()
>>>>> {
>>>>> auto tt = new TT();
>>>>> static assert(is(typeof(tt.t) == TT));
>>>>> }
>>>>> ```
>>>>
>>>> No, I am trying to keep parallel derived types consistently
>>>> connected. If A is derived from B and C from D and B uses D
>>>> then A uses C. Consistency cannot be guaranteed by the type
>>>> system at compile time because A is typed to use C, I want
>>>> to restrict it further to D.
>>>
>>> You must put a template parameter in the interface and
>>> specialize the class that implements the interface.
>>>
>>> ```
>>> module runnable;
>>>
>>> class T{}
>>> class TT : T{}
>>>
>>> interface I(N)
>>> {
>>> @property N t();
>>> }
>>>
>>> abstract class A(N) : I!N
>>> {
>>> N _t;
>>> @property N t() { return _t; }
>>> }
>>>
>>> class C1 : A!T{}
>>> class C2 : A!TT{}
>>>
>>> void main(string[] args)
>>> {
>>> import std.traits;
>>> static assert(is(ReturnType!(C1.t) == T));
>>> static assert(is(ReturnType!(C2.t) == TT));
>>> }
>>> ```
>>>
>>> but obviously this won't work if you want to derive C1 or
>>> C2...
>>
>>
>> or if there are 100 fields.
>>
>> This isn't a proper solution.
>>
>> The whole issue is not outside of C but inside
>>
>> Hypothetically
>>
>> class C : A
>> {
>> @property TT : T t() { return _t; }
>>
>> // t can be used directly as TT rather than having to do
>> (cast(TT)t) everywhere t is used.
>> }
>>
>> would solve the problem and it would scale.
>>
>> The way it would work is that inside C, t is treated as type
>> TT which is derived from T. The derivation means that it
>> satisfies the interface constraint since we can always stick a
>> TT in a T.
>>
>> Since the type system doesn't allow such behavior, I'm trying
>> to find a convenient way to simulate it that isn't more
>> complicated than just casting. The problem is that casting is
>> very verbose and scales with the code size.
>>
>> Also, such notation would still work with deriving from C
>>
>> class CC : C
>> {
>> @property TTT : TT t() { return _t; }
>> }
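(Editorial sketch: D does accept covariant return types on overrides, which gets close to the hypothetical `TT : T` syntax quoted above. This is a minimal illustration using the thread's names; the constructor assignment of `_t` is an assumption added so the example is self-contained.)

```d
class T {}
class TT : T {}

interface I
{
    @property T t();
}

abstract class A : I
{
    protected T _t;
    @property T t() { return _t; }
}

class C : A
{
    this() { _t = new TT; }

    // Covariant override: narrows the declared return type from T to TT.
    // Code working with C sees t as a TT; code going through I or A
    // still sees a T, so the interface constraint stays satisfied.
    override @property TT t() { return cast(TT) _t; }
}

void main()
{
    auto c = new C;
    static assert(is(typeof(c.t) == TT)); // on C, t is a TT
    I i = c;
    static assert(is(typeof(i.t) == T));  // through I, t is still a T
    assert(c.t !is null);                 // the single downcast succeeded
}
```

The one `cast(TT)` sits at the override rather than at every use site, matching the "null check if necessary" note in the earlier hypothetical.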
>
> Using dynamic casts while you think that the static type should
> be known is not good anyway.
Who said I "think" they are known?
You missed the point where I said the code is designed to work
this way. Forcing one to make their code more verbose for some
arbitrary sake of "safety" is also not a good way.
If there is only one point of assignment to T, and that point
assigns it a type TT : T, then that is pretty damn safe. Sure,
not 100%, because anyone can do anything at any time in a program.
Nothing is 100% safe. It is not difficult to circumvent a type
system that claims to be 100% correct. I am trying to work with
the type system and inform it of a well-defined behavior, but it
won't let me because it is too dense to understand.
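(Editorial sketch: when overriding the property is not wanted, confining the one cast to a private accessor inside C also keeps the rest of the class cast-free. Self-contained illustration using the thread's names; `tt` is a hypothetical helper name and the constructor assignment is an assumption.)

```d
class T {}
class TT : T {}

abstract class A
{
    protected T _t;
    @property T t() { return _t; }
}

class C : A
{
    this() { _t = new TT; }

    // Single point of casting: everything else inside C uses tt,
    // not t, so the TT view is written once and reused everywhere.
    private final @property TT tt()
    {
        auto r = cast(TT) _t;
        assert(r !is null, "C's design invariant: _t always holds a TT");
        return r;
    }
}

void main()
{
    auto c = new C; // private is module-level in D, so main can see tt
    static assert(is(typeof(c.tt) == TT));
    assert(c.tt !is null);
}
```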
More information about the Digitalmars-d-learn
mailing list