VisualD's Intellisense not working with gtk
Johnson via Digitalmars-d-ide
digitalmars-d-ide at puremagic.com
Mon Aug 7 12:56:51 PDT 2017
On Monday, 7 August 2017 at 18:06:37 UTC, Rainer Schuetze wrote:
>
>
> On 06.08.2017 22:18, Johnson Jones wrote:
>> On Sunday, 6 August 2017 at 19:27:26 UTC, Rainer Schuetze
>> wrote:
>>>
>>>
>>> On 06.08.2017 18:31, Johnson Jones wrote:
>>>> (this is a bit of a long post, so take your time (when you
>>>> have time, I'm in no rush) and get some coffee ;)
>>>>
>>>>
>>>> What about locals not seeming to show up? Not sure if my
>>>> question got answered or not?
>>>
>>> I think there is some misunderstanding about the interaction
>>> between the debugger, the semantic engine and the compiler
>>> generated JSON information.
>>>
>>> The short version: there is no interaction at all.
>>>
>>> The slightly longer version:
>>>
>>> - the debugger is a component completely different from the
>>> editor. It operates on the debug info built into the
>>> executable. This represents the information from the last
>>> successful build, but this is hardly accessible to the
>>> editor. It can get obsolete and wrong if you start editing.
>>>
>>> - the JSON information is similar to the debug information in
>>> that it represents the information from the last successful
>>> build, but more accessible to the editor. It gets obsolete
>>> and wrong if you start editing, too. In addition it doesn't
>>> include any information about local variables, so that you
>>> have to analyze the code with other means to make sense of
>>> any identifier.
>>>
>>> - that's the job of the semantic analyzer engine. It updates
>>> whenever you change any of the source code. It figures out
>>> what type an identifier is by following the appropriate
>>> lookup rules, and it can also list its members. So matching
>>> it to information also available elsewhere does not really
>>> help, because that's not the difficult part.
>>>
>>>
>>
>> Yeah, but these locals are variables that haven't changed.
>> They're part of a library that never changes, so the information
>> should always be up to date and everything should be consistent
>> with respect to those elements.
>>
>> If it's the semantic analyzer's job to report a local's type and
>> all the sub-type info, then it's not doing its job. It should
>> provide coverage in such a way that things that haven't been
>> modified since the last build (like "external" libraries) are
>> safe to report, because it's safe to assume the information is
>> correct.
>>
>> Editing files is an extremely local thing and rarely breaks the
>> semantics of much of the program (it can in extreme cases, but
>> that's very rare and less likely the larger the program is).
>>
>> If it's still a problem, how about eventually adding an option
>> where we can specify certain JSON paths as being semantically
>> correct, and the semantic engine uses that data as if it were,
>> regardless? It's up to the programmer to make it so.
>>
>> Hence, for something like Phobos or GTK or other libraries we
>> won't be modifying: generate the JSON, stick it in that dir...
>> and the semantic engine uses it regardless of correctness. It
>> matches the locals' types up with it, and if they match it
>> presents that info.
>>
>> What's the worst that could happen? We get some invalid
>> elements in intellisense? I'm ok with that... as long as they
>> are only wrong if I actually modified the libraries and
>> produced an inconsistent state, which I won't do since I don't
>> modify Phobos.
>
> Yeah, getting some additional information from JSON files could
> work, but it's not so easy to make the connection. The JSON
> file does not list any locals, so you still have to make sense
> of an identifier. If you find its type (not available in JSON),
> listing its members is not a big deal.
>
> I think time is better invested in improving the semantic
> engine, though in the long run, the dmd compiler is supposed to
> be usable as a library (but getting it to a point where it can
> be integrated with an IDE is still a long way off, IMO).
>
> So, if you can provide full source files instead of single-line
> snippets of things that don't work, it will likely be a lot
> easier to reproduce the problems and fix the semantic engine.
> Also, you
> might want to add them as reports to https://issues.dlang.org/
> for component visuald so they don't get lost.
Could Dscanner not be used?
https://github.com/dlang-community/D-Scanner
The "--ast" or "--xml" options will dump the complete abstract
syntax tree of the given source file to standard output in XML
format.
Simply match the source line number up with the AST and extract
the type. That type is then used as a lookup into the JSON (it
should be there somewhere; if not, update dmd to add type
information so cross-referencing can be used).
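Just to illustrate the JSON half of that, here's a rough sketch
(the field names like "members", "kind", "name", "file" and
"line" are from memory of the -X output, so double-check them
against what your dmd actually emits):

    // Hypothetical helper: list the members of a type found in `dmd -X`
    // output. Field names below are assumptions based on the usual -X
    // layout; verify them against the JSON you actually get.
    import std.file : readText;
    import std.json;
    import std.stdio;

    void listMembers(JSONValue modules, string typeName)
    {
        foreach (mod; modules.array)                  // one entry per module
        {
            if ("members" !in mod) continue;
            foreach (sym; mod["members"].array)       // top-level declarations
            {
                if (sym["name"].str != typeName) continue;
                writeln(typeName, " (", sym["kind"].str, ") in ",
                        mod["file"].str, " line ", sym["line"].integer);
                if ("members" in sym)
                    foreach (m; sym["members"].array) // fields, methods, ...
                        writeln("  ", m["kind"].str, " ", m["name"].str);
            }
        }
    }

    void main(string[] args)
    {
        // e.g. JSON generated once with: dmd -X -Xf=types.json -o- somelib.d
        auto json = parseJSON(readText(args[1]));
        listMembers(json, args[2]);
    }

Whether that is fast or robust enough for intellisense is another
question, but it shows the lookup itself isn't the hard part;
knowing which type name to look up is.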
IMO, DMD should be able to generate basically every
cross-relation that one needs to do proper debugging. When it
evaluates a mixin, it can generate whatever info is required, and
it could even allow debugging mixins: since dmd effectively runs
itself again to generate the mixin output, a debugger could be
inserted in between.
It seems that the main problem is that dmd itself is not very
well designed to support these types of features. Maybe that is
the place to start?
Basically, every line in the source code corresponds to something
in the binary (although not as a one-to-one function, and not
even a function at all, one can still encode such a mapping to
find out what is where) and vice versa. All the type info, mixin
expansions, etc. should be easy to expose. All of this has to
work internally, or else dmd couldn't compile the code in the
first place.
The only problem seems to be getting dmd to extract and present
the information in a way that Visual D can understand?
In a sense, I don't see why any semantic analysis has to be done.
Everything is already done by DMD, and it will do it perfectly
(because it has to).
It would probably be better to add the proper modifications to
DMD itself, so that the information stays in sync with changes to
DMD or the grammar. That prevents having to update a separate
utility every time dmd changes something.
I don't know if dmd as a library will really add anything new
unless it already has the functionality to do what is required...
but if that's the case, it should also be able to output all the
required info to a file that Visual D could use, so it's just a
performance difference.
As far as source code is concerned, these problems happen on any
project, so any project should exhibit them. Of course, mixin
issues will only happen when mixins are used.
I think any typical use of Visual D tends to demonstrate some
issue that needs to be resolved. These are not issues that rarely
pop up; they are pretty much an everyday thing. IMO, there is
something fundamentally broken with Visual D or DMD with regard
to debugging capabilities. I know some of it is with dmd, and the
powers that be don't care because they don't use modern debugging
techniques... so it doesn't affect them. They are probably
experienced enough programmers, and don't write any real apps in
D either (excluding utilities or libraries that don't really use
a wide variety of things), to have these types of issues... and
when they do, they either know how to fix them, use their old
school debugging methods, or just work through them... none of
which is acceptable for me or the average user of D. (And I
seriously doubt they have even used Visual D, much less for any
serious project.)
As a case in point: mixin debugging. This is necessary. It is no
different from normal debugging. Where would modern programming
be without debuggers? Well, that is where we are at with mixins.
We have no real way to debug them. I use them all the time and
have to be really careful with what I'm doing and use stupid
tricks to debug them. Mainly string mixins, where debugging is
far more time-consuming than just getting an error at some line
number. I've learned by now that I should write them as normal
runtime functions first and then, once they work, make them CTFE.
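To show what I mean by that workflow, here's a minimal sketch
(makeProperty is just a made-up example generator): the generator
is an ordinary function you can run and unit-test at run time as
plain text, and only once it produces the right code do you mix
it in, where CTFE evaluates the very same function. A
pragma(msg, ...) can also dump the generated string at compile
time for inspection.

    import std.stdio;

    // Hypothetical generator: builds the code for a private field plus a
    // getter. Plain string concatenation keeps it trivially CTFE-able.
    string makeProperty(string type, string name)
    {
        return "private " ~ type ~ " _" ~ name ~ "; "
             ~ type ~ " " ~ name ~ "() { return _" ~ name ~ "; }";
    }

    unittest
    {
        // Run-time check of the generated code as ordinary text.
        assert(makeProperty("int", "count") ==
               "private int _count; int count() { return _count; }");
    }

    struct Widget
    {
        // Once the generator is known to be correct, mix it in (CTFE).
        mixin(makeProperty("int", "count"));
    }

    // Print the generated code at compile time to see what gets mixed in.
    pragma(msg, makeProperty("int", "count"));

    void main()
    {
        Widget w;
        writeln(w.count);   // prints 0
    }

The nice part is that the unittest and the mixin exercise exactly
the same function, so once the run-time tests pass, the CTFE path
is running code you've already debugged.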
But mixin debugging should be easy. After all, it's just a D
program inside a D program. dmd compiles it internally, but if
dmd had an "internal" debugger then we could debug it and get
better results.
In a sense, it's just back-tracing anyway:

    dmd -> mixin -> dmd -> mixin output
        <=             <=

If the internal dmd would keep track of what's going on, it could
map the output line numbers to the input, and we could debug the
output (which is effectively inserted into the main source code
directly, as if in a "temp" file; dmd may obscure that process,
but that is effectively what is going on)... Visual D could link
errors to the original mixin through the line mapping and even
open the generated output in a view for us to see what the mixin
produced.
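As an aside, part of that line mapping can already be faked from
user code without touching dmd: string mixins are lexed like
ordinary source, so a #line directive inside the mixin string
should remap whatever follows it. A rough sketch (the generate()
helper here is just made up for illustration):

    import std.conv : to;
    import std.stdio : writeln;

    // Hypothetical generator that prefixes its output with a #line
    // directive, so errors inside the generated code are reported against
    // the call site instead of some anonymous mixin location.
    string generate(string file = __FILE__, size_t line = __LINE__)
    {
        return "#line " ~ line.to!string ~ " \"" ~ file ~ "\"\n"
             ~ "int generatedValue = 42;";
    }

    // Any compile error inside the mixed-in code now points near this line.
    mixin(generate());

    void main()
    {
        writeln(generatedValue);   // prints 42
    }

It doesn't give you a real debugger for the mixin, but at least
the error messages land somewhere useful.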
Of course, all this is not so simple, but it's not so hard
either... it's just work (which I guess is the hard part).
My guess is that, because dmd was designed a long time ago, it
didn't take into account what might be, and hence we have what we
have... which is great on one hand and sorry on the other.
I would think that one of the most important things the D
Foundation would be working on, besides bugs in DMD and language
design issues, is a proper IDE and debugger... but that doesn't
seem to be the case.
Anyways, I've ranted long enough... sorry...