This is why I don't use D.

Jonathan M Davis newsgroup.d at jmdavisprog.com
Fri Sep 7 19:15:21 UTC 2018


On Friday, September 7, 2018 10:35:29 AM MDT H. S. Teoh via Digitalmars-d 
wrote:
> On Fri, Sep 07, 2018 at 09:24:13AM -0600, Jonathan M Davis via
> Digitalmars-d wrote: [...]
> > What's somewhat more of an open question is how new compiler releases
> > should be handled. Aside from the issue that _every_ package then
> > potentially needs to be tested (which could take a while), there's the
> > issue of which versions to test. Testing every release of a package
> > would be overkill, but simply testing the latest isn't necessarily
> > enough.
>
> The initial setup will take a long time because we have so many packages
> that have never been tested in this way before.  But once the database
> is up-to-date, I expect that it will take much less work to keep things
> up-to-date.
>
> For the initial setup, I don't think it's necessary to test *every*
> historical release of the compiler -- the most important thing is the
> current official releases of all compilers (gdc, ldc, dmd).[*]  If a
> package doesn't compile with the latest release, then optionally do a
> unidirectional search backwards to find the last compiler version that
> compiles it successfully, and log that.

I pretty much assumed that we'd just start with the most recent compiler
release, and then we'd only have to rerun anything for a given package when it
did a new release or when a new version of the compiler was released. My
point about versions was about which versions of a package to test. We don't
necessarily want to test just the latest package version, but we also don't
want to be testing every version of a package when a new compiler version
comes out. At some point, some thought will have to go into what makes the
most sense in that regard.

> ([*] I was going to suggest including dmd-nightly as well, but that
> poses the problem of load: running it every night will cause a lot of CI
> churn, which also generates a lot of mostly-useless information -- no
> one will care about which of 50 dmd git revisions failed / succeeded to
> compile a package, just whether the *latest* dmd-nightly works.  So
> perhaps an update once a week or once a month in between releases will
> be enough.  But dmd-nightly is an optional extra that can be skipped for
> now.  The important baseline is the current official releases of gdc /
> ldc / dmd.)

My gut reaction is that we shouldn't include anything but actual dmd
releases. As a developer of Phobos, I do often use the development version,
but in general, I don't think that's something we should be encouraging the
average user to do.

> > Also, there's the question of what it even means to test each package
> > to verify that it's working. Does that mean running the unit tests?
> > Not all projects run them the same way, and it could be pretty
> > expensive to run the tests for all packages on code.dlang.org. And of
> > course, that assumes that the package even has any unit tests to begin
> > with. So, should testing the package just mean that dub build works?
> > That will catch the really basic problems, but it won't usually catch
> > problems in templates and could easily miss other types of problems -
> > though maybe it's enough. Regardless, it's going to need to be made
> > clear what it means that code.dlang.org claims that a package works
> > with a particular version of the compiler.
>
> [...]
>
> Let's not overcomplicate things and doom the flight before it takes off.
> Let's start with the absolute minimum baseline, and then additional
> perks can be added on top of that once the baseline is working.
>
> I say the baseline is that dub build works.
>
> If we feel like going an extra mile, dub test.
>
> If we feel like doing even more, then add other stuff like compiling
> provided example programs, testing template instantiations and what-not.
> (Though one would expect that dub test ought to do that already,
> otherwise I question the quality of the code / unittests.)
>
> As long as we publish exactly what is being run to verify whether the
> package "works", I think that should be good enough for starters.  Even
> if dub build, say, doesn't catch all problems, it's still better than
> the status quo of no information at all.  Let's not let the perfect
> become the enemy of the good, as is the common malady around here.

Honestly, I wouldn't rely on anything beyond dub build working in a
consistent manner across projects. As far as I can tell, you can't actually
do anything properly custom with dub test, and I'm inclined to think that
how it approaches things is outright wrong. With dxml, I was forced to
abandon dub test completely, because it broke anything that depended on
dxml. As far as I can tell, what happens is that dub pulls in the
dependencies and builds them all with dub build (so without flags like
-unittest), but then when you run dub test on your project, it stupidly
applies all of the configuration options from the dub test configuration to
the dependencies as well, meaning that anything in a dependency which didn't
get built by dub build isn't there, and you get linker errors.

I initially tried to fix it by versioning all of the unittest blocks and
unit test helpers with a version identifier specific to dxml's dub test
build, but dub stupidly declared that version identifier whenever a project
depending on dxml ran dub test, so you _still_ got linker errors. My
conclusion was that the only sane thing to do was to abandon dub test
completely and define a separate build type for running dxml's unit tests,
so that the version identifier could only be declared when building dxml's
own unit test build. The result is that dub test always passes but does
absolutely nothing, which is really annoying, but I don't trust how dub test
behaves farther than I can throw it, and it's currently my intention to
never use it on any library I release. The whole thing really pisses me off.
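
For concreteness, here's a rough sketch of the kind of workaround I mean
(the version identifier and build type name are just illustrative, not
necessarily exactly what dxml ships with). The unittest blocks and test
helpers get gated on a package-specific version identifier:

    // Only compiled when this package's own test build declares dxmlTests,
    // so a dependent project's dub test never tries to link against these.
    version(dxmlTests) unittest
    {
        // ... tests for this module ...
    }

and then dub.sdl defines a separate build type which is the only thing that
declares that identifier:

    buildType "unittest-dxml" {
        buildOptions "unittests" "debugMode" "debugInfo"
        versions "dxmlTests"
    }

The library's own tests then get run by selecting that build type explicitly
(e.g. something like dub test -b unittest-dxml), whereas a dependent
project's plain dub test builds dxml without dxmlTests and so never sees the
test symbols.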

So, anyway, I think that it's pretty clear that you can't rely on anything
other than dub build working for a project, because that has to work for it
to function as a dub package, whereas none of the rest does.
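
To put it concretely, the per-package check would amount to roughly this
(just a sketch; the directory layout and compiler selection here are
assumptions on my part, not an existing code.dlang.org script):

    # with the package's source fetched from code.dlang.org:
    cd package-source
    dub build --compiler=dmd     # baseline: every dub package has to at least build
    # dub test --compiler=dmd    # optional extra, but as described above, not
                                 # something that can be counted on across packages

Anything beyond that first dub build is a bonus rather than something that a
dub package has to support.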

Regardless, aside from pointing out that dub test is not a command that you
can rely on working for dub packages in general, my point was really that we
need to be sure of what we're trying to test here, and when we present it on
code.dlang.org, it needs to be clear what that result indicates. If we were
to decide that we wanted it to indicate some level of actual functionality,
then we'd pretty much have to figure out how to run each package's unit
tests or indicate the lack of unit testing, which creates a whole host of
problems. On the other hand, if we just want to do dub build, then we need
to be clear that when code.dlang.org indicates that a project "passes," all
that means is that dub build succeeds; it doesn't actually say anything
about how well the package works. Ideally, we want to present useful
information that isn't misleading.

- Jonathan M Davis




