DLF September 2023 Monthly Meeting Summary

matheus matheus at gmail.com
Mon Nov 13 04:46:07 UTC 2023


On Monday, 13 November 2023 at 03:07:07 UTC, Mike Parker wrote:
> On Monday, 13 November 2023 at 00:55:37 UTC, zjh wrote:
>> On Sunday, 12 November 2023 at 19:50:02 UTC, Mike Parker wrote:
>>
>> https://gist.github.com/mdparker/f28c9ae64f096cd06db6b987318cc581
>>
>>
>> I can't access it, please post it here.
>
> I can't. It's too big. That's why I posted it there.

Well, maybe splitting it into two parts? Let's try:

Part 1:

DLF September 2023 Monthly Meeting Summary

The D Language Foundation's monthly meeting for September 2023 
took place on Friday the 15th at 15:00 UTC. After we'd had one of 
our shortest meetings ever the previous month, this one was a bit 
of a long one, lasting a little over two hours.

Note that this was Timon Gehr's first time joining us for a 
monthly. I'd spoken with him at DConf, and he'd expressed 
interest in joining both our monthlies and our planning sessions 
when he has the time for them. I'll be inviting him as a 
permanent member as long as he's willing to be one.

The Attendees

The following people attended the meeting:

     Walter Bright
     Timon Gehr
     Martin Kinkelin
     Dennis Korpel
     Mathias Lang
     Átila Neves
     Razvan Nitu
     Mike Parker
     Adam D. Ruppe
     Robert Schadek
     Steven Schveighoffer

Robert

Robert got us started by letting us know he had done some 
preliminary JSON5 work at DConf. He also gave an update on his 
script for the Bugzilla to GitHub migration. He had changed it to 
use a "hidden" API that someone from GitHub revealed to him when 
he reached out for assistance. Though there were still rate limits 
to deal with, his script was now much faster. The previous script 
would have taken days to migrate the dmd issues, but for a test 
run, he was able to do it in one sitting at DConf. He was ready 
to show me how to use it so I could test it and provide him with 
feedback.

Other than that, he'd done a few small things on DScanner and was 
waiting on Jan Jurzitza (Webfreak) to merge them. He noted that 
Walter had asked him to write an article for the blog related to 
his DConf talk. Robert had an idea for an article related to the 
DScanner updates to supplement the talk.

(UPDATE: During a subsequent planning session, Robert reminded me 
that the only reason I had volunteered to do the migration was 
that he didn't have admin access to our repositories. That was 
easily rectified. He will now be doing the migration. At our most 
recent planning session, we talked about a migration plan. Before 
taking any steps, he's going to chat with Vladimir Panteleev. 
Vladimir raised several concerns with me a while back about the 
Dlang bot and other things the migration might affect. Robert 
wants to get up to speed with all of that before moving forward.)

Me

I told everyone I'd just gotten home from my DConf/vacation trip 
two days before the meeting and had spent a chunk of that time 
decompressing. I did manage to get a little bit of post-DConf 
admin out of the way the night before by going through all the 
receipts from everyone eligible for reimbursement to let them 
know how much was due to them. I went into some details on how I 
was going to make those payments. The big news there was that we 
managed to get enough in revenue from registrations and 
sponsorships that we came in under budget, i.e., the amount 
Symmetry needed to send us to make up the difference was less 
than the total they'd allocated for reimbursements. (Thanks to 
Ahrefs, Ucora, Decard, Funkwerk, and Weka for helping out with 
that!)

I then reported that I'd started editing Saeed's video. The venue 
had provided me access to all of their footage this year. Last 
year, they only gave me footage from one camera and wanted us to 
pay extra for more. This year, I have footage from three cameras 
('main', 'wide', and 'left') as well as the video feed of the 
slides.

Next, I noted that John Colvin and I had participated in an 
after-action meeting with Sarah and Eden from Brightspace, our 
event planners (if you were at DConf, Eden was the young woman 
sitting out front all four days, and Sarah was with her on the 
first day). We all agreed that, aside from the unfortunate laptop 
theft and COVID outbreak, the mechanics of the conference went 
well this year. We went through some feedback we'd received to 
discuss how to improve things next year (more info on badges, an 
actual registration form to get that info, etc.), and tossed 
around some ideas on how to prevent future thefts and mitigate 
the risk of COVID outbreaks. One thing we agreed on there was to 
have an extra person at the door whose main job is to check 
badges. There will surely be other steps to take once we consult 
with the venue. They're evaluating what measures they can take to 
avoid a repeat at any event they host.

I also let everyone know what Sarah said about our community. Due 
to disruptions at Heathrow at the start of the conference, 
several attendees found themselves with canceled flights. A 
number of them had an extraordinarily difficult time arranging 
transportation. Sarah told us that in her years of event 
planning, she'd never seen so many people go to the lengths that 
this group went through to attend an event. She found that 
amazing and said we have a special community.

John and I agreed that planning for DConf '23 got moving too late 
and that we needed to start planning earlier for DConf '24. We'd 
like to push it back into September, since that would be past 
peak travel season, meaning cheaper airfare and lodging. 
Unfortunately, that's also peak conference season. CodeNode had 
offered us a nice rate for the off-peak period the past two 
years, but the cost during the peak period was prohibitive. We're 
going to see if we can get a later date at a reasonable price, 
and also look into moving back again to May.

Dennis wondered if it would be worth dropping the Hackathon day 
to reduce the cost, or maybe figure out a way to better utilize 
it, or perhaps find a more casual space for it. I replied that it 
was already significantly cheaper than the other three days. We 
budget for fewer people, which reduces both the venue and 
catering costs, and we don't use the A/V system. I think we 
should find a way to better utilize it (some people do get work 
done that day, and there's a good bit of discussion going on, 
too). This year, several people attended Saeed's IVY workshop 
sessions. I said that Nicholas Wilson had an idea for a workshop 
for next year. Brian Callahan had also suggested workshop ideas.

(UPDATE: Plans are already afoot for DConf '24. I've got a 
meeting with two Symmetry people this week to discuss the budget, 
and all three of us will be meeting with Brightspace soon to talk 
about planning. I hope to announce the next edition of DConf 
Online soon. Last year, having it in December so soon after DConf 
was a royal pain. So I'm pushing it into 2024 this time. I just 
need to get a solid idea about what time of year we're doing 
DConf before I can schedule it. I want to keep six months between 
them.)

Adam

Unreachable code

Adam started by bringing up the "statement unreachable" warning. 
He said it's really annoying and we should just eliminate it. One 
false positive is enough to destroy the value it delivers, and it 
doesn't actually reduce bugs. He had a PR that versioned it out.

Walter said he had initially been against adding that warning, 
but it had highlighted some unintentionally unreachable lines in 
his code. He said we could debate how much value it has, but it's 
not valueless.

Timon said he was in favor of getting rid of it because he found 
it mostly annoying.

Steve agreed that the feature has very little use, especially in 
templated code where something might be unreachable in one 
instantiation, but reachable in another. You then have to work 
your template around that and it causes all kinds of grief. It's 
valuable if you're using it to find those kinds of problems, like 
in a linting context. But most people have warnings turned on as 
errors, so when they don't want to see this one message, they 
either have to turn off warnings-as-errors or add a workaround in 
code. Dub has warnings as errors by default. He said he'd rather 
see the compiler just remove unreachable code rather than warn 
about it.
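
To illustrate Steve's point, here is a minimal sketch (not from the 
meeting; the names are made up) of how the warning trips over 
instantiation-dependent code: the final return is reachable for most 
types but dead when T is a static array, so one instantiation turns 
-w into an error.

    // Compile with: dmd -w example.d
    size_t firstDimension(T)(T value)
    {
        static if (is(T : E[n], E, size_t n))
        {
            return n;            // static arrays always return here...
        }
        return value.length;     // ...so this line is "statement is not
                                 // reachable" in those instantiations
    }

    void main()
    {
        int[4] fixed;
        int[] dynamic = [1, 2, 3];
        assert(firstDimension(fixed) == 4);    // this instantiation trips the warning
        assert(firstDimension(dynamic) == 3);  // fine: the tail is reachable here
    }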

Mathias didn't think it was completely useless. It had helped him 
a couple of times. But, as Steve said, it's a pain with 
templates. If you have a static foreach or a static if, you're 
going to hit it pretty quickly. He often ends up writing 
conditions that are always true just to shut the compiler up. 
Avoiding the warning otherwise just isn't realistic. He equates a false positive 
in the compiler to a broken CI: when people get used to a broken 
CI, they ignore it completely and no longer rely on it. You want 
your CI to always be reliable. You want the same for your 
compiler. If it's giving false positives, get rid of that feature.

I asked if this was the time to start talking about 
enabling/disabling specific warnings on the command line. Walter 
said that's so tempting and it's so awful. There was a general 
chorus of "No".

Martin said he agreed with all the points raised so far. This is 
a feature that should be in a configurable linter for those who 
want it. It had helped him a couple of times, but it had gotten 
in the way on many, many more occasions than it had been useful.

Walter asked if it would be possible or make sense to disable the 
warning in template expansions. Adam said yes, but gave the 
example of bisecting a function. In that case, you'll stick an 
assert(0) somewhere as a debugging aid and now it won't compile 
anymore. So what's the point? Walter agreed that's annoying, but 
that he usually comments out the rest of the code to silence the 
warning. Adam said he was just asking to look at the 
cost-benefit. Even if the benefit is non-zero, the cost is 
significant.
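
For illustration, here is a sketch of the bisecting scenario Adam 
describes (the function names are hypothetical): cutting a function 
short with assert(0) while hunting for the point where things go 
wrong makes the rest of the body unreachable, and with warnings 
treated as errors the build stops.

    void prepare(int[] data) {}
    void transform(int[] data) {}
    void finish(int[] data) {}

    void process(int[] data)
    {
        prepare(data);
        assert(0);        // temporary debugging aid while bisecting
        transform(data);  // "statement is not reachable" -> an error with
        finish(data);     // warnings-as-errors, so the program won't build
    }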

Walter considered it, then said it was a tough call. I told him 
that this sounds like it's not enabling the realization of ideas, 
but rather frustrating the realization of ideas (the first clause 
of the DLF's IVY statement is 'to enable the realization of 
ideas'). Walter agreed and said it's not a big deal if it's 
removed and consigned to the linter.

Dennis liked the decision, but he made the point that it's nice 
for this information to be utilized by editors. For example, 
code-d grays out unused parameters, so you can see they're unused, 
but it's not intrusive in any way. He suggested that we disable 
the check in dmd, but keep the code for it so that it can be used 
by clients of dmd-as-a-library. Walter said that's a good idea.

(UPDATE: Adam has since modified his PR so that the check is 
disabled but not removed, and the PR has been merged.)

DLF roles

Next, Adam asked who was responsible for decisions about Phobos, 
saying that it used to be Andrei. I replied that it was Átila.

Then he said he'd heard something about an ecosystem management 
team and asked about its status. I explained that was an idea I'd 
had a while back. I'd posted about it in the forums and mentioned 
it in my DConf '23 talk. I was trying to organize a group of 
volunteers who could bring order to the ecosystem (identify the 
most important projects, help us migrate services to new servers, 
etc.). Though things got off to a positive start, in the end, it 
never materialized. The volunteers all had time constraints that 
caused long delays, and I wasn't checking in with them frequently 
enough to coordinate them. Then we entered the IVY program and 
that led us to have a massive change of plans (more accurately, 
it caused us to start planning things where there were no plans 
before). So the "ecosystem management team" that I envisioned is 
no longer a thing. Instead, we have a high-level goal to "Enhance 
the Ecosystem". Mathias and Robert are in charge of that. Once 
they get moving at full speed, they'll be doing those things I 
had expected the management team to do.

Adam mentioned that Andrei used to reach out and actively solicit 
people to do specific tasks. That was why Adam wrote the octal 
thing; Andrei had put it out as a challenge to see if someone 
could do it. He said he kind of missed Andrei. Adam never cared 
for his code reviews, but Andrei did some interesting things. I 
noted that reaching out to people and finding new contributors is 
in my wheelhouse. I've reached out to a few existing contributors 
for discussions already (including Adam and Steve, which is how 
they ended up in our meetings) and will reach out to more in the 
future (note that Razvan, Dennis, and I have discussed different 
steps we can take to bring more active contributors in and 
implement some ideas a little further down the road; Dennis has 
already started a contributor tutorial series on our YouTube 
channel and plans to continue it).

(I should note for those who aren't aware that Razvan, Dennis, 
and I are each paid a small monthly stipend for our DLF work, 
mostly courtesy of Symmetry, for a fixed number of hours per week 
which we often exceed. Additionally, Symmetry allows Átila one 
work day each week to spend on DLF stuff. We're the only ones 
receiving any compensation. Any work Mathias and Robert do for 
the DLF is done primarily on a volunteer basis on their own time.)

Steve

Steve reported that he'd been trying to get his Raylib port to 
use ImportC. He'd found that ImportC had come a long way. He'd 
gotten to a point where it was almost working except for a few 
linker errors. He'd been working with Martin to solve them and 
expected them to be fixed in the next LDC release. It was 
possible that with the next version of LDC, he could have a 
Raylib port with internal parts that were ImportC enabled. He 
thought that was just exciting. If ImportC can get to the point 
that you can just import what you need from C, that would take a 
lot of headaches away from creating ports and bindings.
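
As a rough illustration of the workflow Steve is describing (this is 
a generic ImportC example, not his Raylib port): with ImportC, the D 
compiler can compile a plain C source file and let D code import it 
like a module.

    // app.d -- assumes a C file square.c sitting next to it containing:
    //     int square(int x) { return x * x; }
    // Build both together and the C file becomes an importable module:
    //     dmd app.d square.c
    import square;

    void main()
    {
        assert(square(4) == 16);
    }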

Next, he said he'd started his next D coding class. It reminded 
him that on Windows, more polish is needed to help someone who's 
never done software development get everything up and running. 
Before, he'd set up a web page showing the 
required steps, but the kids still couldn't get it done. He'd 
like to see this sort of thing get focus and support from the 
community.

But everything else had been going well. He'd been really happy 
with the responsiveness of all the people working on D.

Timon

Timon said that at the DConf Hackathon, he had aimed to fix some 
bugs, but he'd given in to temptation and done something else: 
he'd implemented tuple unpacking in foreach loops. He then shared 
his screen to give us a demonstration. Then he showed us a 
problem he'd uncovered. The way the Phobos tuple is implemented 
causes multiple postblit and destructor calls on a struct 
instance in certain situations. With his tuple unpacking, it's 
even worse. Ideally, what we want is to replace all of these 
calls with a move. This has been a bit annoying, but he doesn't 
know what to do about it. It's a limitation of the language.
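
For a sense of the copying problem (this is an illustrative example, 
not Timon's demo): a struct with a postblit stored in a Phobos Tuple 
gets copied repeatedly as the tuples move through construction and 
iteration, where ideally each transfer would be a move. His 
unpacking, as he noted, currently makes this worse.

    import std.stdio : writeln;
    import std.typecons : Tuple, tuple;

    struct S
    {
        int payload;
        static int copies;
        this(this) { ++copies; }   // postblit: counts every copy made
    }

    void main()
    {
        Tuple!(S, int)[] items = [tuple(S(1), 10), tuple(S(2), 20)];
        foreach (item; items)      // each iteration copies the whole tuple...
        {
            auto s = item[0];      // ...and extracting the field copies again
        }
        writeln("copies: ", S.copies);  // more copies than a move-based
                                        // design would need
    }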

Walter thanked him for working on tuples. He's unable to do 
anything with tuples at the moment because he's committed to 
stabilizing the language first, but what Timon was doing with 
tuples was important. Timon said he was just keeping it rebased 
on master. The examples he showed us were running on master.

Dennis said that move semantics and copy constructors always 
confuse him. He found it difficult to follow the DIPs that have 
tried to address this stuff. Átila said this had kind of fallen 
by the wayside, and it needs to be finished, as there are 
definitely copies happening right now that shouldn't be.

I suggested that we treat Walter's 'Copying, Moving, and 
Forwarding' DIP as a stabilization feature and not an 
enhancement. I think we must get it in. Átila agreed because we 
shouldn't be having copies for no reason. We claim to have moves 
and then sometimes they don't work.

Timon agreed and brought up a related point. He didn't see that 
there was currently a legal way to deallocate an immutable value, 
aside from the runtime system doing it for you. Dennis asked if he 
meant "no safe way" or "just no way at all". Timon repeated "no 
legal way". For example, if you have a function that just 
deallocates an immutable value, the compiler can elide the call 
because it returns void and is pure. There needs to be a way to say that 
every value needs to be consumed somehow. By default, this is 
with the destructor, but it should be possible to have a function 
that says, "I'm consuming this value now" and then it's gone 
after the function call. We have core.lifetime.move, but it 
doesn't actually remove the thing from consideration. It just 
default initializes it and then you get this pure destructor call 
again.
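
Here is a small sketch of the point Timon is making about 
core.lifetime.move (illustrative only): the moved-from struct is 
reset to its .init value, but it is still destructed at the end of 
the scope, so the value hasn't really been taken out of 
consideration.

    import core.lifetime : move;
    import std.stdio : writeln;

    struct Resource
    {
        int id;
        ~this() { writeln("dtor for id = ", id); }
    }

    void consume(Resource r) { /* pretend this releases the resource */ }

    void main()
    {
        auto r = Resource(42);
        consume(move(r));   // r is reset to Resource.init here...
        // ...but when the scope ends, the destructor still runs on the
        // default-initialized r, printing "dtor for id = 0".
    }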

Martin said that Timon's demo was cool stuff and suggested that 
his lifetime issues are coming from the glue layer, which 
emplaces the result of function calls in final local variables, 
directly if possible. It needs to be handled in the glue layer 
because of some special cases like struct constructors. That's 
pretty ugly. If we kept the interface as it is right now, tuple 
support would need to be reimplemented in every glue layer, not 
just dmd, to make things work nicely.

Martin continued: Regarding the runtime stuff, as Timon 
mentioned, there are language limitations. The basic building 
blocks are there, but Timon's destructor example is one of the 
limitations. That's why Martin has been pressing for move and 
forward to be built-ins instead of the current library solutions. 
We could also get rid of some template bloat if they were built 
in. And sure, we don't want those extra postblit and destructor 
calls, but even moves can be quite expensive if the payload is big. 
E.g., a 4KB move isn't free. Ideally, we'd 
have direct emplacement. But we'd probably need to change the AST 
to have it.
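
And a rough illustration of Martin's cost point (again just a 
sketch): even a clean move of a struct with a large payload still 
blits the whole thing, which is why direct emplacement of results 
would be preferable.

    import core.lifetime : move;

    struct Big
    {
        ubyte[4096] payload;   // a move of this still copies 4 KB of bytes
    }

    void main()
    {
        Big a;
        Big b = move(a);       // not free: the entire payload is blitted
    }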

Timon thanked Martin for the insight. He thinks this is a 
fundamental thing in the language that should have some sort of 
resolution. Then Martin went into some details about how it's 
hacked into the language and how move and emplace work right now. 
He also said he'd found multiple implementations in DRuntime and 
tried to streamline everything to use core.lifetime.move, but 
ideally move and emplace should be built-ins, not library 
solutions.

Walter said he'd written the forwarding and moving DIP so long 
ago that he couldn't remember the details, so he'd need to 
reacquaint himself with it. He noted that move semantics became 
of interest in the @live implementation, so any proposal to build 
move semantics into the language would probably affect @live for 
the better. The ownership/borrowing system is based on moving 
rather than copying. He didn't have anything insightful to say 
about it at the moment. He'd handed the DIP off to Max a while 
back (when we decided that Walter and Átila shouldn't judge their 
own DIPs anymore, which in hindsight was a mistake) and it never 
got to the finish line.

I told Walter that I had some feedback on that DIP from Weka. 
Once he's ready to look at it again, I'll forward it to him and 
reach out to Weka for any clarification. I suggested we come back 
to this in two months or so, given that he had so many 
other things to work on. Walter said he had so many priorities at 
the moment. Átila had been pushing him to get moving on changing 
the semantics of shared, which he'd agreed to do. That was 
holding up a lot of people, too. And then there were the ImportC 
problems. It's an unending river of stuff. So he agreed we should 
push the move stuff a little further down the road for the time 
being. I said I'd make a note and bring it up again in a couple 
of months.

Mathias

Mathias brought up -preview=in. At a past meeting, Mathias had 
agreed to change the behavior for dmd such that it always passes 
by ref (Walter felt that having it sometimes pass by value and 
sometimes by ref was an inconsistency and wanted consistent, 
predictable behavior; Martin disagreed, seeing it as an 
optimization, and said he would keep the current behavior in 
LDC). Mathias hadn't gotten around to changing it, so a few weeks 
before the meeting, Walter had emailed him about it. He'd then 
started implementing it, and that had created a bug in dmd. He had 
been trying to track it down but hadn't had much time for it. It 
was one of his priorities at the moment.
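
For context, here is a minimal sketch of what the switch affects 
(illustrative names): under -preview=in, an in parameter means const 
scope, and the open question is whether the argument is also passed 
by reference only when the compiler judges it beneficial (LDC's 
view) or always (the behavior Walter asked for in dmd).

    struct Matrix
    {
        double[16] data;
    }

    // With -preview=in, `in` here means const scope; whether `m` also
    // arrives by reference is the behavior under discussion.
    double trace(in Matrix m)
    {
        return m.data[0] + m.data[5] + m.data[10] + m.data[15];
    }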

He then reported that he'd had a productive discussion with the 
Ahrefs people at DConf. They're very interested in D's C++ 
integration. Their problem was that they had some classes they 
wanted to pass by reference, but at the moment, when you have a 
function that takes a class argument, it's handled as a pointer. 
They had talked about ways to pass it by ref instead. 
Mathias wanted to implement a proof of concept and had already 
discussed it with Walter.
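
A hypothetical sketch of the mismatch (invented names, not Ahrefs' 
code): an extern(C++) class is passed from D as a pointer, so a C++ 
function that takes the class by reference can't be declared and 
bound directly today.

    extern(C++) class Widget
    {
        int id;
    }

    // C++ side (for illustration):   void render(Widget& w);
    //
    // On the D side, a class parameter is mangled and passed as Widget*,
    // not Widget&, so this declaration doesn't match that C++ signature:
    extern(C++) void render(Widget w);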

Martin

Martin said he'd been working on the D 2.105 bump for LDC. It had 
been going quite smoothly so far. He'd finally been able to test 
the Symmetry codebase against the latest compiler releases. Some 
of the 2.103 and 2.104 regressions had already been fixed by 
then. Some hadn't, but they had been fixed since then. He thanked 
Razvan for that. Everything was looking good. He was hoping to be 
able to test Symmetry's codebase early against the 2.106.0 
release.

He closed by telling Steven that fixes for the ImportC issues 
he'd reported should be part of the LDC 1.35 release.

Dennis

Dennis reported that Cirrus CI no longer had unlimited free 
credits. On that day, he'd seen a PR that had sent us over the 
monthly limit, and it was only the middle of the month. He 
suggested we need to upgrade or reduce the number of CI jobs.

Martin said that LDC had the same problem, but was happy to see 
we'd made it that far. He said that in August, the LDC project 
had used 180 credits, so 50 credits just wasn't much. He wasn't 
sure what they were going to do either. It was the only native 
64-bit ARM platform they could use and test.

We had a bit of discussion about how much it might cost us in 
total per month if we were to upgrade our plan. Martin suggested 
we could shuffle things around a bit. He'd been looking into 
CircleCI again and they had a better offer for open-source 
projects. Of course, that could change at any time, but he'd done 
some calculations and they had a much higher service limit. He 
wasn't sure about macOS and whether Circle would be feasible for 
testing dmd on M1. But as far as he was aware, Cirrus was the 
only one that had support for BSD. Another option was hosting 
completely custom runners.

Steve said it would be good to try to work out a budget. Use what 
we can for free, pay for what we need. Maybe try to find 
volunteers to host like we used to do with the auto tester. 
Because one day there might not be any free options left.

Mathias advised that we tie ourselves more closely to GitHub. They 
have a good base offering that he didn't think would go away, but 
even if it did, GitHub makes it pretty easy to register any platform 
and any runner. A few bare-metal machines here and there could do 
the job. It would essentially be the same as the auto tester we had 
before, but on GitHub, and much more open, so we could make it 
easier for people to register a machine if they want to test. He 
noted that we have two servers running the BuildKite tests, one 
managed by Ian and one by Mathias. They handle the work just fine.

Dennis said we don't need to run all the checks on every commit 
on every PR. That's wasteful. We could first do a quick test to 
see if the compiler builds at all and then split it off to all 
the platforms and BuildKite to reduce our CPU load. GitHub has 
pipelines, so it's possible to have different stages of CI.

Walter said that's a good idea. Martin noted that one thing to keep 
in mind was the latency of the first feedback: it's nice if the 
first test works, but if it just happens to fail on ARM or 
something, then the latency is twice as high as it was before. 
That might not be a problem for dmd, but would be painful with 
LDC where some jobs take up to an hour and a half. Dennis said 
that in practice it either fails immediately or it compiles but 
doesn't compile the standard library, or it runs the test suite 
but not BuildKite. There are three stages at which either all 
platforms fail or all succeed.

Mathias said he would look into it.

Razvan

Template issues

Razvan started with the regressions Martin had mentioned he'd 
fixed. They were caused by a PR from Walter to stop the compiler 
from emitting lowerings that instantiate the runtime hook 
templates when the context is speculative. He said that normally, 
when you find, e.g., a new array expression, you immediately lower 
it to the newarray template, then instantiate the template, analyze 
it, and store it in some field. But if that context is 
CTFE, then in most cases you don't need to do that. You're going 
to interpret the expression anyway. But there are some situations 
where you still need to do the lowering even in a CTFE context. 
So if the lowering isn't done, you end up with instantiation 
errors in those situations.

He said the solution here would be to always instantiate the 
template and save it as a lowering, but then you don't always 
have to emit the code for it. That approach also comes with 
problems. What the frontend does now is when you have a template 
instantiation in a speculative context, it marks the template 
instance as not needing codegen. But then if the instance 
instantiates other templates, those are no longer marked as being 
speculative. And if the root template is not generated but the 
child instances are generated, this also leads to problems. As 
an example, he cited one of the regressions: a __ctfe block 
instantiated map, passing it a lambda as an alias parameter, and the 
code did not get lowered to the hook because it was in a CTFE 
context. But then that lambda was passed to other templates 
that map instantiated, and when it was analyzed in those 
contexts, it caused ICEs.
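
Very roughly, the shape of code involved looks something like this 
(a reconstruction for illustration, not the actual regression): the 
array-building work sits behind a __ctfe check, so its lowerings are 
treated as speculative, yet the lambda alias still flows into the 
templates that map instantiates internally.

    import std.algorithm.iteration : map;
    import std.array : array;

    int[] doubled()
    {
        if (__ctfe)
            return [1, 2, 3].map!(x => x * 2).array;  // lambda alias parameter
        return null;
    }

    enum r = doubled();  // forces CTFE, i.e. the speculative context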

His TL;DR: we need to fix this by seeing what the root 
instantiation is and propagating the fact that any child 
instantiations it triggers don't need codegen.

Martin said that everything Razvan described is supposed to 
happen and works in most circumstances. It's all very complex 
stuff. He then went into quite some detail about what happens 
with template instantiations triggered from a speculative 
instantiation. There are some issues with the current 
implementation that we have some workarounds for, but he felt 
that the regressions weren't related to that. He said that in 
some cases, we rely on the new template lowerings to generate 
errors by themselves to catch some error cases and have some nice 
reasons for those failures. Shortcutting those in CTFE by just 
saying, "okay, we're going to skip the lowering because we assume it 
always works," isn't going to cut it. That's the main problem. We 
need to remove those checks when 
there's a possibility that the lowered template instantiations 
can fail. That's what was fixed and it's working nicely now.

Martin said maybe a more general solution was to abandon the 
lowering in the frontend. Walter suggested that maybe the 
lowering should be done the old way, in the glue code. Razvan 
said that doing it that way is like giving up on the issues that 
are surfacing right now because we're using templates in the 
runtime. Other template issues have come up that people hadn't 
been hitting so often, but now that we're using templates in the 
runtime they're more obvious.

He said that another issue is with attributes. Those lowerings 
are done from all sorts of attributed contexts. Currently, 
inference doesn't work if you have cyclic dependencies on 
template instantiations. The compiler just assumes they're 
@system, not pure, not @nogc, and so on. He doesn't think the 
template lowerings in the frontend are the problem here, but that 
there are some template emission bugs, or instantiation bugs, 
that we need to fix anyway.
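
As a hypothetical illustration of the kind of cycle Razvan means 
(not an example from the meeting): two template functions that 
instantiate each other form a cyclic dependency, and in such cases 
inference can fall back to the conservative assumption (@system, 
impure, not @nogc) even when the bodies would otherwise qualify.

    // Each instantiation depends on the other, creating a cycle for the
    // attribute-inference pass to resolve.
    bool isEven(T)(T n) { return n == 0 ? true  : isOdd(n - 1); }
    bool isOdd(T)(T n)  { return n == 0 ? false : isEven(n - 1); }

    void main()
    {
        assert(isEven(4));
        assert(isOdd(3));
    }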

Martin agreed.

Regarding the attribute inference problem, Timon said maybe it's 
a matter of computing the largest fixed point and not the 
smallest fixed point. The compiler could infer an attribute 
whenever there is no reason that it cannot be there, but 
currently, it infers them if it thinks there is a reason for them 
to be there. That seems like the wrong way around. But if we 
don't do any introspection, it's a bit tricky to get it right.

Martin said that in the past, Symmetry's codebase had been hit by 
an issue where attribute inference was stopped. The compiler 
skips it at some points. He cited an example where an aggregate 
hasn't yet had semantic analysis run in a particular instantiation 
where we need one of its member functions. Inference is just 
skipped completely. It's a tricky problem.

Razvan said he had just wanted to point out that he's looking 
into this problem when instances that don't need codegen are 
getting codegen. He hoped to find out what the issue was.

