Bring back foreach int indexes

Liam McGillivray yoshi.pit.link.mario at gmail.com
Sun Mar 24 08:23:03 UTC 2024


On Wednesday, 29 November 2023 at 14:56:50 UTC, Steven 
Schveighoffer wrote:
> I don’t know how many times I get caught with size_t indexes 
> but I want them to be int or uint. It’s especially painful in 
> my class that I’m teaching where I don’t want to yet explain 
> why int doesn’t work there and have to introduce casting or use 
> to!int. All for the possibility that I have an array larger 
> than 2 billion elements.

Yes! This! Right now I have 22 of these deprecation warnings 
every time I compile my program. I was going to start a new 
thread recommending this feature be un-deprecated. I'm happy to 
find this old thread with Steve suggesting this very thing, and 
also glad to see that most people here are on my side. Having to 
introduce another variable just to do an explicit cast would be 
ugly.
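
To illustrate, here's a quick sketch of the deprecated form and 
the workarounds (a made-up example, not my actual code):

```d
import std.conv : to;

void main()
{
    int[] scores = [10, 20, 30];

    // Deprecated form: the index type is narrower than size_t,
    // so every use now emits a deprecation warning.
    foreach (int i, score; scores)
    {
        // i is directly usable as an int here
    }

    // Workaround: let the index be size_t and narrow it explicitly,
    // either with a cast or with the range-checked to!int.
    foreach (i, score; scores)
    {
        int idx1 = cast(int) i;
        int idx2 = to!int(i);
    }
}
```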

I don't want this feature to actually be removed at some point 
and break code of mine that should be perfectly acceptable. The 
deprecation should simply be lifted.

When I was thinking of starting this thread myself, I had the 
feeling that there would be some kind of objection from 
programmers more experienced than me. But it looks like Jonathan 
M Davis was the only one here to give a serious argument why it 
shouldn't be allowed.

On Thursday, 30 November 2023 at 15:25:52 UTC, Jonathan M Davis 
wrote:
> Because size_t is uint on 32-bit systems, using int with 
> foreach works just fine aside from the issue of signed vs 
> unsigned (which D doesn't consider to be a narrowing 
> conversion, for better or worse). So, someone could use int 
> with foreach on a 32-bit system and have no problems, but when 
> they move to a 64-bit system, it could become a big problem, 
> because there, size_t is ulong. So, code that worked fine on a 
> 32-bit system could then break on a 64-bit system (assuming 
> that it then starts operating on arrays that are larger than a 
> 32-bit system could handle).

An interesting, fair point, but I don't think it's enough to 
justify removing this language feature. The scenario is just too 
unlikely to be worth giving up a feature that improves things far 
more often than not.
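
To make the scenario concrete, here's a small illustration of how 
a 64-bit length silently wraps when narrowed to int (again, a 
made-up example; no giant array is actually needed):

```d
void main()
{
    // On a 64-bit target, size_t is ulong, so a length can exceed
    // int.max (2_147_483_647).
    size_t bigLength = 3_000_000_000;

    // Narrowing that length to int wraps around to a negative
    // value, which is the kind of silent breakage described above.
    int truncated = cast(int) bigLength;
    assert(truncated < 0);
}
```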

Firstly, how often would a program that never explicitly requires 
more array elements than `uint` can index nevertheless grow an 
array beyond that limit, on a system whose maximum array size 
even allows it?

It must be a rare scenario for someone to do all the development 
and testing of their program on a 32-bit system. Even 10 years
ago, if someone was running a 32-bit desktop operating system, it 
meant that either they had one of the older computers still in 
use, or they stupidly chose the 32-bit version even though their 
computer was 64-bit capable. The kinds of people who would use a 
programming language like D aren't the most likely people to make 
such mistakes. Those that write programs that other people use 
are even less likely. Now that Windows no longer comes in 32-bit 
versions, those days are largely behind us.

There are probably some people around using D for embedded 
applications, which may involve 32-bit microcontrollers. In 2024 
and beyond, this is the only scenario where someone may 
realistically use D and do all the testing on a 32-bit system. 
They would then need to move the same program to a 64-bit system 
after testing for the problem to emerge. I just don't think this 
is likely enough to be worth removing the feature.

In the unlikely event this problem ever does happen, it's just 
one more of many places where bugs can happen. And if it were 
going to happen, it probably already would have back when 32-bit 
systems were more common. If there are no known cases of it, then 
I think it's safe to lift the deprecation.

Maybe disallow it in functions marked `@safe`, but generally, I 
think this feature should be allowed without deprecation warnings.
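
To sketch what I mean (purely hypothetical; this is not current 
compiler behaviour, just an illustration of the suggestion):

```d
void withSystem(int[] values) @system
{
    // Allowed under the suggestion: @system code accepts the
    // (unlikely) risk of the index wrapping on a huge array.
    foreach (int i, v; values) { }
}

void withSafe(int[] values) @safe
{
    // Would be rejected under the suggestion, since @safe code
    // is meant to rule out this kind of silent truncation.
    foreach (int i, v; values) { }
}
```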

