[GSoC] Mir.random.flex - Generic non-uniform random sampling

Ilya Yaroshenko via Digitalmars-d-announce digitalmars-d-announce at puremagic.com
Mon Aug 22 22:40:24 PDT 2016


On Monday, 22 August 2016 at 18:09:28 UTC, Meta wrote:
> On Monday, 22 August 2016 at 15:34:47 UTC, Seb wrote:
>> Hey all,
>>
>> I am proud to publish a report of my GSoC work as two 
>> extensive blog posts, which explain non-uniform random 
>> sampling and the mir.random.flex package (part of Mir > 
>> 0.16-beta2):
>>
>> http://blog.mir.dlang.io/random/2016/08/19/intro-to-random-sampling.html
>> http://blog.mir.dlang.io/random/2016/08/22/transformed-density-rejection-sampling.html
>
> It's really nice to see that GSoC has been such a huge success 
> so far. Everyone has done some really great work.
>
>
>> Over the next weeks and months I will continue my work on 
>> mir.random, which is supposed to supersede std.random, so in 
>> case you aren’t following the Mir project [1, 2], stay tuned!
>>
>> Best regards,
>>
>> Seb
>>
>> [1] https://github.com/libmir/mir
>> [2] https://twitter.com/libmir
>
> I'm curious, have you come up with a solution to what is 
> probably the biggest problem with  std.random, i.e., it uses 
> value types and copying? I remember a lot of discussion about 
> this and it seemed at the time that the only really solid 
> solution was to make all random generators classes, though I 
> think DIP1000 *may* help here.

This is an API problem, and it will not be fixed. Making D a 
scripting-like language is bad for science. For example, druntime 
(Fibers and Mutexes) is useless because it is too high-level and 
too poorly featured at the same time.

The main problem with std.random is that std.random.uniform is 
broken in the context of non-uniform sampling. The same is true 
for 99% of uniform algorithms: they ignore the fact that, for 
example, for [0, 1) the exponent and mantissa should be generated 
separately, with the appropriate probability for each exponent.
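To illustrate the point (a minimal sketch, in Python rather than D, and not Mir's actual implementation): a naive `bits * 2^-53` generator can only produce multiples of 2^-53, so small floats near zero are sparse. Generating the binary exponent geometrically (each further halving of the interval is half as likely) and the mantissa bits uniformly covers the representable floats in (0, 1) with the correct probabilities:

```python
import random

def uniform01():
    # Draw the binary exponent with a geometric distribution:
    # P(exponent = -k) = 2**-k, matching the measure of [2**-k, 2**-(k-1)).
    exponent = -1
    while random.getrandbits(1) == 0:
        exponent -= 1
        if exponent <= -1074:  # stop at the smallest subnormal exponent
            break
    # Fill the 52-bit significand uniformly.
    mantissa = random.getrandbits(52)
    # Value lies in [2**exponent, 2**(exponent + 1)) subset of (0, 1).
    return (1.0 + mantissa * 2.0**-52) * 2.0**exponent
```

Half of the samples land in [0.5, 1), a quarter in [0.25, 0.5), and so on, which is exactly the weighting a naive fixed-mantissa generator gets wrong near zero.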


More information about the Digitalmars-d-announce mailing list