GPGPU and D
John Colvin
john.loughran.colvin at gmail.com
Sun Aug 18 14:35:33 PDT 2013
On Sunday, 18 August 2013 at 18:30:29 UTC, Russel Winder wrote:
> On Sun, 2013-08-18 at 19:46 +0200, John Colvin wrote:
>
>> A GitHub group could be a good idea, for sure. A simple wiki
>> page with some sketched-out goals would be good too, which I
>> guess would draw on the content of the previous thread.
>
> If I remember correctly, in order to make a GitHub group you
> have to make a user with an email address and convert it to a
> group. I can set up a temporary mailing list on my SMTP server
> for this, so no problem. The real problem is what to call the
> group and the project. Anyone any ideas?
>
>> Anyway, I can't really get too involved right now; my master's
>> thesis is due in a terrifyingly small amount of time.
>> However, come September and onwards I could definitely spend
>> some serious time on this. If everything goes to plan I might
>> well be able to justify working on such a project as part of
>> my PhD.
>
> I too have not as much time to actually code on this as I would
> like in the short term, but it is better to actually do little
> bits than nothing at all. So having the infrastructure in place
> is an aid for little things to happen to keep the momentum
> going. Albeit a small momentum. :-)
>
> Good luck with the thesis writing. What is the topic? Which
> university?
I always have a hard time explaining it, haha. Here's the title:
"Automated tracing of divergent ridges in tokamak magnetic spectra".
Basically, the fusion guys at Culham produce loads of spectrograms
and have very little systematic workflow for analysing them. It's
almost all done by eye. I've developed a new ridge-tracing
algorithm and applied it to the spectra, with some extra steps
afterwards to identify particular magnetic events that occur in
the reactors. It's all a bit ad hoc, but it'll do for a master's
by research.
I'm at the University of Warwick, in the Engineering department
(coming from a physics BSc). I'll be joint between Physics and
Engineering for the PhD, continuing (read: reinventing from
scratch) the same work.
There's so much data, and so much heavy-duty processing, that a
GPU solution will probably be a good choice. We have an HPC
cluster with some GPU compute nodes*, so for me, being able to
target them efficiently - both in runtime and developer-time
terms - would be great (see the sketch below). Much more
interesting than just spamming the data-processing nodes, anyway!
I would have to persuade the sysadmins to install gdc/ldc
though...
*(6 nodes, each with 2 NVIDIA Tesla M2050 GPUs, 48 GB RAM and 2 Intel Xeon X5650s)