On 11-nov-10, at 15:16, Fawzi Mohamed wrote:

> On 11-nov-10, at 09:58, Russel Winder wrote:
>
>> MPI and all the SPMD approaches have a severely limited future, but I
>> bet the HPC codes are still using Fortran and MPI in 50 years time.
>
> Well, whole-array operations are a generalization of the SPMD approach, so in this sense you said that that kind of approach will have a future (but with more difficult optimization, as the hardware is more complex).

Sorry, I translated that as SIMD, not SPMD, but the answer below still holds in my opinion: if one has a complex parallel problem, MPI is a worthy contender; the thing is that on many occasions one doesn't need all its power.
If a client/server, a distributed, or a map/reduce approach works, then simpler and more flexible solutions are superior.
That (and its reliability problem, which PGAS also shares) is, in my opinion, the reason MPI is not widely used outside the computational community.
Being able to tackle MPMD well can also be useful, and that is what the RPC level does between computers, and what event-based scheduling does within a single computer (ensuring that one processor can do meaningful work while another waits).

> About MPI, I think that many don't see what MPI really does: MPI offers a simplified parallel model.
> The main weakness of this model is that it assumes some kind of reliability, but in exchange it offers
> a clear computational model, with processors ordered in a linear or higher-dimensional structure, and efficient collective communication primitives.
> Yes, MPI is not the right choice for all problems, but when it is usable it is very powerful, often superior to the alternatives, and programming with it is *simpler* than thinking about a generic distributed system.
> So I think that for problems that are not trivially parallel, or not easily parallelizable, MPI will remain the best choice.
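
To make concrete what I mean by "processors ordered in a linear or higher-dimensional structure and efficient collective communication primitives", here is a minimal C/MPI sketch (a toy example of my own, not code from any real project): the ranks are placed on a 2-D Cartesian grid and a single collective computes a global sum visible to every rank.

    /* Toy sketch: 2-D Cartesian process grid + one collective reduction. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int nprocs, rank;
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Let MPI pick a 2-D decomposition of the available ranks. */
        int dims[2] = {0, 0};
        MPI_Dims_create(nprocs, 2, dims);

        int periods[2] = {0, 0};
        MPI_Comm cart;
        MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart);

        int coords[2];
        MPI_Cart_coords(cart, rank, 2, coords);

        /* Each rank's "local work": here just its own rank number. */
        double local = (double)rank;
        double global = 0.0;

        /* The collective primitive: global sum, result on all ranks. */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, cart);

        if (rank == 0)
            printf("grid %dx%d, global sum = %g\n", dims[0], dims[1], global);

        MPI_Comm_free(&cart);
        MPI_Finalize();
        return 0;
    }

The point is not the sum itself but that the programmer thinks in terms of a regular process topology and a handful of collectives, instead of reasoning about an arbitrary distributed system; that is the simplification MPI buys when its reliability assumption holds.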