D could catch this wave: web assembly

Joakim via Digitalmars-d digitalmars-d at puremagic.com
Wed Jun 24 00:25:25 PDT 2015


On Tuesday, 23 June 2015 at 11:50:48 UTC, Suliman wrote:
> On Tuesday, 23 June 2015 at 11:41:03 UTC, Wyatt wrote:
>> On Tuesday, 23 June 2015 at 11:37:41 UTC, Suliman wrote:
>>> Am I right to understand that webassembly would not be a 
>>> completely new technology but just an evolution of asm.js, so 
>>> all webassembly apps would run in the old javascript virtual 
>>> machine?
>>
>> They covered this question in the FAQ, too:
>> https://github.com/WebAssembly/design/blob/master/FAQ.md#why-create-a-new-standard-when-there-is-already-asmjs
>
> I can't understand: what will I see if I open the HTML page? 
> Will it be a classical HTML page with an import of some binary 
> on top of it, or what?

What you'll see in the browser is what you already see now in 
HTML5.  All that's changing under the hood is that they're 
providing more ways to compile other languages and use them in 
place of javascript.  So if you "View Source" on the webapp, 
you'll see HTML/CSS, as you always did, and some kind of textual 
representation of webassembly instead of javascript.
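For the curious, here's roughly what that looks like from the page's side.  Note that the `WebAssembly` JavaScript API sketched below was standardized well after this thread, so treat it as where the design eventually landed, not what existed in 2015; the inlined byte array is a hand-assembled module I'm using so the sketch is self-contained, not something from the proposal itself.

```javascript
// A tiny webassembly module, hand-encoded as raw bytes, exporting
// add(a, b) -> a + b.  In a real webapp you'd compile C/C++/D to a
// .wasm file and fetch() it; the bytes are inlined here so the
// example runs on its own.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: 1 body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

// Compile and instantiate synchronously (fine for a module this small;
// larger modules would use the async WebAssembly.instantiate).
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);

console.log(instance.exports.add(2, 3)); // 5
```

The point being: the page itself stays ordinary HTML/CSS/JS, and the binary module is just another resource the script pulls in and calls.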

On Tuesday, 23 June 2015 at 11:55:24 UTC, Abdulhaq wrote:
> On Tuesday, 23 June 2015 at 11:09:31 UTC, Joakim wrote:
>
>>
>> As for a GC, why would webasm need to provide one?  I'd think 
>> the languages would just be able to compile their own GC to 
>> webasm, which seems low-level enough.
>>
>
> From the docs:
>
>  Even before GC support is added to WebAssembly, it is possible 
> to compile a language's VM to WebAssembly (assuming it's 
> written in portable C/C++) and this has already been 
> demonstrated (1, 2, 3). However, "compile the VM" strategies 
> increase the size of distributed code, lose browser devtools 
> integration, can have cross-language cycle-collection problems 
> and miss optimizations that require integration with the 
> browser.

You cut off two key sentences before that:

"Beyond the MVP, another high-level goal is to improve support 
for languages other than C/C++. This includes allowing 
WebAssembly code to allocate and access garbage-collected (JS, 
DOM, Web API) objects."

So they're only talking about "GC support" for integrating with 
javascript and DOM objects, not a GC for some other language 
compiled to webasm.  I thought Ola was talking about the latter; 
maybe he was talking about the former.

On Tuesday, 23 June 2015 at 16:10:58 UTC, Nick Sabalausky wrote:
> On 06/23/2015 07:09 AM, Joakim wrote:
>>
>> But if you have some emotional connection with the term 
>> "desktop" and
>> can't take the fact that they're being rendered defunct, I can 
>> see why
>> you'd want to ignore all that and just call the new devices 
>> "converged"
>> or "desktops." :)
>>
>
> As opposed to someone with an emotional connection with the 
> term "smartphone" and can't take the fact that what such 
> devices are turning into is not what they used to be and that 
> they're getting there by borrowing from an old uncool 
> "outdated" style of computing ;)

Actually, I deliberately use the term "mobile devices" and only 
occasionally "smartphone," as I believe tablets will end up 
selling much better than they are now, particularly for this kind 
of docked usage.  And I most definitely don't find desktop 
computing to be "old uncool 'outdated'": I've often said that 
touchscreens are a big drop in interaction bandwidth from 
keyboards and trackpads (having used a trackpad exclusively with 
a laptop and ultrabook over the last decade, I now think mice are 
a step backwards too), though that tradeoff is understandable for 
most mobile use.  Just don't expect me to actually type anything 
longer than a couple of words on those touch keyboards; I'll save 
it for later on my physical keyboard.

So no, no emotional connection here, or I wouldn't have been 
calling, for some time now, for multi-window UIs on mobile that 
allow real work to get done.

I simply disagree that adopting one feature, multi-window UIs, is 
"convergence" in any meaningful sense, such that you can say 
they've just become "desktops."  I've tried to persuade you and 
Kagamin otherwise and appear to have failed. :)

>>> I've done so already. It's absolutely terrible. At best, it's 
>>> an
>>> occasional replacement for those already-horrid
>>> mini-touchscreen-keyboards (which almost anything is better 
>>> than).
>>
>> I've been surprised on the few occasions I used google's voice
>> translation at how good it was, but I haven't used it much.
>>
>
> It's much better than I expected too, but even still, approx 
> 50% of the time I use it (50% is NOT an exaggeration here) I 
> end up having to go back and edit its mistakes. Plus it's laggy 
> because of yet another problem: It works by sending everything 
> the mic hears straight to Google. So much for end-to-end 
> encryption/privacy.

Supposedly they made voice translation work completely offline a 
little while back, though I'm not sure if they still use the 
online mode by default.

> And then here's the one that isn't even conceivably fixable by 
> technological improvements: I've found that oftentimes, 
> dictation just isn't a very natural fit for your mental 
> process, even if it does work flawlessly.
>
> I know that's somewhat vague, because it's difficult to 
> explain, but I'll put it this way: Dictation is almost like the 
> "waterfall model" of text entry. Versus a keyboard being more 
> naturally suited to iterative refinement, and working out how 
> you want to word something. Sure, you can do that with voice, 
> but it's less natural. (That's actually part of why I prefer 
> email to telephone calls for business and technical 
> communications.)

As you said, editing can be done through a voice interface too.  
It's just not common yet and people are still getting familiar 
with that new voice editing process.  I bet editing could 
actually be made much faster through voice, particularly for 
large documents.  I agree with you about text email being 
preferable to telephone calls for many kinds of communication, 
but that's not relevant here, as you're sending a 
voice-transcribed text email if you're using voice translation.

On Tuesday, 23 June 2015 at 16:37:23 UTC, Ola Fosheim Grøstad 
wrote:
> On Tuesday, 23 June 2015 at 11:09:31 UTC, Joakim wrote:
>> On Monday, 22 June 2015 at 16:34:58 UTC, Ola Fosheim Grøstad 
>> wrote:
>>> People are already writing less javascript, but without a GC 
>>> in webasm most languages are better off compiling to 
>>> javascript or a mix.
>>
>> The problem is that they may be writing less javascript now, 
>> but they're still stuck with the performance of javascript, as 
>> they're just compiling to javascript.  Webasm making that 
>> faster and allowing more languages should change that equation 
>> much more.
>
> asm.js/Webasm is more restricted. Those restrictions basically 
> tell the JIT that the code has already been optimized, doesn't 
> need higher-level support, and can be translated into machine 
> language as is...

And you're saying this will make webasm as slow as javascript or 
slower?  I think the idea here is to beat javascript's speed: as 
long as they do that, it's worthwhile.
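Those restrictions are visible right in asm.js source: type annotations like `x|0` (int) and `+x` (double) let the engine validate the whole module up front and compile it ahead of time instead of speculating at runtime.  A minimal sketch (the `FastMath`/`dist` names are made up for illustration, not taken from any spec):

```javascript
function FastMath(stdlib) {
  "use asm";                      // opt-in: the engine validates the whole module
  var sqrt = stdlib.Math.sqrt;    // stdlib imports are declared up front

  function dist(x, y) {
    x = +x;                       // parameter annotation: x is a double
    y = +y;                       // parameter annotation: y is a double
    return +sqrt(x * x + y * y);  // return annotation: result is a double
  }

  return { dist: dist };
}

// Because asm.js is a typed subset of javascript, the same code also
// runs unchanged as plain JS in engines without asm.js support.
var m = FastMath(globalThis);
console.log(m.dist(3.0, 4.0)); // 5
```

That fallback-to-plain-JS property is exactly the backwards-compatibility story asm.js had, and webasm's bet is that a binary encoding of the same typed code decodes and compiles even faster.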

> Although I don't think javascript is the bottleneck at the 
> moment. I think the layout and render engine is.

You may be right for the UI: I haven't profiled it.  But for 
computationally-intensive stuff like a physics engine, which is 
what this is supposedly aimed at, javascript is the bottleneck.

>> As for a GC, why would webasm need to provide one?  I'd think 
>> the languages would just be able to compile their own GC to 
>> webasm, which seems low-level enough.
>
> That would be difficult to get right.

It's been done, as the FAQ quoted above notes.  If you're talking 
about integrating with javascript and DOM objects, they say they 
plan to support that eventually also.

>> This is nonsense.  They're just dumping in everything they can 
>> think of, that has nothing to do with backwards-compatibility.
>
> Web tech is pretty good at backwards-compatibility. Not sure 
> why anyone would argue against that.

Others have already argued above that it isn't, which I already 
alluded to once earlier in this thread.  But that's not the 
issue: you seemed to be arguing that the reason there's so much 
stuff dumped into the web stack is because they keep the old 
stuff around for backwards-compatibility, whereas I was saying 
they're dumping in _way_ too much new stuff, forget about the old 
stuff.

I love experimentation and trying out new approaches, but ideally 
those should get weeded out and rationalized before being baked 
into the standard.  At this point, there's too much stuff getting 
"standardized," forget about the single-browser experiments.  
It's almost as though the browser itself has become a giant, 
bloated experiment, one that never cuts failed attempts.

>> That doesn't answer the question of why they're using a 
>> bitcode and not a textual IR, as you prefer text for SVG.
>
> Because we don't edit the IR.

So you're editing SVG in the client, i.e. the browser?  With 
webasm, you edit your textual C++ source on your developer 
workstation and upload bitcode to the server, which is what the 
browser downloads.  You could do the same with SVG: edit the text 
SVG on your workstation and upload a binary encoding for the 
server and browser.

You claimed that "parsing is not the main issue" with SVG, yet it 
certainly appears to be an issue with webasm.

