D Programmer Jobs at Sociomantic Labs

Nick Sabalausky SeeWebsiteToContactMe at semitwist.com
Fri Nov 8 07:43:14 PST 2013


On 11/6/2013 1:32 PM, H. S. Teoh wrote:
 > On Tue, Nov 05, 2013 at 10:37:38PM -0500, Nick Sabalausky wrote:
 > [...]
 >> I've actually tried to trace back just when it was I started on
 >> Applesoft BASIC, and best I can figure I must have been around 7 or
 >> 8. I know I had already learned to read (obviously), and I also
 >> remember it was definitely before second grade (but not
 >> *immediately* before, IIRC). Not intending to compete on ages of
 >> course, it's just that when you're a kid and you're at the
 >> Grandparent's place for the evening, all the adults are watching the
 >> news, and the only thing remotely toy-like is an Apple II...well,
 >> what else you gonna do? :)
 >
 > Funny, I got an Apple II when I was 8, and was mostly just playing games
 > on it. When I was 10 or 11, I got so sick of playing games that I
 > decided to learn programming instead (i.e., to write my own games :P).
 > Sadly, I never got very far on that front.
 >

I didn't have many games on the Apple II. (It was already getting a bit 
dated when I was using it, so software was hard to find. Ironically, 
it's much easier to find software for it now, thanks to the Internet and 
an easy-to-build PC <-> Apple II serial cable.) But my brother and 
sister and I loved the rabbit game that came with it on one of the 
tutorial disks. Later on, I also had 2400 A.D. (I've always loved that 
style of graphics), plus all the BASIC games I typed in from the 
"how to program in BASIC" books at the library.

Initially, the tutorial disks and BASIC were pretty much all there was 
to do on the system, so that's what I did :)

 >
 > [...]
 >> Heh, again, somewhat similar: I *tried* to learn C at 13, but found
 >> it awkward. So I went back to QBASIC, and later on Visual BASIC 3.
 >> When I was around 15, maybe 16, I found this book at the local
 >> "Electronics Boutique" as part of a Starter Kit bundle:
 >>
 >> http://www.amazon.com/Teach-Yourself-Game-Programming-Cd-Ro/dp/0672305623
 >>
 >> That book made C (and pointers) finally "click" for me (plus the
 >> occasional 386/486/586 asm in the bottleneck sections).
 >
 > Really? I understood pointers right away 'cos they were just fancy
 > terminology for addresses in assembly language. :)
 >

Admittedly, I hadn't gotten very far with the machine code I'd written. 
Mainly just plotting some (giant) "pixels" to the lores screen.

IIRC the *main* thing about pointers I had trouble with was (no pun 
intended): What's the point? From what I had read in the intro-to-C 
books, they were always just described as another way to refer to a 
named variable. So I thought, "Uhh, so why not just use the actual 
variable instead?" Also, the whole notion of "buffers" just 
seemed...advanced.

But when I started to understand that arrays were nothing more than 
pointers, it all just "clicked". (It wasn't until many years later that 
I realized arrays aren't *always* mere pointers, depending on the 
language. But by then I'd already understood pointers and memory 
anyway.)
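
In C terms, both realizations fit in a tiny sketch (the variable names 
here are made up purely for illustration):

    #include <stdio.h>

    int main(void)
    {
        int x = 42;
        int *p = &x;        /* p is "just another name" for x... */
        *p = 99;            /* ...so writing through p changes x */
        printf("%d\n", x);  /* prints 99 */

        int a[3] = {10, 20, 30};
        int *q = a;                /* the array name decays to &a[0] */
        printf("%d\n", *(q + 2));  /* same as a[2]: prints 30 */

        /* ...but an array isn't *always* a mere pointer: */
        printf("%zu vs %zu\n", sizeof a, sizeof q);  /* e.g. "12 vs 8" */
        return 0;
    }

The last line is the giveaway: sizeof sees the whole array, while a 
pointer is only ever the size of an address.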

 >
 >> Then at around 16-17, I got a Breakout/Arkanoid clone in a few of
 >> those several-budget-games-on-one-CD packages that were in all the
 >> computer stores at the time. Didn't make much off it (even McDonald's
 >> paid more) but I was thrilled :)
 >
 > Nice! I don't think I ever got that far in my attempts to write games at
 > the time. I was bogged down with having too many radical ideas without
 > the skills to actually implement them. :P
 >

Finishing is indeed one of the hardest parts. I've always had far more 
unfinished projects than finished. At the time, I probably never would 
have finished those games if it weren't for the prodding of the 
project's producer.

 >
 >> Of course, a lot of the choice would depend on the audience. For
 >> graduate-level math students, Haskell would be a real possibility.
 >
 > IMNSHO, for graduate-level math students I would *definitely* start with
 > assembly language. It would serve to dispel the misconception that the
 > machine is actually capable of representing arbitrary natural numbers or
 > real numbers, or computing the exact value of transcendental functions,
 > etc. Mathematicians tend to think in terms of idealized entities --
 > infinite precision, infinite representation length, etc., all of which
 > are impossible on an actual computer. In order to program effectively,
 > the first order of business is to learn the limitations of the machine,
 > and then learn how to translate idealized mathematical entities into the
 > machine's limited representation in such a way that it would be able to
 > compute the desired result.
 >

That's actually a very good point.
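
For concreteness, the first two of those limitations fit in a few lines 
of C (a sketch of my own, not anything from an actual course):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* "Natural numbers": fixed-width integers silently wrap. */
        unsigned int n = UINT_MAX;
        printf("%u + 1 = %u\n", n, n + 1u);  /* 4294967295 + 1 = 0
                                                with 32-bit ints */

        /* "Real numbers": 0.1 has no exact binary representation. */
        printf("%.17g\n", 0.1 + 0.2);  /* prints 0.30000000000000004 */
        printf("%s\n", 0.1 + 0.2 == 0.3 ? "equal" : "not equal");
        return 0;
    }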

 > In fact, now that I think of it, I think the first lecture would be to
 > convince them of how much computers suck -- can't represent natural
 > numbers, can't store real numbers, can't compute transcendental
 > functions, don't have infinite speed/time so only a small subset of
 > mathematical functions are actually computable, etc.
 >

Very good way to start a course, really. It grabs the students' 
attention. Course introductions with that kind of unexpected approach 
always made me think, "Ok, now THIS may turn out to be a pretty good 
class..."


 > Then the second lecture will explain what the computer *can* do. That
 > would be when they'd start learning assembly language, and see for
 > themselves what the machine is actually doing (and thereby firmly
 > dispelling any remaining illusions they may have about mathematical
 > entities vs. what is actually representable on the machine).
 >
 > Then maybe at the end of the course, after having built up a realistic
 > notion of what the machine is capable (and incapable) of, explain how
 > things may be put together to produce the illusion of mathematical
 > computation, say with a functional language like Haskell.
 >
 > Later on, when they learn the theory of computation, I'd emphasize the
 > fact that even though we computer people speak of Turing completeness
 > all the time, actually no such computer exists that is *actually*
 > Turing-complete, because all physical computers have finite storage, and
 > therefore are no more than glorified finite-state machines. :) Of
 > course, that in no way makes the theory of Turing machines useless --
 > it's a useful idealized abstraction -- but we shouldn't be under the
 > illusion that we actually have access to the capabilities of an actual
 > Turing machine with an infinite tape. Some things are computable in
 > theory, but outright infeasible in practice. Like computing the
 > Ackermann function of Graham's number, say. :-P
 >

Yeah, that'd be a really good class.
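
And for anyone who wants to see just how hopeless that last example is, 
the Ackermann function itself is only a few lines of C (a quick sketch 
of my own; "ack" is just my name for it):

    #include <stdio.h>

    /* Two-argument Ackermann function: total and computable in theory,
     * but it grows faster than any fixed-width type can keep up with. */
    unsigned long ack(unsigned long m, unsigned long n)
    {
        if (m == 0) return n + 1;
        if (n == 0) return ack(m - 1, 1);
        return ack(m - 1, ack(m, n - 1));
    }

    int main(void)
    {
        printf("ack(2, 3) = %lu\n", ack(2, 3));  /* 9 */
        printf("ack(3, 3) = %lu\n", ack(3, 3));  /* 61 */
        /* ack(4, 2) is already 2^65536 - 3, a 19,729-digit number, so
         * no integer type (or call stack) stands a chance, let alone
         * anything involving Graham's number. */
        return 0;
    }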


