Questions about Windows support

H. S. Teoh hsteoh at quickfur.ath.cx
Tue Feb 21 23:02:17 PST 2012


On Wed, Feb 22, 2012 at 03:44:33AM +0100, Adam D. Ruppe wrote:
> On Wednesday, 22 February 2012 at 02:03:58 UTC, H. S. Teoh wrote:
> >Hmm. Let's implement shell utilities in D! (Pointless, yeah, but a
> >fun exercise to see how much cleaner D code can be -- if you've
> 
> I'm sooo tempted again.
> 
> Though, I don't really like shell utilities.... I'd want many
> of them to be accessible through api, and at that point, might
> as well just do them all as built ins.
> 
> I guess if there's another file available, we can call it
> instead to allow easy replacement, but I just think making
> the shell there as a library is a really nice thing.

Totally!! Instead of 500 standalone shell utils, each with the same old
boilerplate command-line parsing, each of which requires fork() and
exec() to use, why not make their functionality directly available as a
library?

It's clear that the trend of computing is toward automation anyway. In
the long term, it isn't so much about how l337 your CLI skills are at
stringing 20 different commands together on one line; it's about how the
commonly-used functionality in those 20 commands can be reused by
*programs* that call common library functions. Shell scripting is only a
temporary solution; all those fork/exec calls do add up.
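
Just to sketch what I mean (the names here are made up), a grep-like
operation as a plain library function composes without spawning a
single process:

	import std.algorithm : canFind, filter;
	import std.stdio;

	// Hypothetical library flavour of grep: no fork(), no exec(), no
	// re-parsing of text between stages -- just a lazy range pipeline.
	auto grep(R)(R lines, string needle)
	{
		return lines.filter!(l => l.canFind(needle));
	}

	void main()
	{
		foreach (line; File("/etc/passwd").byLine.grep("root"))
			writeln(line);
	}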

This reminds me of a funny incident at my first job, where there was a
shell script that a coworker was using to generate some reports. It
worked fine up till that point, since it had only been used for
relatively small datasets. That day, someone tried to generate a rather
large report, and it was taking forever. In fact, it took >48 hours to
complete. When the prospect arose that we might need to generate similar
reports again, they approached me to ask if I could do something about
the obvious performance problem of this script.

Turns out the script was really simple, but it had to do a fair amount
of calculation, which it did using code along these lines:

	while [ ! -z "$still_more_data" ]; do
		data=`cat $inputfile | sed -ne "${linenum}p"`
		input=`echo "$data" | grep "$fieldname"`
		result=`expr $result + $input`
		...
	done

There were several *nested* loops like that. I didn't analyze it much
further, but it must have been something like O(n^6) complexity or
something insane like that, due to cat'ing the (same) input file umpteen
times, grepping each line multiple times, saving results to temporary
files and then grepping *those* files (multiple times), and spawning who
knows how many subprocesses along the way.

I rewrote the miserable thing in Perl in a couple of hours, and the Perl
script took 2 minutes to produce the report. :-P
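
Not that Perl was the magic ingredient; the whole job is really just a
single pass over the input. In D it might look something like this (the
field layout is made up, but you get the idea):

	import std.algorithm : startsWith;
	import std.conv : to;
	import std.stdio;

	void main(string[] args)
	{
		// One pass over the input, instead of re-cat'ing and
		// re-grepping the same file for every single line.
		double result = 0;
		foreach (line; File(args[1]).byLine)
			if (line.startsWith("amount="))	// hypothetical field
				result += line["amount=".length .. $].to!double;
		writeln(result);
	}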

So yeah. A shell utils library would definitely be a big plus.


> class ShellOperations { /* methods here */ }
> then use compile time reflection to build your map for the
> command line processing....
> 
> Our D shell could be just one program.

Yeah, a shell with many commands built-in is a good thing, IMHO. I once
had to rescue a remote server using only echo commands (echo being one
of the few commands built into bash). If rm, ls, and friends had been
part of bash instead of needing a fork() and exec() followed by dynamic
linking every single time I typed a command, things would have gone a
*lot* smoother.
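
And your ShellOperations idea really is only a page of code. A minimal
sketch of the compile-time reflection part (the command set here is
obviously hypothetical):

	import std.array : join;
	import std.file : getcwd;
	import std.stdio;

	class ShellOperations
	{
		static void echo(string[] args) { writeln(args.join(" ")); }
		static void pwd(string[] args)  { writeln(getcwd()); }
	}

	// The command map falls out of the class members at compile time;
	// adding a new builtin is just adding a method.
	void dispatch(string cmd, string[] args)
	{
		foreach (member; __traits(derivedMembers, ShellOperations))
		{
			if (cmd == member)
			{
				__traits(getMember, ShellOperations, member)(args);
				return;
			}
		}
		writeln("unknown command: ", cmd);
	}

main() would then just be a read-split-dispatch loop.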


> though tbh if I was doing a shell, I'd kinda want to do
> a terminal replacement too. But, I don't see myself leaving
> xterm and rxvt. Nor putty.

If you were savvy (or crazy) enough, you could always try your hand at
writing termcap/terminfo entries... :P


> idk, should probably limit the scope a bit just to make it
> realistic to finish. Use gnu readline too and piggyback on
> that pretty nice function for history, editing, completion,
> etc.

True. For a first stab at it, I wouldn't go crazy trying to rewrite the
terminal. Get some working code first that's actually usable at a basic
level; then we can get fancy. There's nothing more discouraging than a
grandiose uber-design of a utopian system sitting in the source tree
with thousands of lines already written but nothing actually running,
and nothing runnable for the foreseeable future.
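
Piggybacking on readline from D should be cheap, too: a couple of
extern(C) declarations and you get history and line editing for free.
Something like this (a rough sketch; the prompt is made up, and you'd
link with -lreadline):

	import core.stdc.stdlib : free;
	import std.stdio;
	import std.string : fromStringz;

	// Minimal bindings to GNU readline; link with -lreadline.
	extern (C) char* readline(const char* prompt);
	extern (C) void add_history(const char* line);

	void main()
	{
		char* line;
		while ((line = readline("dsh> ")) !is null)
		{
			if (*line) add_history(line);	// readline copies it
			writeln("got: ", line.fromStringz);
			free(line);
		}
	}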


> Actually that makes it fairly doable as a weekend project.

Go for it!


> >Only? Heh... when I was young and foolish, I wanted to trump Linus
> >and write my own kernel.
> 
> I've tried it before.... but it is terribly boring.
> 
> As cool as it is to say "every line of code on that box is mine and
> mine alone", that's all it is good for imo: bragging.

I dunno, I've always loved low-level coding. I used to keep an eye on
Hurd, because the prospect of moving most kernel functionality into
userspace appealed to me very much. But nowadays I just don't have the
time for highly time-consuming projects like that.


> The interesting stuff is all user territory.

Depends. Linux driver development is a constantly growing area, if
you're into that sorta thing. You won't have fancy graphical sfx to show
for it (unless you're into video driver development), but you'll have
the great satisfaction of knowing that you made that particularly
stubborn piece of hardware actually work nicely with Linux.


> >My concept of shell is to make it a very thin (but scriptable) layer
> >over actual syscalls, so you could do something like:
> 
> Eeeeeeh.... at that point, you might as well just write
> your commands in C (or D).

Well, yes and no. The point was more to play around with syscalls
interactively.
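
I.e., the commands would map more or less one-to-one onto the raw
calls. Roughly (using druntime's POSIX bindings; error handling mostly
omitted):

	import core.sys.posix.fcntl : O_RDONLY, open;
	import core.sys.posix.unistd : close, read, write;

	void main()
	{
		// Roughly what `cat /etc/hostname` boils down to: raw
		// open/read/write/close on file descriptors, nothing else.
		int fd = open("/etc/hostname", O_RDONLY);
		if (fd < 0) return;
		char[256] buf;
		for (;;)
		{
			auto n = read(fd, buf.ptr, buf.length);
			if (n <= 0) break;
			write(1, buf.ptr, n);	// fd 1 == stdout
		}
		close(fd);
	}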

Of course, I also have other shell ideas on the other extreme of the
spectrum, where it's sorta like interactive D, except with a filesystem
navigation / file searching / file manipulation bent, rather than trying
to write a compilable app. (Though it wouldn't hurt if scripting in such
a shell were the same as writing an actual D program. :P)


> I actually like bash a lot... but only for 1-3 line things.
> Bigger than that, and screw it, I'd rather just write C/D.
> bash is a huge pain to get anything done for > 3 lines...

For that I use Perl.


> But for those 1-3 line things, it is really nice. And, since
> it is a command line shell, that's exactly what it should be
> optimizing!

I can't say I'm too fond of bash syntax. I guess it does the job, and is
relatively expressive, so it's OK. But I'm a programmer-head, so I tend
to turn everything into a programming language. :P (Or Turing tarpits on
bad days.)

On another note, at my first job we had a temp consultant for a while
who had the uncanny ability to write insanely long bash commands that
Just Work(tm). It was as if he thought in bash, and these insanely
elaborate commands just flowed out of him. They were always on a single
line, but could involve 10-15 different programs in a long chain. They
always worked on the first try, and worked wonders every time. To this
day we still don't know how he did it.


T

-- 
Why do conspiracy theories always come from the same people??

