[self-interest] Re: Self bytecodes

Douglas Atique datique at alcatel.com.br
Tue Jun 1 11:58:00 UTC 1999


jecel at lsi.usp.br wrote:

> > It all seems fine, but I don't understand something seemingly
> > straightforward: knowing that in a send the receiver and arguments are
> > popped off stack and the result is pushed onto stack, how is the number
> > of arguments of a send determined, so that my interpreter could know how
> > many pops it should make? It seems that in the fast_compiler, when the
> > machine code for a send is generated, the information of number of
> > arguments is kept somewhere. Where?
>
> You can just count the number of ':' characters in the selector name,
> and that is the number of arguments. Except that if the selector name
> is composed of special characters, then we have a binary selector and
> there is one argument.

All right, but the parser might already have done that; in fact, the
fast_compiler code generation reads this arg_count from somewhere.
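
For what it's worth, that rule fits in a few lines of C++.  This is only a
sketch of what I have in mind for the interpreter (the function name is made
up, it isn't code from the VM):

    #include <cctype>

    // Sketch only: derive the argument count of a send from the selector's
    // characters.  Unary and keyword selectors start with a letter or '_';
    // anything else is a binary selector with exactly one argument.
    static int argument_count(const char* selector) {
        if (isalpha((unsigned char)selector[0]) || selector[0] == '_') {
            int colons = 0;
            for (const char* p = selector; *p != '\0'; p++)
                if (*p == ':') colons++;
            return colons;          // 0 for a unary selector like 'last'
        }
        return 1;                   // binary selector such as '+' or '<='
    }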

>
>
> Testing this at every message send is *very* inefficient - the
> Smalltalk bytecodes encode the number of arguments in the send
> bytecode itself. But since the Self bytecodes were meant to be
> compiled away, this wasn't considered a problem.
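
Just to make the contrast concrete for myself: an interpreter for such a
bytecode set could read the count straight from the instruction stream,
roughly like this (the byte layout below is invented for illustration, it is
not the real Smalltalk or Self encoding):

    #include <cstdint>

    // Invented layout: a send's operand byte carries the argument count in
    // its low nibble, so the interpreter never has to inspect the selector.
    struct SendOperand { int selectorIndex; int argCount; };

    static SendOperand decode_send(uint8_t operand) {
        return SendOperand{ operand >> 4, operand & 0x0f };
    }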

Jecel,
What I want to do is to switch off the compiler, the assembler and (part of)
the runtime, plug in an interpreter at every point where that subsystem is
used, and be able to compile the result under Solaris x86 or Linux by
defining something like -DPORTABLE in the makefile.
I mean that my interpreter is to interface directly with the method objects
generated by the parser, not to replace the parser. This would be a
temporary solution to the portability problem, at the cost of the
interpreter's inefficiency. Perhaps this is because I miss the old days when
I ran OS/2 2.1 in 4MB of RAM :-)
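
Something like the following is the shape of the change I imagine at each
such point; PORTABLE, Method, Oop, interpret_method and compile_and_run are
made-up names, just to show the idea, not actual VM identifiers:

    struct Method;                  // a method object as built by the parser
    typedef void* Oop;              // tagged object reference

    Oop interpret_method(Method* m, Oop receiver);  // new, portable path
    Oop compile_and_run(Method* m, Oop receiver);   // existing compiler path

    Oop execute(Method* m, Oop receiver) {
    #ifdef PORTABLE
        return interpret_method(m, receiver);       // selected by -DPORTABLE
    #else
        return compile_and_run(m, receiver);
    #endif
    }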

>
>
> For an interpreter, there isn't a good solution if you are going
> to use a standard Self world. If you don't mind creating your own,
> slightly different, world you could separate canonical strings
> representing selectors into different "types". One way to do this
> would be to add a constant slot indicating the number of arguments
> when you canonicalize a string. That way, 'last' would have a
> constant slot with the value 0, while for 'between:And:' that
> slot's value would be 2. This slot would always be in the same
> place in the string's map (if you don't make any other changes)
> so your interpreter can easily access it.
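
In a C++ interpreter that idea amounts to paying the selector scan once, at
canonicalization time, and keeping the result where the send loop can reach
it cheaply.  A rough sketch under that assumption (the struct and function
names are illustrative, not the actual string map layout):

    #include <string>
    #include <unordered_map>

    int argument_count(const char* selector);  // colon-counting sketch above

    // Illustrative only: in a real Self world the count would live in a
    // constant slot of the canonical string, as suggested above; here it is
    // just a field filled in once, when the selector is canonicalized.
    struct CanonicalSelector {
        std::string chars;
        int argCount;               // 0 for 'last', 2 for 'between:And:'
    };

    static CanonicalSelector* canonicalize(const std::string& s) {
        static std::unordered_map<std::string, CanonicalSelector> table;
        auto it = table.find(s);
        if (it == table.end())
            it = table.emplace(s,
                   CanonicalSelector{ s, argument_count(s.c_str()) }).first;
        return &it->second;         // sends read argCount without rescanning
    }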

sma at netsurf.de wrote:

> >It all seems fine, but I don't understand something seemingly
> >straightforward: knowing that in a send the receiver and arguments are
> >popped off stack and the result is pushed onto stack, how is the number
> >of arguments of a send determined, so that my interpreter could know how
> >many pops it should make?
>
> You can derive that from the selector symbol.  Unary selectors (that is,
> selectors composed of letters - in particular the first character must be
> a lowercase letter - with no ':' in them) need no arguments at all.
> Binary selectors (that is, all selectors which start neither with a letter
> nor with an '_') have exactly one argument.  For keyword selectors (which
> are composed of sequences of letters (and digits) that each end with a
> colon ':') simply count the number of colons.
>
> A different problem is to know when to pop returned objects from the stack
> which aren't used.  I think you cannot detect that; simply adjust the
> stack when you leave the method instead.  Here's an example:  3+4. nil
>
> This will generate something along the lines of:  push 3, push 4, send #+,
> push nil.
>
> The + method for integers will pop both 3 and 4 from the stack and push
> the result, 7.  However, this object isn't needed and will occupy one
> stack slot until the method returns (with nil).
>
> There might be a way to notice that "7" isn't used anywhere in the method,
> but that's probably too much work for an interpreter.  A compiler that
> creates and analyses a complete parse tree for each method can do this.
>
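
Stefan's "adjust the stack when you leave the method" can be done without
any analysis at all if the interpreter remembers where the value stack stood
on entry.  A rough sketch (the frame layout is made up for illustration):

    #include <cstddef>
    #include <vector>

    typedef void* Oop;              // placeholder tagged reference

    // Remember the stack height on entry and cut the stack back to it on
    // return, so leftover results such as the 7 from "3+4. nil" cannot
    // pile up from one method activation to the next.
    struct Frame {
        std::size_t base;           // stack height when the method was entered
    };

    static Oop return_from(std::vector<Oop>& stack, const Frame& frame) {
        Oop result = stack.back();  // the method's answer (nil in the example)
        stack.resize(frame.base);   // drops the answer and any unused values
        return result;              // caller pushes this in place of the
                                    // receiver and arguments it supplied
    }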

Well, Stefan, the principle guiding my work on Self is this: the Self Group
has already done a lot to prove that Self can be efficient; now someone
should make it portable. That won't be straightforward if one wants to keep
all the benefits of adaptive compilation, so I want to make Self run slowly
on an x86 platform first, in such a way that the VM code can then be
compiled for SPARC, MIPS, Acorn, etc. The FIRST step, of course, is to
neutralize the processor dependency; after that we will still have code
that depends on the operating system. This is why I have chosen Solaris x86
as my development platform (it could have been Linux, but I wanted as
little difference as possible from Solaris SPARC): it is the same
environment in which Self originally ran, only the processor is different.
The next step will be to "make glue" for other systems. Look at the example
of Java: they first provided a UI class library written specially for each
platform (the heavyweight components), and then evolved to Swing, which is
platform-independent, so that the platform-dependent part of the code is
reduced to little more than windows (don't take this too precisely, because
it isn't). We can do the same by rewriting in Self code that is currently
implemented as primitives.

In fact, before that I would like to study the available primitives more
carefully in order to define a minimal primitive set, perhaps a
"microkernel VM" :-) Jecel's ideas about the Squeak Smalltalk system, in
which much more is implemented in the language itself than in Self, make a
lot of sense for the evolution of Self, in my opinion.

>
> bye

Thanks for your comments and ideas.
Regards,
Douglas


