[self-interest] Re: How does the Self VM work?

Jecel Assumpcao Jr jecel at lsi.usp.br
Thu Oct 29 04:25:05 UTC 1998

I'll make some comments on the Self implementation tomorrow, but I
would like to reproduce here a message I sent to the "chipcon" list
at the old Byte site and also to the Squeak mailing list. It is a bit
dry and technical - I was trying to clearly define what I meant by
terms like "virtual machine" and "adaptive compilation":

I will define what I mean by various terms to make the
following discussion clearer:

 - native compilation: the developer invokes some program
   on his/her machine to produce a machine language version
   of the application, which is then distributed to users.
   Examples include VC++, Cobol and many others.

 - virtual machines: code is compiled into the machine language
   of a CPU that is different from the one on which it will
   actually run. Examples include UCSD Pascal P-Code, the
   AS/400, Smalltalk and Java bytecodes, 68K Mac code on
   PowerMacs, X86 code on Alpha NT machines, etc. Note that
   most people would not agree with this definition, saying
   that a virtual machine is a definition of a CPU that
   doesn't actually exist. But this would exclude Java from
   this list once Sun gets its Java chips out the door.

 - interpreted virtual machines: a program on the user's
   computer simulates the CPU the code was compiled for.
   This is a few orders of magnitude slower than native
   compiled code. Examples include most Smalltalks, the
   first Java implementations, the 68K emulator on PowerMacs,
   UCSD Pascal, and others. Since this is much simpler than
   the following technologies, it is usually used to quickly
   get the system "out there".
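To make the interpreted case concrete, here is a minimal sketch
of such a dispatch loop in Python. The bytecode set (PUSH, ADD,
MUL, HALT) is invented for illustration and is far simpler than
Smalltalk's or Java's, but the shape - fetch an opcode, branch
on it, repeat - is what every interpreted virtual machine does:

```python
# A toy stack-machine interpreter: a loop that simulates the CPU
# the code was "compiled" for. Opcodes are made up for this sketch.
PUSH, ADD, MUL, HALT = range(4)

def interpret(code):
    stack, pc = [], 0
    while True:
        op = code[pc]
        pc += 1
        if op == PUSH:                    # push the next literal
            stack.append(code[pc])
            pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == HALT:
            return stack.pop()

# (2 + 3) * 4
program = [PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT]
print(interpret(program))   # 20
```

The per-opcode dispatch overhead on every single instruction is
where the "orders of magnitude" slowdown comes from.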

 - install time compilation: instead of simply copying the
   virtual machine code to disk, the installation software
   compiles it to the user's native CPU code. The only example
   of this that I am aware of is the proposed Architecture
   Neutral Distribution Format (ANDF) for GNU software. This
   kind of system doesn't usually have a virtual machine
   definition that is easily interpreted, but is more like
   the intermediate code between compiler passes.

 - load time compilation: the virtual machine code is stored
   on the user's disk, but when it comes time to load it into
   memory for execution it is translated into native code
   (possibly also stored in a disk cache). Examples include
   the TAO OS and Java Just In Time compilers.

 - dynamic run time compilation: the virtual machine code is
   actually loaded from disk into memory, but the first time
   that some piece of code would be executed it is translated
   into native code and stored in the memory based code cache.
   Examples include some Smalltalks and ARDI's Executor
   Macintosh emulator.
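A rough sketch of the compile-on-first-use idea, with Python
closures standing in for native code and a dictionary standing
in for the memory based code cache (all names here are
illustrative, not from any real VM):

```python
# Dynamic run time compilation sketch: nothing is translated up
# front; the first call to a method compiles it and caches the
# result in memory, so later calls skip translation entirely.
code_cache = {}

def translate(bytecodes):
    # Stand-in for a real bytecode-to-native translator.
    def native(x):
        for op, arg in bytecodes:
            if op == "add":
                x += arg
            elif op == "mul":
                x *= arg
        return x
    return native

def call(name, bytecodes, arg):
    if name not in code_cache:        # first execution: compile now
        code_cache[name] = translate(bytecodes)
    return code_cache[name](arg)      # later calls hit the cache

print(call("f", [("add", 3), ("mul", 2)], 5))   # (5 + 3) * 2 = 16
print("f" in code_cache)                        # True
```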

 - adaptive run time compilation: like the previous one but
   normally uses more than one compiler. When a piece of
   code is first called, it is translated by a simple and
   fast compiler and "instrumented" to generate information
   about its execution characteristics. If this code proves
   to be a bottleneck for system performance, it is compiled
   again with a much better compiler which will use the
   collected execution statistics to adapt the code as much
   as possible to the current runtime environment. Examples
   include Self (from Sun) and Hot Spot (now at Sun - there
   seems to be a pattern here, doesn't there? ;-)
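The two-tier scheme can be sketched as follows. The threshold,
the counter, and the "tiers" are all invented for illustration;
real systems like Self use much richer profiling data:

```python
# Adaptive run time compilation sketch: a fast baseline compiler
# instruments each method with a call counter, and a method that
# crosses a hotness threshold is recompiled by a slower optimizing
# tier that can use the collected statistics.
THRESHOLD = 3   # hypothetical hotness threshold

def optimize(fn):
    # Stand-in for the optimizing compiler; here it just wraps
    # and tags the function so we can see the tier change.
    def optimized(*args):
        return fn(*args)
    optimized.tier = 2
    return optimized

def baseline_compile(fn):
    state = {"calls": 0, "impl": fn}
    def stub(*args):
        state["calls"] += 1
        if state["calls"] == THRESHOLD:   # hot: hand off to tier 2
            state["impl"] = optimize(fn)
        return state["impl"](*args)
    stub.state = state
    return stub

square = baseline_compile(lambda n: n * n)
for i in range(5):                        # make the method "hot"
    square(i)
print(getattr(square.state["impl"], "tier", 1))   # 2
```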

 - hardware: a hardware implementation of a virtual machine
   will get all the benefits of native compilation yet still
   be compatible with the other alternatives. Examples include
   Western Digital's P-Code engine, several Smalltalk chips
   (not SOAR, Smalltalk On A RISC, though) and PicoJava.

One interesting example missing from my list is DEC's FX!32
technology. I don't know enough about it to classify it,
though it seems to combine elements of adaptive compilation
(or at least dynamic compilation) and load time hints.

The first thing we should note is that native code for most
popular CPUs (especially RISC ones) tends to be so much
larger than bytecoded virtual machine code that a low end
Pentium can probably compile bytecodes to native code much
faster than the disk could load the extra sectors needed
to hold the larger native code. So while it might have made
sense to save native code on disk in the past (install time
and load time compilation), this is no longer true (even
dynamic and adaptive compilation schemes have to be very
careful not to do this indirectly through the virtual
memory system).

Memory size and compilation overhead can be greatly reduced
in adaptive systems by not compiling "uncommon branches".
If a piece of code is not likely to be used, don't waste
time on it until it actually is. Of course, having the
compiler(s) share memory with the application at run time
might offset some of these gains. It is important to note
that the quality of native code generated adaptively often
exceeds that of even very good native compilers, since the
adaptive compiler can do aggressive inlining and other
things that are not practical until run time information is
available. The key here is that the adaptive compiler can
"guess" that some things will be constant even though the
source code doesn't say so. A native compiler must be
conservative and always treat such cases as variable. If the
adaptive compiler guesses wrong and the value does change,
then there is no harm done - just compile it again fixing this!
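The guess-guard-recompile cycle can be sketched like this. The
guard check and the "recompile" path are illustrative stand-ins
for what a real adaptive compiler does with deoptimization:

```python
# Speculation sketch: compile a fast version specialized on the
# guess that an argument stays constant, protected by a guard.
# If the guard ever fails, throw the specialized code away and
# fall back to the generic version - no harm done.
class GuessFailed(Exception):
    pass

def specialize(f, assumed_arg):
    folded = f(assumed_arg)           # body "constant-folded" now
    def fast(x):
        if x != assumed_arg:          # guard: did the guess hold?
            raise GuessFailed
        return folded
    return fast

def adaptive_call(f, cache, x):
    if "fast" not in cache:
        cache["fast"] = specialize(f, x)   # guess: x is constant
    try:
        return cache["fast"](x)
    except GuessFailed:
        del cache["fast"]             # deoptimize and recompile
        return f(x)

cache = {}
f = lambda n: n * n + 1
print(adaptive_call(f, cache, 3))   # 10, fast specialized path
print(adaptive_call(f, cache, 5))   # 26, guard failed, generic path
```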

I hope this helps at least place the Self VM solution in context.

-- Jecel

