dynamic deoptimization (was: ARM?)

Jecel Assumpcao Jr. jecel at merlintec.com
Sat Dec 10 21:58:25 UTC 2011


Baltasar,

imagine that you give me a 10 line function in C and ask me to translate
it into Lisp. If you demand that my version keep track of the
intermediate state at the end of each of the ten lines, I will not be
able to do a very good job. If, on the other hand, you allow me to do
anything I want as long as the input arguments are the same and the
result is the same, then I can create very good code.
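To make that concrete, here is a rough illustration in C (my own
made-up example, not anything from the Self sources): both functions
take the same argument and return the same result, but only the first
one has per-iteration state that a debugger could map back to
individual source lines.

    #include <stdio.h>

    /* "line by line" version: intermediate state exists after every
       iteration and could be shown to a debugger */
    long sum_to_n_literal(long n) {
        long total = 0;
        for (long i = 1; i <= n; i++)
            total += i;
        return total;
    }

    /* "do anything you want" version: same argument, same result,
       but no intermediate state left to inspect */
    long sum_to_n_optimized(long n) {
        return n * (n + 1) / 2;
    }

    int main(void) {
        printf("%ld %ld\n", sum_to_n_literal(10), sum_to_n_optimized(10));
        return 0;
    }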

So in Self we defined some "safe points", such as message sends and
the backward branches of loops that have no sends in them, and we only
demand that the code generated by the optimizing compiler make the
state match what we would expect from the bytecodes at these safe
points. What the code does between these points is its own problem.
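A rough sketch of what such a record could look like (the names and
layout here are my own invention for illustration, not the actual Self
VM structures): at each send or backward branch the compiler notes
which bytecode it corresponds to and where every bytecode-level value
lives in the machine state at that exact point. Between two such
records the compiled code owes the debugger nothing.

    /* invented illustration, not the real Self VM data structures */
    struct value_location {
        const char *name;        /* bytecode-level name: a local or a stack temp */
        int         reg_or_slot; /* register number or stack slot holding it here */
    };

    struct safe_point {
        int bytecode_index;      /* where we are, in bytecode terms */
        int native_pc_offset;    /* where we are, in machine-code terms */
        int num_values;
        struct value_location values[4];
    };

    /* e.g. a descriptor for a message send compiled at offset 0x4c */
    static const struct safe_point example_send = {
        .bytecode_index   = 7,
        .native_pc_offset = 0x4c,
        .num_values       = 2,
        .values           = { { "self", 3 }, { "count", 5 } },
    };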

What happens if some exception, like a division by zero, occurs between
the safe points? We should go into the debugger, but now we have a
problem: the state of the optimized code can't be understood in terms of
the bytecodes! The solution adopted in Self is, in effect, to run the
optimization process in reverse, a deoptimization that can recreate the
state we would have had if we had run the bytecodes from the last safe
point up to where the exception happened.
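Here is a toy, self-contained sketch of that replay step (again
invented for illustration, not the real Self mechanism): the optimized
code only has to know the bytecode-level state at the last safe point;
from that snapshot we rebuild an interpreter-style frame and step the
bytecodes forward until we reach the faulting one, so the debugger
sees ordinary, unoptimized state.

    #include <stdio.h>

    enum op { PUSH_LOCAL, PUSH_CONST, DIVIDE };   /* tiny bytecode set */
    struct bc { enum op op; long arg; };

    struct frame {
        int  bci;          /* bytecode index */
        long locals[4];
        long stack[4];
        int  sp;
    };

    /* interpret one bytecode; DIVIDE by zero is our "exception" */
    static void step(struct frame *f, const struct bc *code) {
        struct bc b = code[f->bci];
        switch (b.op) {
        case PUSH_LOCAL: f->stack[f->sp++] = f->locals[b.arg]; break;
        case PUSH_CONST: f->stack[f->sp++] = b.arg;            break;
        case DIVIDE: {
            long d = f->stack[--f->sp], n = f->stack[--f->sp];
            f->stack[f->sp++] = n / d;   /* would trap when d == 0 */
            break;
        }
        }
        f->bci++;
    }

    int main(void) {
        /* locals[0] / 0 -- the fault would happen at bytecode 2 */
        struct bc code[] = { {PUSH_LOCAL, 0}, {PUSH_CONST, 0}, {DIVIDE, 0} };

        /* snapshot recorded by the optimized code at the last safe
           point (here: bytecode 0, locals known, empty stack) */
        struct frame f = { .bci = 0, .locals = {42, 0, 0, 0}, .sp = 0 };

        /* deoptimization: replay bytecodes up to (not including) the fault */
        int faulting_bci = 2;
        while (f.bci < faulting_bci)
            step(&f, code);

        printf("at bci %d: sp=%d stack=[%ld, %ld]\n",
               f.bci, f.sp, f.stack[0], f.stack[1]);
        return 0;
    }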

This is *really* hard to do, and I don't know which VMs besides the
Self one can do it. One reason why most VMs don't do it is simply that
it is hard to miss something you have never had. Just as the industry
thought for decades that garbage collection was a needless
complication, the debugging environments most people are used to have
this limitation, but nobody is demanding that it be fixed.

-- Jecel



