when should sends be virtual?
rainer at physik3.gwdg.de
Wed Jan 18 15:26:22 UTC 1995
> rainer wrote:
> > i am wondering whether there are any "fundamental" arguments in
> > favor of or against the convention in self that all sends to self
> > are 'virtual' by default.
> We were trying to support maximum flexibility. ...
you then defined flexibility as the possibility for the client (child) to
change the behavior of the server (ancestor). (would YOU want to allow your
child to alter your behavior? ;-) another definition might be:
freedom to write new code without breaking existing code.
when all sends have virtual semantics, the author of a descendant has to
take care not to accidentally override ANY of the possibly large number of
selectors that its ancestors implement and use (she may of course
deliberately do so).
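to make the hazard concrete, here is a made-up sketch in python, where (as
in self) every send to self is virtual by default; all class and method
names are invented for illustration:

```python
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        # internal send to self -- virtual by default, so any
        # descendant defining "check" intercepts it
        if self.check(amount):
            self.balance -= amount

    def check(self, amount):
        return amount <= self.balance


class AuditedAccount(Account):
    def check(self, entry):
        # the author meant "check" as "record an audit entry",
        # unaware the ancestor already uses that selector --
        # the ancestor's withdraw() is now silently broken
        print("audit:", entry)
        return True


a = AuditedAccount(10)
a.withdraw(100)    # succeeds although the balance is only 10
print(a.balance)   # -90
```

the child never intended to change withdraw's behavior; the accidental name
clash alone was enough.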
sometimes flexibility adds to utility, sometimes it doesn't (see assembler
or c). the protocol supported by a parent (and potentially overridden by
the child) is not obvious. you have to use some tool to keep track of
it, which again adds to development tool fatware. and it lowers
productivity, because the programmer has to do something (check for name
clashes) when she wants to do nothing (not override anything).
the programmer knows if she uses existing (ancestor) protocol or demands
`most specialized' (descendant) protocol. to really rely on this
knowledge, the two variants must be explicitly coded. as i see it,
ancestor responsibility is more common than descendant responsibility.
(does anyone have any statistics on this question? i strongly suspect
that it depends on the language being used and on previous experience
with other languages.)
therefore it'd be more convenient to mark virtual sends than non-virtual
ones.
> ... Why would you want to say
> to a child "you cannot specialize the use of `foo' in this method, even
> if you want to"? ...
because of reliability reasons, of course.
("in a delegation chain, any object may screw up ancestors' semantics."
"objects in a delegation chain may have been implemented by different
"in every team of programmers, there's at least one bad programmer"
... well, this is of course NOT a valid chain of inference, but you see
the point :)
> If we could trust the method designer to know best and for all time that
> something should never be specialized, then using non-virtuals by
> default might make sense. My belief is that such trust is unwise.
since it is clear that "knowing best and for all time" is next to
impossible, that kind of trust would indeed be unwise. but trust IS
necessary for effective programming, and unless the programmer explicitly
states her intentions, there is nothing to trust in (see above).
> ... Most
> created things can satisfy more than their creator's original goals.
another aspect of making virtual sends the default is speed. considering
this, i don't understand why it is the way it is, given that one of self's
original design goals was speed. (question: how efficient are resends? are
they slower than self-sends when self is the holder?)
so far, i'm convinced virtual sends should not be the default.