Self UI

Jecel Mattos Assumpcao Jr vme131!lsi4!jecel at uunet.UU.NET
Mon Oct 29 12:41:22 UTC 1990


Bay writes:
> In an artificial reality model, however, actions such as object creation and
> slot addition most likely would not be achieved by conventional text editing:
> you would probably grab some prototype object, clone it, and add or remove
> slots as necessary by dragging slots onto or off the object.  Editing code
> (in methods) could conceivably also be done in a graphical way, though I
> think it may almost always be easier to type the code conventionally.
> Editing capabilities would be built in to the face of objects with code
> (methods).

James writes:
> It seems to me that editing and cloning objects should be easy to add,
> and unless someone comes up with a good graphical syntax for self code
> (put blocks explicitly into separate objects :-), textual editing for
> that seems the way to go.

I agree that if we were to edit text to change objects we might as well use
a terminal. I was thinking of editing high-level objects like an email message
or a painting. Changing objects by dragging boxes around also counts as editing.

There are many graphical notations that could be adapted to represent Self
code. The problem is that they would usually be harder to understand than
text, simply by being too large for us to take in all at once. The details
would overshadow the whole more easily than in the compact text representation.
One possibility is to use animation rather than static diagrams. It doesn't
help you see the whole any better, but it at least makes for less clutter. You
could scroll forward and backward in time instead of scrolling around a huge
flowchart. Supposing an interface with icons and menus (I invented this
before I saw Artificial Reality), you could represent sending an object a
message by highlighting its icon and popping up a menu from which the message
selector is chosen. Since this isn't run-time, the result would be a new,
generic icon whose name would be the name of the original object plus the
message selector. Programming would look a lot like using the system, and 
code animations would always end with a single generic icon whose name would
be the Self expression you would type to obtain the same result. Blocks
could be placed in graphical boxes of their own, although that might turn out
to be inconvenient. Debugging would look exactly like these programming
animations, except that the generic icons would be replaced with real objects.
Experienced programmers would type Self expressions directly as the icon name,
rather than go through the pop-up message charade.
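
To make that concrete, here is a tiny imaginary session (the object and
selector names are made up, not anything in the current system):

    highlight the "list" icon, pick "copy"      ->  generic icon named  list copy
    highlight that icon, pick "add:", type 5    ->  generic icon named  list copy add: 5

The final name, "list copy add: 5", is exactly the Self expression you would
type to get the same result.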

James writes:
> Different views of objects are more tricky. The problem is that an
> abstract view of a given object can be a view of that object and its
> component objects - that is, more than one low-level object. This
> probably destroys the "integrity of object identity".

An abstract view would really represent a collection of objects, but that
is just the way things are! At the lowest level when I get a pointer to
an object it really is a pointer to the root of a whole object tree. That
the object is not truly isolated - it contains other objects - doesn't
mean it doesn't have a unique identity.
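
For example, take a throwaway object like this one (the slots are made up):

    (| header = (| from <- 'jecel'. subject <- 'Self UI' |).
       body <- 'the text of the message' |)

A pointer to it is really a pointer to the root of a small tree - the header
is an object in its own right - yet the outer object still has one identity.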

David writes:
> Your last question (about views) points out the toughest feature about reality--
> the absence of views. Yet, we do manage to function in the real world.
> I remember when the TV repairman would use a mirror to see the front of the set
> while he adjusted the back.

There is a game I saw many years ago on an Apple II where you controlled a
bug in a maze infested by killer ants. The maze was very complex and was shown
in its entirety on the low-resolution screen, so the ants were barely a pixel
in size. A rectangle always followed your bug around, however, magnifying that
region some eight times. The total effect was that 15 seconds into the game
you completely forgot about the funny floating lens and had a single
high-resolution maze in your mind.

What if the high-level representation of a chip were a tiny layout in a fully
functional editor? I could place a lens to get a close-up view of what I was
doing (maybe even two or three lenses), but I would still be working on the chip.
I know this is cheating - a lens is just a window, a tool. But it is a very
special tool in that it draws your mind away from itself to the object being
examined. Its implementation would be simple: mouse and keyboard events on
the lens would be redirected to the appropriate point on the object, and
rendering operations on the screen would be duplicated on the lens (pixel
magnification like that provided by an Open Windows utility would be useless).
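
A very rough sketch of such a lens as a Self object (every slot and message
name here is invented just for illustration, nothing like this exists yet):

    (| target <- nil.     "the object being magnified"
       origin <- nil.     "the point on the target under the lens' corner"
       scale  <- 8.
       "redirect mouse and keyboard events to the matching point on the target"
       input: event = ( target input: ((event scaledBy: scale) translatedBy: origin) ).
       "when the target draws part of the screen, repeat the drawing here, magnified"
       targetDrew: region = ( drawMagnified: region ) |)

The only point is that the lens shows nothing of its own: it maps events one
way and drawing operations the other.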


