The problem with abstract views (in terms of a concrete artificial
reality interface) is that, for most abstract views, the contained
objects will be pictured, often in their own abstract views. (This is
unlike the current interface, where a box can represent the root of a
tree of objects without those objects actually being drawn.) For
example, consider a fishtank with various fish inside it. Each fish
would be an object, presumably a member of a set contained in one of
the slots of the fishtank object. (Yes, fishtank could refine set. No,
I don't want to open that can of worms.)
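A minimal sketch of that containment structure, with illustrative names
(Fish, FishTank, and the slot name `contents` are all my inventions, not
part of any existing interface):

```python
from dataclasses import dataclass, field

# eq=False keeps identity-based hashing, so fish can live in a set
# and two fish of the same species are still distinct objects.
@dataclass(eq=False)
class Fish:
    species: str

@dataclass
class FishTank:
    # One slot of the tank object holds a set of the contained fish.
    contents: set = field(default_factory=set)

tank = FishTank()
tank.contents.add(Fish("guppy"))
tank.contents.add(Fish("guppy"))  # a second, distinct guppy
```

Note that the tank merely *contains* the fish; whether an abstract view
of the tank draws them is a separate question.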
Obviously, if you allow multiple views, you could also have a view of
the fish and tank and so on as objects, as per the prototype interface.
This presents no problems technically, but it breaks the object
identity constraint of the interface.
The real fun begins if you allow editing operations on a higher-level
representation, sending messages to objects, or displaying the UI of a
running program. The display must be updated to keep in step with the
actions of the system. So if I edit the box representation of a fish,
say, and change its species, I expect the higher-level view to react
appropriately (redisplay the fish in a different colour). Similarly,
if I run a simulation of the fishtank, I expect both the higher-level
and lower-level displays to keep in step with the state of the
simulation.
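One way to sketch this update propagation is an observer-style scheme,
where each model object notifies all of its registered views on change.
This is my own illustration, not the mechanism of any existing system,
and the species-to-colour table is invented:

```python
class Fish:
    """Model object; every registered view is notified on each change."""
    def __init__(self, species):
        self._species = species
        self._views = []

    def attach(self, view):
        self._views.append(view)

    @property
    def species(self):
        return self._species

    @species.setter
    def species(self, value):
        self._species = value
        for view in self._views:
            view.update(self)   # push the change to every level of display

class BoxView:
    """Low-level 'box' representation: shows the slot value as text."""
    def update(self, fish):
        self.text = fish.species

class TankView:
    """Higher-level pictorial view: re-derives the fish's colour."""
    COLOURS = {"guppy": "orange", "tetra": "blue"}  # invented mapping
    def update(self, fish):
        self.colour = self.COLOURS.get(fish.species, "grey")

fish = Fish("guppy")
box, tank = BoxView(), TankView()
fish.attach(box)
fish.attach(tank)
fish.species = "tetra"   # an edit at the box level...
# ...and both views now reflect it: box.text == "tetra",
# tank.colour == "blue"
```

The same notification path serves a running simulation: whatever sets
the slot, every attached view sees the change.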
> Your last question (about views) points out the toughest feature about reality --
> the absence of views.
That's not a feature, it's a bug :-). It all depends on whose reality
it is and what you are trying to achieve. For example, most large and
complex machines have multiple views built into their control systems
to oversee their operations; TV sets, on the other hand, don't. (And
when the problem is more than trivial, such machines are usually
connected to a detailed, multiple-view user interface.) What I am
doing is a research project in visualising and animating programs;
what you are doing is a language user interface.