Hi,
Given that Self can be compiled and run on Solaris 2.4, I wonder whether it may also compile and run on Solaris 2.5 Intel edition. Has this been attempted? Also, is the source code being distributed for academic use?
Thanks,
Evan
------------------------------------------------------------------
Evan Cheng                         evan@top.cis.syr.edu
Dept. CIS - Syracuse University    http://www.cat.syr.edu/~evan
CASE Research Center               (315) 423-7063
------------------------------------------------------------------
Evan Cheng wrote:
Given that Self can be compiled and run on Solaris 2.4, I wonder whether it may also compile and run on Solaris 2.5 Intel edition.
Just recompiling wouldn't help. You would have to look at all the files in the "asm", "nic" and "fast_compiler" directories and adapt them to generate 386 code instead of Sparc code. There is some "leftover" code that generates 68000 instructions - this could be a good indication of what you have to do.
Has this been attempted?
Several people have talked about this (actually, a port to Linux, but then a port to Solaris Intel would be trivial), but no one is doing it as far as I know.
Also, is the source code being distributed for academic use?
The license that comes with the source code (you can download the sources from http://self.sunlabs.com) says you can do anything you want with it (even commercial use) as long as you give credit to the original developers.
--
Jecel Mattos de Assumpcao Jr
Laboratorio de Sistemas Integraveis
University of Sao Paulo - Brazil
mailto:jecel@lsi.usp.br
http://www.lsi.usp.br/~jecel/merlin.html
On Fri, 31 May 1996, Jecel Assumpcao Jr wrote:
Evan Cheng wrote:
Has this been attempted?
Several people have talked about this (actually, a port to Linux, but then a port to Solaris Intel would be trivial), but no one is doing it as far as I know.
Hum, did you give up on tinySelf? J.C. Mincke seems to be on the right track with his implementation. VCODE seems to be the right tool to attempt this implementation, but the x86 version is not ready yet.
But the result won't be like Self 4.0. I believe that very few Linux machines will have enough memory to run a direct port of Self 4.0. (I can't even find a Sparc usable for it :-()
Thierry.
___________________Thierry.Goubier@enst-bretagne.fr__________________
Je ne suis pas un patriote car je n'ai pas peur de l'etranger
I'm not a patriot because I don't fear foreigners
http://www-info.enst-bretagne.fr/~goubier/
Hum, did you give up on tinySelf?
No, I am still working on it. It is taking me more than the few days I promised on my web page, but it is coming.
J.C. Mincke seems to be on the right track with his implementation. VCODE seems to be the right tool to attempt this implementation, but the x86 version is not ready yet.
That's great! The more implementations there are of the language, the stronger it is (as long as they can run the same programs).
But the result won't be like Self 4.0.
Which is why I said that a port of Self 4.0 to other machines would be very interesting even if we have other implementations on these machines.
I believe that very few Linux machines will have enough memory to run a direct port of Self 4.0. (I can't even find a Sparc usable for it :-()
True, but you can get 32MB of RAM for 600 dollars in Brazil - I imagine it is even cheaper in other places. That is a lot, but you can easily pay more than that for a C++ compiler. If you just want to try Self out to see what it is like, then this isn't practical. But for serious use I think a Linux port would be great.
-- Jecel
On Mon, 3 Jun 1996, Jecel Assumpcao Jr wrote:
Hum, did you give up on tinySelf?
No, I am still working on it. It is taking me more than the few days I promised on my web page, but it is coming.
That's good news !
J.C. Mincke seems to be on the right track with his implementation. VCODE seems to be the right tool to attempt this implementation, but the x86 version is not ready yet.
That's great! The more implementations there are of the language, the stronger it is (as long as they can run the same programs).
This is a hard question. Will the implementation run the same programs? I don't think so, unless those programs avoid certain features (the UI, for example).
I'm not particularly fond of morphs :-) and the Self world is really slow to run unless you allocate 20 MB to the code cache... There are also some problems with the metaphor used. Whether it is the response time or the lack of scrollbars, I find the display of large collections (or arrays) in an outliner particularly annoying.
But the result won't be like Self 4.0.
Which is why I said that a port of Self 4.0 to other machines would be very interesting even if we have other implementations on these machines.
It may be used as a comparison for these "other" implementations; I don't think that anybody will match the quality of the Self 4.0 implementation anytime soon.
A goal would be, anyway, to share enough common ground to run most programs through a rebuilding of the Self part of the image.
By the way, what is the position of the Self group (or owners) about the reuse of the Self 4.0 source? Not the VM source, but the Self one? Can we use it to accelerate the development of other implementations?
I believe that very few Linux machines will have enough memory to run a direct port of Self 4.0. (I can't even find a Sparc usable for it :-()
True, but you can get 32MB of RAM for 600 dollars in Brazil - I imagine it is even cheaper in other places. That is a lot, but you can easily pay more than that for a C++ compiler. If you just want to try Self out to see what it is like, then this isn't practical. But for serious use I think a Linux port would be great.
-- Jecel
Hum, to get a truly efficient Self 4.0, 64 MB isn't enough on a biprocessor Sparc 20. Even if the x86 code is more compact than the Sparc code, I'd bet 32 MB isn't enough under Linux (and Linux has response problems under heavy swapping).
And the compiler used is gcc...
I think a Self 5.0 will be needed for a Linux port.
Thierry.
___________________Thierry.Goubier@enst-bretagne.fr__________________
Je ne suis pas un patriote car je n'ai pas peur de l'etranger
I'm not a patriot because I don't fear foreigners
http://www-info.enst-bretagne.fr/~goubier/
Thierry Goubier wrote:
On Mon, 3 Jun 1996, Jecel Assumpcao Jr wrote:
That's great! The more implementations there are of the language, the stronger it is (as long as they can run the same programs).
This is a hard question. Will the implementation run the same programs? I don't think so, unless those programs avoid certain features (the UI, for example).
If it doesn't run at least text-only programs (like most of the benchmarks), then it wouldn't make much sense to call it "Self", right?
I'm not particularly fond of morphs :-)
I didn't like morphs initially either. They are rather "heavyweight" and not very modular. I was more used to OO graphic libraries with color wrapper objects, translation wrapper objects, rotation wrapper objects and so on, so what you saw on the screen was the result of a web of fine-grained objects. Many of them were not visible by themselves, but I think a core sampler-like tool could easily handle this problem. You could set up the web so that certain objects would be independent (in terms of position, color, rotation...) and others would be linked.
I now think that this is the moral equivalent of a class based system. Everything is set up in advance and runtime reorganization is limited. Morphs do a better job of getting the "object based" style to the graphics library. Of course it could be a little better - the fact that paint is used as an object's color makes it hard to have textured objects (in general, that is).
The problem is how to be able to change the color of these two objects together but not that third one. I have an idea that I call "slices", which is just an extension of the idea of "selections" used in window systems. I'll write more about that some other day.
and the Self world is really slow to run unless you allocate 20 MB to the code cache... There are also some problems with the metaphor used. Whether it is the response time or the lack of scrollbars, I find the display of large collections (or arrays) in an outliner particularly annoying.
I have no more trouble dragging a tall object up and down to see it all than using a scroll bar. In fact, I prefer this to scroll bars. It would be nicer if the object could be made smaller so I could see it all at the same time. Either zooming (as in the PAD++ interface) or 3D would allow this.
By the way, what is the position of the Self group (or owners) about the reuse of the Self 4.0 source ? Not the VM source, but the Self one ? Can we use it to accelerate the development of others implementations ?
The license says you can do anything you want with the program and doesn't make any distinction between the Self part and the VM part. Of course, an answer from Sun's lawyers would be much better than my opinion. I once contacted Jim Anderson of Digitalk to ask about using just the Smalltalk part of Smalltalk V/286. The license said I could do anything I wanted with it as long as I didn't copy the V.EXE virtual machine (which was ok, as I was writing my own anyway). He asked for some time to think about it (this was in 1988 and he still hasn't answered...). So even though they wouldn't be able to do a thing if I just grabbed the sources and reused them in my project, it would hardly be a nice thing to do. The moral of the story is that it is always best to check whether the program's authors really meant to be as liberal as the wording of the license agreement seems to imply.
Hum, to get a truly efficient Self 4.0, 64 MB isn't enough on a biprocessor Sparc 20. Even if the x86 code is more compact than the Sparc code, I'd bet 32 MB isn't enough under Linux (and Linux has response problems under heavy swapping).
I have run Self 4 on a Sparc 10 with 48MB of RAM and *no* cache (don't ask me why the university buys such configurations). It runs very well, even if other people are using the machine at the same time! And the Voyager, the machine they like to demonstrate Self on, is a Sparc 2 class machine (if I remember correctly). I think there is something wrong with your setup (or your expectations?).
And the compiler used is gcc...
Which will make ports to other machines much easier than if the Sun compiler had been used, right?
I think a Self 5.0 will be needed for a Linux port.
Or someone could port Self 2.0. It had much more modest hardware requirements. Much less functionality as well, of course!
-- Jecel
Jecel wrote:
The problem is how to be able to change the color of these two objects together but not that third one. I have an idea
Seems like the perfect opportunity for multiple inheritance, doesn't it? Put the shared properties in a `graphics parent':
(| group-props = (| color = a_paint. origin = 10@20. |).
   left-side = (| graphics-parent* = group-props.
                  -- will not work, literals are evalled in lobby
                  -- context, but you get the idea
                  parent* = traits foo_morph.
                  leftish stuff ... . |).
   right-side = (| graphics-parent* = group-props.
                   parent* = traits bar_morph.
                   rightish stuff ... . |).
   ... . |)
Anything wrong with this?
Rainer
Rainer Blome wrote:
Jecel wrote:
The problem is how to be able to change the color of these two objects together but not that third one. I have an idea
Seems like the perfect opportunity for multiple inheritance, doesn't it? Put the shared properties in a `graphics parent':
(| group-props = (| color = a_paint. origin = 10@20. |).
   left-side = (| graphics-parent* = group-props.
                  -- will not work, literals are evalled in lobby
                  -- context, but you get the idea
                  parent* = traits foo_morph.
                  leftish stuff ... . |).
   right-side = (| graphics-parent* = group-props.
                   parent* = traits bar_morph.
                   rightish stuff ... . |).
   ... . |)
Anything wrong with this?
The problem is that you will create hundreds of twisty little objects, all different. That will certainly put a huge strain on your compiled code cache.
In fact, this is just an implementation of the wrapper idea. This becomes much easier to see if you use dynamic inheritance: ( | graphicsParent* <- ... . |). You will no longer have a problem with the code cache, as now there will only be a few types of objects, but performance will be terrible on most Self implementations.
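To make the dynamic variant concrete, here is a rough Self sketch (the object and slot names are invented for illustration; this just uses the standard assignable-parent-slot semantics, not any actual morph code):

```self
"Two hypothetical shared-state objects."
lobby _AddSlots: (| redProps  = (| color = 'red'  |). |).
lobby _AddSlots: (| blueProps = (| color = 'blue' |). |).

"Dynamic inheritance: the parent slot is assignable (<-),
 so the shared graphics state can be swapped at runtime."
lobby _AddSlots: (| shape = (| graphicsParent* <- redProps. |). |).

shape color.                       "delegates to redProps"
shape graphicsParent: blueProps.   "rebind the parent at runtime"
shape color.                       "now delegates to blueProps"
```

Every assignment to the parent slot changes the object's inheritance structure on the fly, which is exactly what most Self implementations are slow at optimizing.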
Another problem with using static inheritance is that most objects will be created by dragging morphs around with the mouse, not by typing in expressions like your example. These automatically generated objects will be harder for the user to understand.
While I am pointing out some problems, it doesn't mean I think it is not a good idea.
-- Jecel
Me writes: [Morphs seem] like the perfect opportunity for multiple inheritance, doesn't it? Put the shared properties in a `graphics parent': [code elided ... ] Anything wrong with this?
Jecel replies: The problem is that you will create hundreds of twisty little objects, all different. ...
Twisty :-). Writhing, slippery, fishy, swarming all over the system ... ;-). Seriously. Why? We'd have at most one state object per morph. If we have hundreds of morphs, then so be it. At the moment, I'm not sure, but every button in a menu has its own color in the current system, doesn't it?
But the intent is to share, so there will be far fewer of these shared state objects. The hierarchy that reflects the sharing of graphic state is different from the traits hierarchy, and this should be reified. The `second' parent slot of NewtonScript is intended for (although not restricted to) exactly this purpose. For example, all labels should indirectly (via their containers) inherit from the same text color object, of which there are only a handful, e.g. normal, highlight, bright, alert and dim.
In fact, this is just an implementation of the wrapper idea.
I don't see that. I didn't mean the enclosing object to denote a wrapper, the graphicsParent was to be a shared part.
By the way, I just realized that I made the color slot constant. I meant to make it writable, to allow for corrupting proto... ahem, *cough*, to allow the shared state to be changed.
More precisely:
(globalGraphics _addSlots:
   (| normalText <- (| paint <- paints black. font <- fonts times. |).
      normalBackground <- (| paint <- paints grey. |).
      ... .  -- More faces and paints
   |).

 _addSlots: (| aGroup = (| graphicsParent* = globalGraphics. |). |).

 aGroup _addSlots:
   (| leftSide = (| graphicsParent* = aGroup.
                    parent* = traits fooMorph.
                    ... .  -- leftish stuff
                 |).
      rightSide = (| graphicsParent* = aGroup.
                     parent* = traits barMorph.
                     ... .  -- rightish stuff
                  |).
      ... . |). )
This becomes much easier to see if you use dynamic inheritance: ( | graphicsParent* <- ... . |)
Certainly, you're right, that's what I meant. At least, sort of. Normally, you don't need to rip buttons out of menus, so you don't want to pay for it, so you statically inherit from the menu's graphicsParent. If this requires a more elaborate implementation to be efficient, so be it.
When you do rip a button out, however, you are doing something very unusual, and you are willing to pay for it. Which, in this case, means you replace the static graphicsParent slot with a dynamic one, paying the overhead of moving the object to a new clone family (hopefully shared with other `detached' buttons).
Maybe this works with current Self, maybe it won't, but it would with the right Self implementation. I just want to know what the right thing is; in practice, worse sometimes does better, though.
Another problem with using static inheritance is that most objects will be created by dragging morphs around with the mouse, not by typing in expressions like your example. These automatically generated objects will be harder for the user to understand.
I disagree. Menu buttons share color; we should reify that. They should share state, and this should be visible to the user.
BTW, I don't think of these systems as languages; I always have the objects (i.e. dragging) in mind. Consequently, I never know how best to code a description of an object that is easy to create graphically. A messy issue anyway, what a literal object description means. Right now, it describes the structure (declarative?). Would describing the creation (procedural) be better?
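For instance, the same hypothetical two-slot point object could be written either way in Self (all names invented for illustration):

```self
"Declarative: a literal that describes the finished structure."
(| parent* = traits clonable. x <- 10. y <- 20. |)

"Procedural: the steps that build it up, closer to what
 dragging in the UI actually does."
(| parent* = traits clonable. |) _AddSlots: (| x <- 10. y <- 20. |)
```

Both end up with the same slots; the question is which reading maps more easily back onto the direct-manipulation gestures that created the object.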
Rainer
Rainer Blome wrote:
Jecel replies: The problem is that you will create hundreds of twisty little objects, all different. ...
Twisty :-). Writhing, slippery, fishy, swarming all over the system ... ;-).
An old Adventure joke from the days of Fortran... don't worry about it :-)
Seriously. Why? We'd have at most one state object per morph. If we have hundreds of morphs, then so be it.
If you have hundreds of morphs that are clones of just a few prototypes, then you will have few maps and your code cache will have few entries to run your user interface. If these morphs have different parents, then you will have as many maps as "normal" objects and your code cache will be overflowing with essentially identical customizations of each method.
We shouldn't have to worry about such implementation details when programming in Self, I'll agree. But if you want an interactive user interface, you will have to take them into account (at least for now).
At the moment, I'm not sure, but every button in a menu has its own color in the current system, doesn't it?
Yes, as well as its position and other information.
But the intent is to share, so there will be far fewer of these shared state objects. The hierarchy that reflects the sharing of graphic state is different from the traits hierarchy, and this should be reified. The `second' parent slot of NewtonScript is intended for (although not restricted to) exactly this purpose. For example, all labels should indirectly (via their containers) inherit from the same text color object, of which there are only a handful, e.g. normal, highlight, bright, alert and dim.
NewtonScript allows copy-on-write slots, which are a bit more dynamic than what we have in Self. Since that system is a pure interpreter (as far as I know), it can get away with that sort of thing.
But I like to use inheritance to link the Model and the View in a graphic system. It seems neater to have a listView *be* a list rather than include a reference to one. It makes the GUI "less deep", more direct.
In fact, this is just an implementation of the wrapper idea.
I don't see that. I didn't mean the enclosing object to denote a wrapper, the graphicsParent was to be a shared part.
I know. But you can "wrap" a red color object around a circle and a rectangle object or you can wrap a circle object around a red color and a blue color object. A good framework should probably allow both. In that case, it might make as much sense for rightStuff to inherit from graphicsParent as the other way around. So it is a little like wrappers even if your example doesn't look like it.
This becomes much easier to see if you use dynamic inheritance: ( | graphicsParent* <- ... . |)
Certainly, you're right, that's what I meant. At least, sort of. Normally, you don't need to rip buttons out of menus, so you don't want to pay for it, so you statically inherit from the menu's graphicsParent. If this requires a more elaborate implementation to be efficient, so be it.
Once again, implementation issues are influencing design issues. You want to use constant parent slots to avoid the horrible costs of dynamic inheritance but then pay other costs because of code cache overflow.
When you do rip a button out, however, you are doing something very unusual, and you are willing to pay for it. Which, in this case, means you replace the static graphicsParent slot with a dynamic one, paying the overhead of moving the object to a new clone family (hopefully shared with other `detached' buttons).
You could replace one constant parent with another constant parent, but I like your idea better.
Maybe this works with current Self, maybe it won't, but it would with the right Self implementation. I just want to know what the right thing is; in practice, worse sometimes does better, though.
This works, of course, since it sticks to the currently defined semantics. It just doesn't work very well, for the reasons I have already mentioned. You are right that things might be different in other implementations. A pure interpreter would have fewer problems with very dynamic things, for example (it would be equally bad at everything ;-)
BTW, I don't think of these systems as languages; I always have the objects (i.e. dragging) in mind. Consequently, I never know how best to code a description of an object that is easy to create graphically. A messy issue anyway, what a literal object description means. Right now, it describes the structure (declarative?). Would describing the creation (procedural) be better?
I think that creation descriptions would be equally hard to understand, unless they could be linked to some kind of animation.
-- Jecel
On Mon, 3 Jun 1996, Jecel Assumpcao Jr wrote:
If it doesn't run at least text-only programs (like most of the benchmarks), then it wouldn't make much sense to call it "Self", right?
Text-only programs should be OK. A goal would be, if we can't reuse the Self 4.0 sources, to be able to reload part of the Self 4.0 sources to run the benchmarks.
I don't want to spend a long time trying to recreate the Self 4.0 sources just to run the benchmarks!
I'm not particularly fond of morphs :-)
I didn't like morphs initially either. They are rather "heavyweight" and not very modular. I was more used to OO graphic libraries with color wrapper objects, translation wrapper objects, rotation wrapper objects and so on, so what you saw on the screen was the result of a web of fine-grained objects. Many of them were not visible by themselves, but I think a core sampler-like tool could easily handle this problem. You could set up the web so that certain objects would be independent (in terms of position, color, rotation...) and others would be linked.
My opinion is that morphs lack the extensibility of UI architectures like Garnet and Amulet (interactors, etc.) and don't provide a way of linking the functional core of an application to the morphs.
The goal of fine-grained objects in interfaces (you describe a glyph-based system) is to reuse as much functionality as possible and to ease modification. This decomposition is pushed forward by people like the authors of "Design Patterns...". I believe that Self has the power to ease the use of those design patterns.
I now think that this is the moral equivalent of a class based system. Everything is set up in advance and runtime reorganization is limited. Morphs do a better job of getting the "object based" style to the graphics library. Of course it could be a little better - the fact that paint is used as an object's color makes it hard to have textured objects (in general, that is).
Well, I disagree with this. A glyph-based system (a system with light, singleton objects in the interface) has nothing to do with classes, but with a runtime organisation (the accent is on dynamic, instance-based specialisation, a thing Self does extremely well). I'm not sure morphs are more object-based; they're certainly heavier, however.
About paint: that's a lack of dynamicity in the current design :-) Not really a problem, in fact.
The problem is how to be able to change the color of these two objects together but not that third one. I have an idea that I call "slices", which is just an extension of the idea of "selections" used in window systems. I'll write more about that some other day.
I'll wait for it...
and the Self world is really slow to run unless you allocate 20 MB to the code cache... There are also some problems with the metaphor used. Whether it is the response time or the lack of scrollbars, I find the display of large collections (or arrays) in an outliner particularly annoying.
I have no more trouble dragging a tall object up and down to see it all than using a scroll bar. In fact, I prefer this to scroll bars. It would be nicer if the object could be made smaller so I could see it all at the same time. Either zooming (as in the PAD++ interface) or 3D would allow this.
I've tried to extract the list of primitives from the Self world to send to J.C. Mincke. After many attempts, I looked into the VM source instead, because I was unable to manage such a large collection.
I believe the PAD++ interface is a nice idea to try... Or a 3D interface. As soon as I am able to run a Self implementation on a PC, I'll try those new, fast 3D libraries available (3DR, BRender, etc.). I've already studied a few virtual reality metaphors like the one used at EuroParc. These technologies are the right idea, in my opinion (I have no ergonomic data for this, even if my attempt at hypermedia cartography seems to confirm it). But finding a metaphor usable in 3D may be hard.
Hum, to get a truly efficient Self 4.0, 64 MB isn't enough on a biprocessor Sparc 20. Even if the x86 code is more compact than the Sparc code, I'd bet 32 MB isn't enough under Linux (and Linux has response problems under heavy swapping).
I have run Self 4 on a Sparc 10 with 48MB of RAM and *no* cache (don't ask me why the university buys such configurations). It runs very well, even if other people are using the machine at the same time! And the Voyager, the machine they like to demonstrate Self on, is a Sparc 2 class machine (if I remember correctly). I think there is something wrong with your setup (or your expectations?).
My problem is that we use Solaris on Sparc 10/20/5 workstations. 32MB under Solaris means 26MB usable; add Openwin on top of that... And the minimum Self process is around 40 MB. What's more, the instability of Solaris (and memory leaks) obliges us to reboot frequently under heavy usage. The only usable computer was the Sparc 20 with 64 MB, but I can't use it any more now. All other workstations have 32 MB.
My expectations were more along the lines of VisualWorks 2.5 / Envy, which I'm using now to program for my PhD. I'll downgrade my Self code to Smalltalk within a few weeks, sadly.
Thierry.
___________________Thierry.Goubier@enst-bretagne.fr__________________
Je ne suis pas un patriote car je n'ai pas peur de l'etranger
I'm not a patriot because I don't fear foreigners
http://www-info.enst-bretagne.fr/~goubier/
Thierry Goubier wrote:
My opinion is that morphs lack the extensibility of UI architectures like Garnet and Amulet (interactors, etc.) and don't provide a way of linking the functional core of an application to the morphs.
That's true, though no morphic applications so far have had a "functional core", so this hasn't been a problem. You can make a new morph that also inherits from the "core", actually extending the application into the graphical domain rather than just interfacing to it, but using inheritance this way has some problems that I explained in another message.
Well, I disagree with this. A glyph-based system (a system with light, singleton objects in the interface) has nothing to do with classes, but with a runtime organisation (the accent is on dynamic, instance-based specialisation, a thing Self does extremely well). I'm not sure morphs are more object-based; they're certainly heavier, however.
I meant that you have to set up relationships in advance, so you limit what can happen at runtime. Sure, you can also change these relationships at runtime, but then you can change anything in Smalltalk (a class based system) too. These preset object graphs are like traits, which are not a very object-based style (compare with Kevo, for example). Like traits, they may turn out to be the best solution.
[ "slices" posting coming soon]
I'll wait for it...
There is a short description of the idea on the web page http://www.lsi.usp.br/~jecel/runtime.html There are actually about five ideas I would like to post here when I get around to writing them down.
I believe the PAD++ interface is a nice idea to try... Or a 3D interface. As soon as I am able to run a Self implementation on a PC, I'll try those new, fast 3D libraries available (3DR, BRender, etc.).
Look at http://www.cs.tu-berlin.de/~ki/engines.html for a list of other engines. I have looked at many, and Intel's was the closest to what I need. But I still ended up having to develop my own :-( in order to get results on slow 386 machines.
I've already studied a few virtual reality metaphors like the one used at EuroParc. These technologies are the right idea, in my opinion (I have no ergonomic data for this, even if my attempt at hypermedia cartography seems to confirm it). But finding a metaphor usable in 3D may be hard.
The Self group actually did build a 3D UI, though it was never released (see the list of benchmarks on some recent papers). I don't think we need to start with full 3D - just placing 2D objects in a 3D space would be a good alternative to Kansas.
My problem is that we use Solaris on Sparc 10/20/5 workstations. 32MB under Solaris means 26MB usable; add Openwin on top of that... And the minimum Self process is around 40 MB. What's more, the instability of Solaris (and memory leaks) obliges us to reboot frequently under heavy usage. The only usable computer was the Sparc 20 with 64 MB, but I can't use it any more now. All other workstations have 32 MB.
How strange! The machines at the local university run for weeks at a time without needing to reboot. When they say you need 32MB to run Self, that includes the Solaris and Openwindows overheads. There are four versions of the distribution (at least the last time I checked): for 32MB SunOS machines, more than 32MB SunOS machines, 32MB Solaris machines and for more than 32MB Solaris machines. Are you sure you got the right version?
My expectations were more along the lines of VisualWorks 2.5 / Envy, which I'm using now to program for my PhD. I'll downgrade my Self code to Smalltalk within a few weeks, sadly.
It seems you got lucky with VisualWorks and unlucky with Self, for my experience of these systems is not the same.
-- Jecel
On Tue, 4 Jun 1996, Jecel Assumpcao Jr wrote:
Thierry Goubier wrote:
My opinion is that morphs lack the extensibility of UI architectures like Garnet and Amulet (interactors, etc.) and don't provide a way of linking the functional core of an application to the morphs.
I must add that the interactors mechanism relies on a prototype-based extension to the language used (Lisp for Garnet, C++ for Amulet). Something easy to do in Self :-)
Well, I disagree with this. A glyph-based system (a system with light, singleton objects in the interface) has nothing to do with classes, but with a runtime organisation (the accent is on dynamic, instance-based specialisation, a thing Self does extremely well). I'm not sure morphs are more object-based; they're certainly heavier, however.
I meant that you have to set up relationships in advance, so you limit what can happen at runtime. Sure, you can also change these relationships at runtime, but then you can change anything in Smalltalk (a class based system) too. These preset object graphs are like traits, which are not a very object-based style (compare with Kevo, for example). Like traits, they may turn out to be the best solution.
I don't see why glyph-based architectures limit what may happen at runtime. It's true that they're based on the limitations of class-based languages and that the flexibility gained with an instance-based scheme may be unnecessary in Self.
By the way, what is Kevo?
[ "slices" posting coming soon]
There is a short description of the idea on the web page http://www.lsi.usp.br/~jecel/runtime.html There are actually about five ideas I would like to post here when I get around to writing them down.
I'm not sure it's the main subject of this list, but I'd like to talk about metaphors in an object reality. Does anybody have an idea to start with? I'll defend the orientation / stability point of view!
I believe the PAD++ interface is a nice idea to be tried... Or a 3D interface. As soon as I'm able to run a Self implementation on a PC, I'll try those new, fast 3D libraries available (3DR, BRender, etc.).
Look at http://www.cs.tu-berlin.de/~ki/engines.html for a list of other engines. I have looked at many, and Intel's was the closest to what I need. But I still ended up having to develop my own :-( in order to get results on slow 386 machines.
There's a new one under development on the net, something called free3D (I may be wrong). Fast and C-based.
I must admit that Intel's 3DR is quite fast on a 486 (DX2/66).
On my next computer, I'll also try these new S3-based graphics cards. Inexpensive, with 3D acceleration.
I've already studied a few virtual reality metaphors like the one used at EuroPARC. These technologies are the right idea, in my opinion (I have no ergonomic data for this, even if my attempt at hypermedia cartography seems to confirm it). But finding a metaphor usable in 3D may be hard.
The Self group actually did build a 3D UI, though it was never released (see the list of benchmarks on some recent papers). I don't think we need to start with full 3D - just placing 2D objects in a 3D space would be a good alternative to Kansas.
I see the idea: the Merlin theme. I'm disturbed, however, by the orienteering question. I'm quite a good player at 3D killing games and I often get lost without using a map (even with high-res ones like the Mac versions). I can also make any viewer feel sick in other, flight-based ones by turning all over the place :-).
I believe that you have to draw on human orienteering capacities through a metaphor that behaves in a similar way, hence the term virtual reality. But you have to map it onto the Self objects. I have an idea of what may be useful: a landscape metaphor, with sights to help you remember the places you've already been. The features of the landscape may not be related to the objects themselves: it may not be necessary.
My version of Self is the smallui2 one.
Thierry. ___________________Thierry.Goubier@enst-bretagne.fr__________________ Je ne suis pas un patriote car je n'ai pas peur de l'etranger I'm not a patriot because I don't fear foreigners http://www-info.enst-bretagne.fr/~goubier/
By the way, what is Kevo ?
For an overview of all prototype-based languages I could find info about in the net (12 at last count), visit my Prototypes page at:
http://www.physik3.gwdg.de/~rainer/comp/lang/object-based.html
As far as I know, this is the most comprehensive collection on the subject. If anybody can suggest an addition, please don't hesitate, it can surely use some improvements.
Rainer
An excerpt from the page:
ftp://cs.uta.fi/pub/kevo/
Excerpt from the README: Kevo is a prototypical (= classless) object-oriented system built around a straightforward multitasking threaded code interpreter.
The system has been implemented to experiment with a new kind of object model that is based neither on conventional inheritance nor delegation (a la Self). Instead, Kevo uses _concatenation_: unrestricted composition of object interfaces.
Syntactically the language resembles Forth, but is far more advanced in many respects. An integrated Macintosh Finder-like iconic object browser is provided for object inspection, definition and manipulation. Currently, only the Macintosh version of Kevo is available publicly.
Directory cs.uta.fi/pub/kevo/Papers
There's also a paper about the related Forth issues there.
Thierry Goubier wrote:
I'm not sure it's the main subject of this list, but I'd like to talk about metaphors in an object reality. Anybody has any idea to start with ?
Since the Self project is now officially about shared user interfaces at Sun, it seems to me that this would be a good place to talk about these things.
I'll defend the orientation / stability point of view !
I can't comment on this as I haven't heard about this point of view yet, though I can imagine what it is.
On my next computer, I'll also try these new S3-based graphics cards. Inexpensive, with 3D acceleration.
If you want to distribute your work to other people, then this isn't a reasonable solution since you can't tell them what hardware to get. My prototype GUI works reasonably well on my friend's 166MHz Pentium, but will have to be rewritten from scratch if it is to be usable on my 40MHz 386. If no one but me was ever going to use it, it would be much easier for me to upgrade to a Pentium. But that isn't the case and I can't afford to leave out 386 users.
I see the idea: the Merlin theme. I'm disturbed, however, by the orienteering question. I'm quite a good player at 3D killing games and I often get lost without using a map (even with high-res ones like the Mac versions). I can also make any viewer feel sick in other, flight-based ones by turning all over the place :-).
In the standard Merlin "rooms" you can only turn left and right and walk forward and back. I'll bet most people will arrange all objects in a rough circle around them and will never walk at all. You'll be able to "hyperjump" to other rooms. I am trying to merge my GUI and VRML2 (they are already so much alike) so that you can access Merlin systems remotely from any VRML2 browser and use the Merlin GUI to navigate the 3D web directly. So you will be able to fly around in dizzying paths, but only in special rooms.
I believe that you have to draw on human orienteering capacities through a metaphor that behaves in a similar way, hence the term virtual reality. But you have to map it onto the Self objects. I have an idea of what may be useful: a landscape metaphor, with sights to help you remember the places you've already been. The features of the landscape may not be related to the objects themselves: it may not be necessary.
The Rooms interface at Xerox Parc started out as an infinite scrolling plane much like Kansas. They found out that people tended to group objects in task related clusters and then jump between clusters rather than use the full generality of the plane metaphor. When hyperjumping in logical space there is less need for landmarks than if you "manually" walk between favorite places. The best alternative is to make all styles possible and see which one users prefer in the long run.
-- Jecel
On Mon, 10 Jun 1996, Jecel Assumpcao Jr wrote:
On my next computer, I'll also try theses new S3-based graphics card. Inexpensive and 3D acceleration.
If you want to distribute your work to other people, then this isn't a reasonable solution since you can't tell them what hardware to get. My prototype GUI works reasonably well on my friend's 166MHz Pentium, but will have to be rewritten from scratch if it is to be usable on my 40MHz 386. If no one but me was ever going to use it, it would be much easier for me to upgrade to a Pentium. But that isn't the case and I can't afford to leave out 386 users.
If it runs only "reasonably well" on a Pentium 166, you'll have a really hard time rewriting it. Being usable on every kind of computer is a very hard thing to achieve (using Xwindows on a 386 is far from a pleasant experience).
The second point is that the response time helps greatly; if any move is costly, the user will be afraid of moving.
In the standard Merlin "rooms" you can only turn left and right and walk forward and back. I'll bet most people will arrange all objects in a rough circle around them and will never walk at all. You'll be able to "hyperjump" to other rooms. I am trying to merge my GUI and VRML2 (they are already so much alike) so that you can access Merlin systems remotely from any VRML2 browser and use the Merlin GUI to navigate the 3D web directly. So you will be able to fly around in dizzying paths, but only in special rooms.
What I believe is that hyperjumps are the crucial feature, liable to get anyone lost if they jump too much. The problem is the same as with hypertexts, and has no easy solution without restricting the freedom of organisation.
I prefer the way the 3D file browser is done on an SGI. The best thing those new metaphors may give us is an ability to view more things at the same time. Freedom of movement is important. I like Pad++ for this; there's a great freedom of self-tuning, to view different aspects.
The Rooms interface at Xerox Parc started out as an infinite scrolling plane much like Kansas. They found out that people tended to group objects in task related clusters and then jump between clusters rather than use the full generality of the plane metaphor. When hyperjumping in logical space there is less need for landmarks than if you "manually" walk between favorite places. The best alternative is to make all styles possible and see which one users prefer in the long run.
Yes.
You have to remember that, when there's hyperjumping and a lack of landmarks, it is easier to get lost.
The second point is that automatic / computer-based logical spaces are rather bad approximations of the user's logical spaces. For example, in hypertexts, they're mostly useless.
For this, I prefer to integrate support for a user-defined categorisation, but without enforcing a strict one, like completely separated rooms. The second point is that we have, as humans, a good ability to look in the large; to extract trends from a mass of data correctly presented. To exploit this, you have to allow different levels (from an elementary view to a view-in-the-large) and a representation which reacts correctly at each level.
Thierry. ___________________Thierry.Goubier@enst-bretagne.fr__________________ Je ne suis pas un patriote car je n'ai pas peur de l'etranger I'm not a patriot because I don't fear foreigners http://www-info.enst-bretagne.fr/~goubier/
Thierry Goubier wrote:
On Mon, 10 Jun 1996, Jecel Assumpcao Jr wrote:
[current prototype is slooooow]
If it runs only "reasonably well" on a Pentium 166, you'll have a really hard time rewriting it.
I am going to simply throw it away and write a very different one. The current prototype was built just to see how things would look.
Being usable on every kind of computer is a very hard thing to achieve (using Xwindows on a 386 is far from a pleasant experience).
I found Xwindows quite usable on a 386 with Linux, even when I had only 4MB of RAM! It certainly feels faster on my machine than on a SparcStation 2.
I think my design for a 3D engine should work reasonably well on both slow and fast computers. It works by assuming that people only see details when things are not changing, so I do quick 2D approximations when objects are moving and then clean things up in the background when they stop. This is ugly but responsive on slow machines, and becomes unnoticeable on faster machines. This is much like adaptive compilation.
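To make the two-step idea concrete, here is a minimal sketch (in Python, with entirely hypothetical names — nothing here comes from actual Merlin code): moving objects get a cheap rough drawing each frame, while resting objects are queued for a detailed redraw that happens in the background.

```python
# Hypothetical sketch of the two-step drawing scheme described above.
# While an object moves we draw only a quick 2D approximation; once it
# comes to rest, a background pass "cleans it up" to full detail.
from dataclasses import dataclass

@dataclass
class Renderable:
    name: str
    moving: bool = False
    drawn_detail: str = "none"   # "none", "rough" or "full"

def draw_frame(objects, background_queue):
    """Per-frame pass: draw everything at the cheapest acceptable level."""
    for obj in objects:
        if obj.moving:
            obj.drawn_detail = "rough"       # fast 2D approximation
        elif obj.drawn_detail != "full":
            obj.drawn_detail = "rough"       # show something immediately
            background_queue.append(obj)     # refine later, when idle

def idle(background_queue):
    """Background pass: redraw resting objects in full detail."""
    while background_queue:
        obj = background_queue.pop(0)
        if not obj.moving:                   # it may have started moving again
            obj.drawn_detail = "full"

cube = Renderable("cube", moving=True)
queue = []
draw_frame([cube], queue)                    # moving: rough only, nothing queued
cube.moving = False
draw_frame([cube], queue)                    # resting: rough now, queued for cleanup
idle(queue)                                  # background pass upgrades it
```

On a slow machine the `idle` pass simply runs less often, so the user sees rough drawings for longer; on a fast machine the cleanup follows almost immediately, which is why the trick becomes unnoticeable there.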
The second point is that the response time helps greatly; if any move is costly, the user will be afraid of moving.
Exactly, but I think my two step drawing method will solve this.
What I believe is that hyperjumps are the crucial feature, susceptible to get anyone lost if they jump too much. The problem is the same as with hypertexts, and has no easy solution without restricting the freedom of organisation.
Jumping from room to room would be roughly equivalent to changing directories in most systems. Users don't normally do it very much.
I prefer the way the 3D file browser is done on an SGI. The best thing those new metaphors may give us is an ability to view more things at the same time. Freedom of movement is important. I like Pad++ for this; there's a great freedom of self-tuning, to view different aspects.
I like the PAD interface very much, and was tempted to use it for Merlin. In fact, nothing prevents me from having some rooms be PAD-styled, since each room is really a self-contained universe with its own rules. The problem with PAD is that while the user can freely move about, the objects feel as if they were "stuck" on a surface and their relations are a bit hard to change.
You have to remind that, when there's hyperjumping and a lack of landmarks, to get lost is easier.
Back to my rooms analogy with directories, just because you have them you don't have to use them. A user can put all his objects in a single room and never have to hyperjump at all. And while landmarks are not built-in, you can drop objects as Hansel and Gretel did and create your own landmarks (as long as no one comes along and eats them ;-).
The second point is that automatic / computer based logical spaces are rather bad approximations of the user logical spaces. For example, in hypertexts, they're mostly useless.
My idea is that these rooms will be manually organized.
For this, I prefer to integrate support for a user-defined categorisation, but without enforcing a strict one, like completely separated rooms. The second point is that we have, as humans, a good ability to look in the large; to extract trends from a mass of data correctly presented. To exploit this, you have to allow different levels (from an elementary view to a view-in-the-large) and a representation which reacts correctly at each level.
You don't have to have separate rooms if you don't want to, as I mentioned above. And there should be many levels of organizing objects inside rooms beyond just spreading them around. One object might be a catalog with hundreds of other objects for you to view (a multipage factory morph, for example).
-- Jecel
On Wed, 12 Jun 1996, Jecel Assumpcao Jr wrote:
I think my design for a 3D engine should work reasonably well on both slow and fast computers. It works by assuming that people only see details when things are not changing, so I do quick 2D approximations when objects are moving and then clean things up in the background when they stop. This is ugly but responsive on slow machines, and becomes unnoticeable on faster machines. This is much like adaptive compilation.
Yes, I think this is a good idea. Something great would be a system that is able to lower the detail level when the response time gets too long. A kind of 3D adaptation of the shadow rectangles in 2D.
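The adaptation suggested here could be a simple feedback loop on the measured frame time. A hypothetical sketch (the function, thresholds and levels are all made up for illustration):

```python
# Hypothetical sketch: adapt the rendering detail level to the measured
# response time, dropping detail when frames are too slow and restoring
# it when there is slack. Thresholds are illustrative only.
def adjust_detail(level, frame_ms, target_ms=50, lo=0, hi=5):
    """Return the new detail level given the last frame's duration."""
    if frame_ms > target_ms and level > lo:
        return level - 1          # too slow: coarsen the drawing
    if frame_ms < target_ms * 0.5 and level < hi:
        return level + 1          # plenty of headroom: refine it
    return level                  # within budget: keep the current level

level = 3
level = adjust_detail(level, frame_ms=120)   # a slow frame lowers detail to 2
level = adjust_detail(level, frame_ms=20)    # a fast frame restores it to 3
```

This kind of controller is indeed independent of the metaphor: the same loop works whether the scene is a room, a landscape or a PAD-style surface.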
But I think this point may be independent of the metaphor used.
What I believe is that hyperjumps are the crucial feature, liable to get anyone lost if they jump too much. The problem is the same as with hypertexts, and has no easy solution without restricting the freedom of organisation.
Jumping from room to room would be roughly equivalent to changing directories in most systems. Users don't normally do it very much.
That's a way of avoiding the problem... You imply that there won't be a lot of rooms. The counter-argument I'll give is the importance of the object environment (I know, I'm forgetting that the *user* is different from me). Its size may call for a lot of rooms.
I prefer the way the 3D file browser is done on an SGI. The best thing those new metaphors may give us is an ability to view more things at the same time. Freedom of movement is important. I like Pad++ for this; there's a great freedom of self-tuning, to view different aspects.
I like the PAD interface very much, and was tempted to use it for Merlin. In fact, nothing prevents me from having some rooms be PAD-styled, since each room is really a self-contained universe with its own rules. The problem with PAD is that while the user can freely move about, the objects feel as if they were "stuck" on a surface and their relations are a bit hard to change.
That's the point which annoys me. Different physics models in the rooms imply a potential complexity for the user (and I speak of myself, here :-)). You have to adapt to a whole different world when you go from one room to another.
My second fear is about these rooms. I know that I prefer a non-discrete partition of my mind (and I think this way, too), and I'm always disturbed by strict division schemes. How may I share objects between rooms other than by copying them?
You have to remember that, when there's hyperjumping and a lack of landmarks, it is easier to get lost.
Back to my rooms analogy with directories, just because you have them you don't have to use them. A user can put all his objects in a single room and never have to hyperjump at all. And while landmarks are not built-in, you can drop objects as Hansel and Gretel did and create your own landmarks (as long as no one comes along and eats them ;-).
I think the problem is that directories are not really a correct metaphor to start with. Yes, they have deficiencies :-)
You don't have to have separate rooms if you don't want to, as I mentioned above. And there should be many levels of organizing objects inside rooms beyond just spreading them around. One object might be a catalog with hundreds of other objects for you to view (a multipage factory morph, for example).
I'd just like to have something unified... Multipage is nice to save screen space, but you have to avoid the Microsoft ones. And it adds another metaphor to the 3D world.
Thierry. ___________________Thierry.Goubier@enst-bretagne.fr__________________ Je ne suis pas un patriote car je n'ai pas peur de l'etranger I'm not a patriot because I don't fear foreigners http://www-info.enst-bretagne.fr/~goubier/
self-interest@lists.selflanguage.org