Is Self's JIT code written in such a way that there is a layer that would be useful as a library outside of Self?
Also, has OpenGL been considered as an alternative to using xlib for rendering? What rendering API is used on the OSX implementation?
Cheers, Steve OSX freeware and shareware: http://www.dekorte.com/downloads.html
On Tuesday 10 June 2003 05:35, Steve Dekorte wrote:
Is Self's JIT code written in such a way that there is a layer that would be useful as a library outside of Self?
No - it is very tightly integrated into the rest of the VM. Some more library-like JITs that might be of interest to you are:
http://www.gnu.org/directory/libs/gnulightning.html
http://www-sor.inria.fr/projects/vvm/realizations/ccg/ccg.html
Hmmm.... I see that Ian Piumarta is a contributor to both projects. I guess it is a small world after all! ;-)
Also, has OpenGL been considered as an alternative to using xlib for rendering? What rendering API is used on the OSX implementation?
The original MacOS port used that system's native graphics library (Carbon). A port to Cocoa (NextStep) is on the "to do" list of the release notes for the latest version.
Since the GUI isn't 3D (though I started a project to make it so back in 1998) I am not sure OpenGL would help much.
-- Jecel
Since the GUI isn't 3D (though I started a project to make it so back in 1998) I am not sure OpenGL would help much.
It would make the GUI code cross platform (at least across the major operating systems), which would be nice.
And once it's there, you can never tell what direction some motivated hacker might take it....
On Tuesday 10 June 2003 12:32, Josh Flowers wrote:
It would make the GUI code cross platform (at least across the major operating systems), which would be nice.
More than nice - it would be great! But my experience with OpenGL has not been very positive, especially on the PC (Windows and Linux). I get the feeling that Microsoft is actively trying to give the standard a bad name so that developers move to Direct3D, since four out of four very different XP installations I have seen had broken OpenGL. Two were fixed by getting new drivers from the video card vendors but that didn't help the other two. Of course, Napoleon used to say not to ascribe to malice what could be explained by incompetence...
But a problem with GL itself regarding GUIs is that it doesn't handle text or 2D textures very well (at least when I last looked into this) and needs help from some external program. Another problem is that it doesn't scale. See http://www.lsi.usp.br/~jecel/gmodel.html
-- Jecel
More than nice - it would be great! But my experience with OpenGL has not been very positive, especially on the PC (Windows and Linux). I get the feeling that Microsoft is actively trying to give the standard a bad name so that developers move to Direct3D, since four out of four very different XP installations I have seen had broken OpenGL. Two were fixed by getting new drivers from the video card vendors but that didn't help the other two. Of course, Napoleon used to say not to ascribe to malice what could be explained by incompetence...
I must admit that I'm a long time Mac user, so on this front, my experience is much more limited than yours.
But a problem with GL itself regarding GUIs is that it doesn't handle text or 2D textures very well (at least when I last looked into this) and needs help from some external program. Another problem is that it doesn't scale.
How does self handle text? I didn't see any glue code that dealt with text, and so I'd hoped that self was doing the text rendering. If that's the case, then GL's text support shouldn't be a problem. I'm not sure what kind of 2D texture support you need....
You're also correct that GL doesn't scale down to low end machines very well, but it should be usable on 90%+ of the desk/laptop machines out there, which would be a good start.
Thanks, this looks to be an interesting read.
josh
On Tuesday 10 June 2003 16:43, Josh Flowers wrote:
How does self handle text? I didn't see any glue code that dealt with text, and so I'd hoped that self was doing the text rendering. If that's the case, then GL's text support shouldn't be a problem.
The 'drawString:At:GC:' method calls platform specific code to do the real work.
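On X, that platform-specific code boils down to little more than a call like the following. This is only an illustrative sketch - the function name and signature are made up, not Self's actual glue code:

    /* Illustrative sketch only: roughly what the X11 side of drawString:At:GC:
     * amounts to. The name and signature here are hypothetical. */
    #include <X11/Xlib.h>

    void draw_string_glue(Display *dpy, Drawable d, GC gc,
                          int x, int y, const char *text, int len)
    {
        /* XDrawString draws the bytes with whatever font is set in the GC;
         * no caching or antialiasing happens on the caller's side. */
        XDrawString(dpy, d, gc, x, y, text, len);
    }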
I'm not sure what kind of 2D texture support you need....
Imagine a red, metallic cylinder. With a white stripe near the bottom. And an orange circle in the middle wrapped halfway around the cylinder.
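In OpenGL terms that amounts to wrapping a 2D texture around the geometry, roughly like the sketch below (illustrative only - the texture holding the stripe and the circle is assumed to have been built elsewhere):

    /* Sketch: wrap a 2D texture (holding the stripe and the circle) around a
     * cylinder. Assumes 'tex' was created and filled elsewhere; no error handling. */
    #include <GL/gl.h>
    #include <GL/glu.h>

    void draw_textured_cylinder(GLuint tex)
    {
        GLUquadric *quad = gluNewQuadric();
        gluQuadricTexture(quad, GL_TRUE);   /* have GLU generate texture coordinates */

        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, tex);
        /* base radius, top radius, height, slices, stacks */
        gluCylinder(quad, 1.0, 1.0, 2.0, 32, 4);
        glDisable(GL_TEXTURE_2D);

        gluDeleteQuadric(quad);
    }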
You're also correct that GL doesn't scale down to low end machines very well, but it should be usable on 90%+ of the desk/laptop machines out there, which would be a good start.
My problem is that it also doesn't scale up. When I get a machine that is 10 times faster than today's, it is going to look exactly the same. It might say it is doing 430 frames per second, but on a monitor with a 70Hz refresh rate that is a lie. You won't notice this, of course, since by then we will have 10 times as many polygons on the screen in applications that don't run at all on today's machines. But today's applications won't be any better and that is my complaint.
Steve Dekorte wrote:
The advantage is much, much faster rendering (2D or 3D). Self is painfully slow (dragging an outliner around involves waiting for it to catch up with the mouse) on my dual 1.2GHz Mac and I'm guessing (hoping) it's because of the rendering costs.
I got a 1/3 second delay on a 600MHz Pentium II and less than half that on a 277MHz UltraSparc II. On the latter it is only noticeable if you are testing for the effect.
I suspect Self is, like Squeak, handing off chunks of a screen buffer stored in RAM to the desktop rendering APIs. Since this involves heavy use of the CPU (which is a poor piece of rendering hardware) and pushing each bit across the bus, it's a very slow way to render.
No, the drawing routines in traits canvas are just a thin layer for the native (X or Carbon) graphics API. I think that the reasons for this are more historical (how the first GUI experiments were implemented) than practical (given the neat compiler technology), but I should probably let those who actually know comment on this. In any case, this greatly complicates porting compared to Squeak.
Excellent antialiased text support isn't difficult to add using FreeType and a bit of code to cache the rendered characters into a texture. Io does this, and I'm happy to share the code I use. The performance is as good as (and probably better than) desktop font renderers and the quality is much better than Self's current font renderer (at least on OSX). FreeType also allows use of any TrueType font, and there are some good free fonts out there.
Thanks for the tip. Looking at your code, I didn't see where the surface on which you are rendering the text is set. It has been such a long time since I looked at GL (it wasn't Open then ;-) so this looks strangely 2D to me...
BTW, I hope my comments don't make it seem like I am against linking Self and OpenGL. Quite the opposite! OpenGL is here now and this shouldn't be hard to do, so why not?
-- Jecel
On Wednesday, June 11, 2003, at 05:04 PM, Jecel Assumpcao Jr wrote:
I got a 1/3 second delay on a 600MHz Pentium II and less than half that on a 277MHz UltraSparc II. On the latter it is only noticeable if you are testing for the effect.
I don't know what the delay time is, but it certainly feels much slower than a normal desktop app. Unfortunately, the result is that people may assume that pure OO systems like Self are too slow for real use.
Thanks for the tip. Looking at your code, I didn't see where the surface on which you are rendering the text is set. It has been such a long time since I looked at GL (it wasn't Open then ;-) so this looks strangely 2D to me...
It is 2D. Were you under the impression that OpenGL is 3d only?
Cheers, Steve OSX freeware and shareware: http://www.dekorte.com/downloads.html
On Wednesday 11 June 2003 21:36, Steve Dekorte wrote:
I don't know what the delay time is, but it certainly feels much slower than a normal desktop app. Unfortunately, the result is that people may assume that pure OO systems like Self are too slow for real use.
Morphic is not exactly famous for its speed in Squeak either. This is the price you have to pay for "liveness" in the GUI. There is a lot of overhead to make the universe run all the time and not just when you move a window or finish executing a menu command. Open an outliner for a morph and then drag that morph around and resize it while you watch the values in the outliner. That is what the slowness is buying you...
[text]
It is 2D. Were you under the impression that OpenGL is 3d only?
No, but I was hoping it would be text on 3D objects. A 2D overlay is something PHIGS and GL have always had and that is not very interesting to me.
-- Jecel
On Wednesday, June 11, 2003, at 06:32 PM, Jecel Assumpcao Jr wrote:
Morphic is not exactly famous for its speed in Squeak either. This is the price you have to pay for "liveness" in the GUI.
I found all Squeak rendering (not just Morphic) to be slow. I suspect the bottleneck has more to do with the rendering architecture than too many message sends.
It is 2D. Were you under the impression that OpenGL is 3d only?
No, but I was hoping it would be text on 3D objects. A 2D overlay is something PHIGS and GL have always had and that is not very interesting to me.
The characters are rendered from a texture, so you could certainly render them onto a surface if you wanted to. Either way, you get excellent performance because you keep the data close to the logic and let the graphics hw do the heavy lifting.
Cheers, Steve Io, a small language: http://www.iolanguage.com/
On Wednesday, June 11, 2003, at 08:04 PM, Jecel Assumpcao Jr wrote:
On Tuesday 10 June 2003 16:43, Josh Flowers wrote:
How does self handle text? I didn't see any glue code that dealt with text, and so I'd hoped that self was doing the text rendering. If that's the case, then GL's text support shouldn't be a problem.
The 'drawString:At:GC:' method calls platform specific code to do the real work.
My mistake - and I'd even implemented the draw_text(const char* text, int x, int y) method in my GlutWindow.c file.... It was just too long ago I guess.
I'm not sure what kind of 2D texture support you need....
Imagine a red, metallic cylinder. With a white stripe near the bottom. And an orange circle in the middle wrapped halfway around the cylinder.
I can think of any number of ways to do this, but if what you're saying is that the white stripe is one texture, and the orange circle is the other, I get your point.
You're also correct that GL doesn't scale down to low end machines very well, but it should be usable on 90%+ of the desk/laptop machines out there, which would be a good start.
My problem is that it also doesn't scale up. When I get a machine that is 10 times faster than today's, it is going to look exactly the same. It might say it is doing 430 frames per second, but on a monitor with a 70Hz refresh rate that is a lie. You won't notice this, of course, since by then we will have 10 times as many polygons on the screen in applications that don't run at all on today's machines. But today's applications won't be any better and that is my complaint.
Well, having read your ideas about how to implement a more scalable UI, I think you've got a good solution. There are of course technical details that would need to be worked out, but there always are. For instance, it might be difficult to ensure that the different rendering techniques did not look too different - I've done some very minimal work trying to 'dissolve' from a GL-rendered scene to a ray-traced one, and the visual differences between the two were just too great (oftentimes objects would have slightly different shapes or placement). But again, my work was very cursory.
Steve Dekorte wrote:
The advantage is much, much faster rendering (2D or 3D). Self is painfully slow (dragging an outliner around involves waiting for it to catch up with the mouse) on my dual 1.2GHz Mac and I'm guessing (hoping) it's because of the rendering costs.
I got a 1/3 second delay on a 600MHz Pentium II and less than half that on a 277MHz UltraSparc II. On the latter it is only noticeable if you are testing for the effect.
I can vouch for Steve: Self has always been very sluggish on a Mac.
BTW, I hope my comments don't make it seem like I am against linking Self and OpenGL. Quite the opposite! OpenGL is here now and this shouldn't be hard to do, so why not?
Well said - now if only I could find more time to work on it. Anyone is welcome to my GlutWindow.c file if they'd like.
josh
"there must be in the indians' social bond something singularly captivating, and far superior to be boasted of among us; for thousands of Europeans are Indians, and we have no examples of even one of those Aborigines having from choice become Europeans."
Michel Guillaume Jean de Crevecoeur - Letters from an American Farmer
On Wednesday 11 June 2003 22:31, Josh Flowers wrote:
Imagine a red, metallic cylinder. With a white stripe near the bottom. And an orange circle in the middle wrapped halfway around the cylinder.
I can think of any number of ways to do this, but if what you're saying is that the white stripe is one texture, and the orange circle is the other, I get your point.
I don't really care if they are multiple textures or one as long as they look right as I zoom closer and they are "live" (I can move the circle a little further up, for example). Making them separate helps meet my requirements, but is not the only way as Mike's example below shows.
Well, having read your ideas about how to implement a more scalable UI, I think you've got a good solution. There are of course technical details that would need to be worked out, but there always are. For instance, it might be difficult to ensure that the different rendering techniques did not look too different - I've done some very minimal work trying to 'dissolve' from a GL-rendered scene to a ray-traced one, and the visual differences between the two were just too great (oftentimes objects would have slightly different shapes or placement). But again, my work was very cursory.
Two systems that do different approximations have images that don't match exactly, as you said. Even getting polygons that have a common edge in 3D space to continue to do so when projected into 2D was a problem that took a very long time to solve.
On Wed, 11 Jun 2003 19:18:33 -0700, Steve Dekorte wrote:
The characters are rendered from a texture, so you could certainly render them onto a surface if you wanted to. Either way, you get excellent performance because you keep the data close to the logic and let the graphics hw do the heavy lifting.
Ok. OpenGL is an immediate mode renderer, right? This wouldn't be as easy in a retained mode one.
On Fri, 13 Jun 2003 04:39:06 -0000, Mike Austin wrote:
MacOS Quartz Extreme is an OpenGL desktop, and I'd say 2D textures are supported very well (each window is a texture).
Thanks for the tip. I took a look at this presentation:
http://www.opengl.org/developers/code/features/siggraph2002_bof/sg2002bof_apple.pdf
This thread started with the idea of a portable graphics system for Self. What Apple did couldn't be used directly, but I suppose it is an example of what can be done.
SGI scales pretty well :)
No, it doesn't. My first web site was hosted on an SGI machine with eight 33MHz MIPS R3000 processors. It cost more than all the other computers in the lab put together and visitors would marvel at the environment mapping on a VW Beetle or the cartoon-like flight simulator. I am absolutely sure that in 11 years you will look back on the 128-processor Onyx with its 8 graphics pipelines in exactly the same way as we now think of that old "supercomputer".
-- Jecel
On Friday, June 13, 2003, at 07:27 PM, Jecel Assumpcao Jr wrote:
On Wed, 11 Jun 2003 19:18:33 -0700, Steve Dekorte wrote:
The characters are rendered from a texture, so you could certainly render them onto a surface if you wanted to. Either way, you get excellent performance because you keep the data close to the logic and let the graphics hw do the heavy lifting.
Ok. OpenGL is an immediate mode renderer, right? This wouldn't be as easy in a retained mode one.
I'm not sure what you mean. If you mean you want to keep a copy of a frame buffer and draw over it, you can do that in-hardware using OpenGL too.
Maybe it's best to put this to a test. This test isn't perfect, but it's simple and might be useful for ball-park figure comparisons. Here's a sample program:
http://www.dekorte.com/Library/GLTest/polygons.c
It's set up to draw 15000 polygons and then swap the buffers. It measures the performance about once per second. Here are the results on my year-old OSX box:
[max:~] steve% ./a.out
polygons per second = 911764, frames per second = 62
polygons per second = 1014705, frames per second = 69
polygons per second = 980198, frames per second = 66
polygons per second = 1014705, frames per second = 69
polygons per second = 1009900, frames per second = 68
polygons per second = 950495, frames per second = 64
polygons per second = 980198, frames per second = 66
So about a million smooth-shaded 24-bit color polygons per second. I would guess that performance on texture-mapped polygons (such as text) wouldn't be much different and would more dramatically illustrate the performance difference, but I wanted to keep this test simple.
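For anyone who doesn't want to download it, the test has roughly the following shape. This is a simplified reconstruction rather than the actual polygons.c; GLUT and a double-buffered RGB context are assumed:

    /* Simplified sketch of this kind of throughput test: draw N triangles,
     * swap buffers, and report polygons/frames per second about once a second.
     * Not the actual polygons.c; GLUT handles the window and context. */
    #include <GL/glut.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NPOLYS 15000

    static int frames = 0;
    static int last_ms = 0;

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);
        for (int i = 0; i < NPOLYS; i++) {
            glColor3f(rand() / (float)RAND_MAX, 0.5f, 0.5f);
            glVertex2f(-0.5f, -0.5f);
            glVertex2f( 0.5f, -0.5f);
            glVertex2f( 0.0f,  0.5f);
        }
        glEnd();
        glutSwapBuffers();

        frames++;
        int now = glutGet(GLUT_ELAPSED_TIME);      /* milliseconds since glutInit */
        if (now - last_ms >= 1000) {
            double secs = (now - last_ms) / 1000.0;
            printf("polygons per second = %d, frames per second = %d\n",
                   (int)(frames * NPOLYS / secs), (int)(frames / secs));
            frames = 0;
            last_ms = now;
        }
        glutPostRedisplay();                        /* keep drawing continuously */
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutInitWindowSize(512, 512);
        glutCreateWindow("polygon throughput sketch");
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }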
Can someone put together a similar Xlib/Linux or CoreGraphics/OSX demo to compare this to?
Cheers, Steve Io, a small language: http://www.iolanguage.com/
On Sunday 15 June 2003 23:48, Steve Dekorte wrote:
On Friday, June 13, 2003, at 07:27 PM, Jecel Assumpcao Jr wrote:
Ok. OpenGL is an immediate mode renderer, right? This wouldn't be as easy in a retained mode one.
I'm not sure what you mean. If you mean you want to keep a copy of a frame buffer and draw over it, you can do that in-hardware using OpenGL too.
I am sorry that this reply is so late, and even sorrier that it is so off-topic for this list. But it wouldn't be right to leave this subject hanging without an explanation.
In some graphics systems you can choose between the immediate and retained modes, while others always work in one or the other. In the immediate mode any command you give will result in a change to the image, while in the retained mode the command will alter a data structure kept by the graphics system which will later convert that structure into an image.
In the first mode, to generate a slightly different image you have to issue all the commands once again with one or two differences; in the second, you only issue the commands that change the data structure.
The main advantage of retained mode is that the bandwidth between the application and the renderer can be very limited. Another advantage is that, since the renderer is aware of what is changing, some things like motion blur are easier to do.
The main problem with retained mode is that, in order to know what commands to send, the application needs to keep its own copy of the data structure representing the scene. In practice everything is done twice: you move that sphere in your local copy, then send a command which makes the renderer move the same sphere in its own copy.
Most early 3D graphics systems were retained mode (PHIGS, if I remember correctly, was an example of this). RenderMan let you choose, while most modern systems (OpenGL, I think) are immediate mode.
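In code the contrast looks roughly like this. The immediate-mode half uses real OpenGL calls; the retained-mode half uses an invented scene API, purely to make the difference visible:

    #include <GL/gl.h>

    /* Hypothetical retained-mode handles and calls, invented only for contrast;
     * no real library provides exactly these. */
    typedef struct Scene Scene;
    typedef int NodeId;
    void scene_set_translation(Scene *s, NodeId n, float x, float y, float z);
    void scene_render(Scene *s);

    /* Immediate mode (OpenGL style): re-issue every drawing command each frame,
     * so moving the sphere means re-sending the whole scene. */
    void draw_frame_immediate(float sphere_x)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glPushMatrix();
        glTranslatef(sphere_x, 0.0f, 0.0f);
        /* ... emit all of the sphere's polygons again ... */
        glPopMatrix();
        /* ... and then everything else in the scene, again ... */
    }

    /* Retained mode: the renderer owns the scene description, so the application
     * only sends the change and asks for a redraw. */
    void move_sphere_retained(Scene *scene, NodeId sphere, float x)
    {
        scene_set_translation(scene, sphere, x, 0.0f, 0.0f);
        scene_render(scene);
    }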
-- Jecel
On Friday, June 20, 2003, at 02:34 PM, Jecel Assumpcao Jr wrote:
In some graphics systems you can choose between the immediate and retained modes, while others always work in one or the other. In the immediate mode any command you give will result in a change to the image, while in the retained mode the command will alter a data structure kept by the graphics system which will later convert that structure into an image.
I see. Yes, I think OpenGL is basically immediate. You can have display lists though - that is, define a set of drawing commands that can be invoked with a handle. But typical implementations still push all drawing commands from RAM to the video card.
That said, the drawing commands may be less data than sending a bitmap, and even if not, they are faster overall as the CPU is a poor renderer.
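A display list is roughly this (a small sketch with placeholder geometry):

    /* Sketch: record drawing commands once in a display list, then replay them
     * by handle each frame. The geometry here is just a placeholder triangle. */
    #include <GL/gl.h>

    GLuint make_triangle_list(void)
    {
        GLuint list = glGenLists(1);
        glNewList(list, GL_COMPILE);     /* record, don't execute yet */
        glBegin(GL_TRIANGLES);
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.0f,  0.5f);
        glEnd();
        glEndList();
        return list;
    }

    void draw_frame(GLuint list)
    {
        glCallList(list);                /* replay the recorded commands */
    }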
Cheers, Steve
On Tuesday, June 10, 2003, at 12:20 PM, Jecel Assumpcao Jr wrote:
But a problem with GL itself regarding GUIs is that it doesn't handle text or 2D textures very well (at least when I last looked into this) and needs help from some external program. Another problem is that it doesn't scale. See http://www.lsi.usp.br/~jecel/gmodel.html
Excellent antialiased text support isn't difficult to add using FreeType and a bit of code to cache the rendered characters into a texture. Io does this, and I'm happy to share the code I use. The performance is as good as (and probably better than) desktop font renderers and the quality is much better than Self's current font renderer (at least on OSX). FreeType also allows use of any TrueType font, and there are some good free fonts out there. For example: http://savannah.nongnu.org/projects/freefont/
Here's the code Io uses for font rendering using GL:
http://www.iolanguage.com/Source/release/Io/IoDesktop/FreeType/base/GLFont.h
http://www.iolanguage.com/Source/release/Io/IoDesktop/FreeType/base/GLFont.c
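For anyone curious, the general shape of the technique is below. This is a hedged sketch, not the actual GLFont.c: it rasterizes a single character with FreeType and uploads the coverage bitmap as an alpha texture. A real cache packs many glyphs into one power-of-two texture (older GL requires power-of-two sizes) and reuses it:

    /* Sketch of the glyph-caching idea: render one character with FreeType and
     * upload the antialiased coverage bitmap as an OpenGL alpha texture.
     * Error paths skip cleanup to keep the sketch short; assumes pitch == width. */
    #include <ft2build.h>
    #include FT_FREETYPE_H
    #include <GL/gl.h>

    GLuint cache_glyph(const char *font_path, int pixel_size, unsigned long ch)
    {
        FT_Library lib;
        FT_Face face;
        GLuint tex = 0;

        if (FT_Init_FreeType(&lib)) return 0;
        if (FT_New_Face(lib, font_path, 0, &face)) return 0;
        FT_Set_Pixel_Sizes(face, 0, pixel_size);
        if (FT_Load_Char(face, ch, FT_LOAD_RENDER)) return 0;  /* antialiased bitmap */

        FT_Bitmap *bm = &face->glyph->bitmap;

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                  /* rows are byte-aligned */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, bm->width, bm->rows, 0,
                     GL_ALPHA, GL_UNSIGNED_BYTE, bm->buffer);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        /* Drawing is then just a textured quad with alpha blending enabled;
         * each glyph only needs to be rasterized once. */
        FT_Done_Face(face);
        FT_Done_FreeType(lib);
        return tex;
    }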
Cheers, Steve Io, a small language: http://www.iolanguage.com/
Regarding OpenGL scalability -
MacOS Quartz Extreme is an OpenGL desktop, and I'd say 2D textures are supported very well (each window is a texture). SGI scales pretty well :) - http://www.sgi.com/visualization/onyx/ip/
It's all Microsoft's fault, Direct3D garbage. Why can't they pick an open standard instead of an API that changes at every release!
I'm on windows, and I've never touched D3D. I will never touch it. Sorry, I'm ranting. :)
Mike
On Tuesday, June 10, 2003, at 08:10 AM, Jecel Assumpcao Jr wrote:
On Tuesday 10 June 2003 05:35, Steve Dekorte wrote:
Is Self's JIT code written in such a way that there is a layer that would be useful as a library outside of Self?
No - it is very tightly integrated into the rest of the VM. Some more library-like JITs that might be of interest to you are:
http://www.gnu.org/directory/libs/gnulightning.html http://www-sor.inria.fr/projects/vvm/realizations/ccg/ccg.html
I've tried lightning but it's buggy on PPC. I hadn't seen ccg before - it looks great. Thanks!
Since the GUI isn't 3D (though I started a project to make it so back in 1998) I am not sure OpenGL would help much.
The advantage is much, much faster rendering (2D or 3D). Self is painfully slow (dragging an outliner around involves waiting for it to catch up with the mouse) on my dual 1.2GHz Mac and I'm guessing (hoping) it's because of the rendering costs. I suspect Self is, like Squeak, handing off chunks of a screen buffer stored in RAM to the desktop rendering APIs. Since this involves heavy use of the CPU (which is a poor piece of rendering hardware) and pushing each bit across the bus, it's a very slow way to render.
Cheers, Steve "Statically typed languages are like American sports cars. They go fast, but only in a straight line"
I know you have been discussing this for a while, but I invite you to try the latest Self release, 4.2. You may find it to be much snappier on a Mac. The reason that dragging a large outliner is slow is partly because Self does no bitmap caching. It repaints damaged regions by rerendering them, in order to support things that are changing. (I think someone, Jecel?, already mentioned this.) So the cost is running all that code, not just graphics. That's why a better compiler may well make a difference.
- Dave
On Saturday, June 21, 2003, at 04:01 PM, David Ungar wrote:
I know you have been discussing this for a while, but I invite you to try the latest Self release, 4.2.
It seems about the same to me (dual 1.2Ghz G4). The pause when opening an outliner seems like it might be quicker, but dragging still feels about as slow. I don't see any reason why dragging an outliner in Self should be any slower than dragging an OSX window on the desktop.
Cheers, Steve OSX freeware and shareware: http://www.dekorte.com/downloads.html
On Sunday 22 June 2003 11:30, Steve Dekorte wrote:
It seems about the same to me (dual 1.2Ghz G4). The pause when opening an outliner seems like it might be quicker, but dragging still feels about as slow. I don't see any reason why dragging an outliner in Self should be any slower than dragging an OSX window on the desktop.
Dave actually explained this in the email you were replying to. When you render directly to the screen, the hardware can't help you. If you render to an off-screen buffer and then do an accelerated copy to the screen it will be much faster.
I tested Self 4.2 on a 600MHz G3 iBook and, as you wrote, the dragging is just as slow as before. So it wasn't a compiler issue. In fact, it is still slower than on a 600MHz Pentium II machine which doesn't have the SIC. And the Sparc, the slowest machine, beats them all. An interesting experiment would be to use X Window on the Mac and see if that makes any difference.
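In X terms the off-screen approach is roughly the following (a sketch only; the middle comment stands in for whatever Morphic actually paints):

    /* Sketch of the off-screen idea under X: draw the damaged region into a
     * Pixmap, then blit it to the window with XCopyArea, which the server
     * (and usually the video hardware) can accelerate. Illustrative only. */
    #include <X11/Xlib.h>

    void repaint_region(Display *dpy, Window win, GC gc,
                        int x, int y, unsigned int w, unsigned int h, int depth)
    {
        Pixmap buffer = XCreatePixmap(dpy, win, w, h, depth);

        /* ... render the morphs for this region into 'buffer' instead of 'win' ... */

        XCopyArea(dpy, buffer, win, gc, 0, 0, w, h, x, y);  /* one accelerated copy */
        XFreePixmap(dpy, buffer);
    }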
-- Jecel
Interesting...thank you Jecel. Which outliner are you dragging? Are there other outliners behind it? Does the size or amount of text in the outliner matter?
- Dave
On Tuesday, June 24, 2003, at 07:25 PM, David Ungar wrote:
Which outliner are you dragging?
I just opened a few random ones. They were all about the same.
Are there other outliners behind it?
No, I'm only opening one at a time. I also tried dragging it to an area where only it and the navigator were there - this didn't change the speed noticeably.
Does the size or amount of text in the outliner matter?
I didn't notice that. I also tried different Self window sizes with no effect. Are you unable to repeat this? Are they as fast as dragging an OSX window for you?
Cheers, Steve OSX freeware and shareware: http://www.dekorte.com/downloads.html
On Tuesday, June 24, 2003, at 09:49 AM, Jecel Assumpcao Jr wrote:
Dave actually explained this in the email you were replying to. When you render directly to the screen, the hardware can't help you.
The graphics hw is not just for compositing. It has custom logic for doing complex operations like polygon rendering, line drawing, antialiasing and much more. Right now it sounds like these operations are being done on the CPU which is not designed for doing these operations.
If you render to an off-screen buffer and then do an accelerated copy to the screen it will be much faster.
Then the CPU has to process every pixel. As far as I can see, the CPU shouldn't need to touch *any* pixels for the type of graphics I see in Self.
Can you write a demo to see how many polygons per second Self can render using its current system? (stripping away everything but the demo itself)
I think that may be a good simple test of Self's rendering engine that we can compare to alternate techniques.
Cheers, Steve Io, a small language: http://www.iolanguage.com/
On Tue, 24 Jun 2003 19:25:32 -0700, David Ungar wrote:
Which outliner are you dragging?
My tests used an outliner for 'globals' with the bottom two categories expanded.
Are there other outliners behind it?
No, there is a shell outliner in the corner (this is the BareBones snapshot) but I was not moving globals over it.
But on the iBook I used the Demo snapshot which also has a mostly empty initial screen but also includes a radarView. I remembered that each morph has to be rendered twice (on the world and a smaller version on the radarView) and thought that this might have been the reason it was slower on the iBook than on the PC (on the Sparc, which is the fastest of all, I also used the Demo snapshot). But opening a radarView on the PC made no difference since it seems to update only every two seconds or so.
Does the size or amount of text in the outliner matter?
Yes. If the outliner is totally closed or only has the category list showing then there is no lag at all. Expanding the last category slows things down considerably and expanding a second category causes it to be even slower by a slight, but noticeable, amount.
On Wednesday 25 June 2003 02:18, Steve Dekorte wrote:
The graphics hw is not just for compositing. It has custom logic for doing complex operations like polygon rendering, line drawing, antialiasing and much more. Right now it sounds like these operations are being done on the CPU which is not designed for doing these operations.
The famous "wheel of hardware reincarnation" ;-) You will find that the more a graphics processor does, the more it looks just like your main CPU. Until you finally say "hey, let's just put a second processor there of the same kind for graphics instead of a special one. Then it can be use for other stuff when the graphics load is low". Later you start to add extra hardware to help the second processor do the most basic graphics stuff...
Self just calls the X Window library when it wants to draw a polygon or show some text. If the hardware isn't being used then it is the X driver's fault. I haven't looked at how things work on the Mac side.
If you render to an off-screen buffer and then do an accelerated copy to the screen it will be much faster.
Then the CPU has to process every pixel. As far as I can see, the CPU shouldn't need to touch *any* pixels for the type of graphics I see in Self.
Ok, so use the graphics processor to render to an off-screen buffer and then use the graphics processor to copy to the screen (frame1). Then use the graphics processor to copy to another position on the screen (frame2).
This was how Microsoft's Talisman system did it and the first version of my graphics engine design was like this as well. Then I decided I couldn't afford off-screen buffers on a 4MB machine.
Can you write a demo to see how many polygons per second Self can render using its current system? (stripping away everything but the demo itself)
How about a Morph that draws lots of random polygons in its baseDrawOn: method? It could time this and add a polygons/second text to itself. By dragging it around the screen you would force it to update itself so you could see how that number would vary.
I think that may be a good simple test of Self's rendering engine that we can compare to alternate techniques.
But it is X's rendering engine, so I am not sure what we would learn from this little experiment.
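For reference, the X half of such a test, with Self stripped away entirely, would be roughly the following. This is an untested sketch: no double buffering, coarse timing, no error handling:

    /* Rough Xlib counterpart of the GL throughput test: fill triangles in a
     * window and report polygons per second. Sketch only. */
    #include <X11/Xlib.h>
    #include <stdio.h>
    #include <sys/time.h>

    #define NPOLYS 15000

    static double now_secs(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, 512, 512,
                                         0, BlackPixel(dpy, scr), WhitePixel(dpy, scr));
        GC gc = XCreateGC(dpy, win, 0, NULL);
        XMapWindow(dpy, win);
        XSync(dpy, False);

        XPoint tri[3] = { {100, 400}, {400, 400}, {250, 100} };

        for (;;) {
            double t0 = now_secs();
            for (int i = 0; i < NPOLYS; i++)
                XFillPolygon(dpy, win, gc, tri, 3, Convex, CoordModeOrigin);
            XSync(dpy, False);          /* wait for the server to finish drawing */
            double dt = now_secs() - t0;
            printf("polygons per second = %d\n", (int)(NPOLYS / dt));
        }
        return 0;
    }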
-- Jecel
On Thursday, June 26, 2003, at 10:19 AM, Jecel Assumpcao Jr wrote:
The famous "wheel of hardware reincarnation" ;-) You will find that the more a graphics processor does, the more it looks just like your main CPU. Until you finally say "hey, let's just put a second processor there of the same kind for graphics instead of a special one. Then it can be use for other stuff when the graphics load is low". Later you start to add extra hardware to help the second processor do the most basic graphics stuff...
If this is true, it should be easy for you to implement some sample code that demonstrates the CPU rendering at the speed of the graphics card just as I've provided sample code to test OpenGL.
How about a Morph that draws lots of random polygons in its baseDrawOn: method? It could time this and add a polygons/second text to itself. By dragging it around the screen you would force it to update itself so you could see how that number would vary.
How about just a Self version of that one page C program I posted earlier?
I think that may be a good simple test of Self's rendering engine that we can compare to alternate techniques.
But it is X's rendering engine, so I am not sure what we would learn from this little experiment.
If it turns out that technology X renders 100x slower than technology Y, and we're using X and having what we hope are rendering performance problems, then we might consider giving Y a try.
Cheers, Steve OSX freeware and shareware: http://www.dekorte.com/downloads.html
On Friday 27 June 2003 07:09, Steve Dekorte wrote:
On Thursday, June 26, 2003, at 10:19 AM, Jecel Assumpcao Jr wrote:
[GPU will look more and more like CPU as time goes by]
If this is true, it should be easy for you to implement some sample code that demonstrates the CPU rendering at the speed of the graphics card just as I've provided sample code to test OpenGL.
But the graphics card doesn't have a processor - it has dozens of them. I would need a system with dozens of CPUs for a fair comparison. Please wait about two months...
How about just a Self version of that one page C program I posted earlier?
X Windows only renders 2D polygons in a flat color. The only way to get the same image you did is to draw the polygons pixel by pixel. Since Self's color management system is a very heavyweight one, I would hardly be surprised to get one polygon every few seconds.
There is something very strange in your benchmark - you draw each polygon exactly on top of the previous one. How does the z-buffer comparison turn out in this case? If it decides the new polygon is in front, then you will get what you expect. If it decides it is behind then nothing will be drawn (though the image will look right since the two polygons are the same) and your results will be very inflated. Since you only got about 1 million pol/sec I guess the first case was true.
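For what it's worth, if a depth buffer is enabled, the outcome for coincident polygons depends on the comparison function. A tiny illustration:

    #include <GL/gl.h>

    /* With depth testing on, coincident polygons are decided by the depth
     * comparison: the default GL_LESS rejects fragments at equal depth, while
     * GL_LEQUAL lets them through so the later polygon "wins". */
    void allow_coincident_overdraw(void)
    {
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LEQUAL);
    }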
But it is X's rendering engine, so I am not sure what we would learn from this little experiment.
If it turns out that technology X renders 100x slower than technology Y, and we're using X and having what we hope are rendering performance problems, then we might consider giving Y a try.
By "X" I meant "X Windows".
I would love an OpenGL Self or even just knowing where the bottlenecks are. Who knows where the time goes? Unfortunately this is not a trivial project and everyone is busy with other things.
Adding OpenGL bindings to Self should be easy enough with the primitive maker. You wouldn't need any integration with Morph just to run some tests. None of my own machines does a good job with OpenGL, however, so there isn't a point in me trying this.
-- Jecel
Also, has OpenGL been considered as an alternative to using xlib for rendering? What rendering API is used on the OSX implementation?
As an occasional self hacker, I've looked into doing this. It doesn't seem too hard from a theoretical standpoint, but because of the lack of documentation, it's fairly tricky as a practical matter. Unfortunately my time is very limited these days, and so I haven't been able to do as much as I'd like.