I was pottering on my OurSelf system over the weekend (improving the simple web management interface) when my 4- and 6-year-olds rampaged up and demanded to be part of what I was doing.
So I pulled up a Self world on the iPad (through the Safari browser) and opened up the factory morph. The 6-year-old got the idea pretty quickly, and started dragging little purple circles to the desktop and arranging them in patterns.
When I realised he was trying to drag them all, we discussed how each drag made a new copy of the circle, so he would never run out of circles, no matter how many he needed.
We also opened a radar morph, and moved around a bit. This wasn’t as intuitive, as the big jumps didn’t provide visual cues as to what was happening. It looked too much as if we were either moving to another ‘page’ or (scarily) as if we were deleting all of our hard work in placing coloured circles, squares and rectangles.
I think an ideal alternative would either show some visual indication of movement, not just a jump, or allow actual dragging of the desktop around. Having used iPads, both kids are really comfortable with dragging things on the screen with their fingers, and in fact keep on trying to drag things around on my computer monitor (even though they know it doesn’t work). At the moment, dragging with the main mouse button down creates the lasso rectangle - I’m tempted to swap that to a second mouse button and make the dragging work to move your viewpoint instead.
The iPad of course relies on a single-button ‘mouse’ (i.e. a finger) and gestures, which doesn’t really map perfectly to the Self expectation of a multi-button mouse without gestures. This makes pop-up menus difficult (you have to use a VNC menu to temporarily change the meaning of a ‘click’).
Some obvious solutions come to mind:
- create a MacOS-style menubar, but this requires the elements on the screen to have a concept of being ‘selected’, and changes the GUI interaction to ‘select, then choose action’
- put lots of buttons everywhere to bring up context menus, but this requires, um, lots of buttons. The result could look messy, and the buttons would necessarily be small. Even I have a bit of trouble clicking morphic buttons on the iPad; my kids really struggled, which was frustrating for them.
- create an action palette morph, a la Photoshop. The interaction is then ‘select the action (e.g. draw line, open context menu), then click on the element to apply it’. This can work, but is maybe a bit modal.
The other source of confusion is the way in which the iPad allows multiple fingers dragging at once. Two small fingers trying to drag different morphs simultaneously was interpreted by morphic as a single finger bouncing all over the screen. Fixing this in morphic is conceptually possible I think, but would require complete replumbing of the event handling mechanism - ouch!
We also wrote their names using labelMorphs, and dragged them around. Changing the text of the labelMorph required bringing up an outliner, sending a ‘label:’ message and so on, which was beyond them, but the 6-year-old seemed to understand the ‘Get it’ and ‘Do it’ buttons as a concept, if not the syntax specifics of colons, single quotes etc.
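For anyone curious, what we were evaluating in the outliner was roughly the following sketch (the receiver name here is made up for illustration; in practice the expression is evaluated in the context of the labelMorph's outliner):

```self
"A sketch of the expressions we used; 'aLabel' stands in for the actual labelMorph."
aLabel label: 'Noah'.  "set the displayed text - a 'Do it'"
aLabel label.          "read the text back - a 'Get it'"
```

The colons, single quotes and periods in even this tiny snippet were exactly the bits that tripped the kids up.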
clockMorph worked, but needs some way to set the local time. It would be nicer if it were an analogue clock, not just a text line. The beep button unfortunately didn’t work - I will have to look into whether we can do sound over VNC somehow.
Anyhow, it was fun.