Ever since we started putting the actual glove together, I’ve been thinking about how the software side of this project should take into account what I expect to be a pretty low-resolution input system. When I was making a drawing interface for what was purely an ICM project, I was assuming an average level of mouse dexterity. The stroke-width picker that I made last Wednesday, for instance, is obviously too small to be operated with our crude pointer.
So what kind of interface would work?
I did some Googling around to get an idea of what kinds of controls can be operated with little dexterity. In particular, I looked for things you can do while wearing mittens, since I figure that's approximately the level of control our hypothetical user will have. Some findings:
Some military radios are touted as being usable even while wearing Arctic mittens: “The desired frequency is set by four knobs on the side of the radio which can be operated even while the operator is wearing Arctic mittens, or in the dark by counting clicks from the end-stops” (“Clansman PRC-349 / RT-349 VHF Transceiver,” Armyradio.com). It’s that idea of click feedback that I find interesting. If you’re choosing from a limited set of options—say, brush sizes—on a very small display, it’s not too annoying to cycle through them one at a time to get to what you want. This is the way my cell phone, toaster, and camera menus work. And they all provide beeps or clicks for feedback. What kind of feedback will we give to let our user know what’s going on?
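To make the click-through idea concrete, here's a minimal sketch of a cycling picker in Python. The class name, the brush sizes, and the idea of firing a feedback callback on every click are all my own placeholders, not anything from the actual glove code:

```python
class ClickPicker:
    """Cycle through a small, fixed set of options one 'click' at a time,
    wrapping around at the end, like a radio knob with end-stop clicks."""

    def __init__(self, options, on_click=None):
        self.options = options          # e.g., a handful of brush sizes
        self.index = 0
        self.on_click = on_click        # hook for a beep/click/vibration

    def click_next(self):
        # Wrap around so the user can always keep clicking forward.
        self.index = (self.index + 1) % len(self.options)
        if self.on_click:
            self.on_click()             # give the user audible/tactile feedback
        return self.options[self.index]

    @property
    def current(self):
        return self.options[self.index]


# Hypothetical usage: five brush sizes, printing "click" as stand-in feedback.
picker = ClickPicker([1, 3, 6, 12, 24], on_click=lambda: print("click"))
```

With only a handful of options, wrapping around means the worst case is a few clicks, which matches how phone and camera menus tolerate one-button navigation.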
Other physical controls that can be operated while wearing mittens include a camera’s zoom ring and aperture and shutter speed dials (Matthew G. Monroe, “How to Shoot in Freezing Temperatures and Keep Your Hands Toasty Warm,” digital-photography-school.com); Velcro closure tabs on outerwear; and KEYnetic’s rock ‘n’ scroll cell phone interface (via LyteByte).
On the screen side of things, some relevant input methods include Morse code and onscreen keyboards. Again, I like the simplicity of tapping, though I can also see uses for a matrix of a few very large buttons.
Obviously, with two weeks to go, I don’t have time to really get into learning interface design for this project, but I think where I’m going is toward a system that has a drawing mode and a tool selection mode. When you’re drawing, there’s nothing else onscreen except maybe one or two tools or hints—how to turn the drawing line off, how to activate the control menu. You can’t select these controls onscreen, because how would you do so without drawing a line all the way to the button? And whatever gesture activates the menu has to be something you can do without moving the drawing pointer.
The Rock ‘n’ Scroll model is pretty good for us within one mode or another, but you have to be able to switch between the two without moving your drawing pointer. So that would involve . . . the thumb? It’s possible, though not necessarily easy, to move your thumb without moving the rest of your hand. So maybe as long as your thumb is tucked in, the pen is on, and if you stick it out, the pen is off.
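Here's a rough sketch of how that state might fit together, assuming the thumb-tucked-equals-pen-on rule and some separate gesture (modeled here as a plain method call) for switching modes. All the names are my own assumptions:

```python
class GloveState:
    """Hypothetical model of the glove's two modes and pen state."""

    DRAWING, MENU = "drawing", "menu"

    def __init__(self):
        self.mode = GloveState.DRAWING
        self.thumb_tucked = False

    @property
    def pen_down(self):
        # The pen only draws in drawing mode with the thumb tucked in;
        # sticking the thumb out lifts the pen without touching the pointer.
        return self.mode == GloveState.DRAWING and self.thumb_tucked

    def set_thumb(self, tucked):
        self.thumb_tucked = tucked

    def toggle_mode(self):
        # Switching modes never reads or moves the pointer position,
        # so entering the menu can't drag a stray line across the canvas.
        if self.mode == GloveState.DRAWING:
            self.mode = GloveState.MENU
        else:
            self.mode = GloveState.DRAWING
```

The point of keeping `pen_down` derived rather than stored is that entering menu mode automatically lifts the pen, no matter where the thumb is.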
Once we’ve got a way to separate the two modes, the control screen itself can be super-simple, with only one control editable at a time. So, something like this:
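One guess at the logic behind a one-control-at-a-time screen: one gesture cycles the current control's values, another advances to the next control. The control names and value lists below are placeholders, not the real settings:

```python
class SimpleMenu:
    """Hypothetical one-control-at-a-time menu: the display shows a single
    setting; cycle_value() changes it, next_control() moves on."""

    def __init__(self, controls):
        # controls: list of (name, list_of_values) pairs
        self.controls = controls
        self.control_index = 0
        self.value_indices = [0] * len(controls)

    @property
    def current(self):
        # The one (name, value) pair that would be on screen right now.
        name, values = self.controls[self.control_index]
        return name, values[self.value_indices[self.control_index]]

    def cycle_value(self):
        _, values = self.controls[self.control_index]
        i = self.control_index
        self.value_indices[i] = (self.value_indices[i] + 1) % len(values)

    def next_control(self):
        self.control_index = (self.control_index + 1) % len(self.controls)


# Hypothetical usage with made-up controls:
menu = SimpleMenu([("brush size", [1, 3, 6, 12]),
                   ("color", ["black", "red", "blue"])])
```

Two big gestures, a handful of settings, one thing on screen at a time: about as mitten-friendly as a menu gets.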