Anyone who’s used a pen with a tablet appreciates how precisely the instrument allows them to write, draw, and manipulate objects. A pen is a natural input device, one that’s much more nuanced than a mouse or touchpad. Despite its precision and ease of use, many tablet applications still need menus, buttons, and widgets for a user to switch between tools, to set their attributes, and to issue commands.
“Let’s say you’re working on a tablet and have a document open that you want to annotate. Using the digital pen, you select pen mode and scribble a note in the margin,” explains Daniel Vogel. “Then, say, you want to switch the pen’s colour from blue to red. You go to the top of the tablet’s screen, open the colour menu and choose red. You then return to where you were annotating. But if you’ve made a typo or changed your mind, you have to switch to erase mode or click undo to delete what you have just written.”
This back and forth between commands and modes — choosing colours, switching from drawing to erasing — is inefficient, Vogel said. That frustration, combined with a related idea proposed by Fabrice Matulic, then a postdoc in Vogel’s group, inspired the research project.
People typically hold a pen using a tripod grip — the pen is held precisely between the thumb, index and middle fingers. “Because of the way we hold a pen, a couple of fingers and other parts of your hand are available to make postures on the surface of a tablet,” Vogel explains. “We wanted to see how far this could be pushed.”
Vogel and Drini Cami, an undergraduate computer science student and research assistant with the HCI group, began by cataloguing the posture combinations a hand can make while holding the pen. For example, the palm could touch the tablet’s surface with its side or heel, or float above it; the index finger and thumb could slide down to touch the surface beside the pen’s tip; and the middle, ring and pinky fingers could touch the surface inside or outside the palm area.
Working through the combinations yields 324 theoretical postures in total. However, most are impractical, uncomfortable or difficult to perform while holding a pen, so the team narrowed the candidates down to a set of 33 postures to evaluate.
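One way to see where a number like 324 comes from is to enumerate the design space as a Cartesian product of per-hand-part states. The factorization below (three palm states; touching or lifted for the index finger and thumb; three placements each for the middle, ring and pinky fingers) is an illustrative assumption consistent with the article’s description, not the paper’s actual taxonomy:

```python
from itertools import product

# Hypothetical state sets per hand part; one factorization that
# yields the 324 theoretical postures mentioned in the article.
palm = ["side", "heel", "floating"]                        # 3 states
index_finger = ["touching", "lifted"]                      # 2 states
thumb = ["touching", "lifted"]                             # 2 states
other_finger = ["inside_palm", "outside_palm", "lifted"]   # 3 states each

# Every combination of palm, index, thumb, middle, ring and pinky states.
postures = list(product(palm, index_finger, thumb,
                        other_finger, other_finger, other_finger))
print(len(postures))  # 3 * 2 * 2 * 3 * 3 * 3 = 324
```

Narrowing the space to 33 practical candidates then amounts to filtering this list by comfort and feasibility, which the team did by hand rather than programmatically.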
“We recruited 12 participants to assess the accuracy of these 33 postures to see if this was a worthwhile idea to pursue,” Cami said. “We found that most postures are reasonable to use, but some were clearly faster and more precise to make. For example, if your palm floats above the tablet you tend to be faster but less accurate with the pen.”
During these evaluations, Cami also collected all touch input data. This raw data from the touch screen was used to train a deep neural network to recognize 10 postures that could be performed consistently with high accuracy. The training was done by Matulic and colleagues after he joined a major deep learning startup in Japan called Preferred Networks. Using the 10 postures, Cami conducted another study with five participants to see how well the postures could be used with two common pen applications — document annotation and vector drawing.
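The recognition step can be pictured as a classifier that maps a raw capacitive frame from the touchscreen to one of the 10 postures. The sketch below is a minimal pure-Python stand-in for that idea, not the team’s actual network: the frame resolution, layer sizes and random weights are illustrative assumptions (the real system was a deep neural network trained on the collected touch data):

```python
import math
import random

random.seed(0)

N_POSTURES = 10             # the 10 reliably recognized postures
FRAME_W, FRAME_H = 16, 24   # assumed capacitive sensor resolution
HIDDEN = 32                 # assumed hidden-layer width

def rand_matrix(rows, cols):
    """Random weights stand in for trained network parameters."""
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

W1 = rand_matrix(FRAME_W * FRAME_H, HIDDEN)
W2 = rand_matrix(HIDDEN, N_POSTURES)

def matvec(vec, mat):
    """Multiply a row vector by a matrix: result[j] = sum_i vec[i] * mat[i][j]."""
    return [sum(v * row[j] for v, row in zip(vec, mat)) for j in range(len(mat[0]))]

def classify_posture(frame):
    """Map one raw capacitive frame to a probability over the 10 postures."""
    x = [v for row in frame for v in row]       # flatten the touch image
    h = [max(0.0, z) for z in matvec(x, W1)]    # ReLU hidden layer
    logits = matvec(h, W2)
    m = max(logits)                             # numerically stable softmax
    e = [math.exp(z - m) for z in logits]
    s = sum(e)
    return [v / s for v in e]

# A fake capacitive frame in place of real touchscreen data.
frame = [[random.random() for _ in range(FRAME_W)] for _ in range(FRAME_H)]
probs = classify_posture(frame)
print(len(probs))  # 10, one probability per posture
```

With trained weights, the posture with the highest probability would select the mode (for example, erase instead of draw) without any menu trip.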
Vogel notes how exceptional this was: “This is a very elaborate project that Drini conducted during two co-op placements and while working as a part-time undergraduate research assistant in the HCI lab. He basically did a master’s-level project while an undergraduate, then presented it to 200 conference attendees at UIST 2018, the top international conference on user interface software and technology.”
“Our technique could work on current pen tablets like the Apple iPad and Microsoft Surface,” Vogel said. “All that’s needed is for those companies to provide app developers with the raw capacitive signal, and you could be quickly switching between drawing and erasing in the next version of Adobe Photoshop.”
To learn more about this research, please see Drini Cami, Fabrice Matulic, Richard G. Calland, Brian Vogel, Daniel Vogel, “Unimanual pen+touch input using variations of precision grip postures,” in UIST ’18: Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, Berlin, Germany, October 14–17, 2018, pp. 825–837.