
HandyTrak: Recognizing the Holding Hand on a Commodity Smartphone from Body Silhouette Images

To appear at UIST '21

Hyunchul Lim, David Lin, Jessica Tweneboah, Cheng Zhang

[Figure 1: Research idea]
[Figure 2: Hand modes]

Understanding which hand a user holds a smartphone with can help improve the mobile interaction experience. For instance, the layout of the user interface (UI) can be adapted to the holding hand. In this paper, we present HandyTrak, an AI-powered software system that recognizes the holding hand on a commodity smartphone using body silhouette images captured by the front-facing camera. The silhouette images are processed and sent to a customized user-dependent deep learning model (CNN) to infer how the user holds the smartphone (left, right, or both hands). We evaluated our system on each participant's smartphone at five possible front camera positions in a user study with ten participants under two hand positions (in the middle and skewed) and three common usage cases (standing, sitting, and resting against a desk). The results showed that HandyTrak was able to continuously recognize the holding hand with an average accuracy of 89.03% (SD: 8.98%) at a 2 Hz sampling rate. We also discuss the challenges and opportunities to deploy HandyTrak on different commodity smartphones and potential applications in real-world scenarios.
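To make the pipeline concrete, below is a minimal sketch of what a user-dependent three-way silhouette classifier could look like in PyTorch. The paper does not publish its architecture here, so the layer sizes, the 64x64 input resolution, and the class name `HoldingHandCNN` are illustrative assumptions, not HandyTrak's actual model; only the task (silhouette image in, left/right/both-hands label out) comes from the abstract above.

```python
import torch
import torch.nn as nn

class HoldingHandCNN(nn.Module):
    """Hypothetical classifier: body-silhouette image -> left / right / both.

    Layer sizes and input resolution are assumptions for illustration;
    they are not HandyTrak's published architecture.
    """

    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            # Input: single-channel silhouette image, assumed 64x64.
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),  # logits for left / right / both
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = HoldingHandCNN()
    # One fake silhouette frame (batch=1, 1 channel, 64x64), standing in
    # for a front-camera frame sampled at the paper's 2 Hz rate.
    frame = torch.rand(1, 1, 64, 64)
    logits = model(frame)
    label = ["left", "right", "both"][logits.argmax(dim=1).item()]
    print(f"predicted holding hand: {label}")
```

Because the paper describes the model as user-dependent, a deployment along these lines would fine-tune or train such a network per user on their own silhouette images rather than shipping one shared model.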
