Training Deep Learning Models on Virtual Reality Spatial Motion Data for Implicit User Identification

Authors

  • Eliana Wang Department of Computer Science, George Mason University, Fairfax, VA
  • Xiaokuan Zhang Department of Computer Science, George Mason University, Fairfax, VA

Abstract

Virtual reality (VR) is a rapidly growing technology that attracts millions of users. Implicit user identification has many applications in VR, such as eliminating observable password entry (and with it the opportunity for external threats to observe authentication) and preserving the high degree of immersion that a smooth VR experience requires. Past studies have used behavioral biometrics to implicitly identify VR users by distinguishing patterns in their movements. An existing study by Liebers et al. conducted a user study to record spatial motion data and trained two deep learning models to classify user data, reaching an identification accuracy of up to 90%. It remains to be seen whether the reported findings are robust under replication and whether the results generalize to other deep learning models. We therefore aim both to replicate the data analysis portion of the study as reported and to extend it with two additional deep learning models. Using the dataset published by Liebers et al., we train MLP, KNN, SVM, and LSTM models across four feature sets, two task scenarios, and four types of body normalization, and evaluate the models' accuracies. Preliminary data show discrepancies between our evaluated accuracies and those reported by Liebers et al.; for example, the MLP model differed by up to 25 accuracy points, a 79% relative error. These discrepancies highlight the challenges of replicating studies without source code. Additional analysis is needed to evaluate the generalizability of the study to other deep learning models.
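The classification setup described in the abstract can be sketched as follows. This is a minimal illustration assuming scikit-learn, not the study's actual pipeline: the motion features are synthetic stand-ins, the dimensions are toy values, and the LSTM model (which requires sequence input and a deep learning framework) is omitted.

```python
# Sketch of per-user classification on spatial motion features.
# All data here is synthetic; the real study uses recorded VR head/controller motion.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_users, samples_per_user, n_features = 4, 50, 12  # hypothetical toy dimensions

# Synthetic stand-in for motion features (e.g., positions/rotations):
# each user gets a distinct mean offset so the classes are separable.
X = np.vstack([
    rng.normal(loc=u, scale=0.5, size=(samples_per_user, n_features))
    for u in range(n_users)
])
y = np.repeat(np.arange(n_users), samples_per_user)  # user ID labels

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# Three of the four model families named in the abstract.
models = {
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM": make_pipeline(StandardScaler(), SVC()),
}
accuracies = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
for name, acc in accuracies.items():
    print(f"{name}: {acc:.2f}")
```

Each model is wrapped in a `StandardScaler` pipeline so that scale-sensitive learners (MLP, SVM, KNN) are compared on normalized features, loosely analogous to the body-normalization variants evaluated in the study.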

Published

2024-10-13

Section

College of Engineering and Computing: Department of Computer Science