EYEditor - Smart Glasses for On-The-Go Text Editing
On-the-go text editing is difficult, yet frequently done in everyday life, forcing users into a heads-down posture that can be undesirable and unsafe. We present EYEditor, a heads-up, smart-glass-based solution that displays text on a see-through peripheral display and allows text editing with voice and manual input. The results of a controlled experiment showed that EYEditor outperformed smartphones when either the path or the task became more difficult, though its advantages became less salient when both editing and navigation were difficult.
Scope:
The scope of this project included conducting academic HCI research and publishing at an HCI conference.
Role:
In this project, I helped with the literature research, with designing and conducting an experiment, and with analysing and visualizing the results for the final research paper.
Introduction
Although mobile phone-based word processing helps greatly with people's on-the-go information and communication needs, it forces users into a heads-down posture in which they lose awareness of their environment. This posture demands a significant amount of their physical and cognitive resources and is awkward enough to cause significant health problems in the long run.
To overcome this problem, we propose an on-the-go heads-up text editing solution, EYEditor, which allows editing text on a smart glass. EYEditor adopts a hybrid approach of voice and manual input. Voice is used to modify the text content, while manual input through a wearable ring-mouse is used for text navigation and selection. Text content is rendered visually on the smart glass screen with a sentence-by-sentence presentation.
Comparative Study
In this study, our objective is to understand how the platform compares with the phone in handling the trade-offs between the users’ text-editing and path-navigation needs for on-the-go text-editing tasks.
To investigate the trade-offs in different situations, we consider two important factors: the difficulty of the editing/correction task and path difficulty.
Our exploration is based on three research questions —
Q1: How does each platform handle the visual/cognitive demands of editing on the go?
Q2: What role does posture play in the usability of each platform on various path-types?
Q3: Is our solution viable as a platform for on-the-go text editing?
Study Design
A repeated measures design with
2 Techniques (Glass, Phone) × 3 Path-types (Simple Path, Obstacle Path, Stair Path) × 2 Task-Complexities (Easy, Hard)
resulted in 12 conditions per participant.
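The full factorial crossing of the three factors can be sketched as follows (a minimal illustration of the design, not code from the study):

```python
from itertools import product

# The 2 x 3 x 2 repeated-measures design described above.
techniques = ["Glass", "Phone"]
path_types = ["Simple Path", "Obstacle Path", "Stair Path"]
task_complexities = ["Easy", "Hard"]

# Every participant experiences every combination of factor levels.
conditions = list(product(techniques, path_types, task_complexities))
print(len(conditions))  # 12 conditions per participant
```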
The Paths simulate realistic paths and obstacles encountered in on-the-go scenarios.
For the Correction Tasks, each text comprised 8 logically connected sentences on a given topic (selected from a diverse range of general topics) and was embedded with errors (avoiding bias due to error-processing depth) that served as correction opportunities for the participants.
Participants
12 volunteers (6M, 6F) aged 18 to 36 years.
An equal number of native and non-native English speakers were recruited to minimize any potential bias due to speech recognition accuracy.
Apparatus
For the glass technique, EYEditor was used.
For the phone technique, we let participants use their own mobile phones and edit in their preferred note-taking application, with any existing typing/correction aids (auto-correct, auto-complete, swipe typing, voice input, etc.), to allow for a realistic comparison of the two platforms.
Procedure
1. Environmental Conditions
Our study was conducted in indoor lighting conditions to ensure maximum text visibility on the smart glass.
2. Briefing the Participant
The experiment started with a briefing of the tasks.
3. Establishing the Preferred Walking Speed (PWS)
Participants then walked a 20-meter segment of each of the 3 paths twice, at their normal walking speed. The two trials were averaged to compute each participant's Preferred Walking Speed (PWS) on each path.
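The PWS computation reduces to speed = distance / time, averaged over the two trials. A minimal sketch (the trial times are made-up illustrative values, not study data):

```python
SEGMENT_LENGTH_M = 20.0  # length of the walked segment per trial

def preferred_walking_speed(trial_times_s):
    """Average walking speed (m/s) over the recorded trials."""
    speeds = [SEGMENT_LENGTH_M / t for t in trial_times_s]
    return sum(speeds) / len(speeds)

# Illustrative trial times for one participant on one path.
pws = preferred_walking_speed([16.0, 14.5])
```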
4. Training and Performing Task conditions
The glass block started with a reading exercise in which participants familiarized themselves with reading text on the glass screen. This was followed by a training session and a single warm-up session for practice. The reading, training, and practice sessions combined lasted between 20 and 25 minutes. The phone block was preceded by a warm-up session in which participants corrected a sample text on their phone while walking. For both techniques, we instructed participants to correct as many errors as possible while walking the path at their comfortable walking speed.
5. Questionnaire
After each technique block, participants filled out an unweighted NASA TLX questionnaire to report their subjective task load.
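The unweighted ("raw") NASA TLX score is simply the mean of the six subscale ratings. A minimal sketch with illustrative ratings (not study data):

```python
# The six NASA TLX subscales, each rated on a 0-100 scale.
SUBSCALES = ("mental", "physical", "temporal",
             "performance", "effort", "frustration")

def raw_tlx(ratings):
    """Unweighted NASA TLX: the mean of the six subscale ratings."""
    assert set(ratings) == set(SUBSCALES)
    return sum(ratings.values()) / len(SUBSCALES)

# Illustrative ratings for one participant after one block.
score = raw_tlx({"mental": 60, "physical": 40, "temporal": 55,
                 "performance": 30, "effort": 65, "frustration": 20})
```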
6. Interview
At the end of the study, they completed a subjective preference questionnaire and were then interviewed for about 5 to 7 minutes.
Results/Findings
Comparison of platforms on the task performance
The glass had a significant advantage over the phone when either the path or the task presented a high cognitive load.
Moreover, there was a general consensus among participants that alternating attention between the text on the glass screen and the visual surrounding felt much easier and more seamless as compared to the phone.
75% agreed that the glass would be more suitable for navigating difficult paths. However, 25% of the participants believed that the glass’s flexibility can instill a false sense of security while in effect drawing their attention away from hazards.
Effect of posture on task performance on each platform
Although there was no significant difference between ObstPath and StairPath in terms of participants’ average performance measures, the glass outperformed the phone on ObstPath but not on StairPath.
Analysis of the participants' video logs and interview data revealed that the path challenges of StairPath were more easily detected heads-down.
However, despite the path-navigation advantage of using the phone heads-down on the stairs, the performance of the glass was on par with the phone.
Viability of Solution
EYEditor indeed satisfied our criterion for viability as it offered significant task performance and path-navigation benefits over the phone for visually challenging conditions.
75% of the participants mentioned that the learning curve for using our system felt short and they could easily and quickly adapt to the system.
A key component of the acceptance came from the ability to correct by re-speaking, which was easier and faster. In general, Select-to-Edit was used as a fall-back when re-speaking failed due to either limitations of the algorithm or speech-recognition errors.
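As a rough illustration of how re-speaking-based correction could work (EYEditor's actual matching algorithm is not described here, so this is purely a hypothetical sketch), a re-spoken sentence might simply replace its closest-matching counterpart in the text:

```python
import difflib

def apply_respeak(sentences, respoken):
    """Hypothetical sketch: replace the original sentence most
    similar to the re-spoken one. Not EYEditor's actual algorithm."""
    # cutoff=0.0 guarantees a match; n=1 keeps only the best one.
    match = difflib.get_close_matches(respoken, sentences, n=1, cutoff=0.0)[0]
    sentences[sentences.index(match)] = respoken
    return sentences

# Usage: the re-spoken sentence overwrites the erroneous original.
text = ["The cat sta on the mat.", "It was sunny."]
apply_respeak(text, "The cat sat on the mat.")
```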
42% of the participants likened Select-to-Edit to phone-based editing, while 75% preferred re-speaking to even typing on the phone. On the other hand, 25% of the participants had an ongoing preference for the phone. They reported that they were more confident with the phone as they were familiar with it.
Design Effectiveness
Based on our results and user feedback, we believe that the smart glass-based display and the support for respeaking-based correction were the key contributors to the effectiveness of our approach. Yet, the other design considerations enhance the viability of our solution—while our proposed content presentation style allows optimal utilization of the display, the Select-to-Edit mode allows the user a finer-grained control over the editing process, and the manual input is efficient for text navigation and selection.
Figures: participants' preferred technique; participants' walking speeds in different conditions; participants' text correction performance in different conditions.
Conclusion
Overall, the study shows that our smart glass solution, EYEditor, offered certain advantages over the phone and helped maintain better path awareness. Hence, EYEditor might potentially be safer to use on the go. Yet we found a cognitive bottleneck when both editing and navigation were more challenging: there, the advantages of EYEditor became less salient, especially while walking downstairs.
Although we tested on-the-go scenarios with realistic path challenges, we could not extend the study outdoors without sacrificing maximum text visibility on the smart glass display. How the ambient light and noise of outdoor conditions would affect our system's performance remains to be seen. Also, voice interaction can sometimes be undesirable in public for security/social reasons. The safety and social factors warrant in-depth investigation in future work.