A Virtual Haptic Platform to Assist Seeing Impaired Learning: Proof of Concept

By Yaser Dhaher and Robert Clements

Yaser Dhaher is a Professor in the Department of Mathematical Sciences at Kent State University.

Robert Clements is an Assistant Professor in the Department of Biological Sciences at Kent State University.

Abstract

This project is designed to remove barriers and enhance the ability of teachers to engage visually impaired learners by providing tools that help them visualize spatial concepts through touch. Studies indicate that touching a virtual representation of a digital object affords blind users a benefit similar to that which a diagram affords sighted users. Here we present a proof of concept: an inexpensive haptic platform in the form of a tactile display that uses vibration motors to give users the ability to touch virtual objects. The platform is cost effective, is simple to use via a browser-based interface, and can load textures, height maps, and meshes to convey many types of spatial concepts. Our proof-of-concept device enables existing digital resources to be used to provide enriched experiences for the blind and visually impaired.

Keywords

Haptic learning, virtual learning, blind and visually impaired, proof of concept

Introduction

Visual impairment is a major educational barrier for students as well as instructors, and the development of novel tools and experiences is of paramount importance to removing this obstacle. Evidence suggests that blind people have an enhanced ability to detect objects using haptic discrimination as well as augmented tactile acuity (Norman & Bartholomew, 2011). In addition to enhanced tactile acuity, blind people are known to activate regions of the visual cortex (the same regions activated when sighted people view objects) upon vibrotactile stimulation (Burton, Sinclair, & McLaren, 2004). This pathway offers an intriguing way to activate regions involved in visual perception through touch and thereby enhance the learning of spatial and visual concepts in blind students. Further, by exploiting our endogenous capacity for neural plasticity, multisensory perceptual learning can convey visual information to the visually impaired without visual stimulation, using touch or sound instead (Proulx, Brown, Pasqualotto, & Meijer, 2014). This capacity for adaptive plasticity is exemplified in functional magnetic resonance imaging (fMRI) studies indicating that braille reading in the blind is associated with activation of the visual cortex as well as altered responses in areas associated with non-visual stimuli (Burton, McLaren, & Sinclair, 2006). These findings indicate that touch may be exploited to enhance learning outcomes for the visually impaired. The current project is designed to harness these mechanisms, and the brain's inherent capacity to reorganize neural pathways, to give blind and visually impaired learners the ability to visualize spatial concepts and objects and thereby enhance their ability to learn.

Existing studies point to the use of computer-based haptics for supporting the visually impaired in learning environments (Sallnas, Moll, & Severinson-Eklundh, 2007) and indicate that such technologies have significant potential for blind computer users (Colwell, Petrie, Kornbrot, Hardwick, & Furner, 1998; Petrie et al., 1997; Sjöström & Rassmus-Gröhn, 1999). The potential benefits include navigating unknown spaces (Lahav & Mioduser, 2015; Lahav, Schloerb, Kumar, & Srinivasan, 2012), learning braille, visualizing microscope imagery (Zhang, Duerstock, & Wachs, 2017), and serving as an accessible keyboard device (Southern, Clawson, Frey, Abowd, & Romero, 2012). In addition, strategies have been identified for the creation of computer-based interfaces for the blind (Sjöström, 2001), as have techniques for rendering images for haptic devices (Kurze, 1997). It should also be noted that interface design is critical to the adoption and use of any haptic system designed for the blind and visually impaired (Levesque, 2005). Clearly, haptic interaction with computer-based software provides a unique and accessible method to augment learning for the visually impaired and beyond.

Recently, our group acquired a haptic device (Geomagic, 2016; Massie & Salisbury, 1994) to test the feasibility of using tactile feedback for teaching the blind. The device consists of a stylus attached to a computer that allows the user to physically manipulate and feel objects displayed on screen. Existing studies have used a similar approach (Darrah, 2013; Kyung, Lee, & Park, 2008) to convey haptic representations of computer-generated objects and suggest that the method allows users to develop a more three-dimensional (3D) understanding of spatial objects (Jones, Bokinski, Tretter, & Negishi, 2005). Some of this research has resulted in commercially available hardware and software (the eTouchSciences platform1) and underscores the potential of the technology. In addition, pilot studies in our lab using the device yielded exciting results: visually impaired users were able to autonomously recognize geometric shapes and representations in less than 10 seconds, a feat practically impossible without an available physical model. The ability to rapidly generate any model, and to create dynamic models that display complex phenomena relevant to teaching science, technology, engineering, and math (STEM), such as representations of closed planar curves, regular polygons, smooth closed curves, and 3D convex solids, is particularly attractive. While the initial studies show great promise, a number of significant drawbacks to this technology have been uncovered. The stylus approach can provide rapid understanding of some geometric shapes, but it does not simulate natural touch in an accessible way (Patomäki, Raisamo, Salo, Pasto, & Hippula, 2004). Devices are available that simulate touch in a more natural way2, but these are even more costly and out of reach for most (>$50,000). Additionally, developing software for these devices requires significant programming knowledge, skill, and time. The goal of the current project is to develop an inexpensive and accessible hardware and software platform for presenting 3D and 2D objects and teaching spatial concepts to the blind and visually impaired through touch. The completed platform will enhance the ability of instructors to engage visually impaired learners by providing tools to help visualize STEM-relevant spatial concepts through the use of touch.

Development of the Platform

The developed haptic platform consists of hardware components designed to track hand movement in three-dimensional space and a glove-type device that provides feedback to the fingertips when they touch a virtual object. The software component of the platform uses a web-based interface to load virtual objects and determine when tracked fingertips are touching the virtual models. Hand position data, along with whether each fingertip is touching an object, is passed to a WebSocket server (Fette & Melnikov, 2011) and sent via a software interface to a hardware controller that activates the tactile sensation on the user's fingertips (Fig. 1).

Figure 1. Software and hardware implementation. The diagram below depicts the information workflow, including the hardware and software components of the system. Finger position data is captured by the hardware Leap Motion3 hand-tracking device (A) and sent to the software three.js web interface (B). The three.js software component detects whether fingertips are touching a virtual object and sends data via the software WebSocket server (C) to the software TCL/TK interface (D). The TCL/TK Arduino interface software transmits finger-touch data to the hardware Arduino device (E), which subsequently activates vibration motors on the user’s hands as necessary (F).

Figure 1 is a diagram depicting the hardware and software organization of the platform. The top box displays the software components (including the three.js web interface, WebSocket server, and Arduino interface), while the lower box displays the hardware (Leap Motion and Arduino).

Hardware

Hand Tracking: Hands are tracked using the commercially available Leap Motion device, which accurately and efficiently tracks multiple hands in the space above it and can also identify hand gestures. Further, the device is relatively inexpensive (~$70), and a simple programming interface provides access to its functionality.
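To give a sense of that interface, the minimal sketch below polls the LeapJS JavaScript client (described under Software below) for fingertip positions. It is illustrative only, assuming the leap.js script has been loaded in the page, and is not the platform's actual tracking code.

// Minimal LeapJS polling loop (browser). Fingertip positions are reported
// in millimeters relative to the center of the Leap Motion controller.
Leap.loop(function (frame) {
  frame.hands.forEach(function (hand) {
    hand.fingers.forEach(function (finger) {
      var tip = finger.tipPosition; // [x, y, z] in millimeters
      console.log(hand.type + " finger " + finger.type + ": " +
                  tip[0].toFixed(1) + ", " + tip[1].toFixed(1) + ", " +
                  tip[2].toFixed(1));
    });
  });
});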

Haptic Feedback: Haptic feedback is provided by inexpensive (<$1) 10 mm flat vibration micro motors attached to the user's fingertips by a glove-like device. An Arduino board running the Firmata protocol (Arduino, 2017), driven from the host computer through a TCL Firmata client (TFirmata, n.d.), activates each vibration motor when hand tracking and the software interface indicate that the finger is coincident with the virtual object.

Software

Web Interface: Hand tracking is performed within a cross-platform browser interface written in JavaScript, using the three.js library (Doob, 2016) to draw interactive 3D content with WebGL (Jackson & Gilbert, 2016) and LeapJS (Ehrlich, 2016), the JavaScript client for the Leap Motion controller. Through the LeapJS interface, the hands tracked by the Leap Motion device can be drawn within a three.js 3D scene. In addition, we developed a simple file loader for partially transparent 2D textures, height maps, and polygonal meshes (.obj files) (Bourke, 2016). Using this approach, a user's fingertip positions relative to 3D objects, textures, or terrain can be accurately tracked within 3D space. If a fingertip coincides with a digital object, the browser sends a message via the WebSocket protocol (Fette & Melnikov, 2011) to the locally running WebSocket server.
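As an illustration of the collision test, the sketch below checks each fingertip against a virtual object's bounding box and sends the resulting 0/1 flags over a WebSocket. It is a minimal example rather than the platform's actual code: toSceneCoords (the Leap-to-scene coordinate mapping), targetMesh, and the port number are placeholders.

// Illustrative only: one 0/1 flag per tracked fingertip.
var socket = new WebSocket("ws://localhost:8001");          // port is an assumption
var targetBox = new THREE.Box3().setFromObject(targetMesh); // bounds of the virtual object

Leap.loop(function (frame) {
  var flags = [];
  frame.hands.forEach(function (hand) {
    hand.fingers.forEach(function (finger) {
      // toSceneCoords is a placeholder for the Leap-to-scene mapping
      var tip = toSceneCoords(finger.tipPosition);          // THREE.Vector3
      flags.push(targetBox.containsPoint(tip) ? 1 : 0);
    });
  });
  if (socket.readyState === WebSocket.OPEN) {
    socket.send(JSON.stringify(flags));
  }
});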

WebSocket Server: Node.js (2016) was used to implement the WebSocket server, via a WebSocket protocol module (Souza, 2016) that was critical to the project. The WebSocket server is started, and the user connects to it automatically from within the browser when the tracking site is loaded. As the hands move through space, the browser detects when fingertips coincide with a virtual object in the 3D scene and sends a value of 0 or 1 for each finger, depending on whether that fingertip is touching a virtual object. As messages arrive, the WebSocket server forwards them to the software controlling the Arduino.
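A minimal relay of this kind, sketched with the nodejs-websocket module cited above, might look like the following; the port number is arbitrary, and the actual server likely performs additional bookkeeping.

// Sketch of the relay: finger-flag messages arriving from the browser are
// forwarded to every other connected client (e.g., the Arduino controller).
var ws = require("nodejs-websocket");

var server = ws.createServer(function (conn) {
  conn.on("text", function (message) {
    server.connections.forEach(function (other) {
      if (other !== conn) {
        other.sendText(message);
      }
    });
  });
  conn.on("close", function () {
    // a client disconnected; nothing to clean up in this sketch
  });
});
server.listen(8001); // port chosen for illustration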

Arduino Activation: The final step in our pipeline is to provide user feedback through the vibration motors attached to an Arduino board (What is Arduino?, 2017) running the Firmata protocol. Firmata is a standard program that is uploaded to the Arduino and provides the connected host computer with a basic interface to many of the board's capabilities. We opted to use the Tool Command Language (TCL; Tcl/Tk [8.6], 2016) to receive WebSocket messages from the server and to drive the Arduino using tfirmata, a TCL implementation of Arduino Firmata 2.3. As messages are received by the TCL module from the Node.js server, events are processed and the vibration motors are activated or deactivated as appropriate. In this scenario, as a user's hands are tracked in the browser and a fingertip touches a virtual object, the WebSocket server is notified and passes the message to the TCL module, which activates the appropriate vibration motor on the corresponding physical finger.
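The platform performs this step in TCL via tfirmata, and that code is not reproduced here. Purely as an illustration of the same idea, an equivalent host-side client written in JavaScript with the johnny-five Firmata library might look like the sketch below; the pin assignments, server URL, and message format (a JSON array of 0/1 flags) are assumptions.

// Illustrative Node.js alternative to the TCL/tfirmata controller.
var ws = require("nodejs-websocket");
var five = require("johnny-five");

var board = new five.Board(); // expects a Firmata sketch on the Arduino
board.on("ready", function () {
  var motorPins = [3, 5, 6, 9]; // one pin per vibration motor (assumed)
  var motors = motorPins.map(function (pin) { return new five.Led(pin); });

  var conn = ws.connect("ws://localhost:8001"); // the WebSocket server above
  conn.on("text", function (message) {
    var flags = JSON.parse(message);            // e.g., [0, 1, 1, 0]
    flags.forEach(function (flag, i) {
      if (motors[i]) {
        if (flag) { motors[i].on(); } else { motors[i].off(); } // vibrate while touching
      }
    });
  });
});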

Results

The completed haptic platform performs with little perceptible lag and provides an interactive experience for touching virtual objects. The final hardware and software setups of the haptic platform are displayed in Figures 2 and 3, respectively.

Figure 2. The final hardware configuration of the haptic display consists of (A) the Leap Motion tracking device surrounded by a plastic frame holding a horizontal acrylic screen approximately 24 inches by 24 inches, and (B) vibration motors situated on the user’s index and middle fingers and attached to an Arduino board via long electrical wires. To use the haptic display (C), the user places the fingertips on the horizontal acrylic glass screen to trace virtual objects appearing on top of the screen.

Figure 2 contains three images of the working system. The first image displays a plastic frame with an acrylic sheet placed on top and a Leap Motion tracking device on the table in the middle of the frame below it. The second and third images show a hand with vibration motors attached to the index and middle fingers placed on top of the acrylic sheet, demonstrating the system in use.

Figure 3. Software components of the haptic system consist of (A) a browser-based visualization of the tracked hands and any objects in the 3D scene, (B) a WebSocket server that receives incoming messages about finger position from the frontend browser component, and (C) a TCL program that receives WebSocket events, provides visual feedback about finger status, and activates the Arduino-controlled vibration motors.

Figure 3 displays three images of the software in use. The top image is a digital representation of two hands tracked in space; the fingers are represented as cylinders and the fingertips as spheres. The second image is a command-prompt readout of messages being passed between the software elements. The final image contains 10 numbered boxes representing each finger of the tracked hands; a box turns red when its finger is touching a virtual object and the corresponding vibration motor is triggered.

During construction of the haptic display, a number of issues surfaced that shaped the final design. Initial tests made it apparent that localizing an object within 3D space is conceptually difficult without prior knowledge of the scene and object; essentially, users had difficulty finding objects within the unknown virtual space. To alleviate this issue, we created a frame holding an acrylic glass sheet above the tracking device to limit finger movement to a two-dimensional plane. The acrylic glass provides a standardized starting position for the fingertips and a place to position a digital object so it can be rapidly located by the user. It also provides a frame of reference and physically prevents the user from moving beyond the virtual tracking window. In addition, audio feedback, in the form of a continuous beep, indicates when the user's hands are no longer being tracked and should be moved back into the tracking window. Essentially, this created a haptic display in which touching different regions of the acrylic glass sheet provides feedback about the presence of digital objects. Localizing the haptic display to a plane in virtual space requires calibrating the software to ensure that objects are placed directly on top of the virtual plane. This is accomplished by 1) localizing the position of the hands in space and modifying the height variable of objects placed in the virtual scene (see the sketch below), or 2) physically adjusting the height of the acrylic glass sheet to match the virtual plane position.

The initial design of the platform used five-finger tracking and feedback on both hands. Early tests indicated that it is difficult to place all 10 fingers flat on the screen, and that doing so does not reflect the strategies visually impaired readers typically use for braille, which rely on the index and middle fingers of both hands. For these reasons, we decided to provide feedback to only the index and middle fingers of each hand; however, the platform still natively supports ten-finger feedback for both hands.
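As a concrete illustration of calibration option 1 above, the sketch below samples the fingertip heights while the fingers rest on the acrylic sheet and moves each virtual object onto that plane. The toSceneCoords mapping and the object list are placeholders rather than the platform's actual code.

// Illustrative calibration: with fingertips resting on the acrylic sheet,
// average their height in scene units and place virtual objects on that plane.
function calibratePlaneHeight(frame, objects) {
  var heights = [];
  frame.hands.forEach(function (hand) {
    hand.fingers.forEach(function (finger) {
      heights.push(toSceneCoords(finger.tipPosition).y); // placeholder mapping
    });
  });
  if (heights.length === 0) { return; } // hands not currently tracked
  var planeY = heights.reduce(function (a, b) { return a + b; }, 0) / heights.length;
  objects.forEach(function (obj) { obj.position.y = planeY; });
}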

For extensibility, and to provide the ability to load a variety of object types, input routines were incorporated into the platform for 2D transparent textures (Fig. 4a), terrain height maps (Fig. 4b), and 3D models (Fig. 4c). Permitting input of these varied data types dramatically increases the system's capability to convey a wide range of spatial concepts, including two-dimensional geometric shapes (e.g., a triangle or hexagon), 3D terrain maps (e.g., a 3D atlas or complex mathematical functions), and 3D meshes (e.g., cubes, cylinders, anatomical models, and digitally preserved historical artifacts). The next phase is to evaluate the efficacy of the haptic display in the laboratory and classroom for instructing visually impaired learners and to extend the platform through the creation of learning modules.
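For readers interested in how such loaders might be wired up, the sketch below shows one plausible implementation of the three input paths using standard three.js APIs. The file names are placeholders, scene is an existing THREE.Scene, THREE.OBJLoader ships with the three.js examples rather than the core library, and the displacement-map approach to height maps is only one possible implementation, not necessarily the platform's.

// (A) Partially transparent 2D texture mapped onto a plane.
new THREE.TextureLoader().load("triangle.png", function (texture) {
  var material = new THREE.MeshBasicMaterial({ map: texture, transparent: true });
  scene.add(new THREE.Mesh(new THREE.PlaneGeometry(10, 10), material));
});

// (B) Height map displacing a subdivided plane into a terrain surface.
new THREE.TextureLoader().load("moon_heightmap.png", function (heightMap) {
  var material = new THREE.MeshPhongMaterial({
    displacementMap: heightMap,
    displacementScale: 2
  });
  scene.add(new THREE.Mesh(new THREE.PlaneGeometry(10, 10, 256, 256), material));
});

// (C) Polygonal mesh loaded from a Wavefront .obj file.
new THREE.OBJLoader().load("torus.obj", function (object) {
  scene.add(object);
});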

Figure 4. The ability to load different data types into the software provides end users with an extensible platform capable of conveying a wide range of spatial concepts. Interfaces are provided to load (A) 2D transparent textures to convey geometric shapes (here, a blue triangle appears in front of a visualization of finger position in the virtual scene), (B) height maps to display complex surfaces or mathematical functions (a portion of height-map data from the moon appears as a landscape with craters), and (C) 3D meshes to present arbitrary spatial structures (a torus mesh is displayed behind the user’s digital finger representation).

Figure 4 includes three pictures of the system's virtual display in use. In each of the three images, the tracked finger positions (displayed as cylinders for fingers and spheres for fingertips) appear together with a digital object for interaction. The first image contains a simple 2D triangle with the tracked fingers, the second image contains height-map data and fingers (a topographical map of the surface of the moon), and the final picture includes a 3D object known as a torus, displayed as a wireframe mesh.

Implications for Practitioners and Families

Here we present a new proof-of-concept platform in the form of a haptic display designed to augment learning for the visually impaired. The device is inexpensive, simple to use, and provides an interface to load 3D objects, 2D images, and terrain/height-map-based data. In practice, the device allows a user to trace an object appearing on top of the virtual display, with tactile feedback delivered through vibration motors. Compared to existing systems, we have focused on reducing cost (~$100) and providing a simple interface that requires no programming. Commercially available systems cost thousands of dollars and require coding knowledge that is prohibitive for most instructors teaching the blind and visually impaired. In addition, the platform may be used in both 2D and 3D environments; with the screen removed, the user may interact with spatial objects in free space. The latter also has potential applications in virtual reality experiences that require hand representations coupled with haptic feedback. Moving forward, we aim to develop specific modules designed to teach foundational concepts to the visually impaired, as well as to incorporate dynamic data into the system to convey four-dimensional concepts. As the new materials are developed, we will begin empirically testing efficacy in the lab and classroom, and we will continue our current electroencephalography (EEG) studies designed to further identify the neurobiological mechanisms underlying spatial concept formation associated with tactile activation.

The developed platform will impact the learning of disabled students by partially removing visual barriers, permitting blind students to visualize virtual objects and concepts through the use of touch. A major benefit of the current platform is the ability to generate a haptic representation of any existing digital object relevant to learning outcomes. With integration into our classrooms and the development of new learning materials, we will be able to provide enhanced and unique experiences for disabled students aimed at improving their academic competence. In addition, we are engaging collaborators active in the regional blind community for outreach activities with the blind and visually impaired, and we intend to make the products of this project available via an online portal.

References

Arduino: Firmata library. (2017). [Source Code]. Retrieved from https://www.arduino.cc/en/Reference/Firmata

Bourke, P. (2016). Object files (.obj). Retrieved from http://paulbourke.net/dataformats/obj/

Burton, H., McLaren, D. G., & Sinclair, R. J. (2006). Reading embossed capital letters: An fMRI study in blind and sighted individuals. Human Brain Mapping, 27(4), 325-339. doi: 10.1002/hbm.20188

Burton, H., Sinclair, R. J., & McLaren, D. G. (2004). Cortical activity to vibrotactile stimulation: An fMRI study in blind and sighted individuals. Human Brain Mapping, 23(4), 210-228. doi: 10.1002/hbm.20064

Colwell, C., Petrie, H., Kornbrot, D., Hardwick, A., & Furner, S. (1998). Haptic virtual reality for blind computer users. In ASSETS'98: Proceedings of the Third International ACM Conference on Assistive Technologies (pp. 92-99). New York, New York: ACM Press.

Darrah, M. A. (2013). Computer haptics: A new way of increasing access and understanding of math and science for students who are blind and visually impaired. Journal of Blindness Innovation and Research, 3(2). Retrieved from https://nfb.org/images/nfb/publications/jbir/jbir13/jbir030202abs.html. doi: http://dx.doi.org/10.5241/3-47

Doob, M. (2016). JavaScript 3D Library [Source code]. Retrieved from https://threejs.org/

Ehrlich, P. (2016). JavaScript client for the Leap Motion Controller [Source code]. Retrieved from https://github.com/leapmotion/leapjs

Fette, I., & Melnikov, A. (2011). The WebSocket Protocol. Retrieved from https://tools.ietf.org/html/rfc6455

Geomagic. (2016). The Geomagic Touch Haptic Device. Retrieved from http://www.geomagic.com/en/products/phantom-omni/overview

Jackson, D., & Gilbert, J. (2016). WebGL 2.0 Specification. Work in progress. Retrieved from https://www.khronos.org/registry/webgl/specs/latest/2.0/

Jones, M. G., Bokinski, A., Tretter, T., & Negishi, A. (2005). A comparison of learning with haptic and visual modalities. Haptics-e: The electronic journal of haptics research, 3(6). Retrieved from http://www.haptics-e.org/Vol_03/he-v3n6.pdf

Kurze, M. (1997). Rendering drawings for interactive haptic perception. In CHI '97: Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (pp. 423-430). New York, New York: ACM.

Kyung, K.-U., Lee, J.-Y., & Park, J. (2008). Haptic stylus and empirical studies on braille, button, and texture display. Journal of Biomedicine and Biotechnology, 2008. doi: 10.1155/2008/369651

Lahav, O., & Mioduser, D. (2015). Multisensory virtual environment for supporting blind persons' acquisition of spatial cognitive mapping – A case study. Retrieved from https://www.researchgate.net/publication/228605342_Multisensory_Virtual_Environment_for_Supporting_Blind_Persons'_Acquisition_of_Spatial_Cognitive_Mapping-a_Case_Study

Lahav, O., Schloerb, D. W., Kumar, S., & Srinivasan, M. A. (2012). A virtual environment for people who are blind – A usability study. Journal of Assistive Technologies, 6(1). doi: 10.1108/17549451211214346

Levesque, V. (2005). Blindness, Technology and Haptics (Report No. CIM-TR-05.08). Retrieved from Centre for Intelligent Machines, McGill University: http://www.cim.mcgill.ca/~haptic/pub/VL-CIM-TR-05.pdf

Massie, T. H., & Salisbury, J. K. (1994). The PHANToM haptic interface: A device for probing virtual objects. Proceedings ASME Symposium Haptic Interfaces for Virtual Environment and Teleoperator Systems, 55(1), 295–300. Retrieved from https://alliance.seas.upenn.edu/~medesign/wiki/uploads/Courses/Massie94-DSC-Phantom.pdf

Node.js [Computer program]. (2016). Retrieved from https://nodejs.org/en/

Norman, J. F., & Bartholomew, A. N. (2011). Blindness enhances tactile acuity and haptic 3-D shape discrimination. Attention, Perception, & Psychophysics, 73(7), 2323–2331. doi: 10.3758/s13414-011-0160-4

Patomäki, P., Raisamo, R., Salo, J., Pasto, V., & Hippula, A. (2004). Experiences on haptic interfaces for visually impaired young children. In ICMI ’04: Proceedings of the 6th international conference on multimodal interfaces (pp. 281-288). doi: 10.1145/1027933.1027979

Petrie, H., Johnson, V., McNally, P., O'Neill, A-M., Majoe, D., & Morley, S. (February, 1997). Inexpensive tactile interaction for blind computer users: two application domains. Paper presented at IEE Colloquium on Developments in Tactile Displays, London, United Kingdom. doi: 10.1049/ic:19970081

Proulx, M. J., Brown, D. J., Pasqualotto, A., & Meijer, P. (2014). Multisensory perceptual learning and sensory substitution. Neuroscience and Biobehavioral Reviews, 41, 16–25.

Sallnas, E.-L., Moll, J., & Severinson-Eklundh, K. (2007). Group work about geometrical concepts among blind and sighted pupils using haptic interfaces. In World Haptics 2007: Second Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (pp. 330-335). doi: 10.1109/WHC.2007.61

Sjöström, C. (2001). Designing haptic computer interfaces for blind people. Proceedings of the Sixth International Symposium on Signal Processing and its Applications, 1, 68-71. doi: 10.1109/ISSPA.2001.949777

Sjöström, C., & Rassmus-Gröhn, K. (1999). The sense of touch provides new computer interaction techniques for disabled people. Technology & Disability, 10(1), 45-52.

Southern, C., Clawson, J., Frey, B., Abowd, G., & Romero, M. (2012). BrailleTouch: Mobile touchscreen text entry for the visually impaired. In Proceedings of the 14th International Conference on Human-computer Interaction with Mobile Devices and Services Companion (pp. 155-156). Mobile HCI ‘12. doi: 10.1145/2371664.2371696

Souza, G. (2016). A nodejs module for websocket server and client. Retrieved from https://www.npmjs.com/package/nodejs-websocket

Tcl/Tk (8.6) [Programming language and graphical user interface toolkit]. (2016). Retrieved from https://tcl.tk/

TFirmata: Tcl implementation of Arduino Firmata 2.3. (n.d.) Retrieved from http://pdt.github.io/tfirmata.html

Three.JS JavaScript 3D Library [Source Code] (2017). Retrieved from https://github.com/mrdoob/three.js/

What is Arduino? (2017). Retrieved from https://www.arduino.cc/ 

Zhang, T., Duerstock, B. S., & Wachs, J. P. (2017). Multimodal perception of histological images for persons who are blind or visually impaired. ACM Transactions on Accessible Computing (TACCESS), 9(3). doi: 10.1145/3026794

1. eTouchSciences Platform, http://etouchsciences.com/  


The Journal of Blindness Innovation and Research is copyright (c) 2017 to the National Federation of the Blind.