Here, we take a different approach to brain-computer interfaces: one that augments traditional input devices such as the mouse and keyboard and targets a wider group of users. We use brain-sensor data as a passive, implicit input channel that expands the bandwidth between the human and the computer by providing extra information about the user.
The Brainput system learns to identify brain activity patterns that occur during multitasking. It provides a continuous, supplemental input stream to an interactive human-robot system, which uses this information to modify its behavior to better support multitasking. We demonstrate that non-invasive methods can detect signals from the brain that users naturally and effortlessly generate while using a computer system. Used with care, this additional information can lead to systems that respond appropriately to changes in the user’s state. Our experimental study shows that Brainput significantly improves several performance metrics, as well as subjective NASA Task Load Index (NASA-TLX) scores, in a dual-task human-robot activity.
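To make the idea of a passive, streaming brain-input channel concrete, the sketch below shows one way such a loop could be structured: summarize a short window of fNIRS measurements into features, classify it with a pre-trained model, smooth the predictions, and hand the resulting state estimate to an adaptive system. This is an illustrative sketch under assumed names (`PassiveBrainInput`, `extract_features`, `adapt_robot_behavior`, the `"high_multitasking"`/`"normal"` labels, window length, and channel count are all hypothetical), not the actual Brainput implementation or its feature set.

```python
import numpy as np
from collections import deque

# Hypothetical parameters; real fNIRS setups and analysis windows vary.
WINDOW_SECONDS = 30
SAMPLE_RATE_HZ = 10
N_CHANNELS = 4

def extract_features(window):
    """Summarize a (samples x channels) window with per-channel mean and slope.
    Illustrative only; a real feature set would be chosen empirically."""
    means = window.mean(axis=0)
    slopes = np.polyfit(np.arange(len(window)), window, 1)[0]  # per-channel trend
    return np.concatenate([means, slopes])

class PassiveBrainInput:
    """Streams a coarse cognitive-state estimate to an adaptive system."""
    def __init__(self, classifier, smoothing=5):
        self.classifier = classifier            # pre-trained model with a predict() method
        self.recent = deque(maxlen=smoothing)   # smooth noisy per-window predictions

    def update(self, window):
        features = extract_features(window).reshape(1, -1)
        label = self.classifier.predict(features)[0]
        self.recent.append(label)
        # Majority vote over recent windows gives a stabler state estimate.
        return max(set(self.recent), key=list(self.recent).count)

def adapt_robot_behavior(state):
    """Placeholder adaptation: defer non-critical interaction when load seems high."""
    if state == "high_multitasking":
        print("Robot: deferring non-critical requests, acting more autonomously")
    else:
        print("Robot: normal interaction mode")

if __name__ == "__main__":
    class ThresholdClassifier:
        """Stand-in for a trained model: thresholds mean activation of the window."""
        def predict(self, X):
            return ["high_multitasking" if X[0, :N_CHANNELS].mean() > 0.5 else "normal"]

    brain_input = PassiveBrainInput(ThresholdClassifier())
    fake_window = np.random.rand(WINDOW_SECONDS * SAMPLE_RATE_HZ, N_CHANNELS)
    state = brain_input.update(fake_window)
    adapt_robot_behavior(state)
```

Each call to `update()` takes the most recent window of measurements and returns a smoothed state label; because the channel is passive, the user never has to issue explicit brain "commands" for the adaptation to occur.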
Related Publications
- E.T. Solovey, P. Schermerhorn, M. Scheutz, A. Sassaroli, S. Fantini, and R.J.K. Jacob. Brainput: Enhancing Interactive Systems with Streaming fNIRS Brain Input. Proc. ACM Conference on Human Factors in Computing Systems (CHI ’12), ACM Press, 2012. Best Paper Nominee. [Acceptance Rate: 5%] [link]