Touch Sensitive Knitted Fabric for HCI

We are working to prototype, build and evaluate novel systems that integrate knit sensors with virtual and augmented reality. These emerging technologies are complementary, and combining them opens up new possibilities for improving human performance as well as for creating playful experiences. Recent work has shown that versatile and durable knit touch sensors can be manufactured through standard fabrication methods with little post-processing. These sensors have the potential to be integrated into clothing, hats, furniture, vehicles, and anywhere else that fabric is found today. Such smart fabrics enable accurate touch input through capacitive sensing, providing an always-available input modality, and they may also carry additional sensors to monitor aspects of human performance, behavior and health.
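The capacitive sensing step can be illustrated with a minimal sketch. This is not the lab's actual pipeline: the function name, the simulated ADC readings, and the threshold values here are all hypothetical, but the baseline-plus-threshold logic is a standard way a raw capacitance reading becomes a touch event.

```python
# Hypothetical sketch of touch detection on a knitted capacitive sensor.
# Touching the fabric increases capacitance at the electrode, which shows up
# as a higher raw reading (e.g., a longer RC charge time reported by an ADC).
# All names and thresholds are illustrative, not from the actual system.

def detect_touches(readings, baseline_window=5, threshold=8.0):
    """Return indices where a touch is detected.

    A baseline is estimated from the first `baseline_window` samples;
    a sample counts as a touch when it exceeds the baseline by more
    than `threshold` (in raw ADC units).
    """
    baseline = sum(readings[:baseline_window]) / baseline_window
    return [i for i, r in enumerate(readings) if r - baseline > threshold]

# Simulated raw readings: quiet baseline around 100, two touch events.
raw = [100, 101, 99, 100, 100, 100, 115, 118, 101, 100, 120, 99]
print(detect_touches(raw))  # touches at indices 6, 7, 10
```

In a real system the baseline would be re-estimated continuously to track drift from humidity and wear, and adjacent above-threshold samples would be debounced into a single touch event.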

The information provided by VR or AR could supply supplemental feedback (e.g., status updates or confirmation of input), but it could also directly augment the fabric with indications of the functionality currently available on the touch sensor. For example, a generic fabric-based sensor could serve multiple purposes depending on the situation, just as a mouse or keyboard is highly multi-purpose today. By overlaying virtual information on the fabric, the touchable areas gain visual affordances that indicate functionality directly on the general-purpose sensor. This would enable the same piece of fabric to be used in different ways, mediated by the VR/AR system and software, which can change dynamically.


Publications

McDonald, D.Q., Mahajan, S., Vallett, R., Dion, G., Shokoufandeh, A., Solovey, E.T. (2022). Interaction with Touch-Sensitive Knitted Fabrics: User Perceptions and Everyday Use Experiments. In CHI Conference on Human Factors in Computing Systems (CHI ’22), April 29-May 5, 2022, New Orleans, LA, USA. ACM, New York, NY, USA, 20 pages.

Mahajan, S., McDonald, D.Q., Vallett, R.J., Liu, F., Solovey, E.T. (2022). Exploring Use of AR and Soft Knitted Sensor Technology for Co-located Parent-Child Quality Time. ACM GROUP’22 Workshop on Technologies for Children at Home Exploring Ways To Support Caregivers With Child-friendly Media Technologies For The Home. 5 pages.

McDonald, D.Q., Vallett, R., Solovey, E., Dion, G., Shokoufandeh, A. (2020). Knitted Sensors: Designs and Novel Approaches for Real-Time, Real-World Sensing. In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), 4 (4), Article 145 (December 2020), 25 pages.

Social Media Polarization

Effects of Digital Jury Moderation on the Polarization of Social Media Users

As polarization among political officials has increased dramatically in recent years, the social media landscape has followed suit. The increased prevalence of disinformation, inflammatory rhetoric, and harassment online has intensified polarization in turn, propelling a feedback loop that erodes democratic norms. Effective moderation of social media platforms can help address this problem.

This project explores how implementing a democratic, peer-based “digital jury” moderation system for social media platforms would impact polarization online, compared to traditional, “top-down” moderation that is conducted by employees of the platforms themselves.

Towards Biometric Input for Multi-Agent Adaptive Human-Robot Collaboration

The overall goal of our research is to investigate cognitive-physical adaptation during human-robot collaboration using brain and physiological signals. We aim to develop robot systems that can adapt to the humans working with them, and to help humans better assess how the human-robot team is working as a unit, in order to improve overall outcomes. This project is a collaboration between Erin Solovey, Rodica Neamtu and Yanhua Li at WPI and Holly Yanco, Adam Norton, Yi-Ning Wu, and Pei-Chun Kao at UMass-Lowell. It is funded through the WPI-UML Seed Grants.


Positions

We are looking for motivated undergraduate and graduate research assistants for this project. We have both paid and volunteer positions, or students can enroll in an independent study for credit. Work will involve experiment design, running human subjects experiments, as well as data analysis and machine learning on multidimensional time series data.

Integrating Non-Invasive Neuroimaging and Educational Data Mining to Improve Understanding of Robust Learning Processes

Could a computer detect a person’s emotions? Could it tell when someone is frustrated over something like a tricky math problem on an online tutoring program? Could it detect deep reflection on a problem?

This project, in collaboration with Erin Walker at Arizona State University and Kate Arrington at Lehigh University, is part of NSF’s support of Understanding the Brain and the BRAIN Initiative, a coordinated research effort that seeks to accelerate the development of new neurotechnologies. We will explore the use of measurements of brain activity from lightweight brain sensors alongside student log data to understand important mental activities during learning. It also will allow us to explore novel human-computer interaction paradigms for utilizing sensors that provide passive, continuous, implicit, but noisy input to interactive systems. This has implications for the growing fields of brain-computer interfaces, wearable computing, physiological computing, and ubiquitous computing.


Positions

We are looking for motivated undergraduate and graduate research assistants for this project. We have both paid and volunteer positions, or students can enroll in an independent study for credit. Work will involve experiment design, running human subjects experiments, as well as data analysis and machine learning on multidimensional time series data.


Closed-Loop BCI using Adaptive Kinetic Architectural Design to Regulate Human Emotional States

We aim to examine the link between architectural design features, human behavior, and brain activity.

Joint project with Ali Yousefi (WPI Computer Science) and Soroush Farzin (WPI Civil & Environmental Engineering), funded through WPI’s Transformative Research and Innovation, Accelerating Discovery (TRIAD) program.


Positions

We are looking for motivated undergraduate and graduate research assistants for this project. We have both paid and volunteer positions, or students can enroll in an independent study for credit. Work will involve experiment design, running human subjects experiments, as well as data analysis and machine learning on multidimensional time series data.

Improving Information Accessibility with Sign Language First Technology

This project takes a human-centered computing approach to build a foundation that advances understanding of how deaf individuals could work and learn in environments that are designed with their needs and preferences at the forefront.


Positions

We are recruiting PhD, MS and undergraduate students (including MQP, IQP) to work on this project.

We encourage applicants for graduate studies from the Deaf community for this project. This project is in close collaboration with Jeanne Reis and the Center for Research and Training at the Learning Center for the Deaf. Researchers will be joining a strong Deaf community there.

Prospective Graduate Students: Apply here! (Funded as Research/Teaching Assistant)

In addition to computer science, WPI also has graduate programs in data science, learning sciences, and interactive media and game design. If you are interested in this research project but are not sure which program is best, don’t hesitate to reach out to me and we can discuss which one is right for you.

WPI’s Computer Science department in Worcester, MA (an hour outside of Boston) is an exciting place to study human-computer interaction, with outstanding faculty and a wonderful community. If you are applying to one of the graduate programs, and are interested in working in my group, make sure to indicate this as well as your research interests in your essay.


Related Publications

Jeanne Reis, Erin T. Solovey, Jon Henner, Kathleen Johnson, Robert Hoffmeister. ASL CLeaR: STEM Education Tools for Deaf Students. Proc. ASSETS’15. (Poster paper) ACM. 2015.


NSF Award Abstract

In the United States, American Sign Language (ASL) is the primary language of many deaf adults, and many deaf students receive classroom instruction in ASL while learning English as a second language. However, most interactive computing tools are presented and navigated exclusively in English, even those designed for deaf audiences. Making access to technology contingent upon a sufficient command of a second language creates significant barriers and access delays for deaf individuals.

This project takes a human-centered computing approach to build a foundation that advances understanding of how deaf individuals could work and learn in environments that are designed with their needs and preferences at the forefront. It investigates the feasibility and effectiveness of new SL1 technology, which will provide delivery of signed language (SL) content by allowing deaf signers to navigate, search, and interact with technology completely in their first language (L1). The optimization of SL1-based user interfaces has never before been attempted and could lead to a breakthrough in historic communication and learning barriers; determining preferences, needs and optimized presentation of information for Deaf users will benefit this population and future populations of ASL signers.

Technology that is truly accessible to deaf SL-signers has the power to facilitate lifelong learning, enhance access to educational content such as STEM topics, improve career opportunities, and allow SL-based organization of SL corpora, assessments, dictionaries, learning and employment resources. This work will directly impact deaf individuals, parents, interpreters, teachers, and students studying SL. Direct collaboration with deaf graduate and undergraduate students, deaf faculty, and deaf researchers, along with several partner schools for the deaf will ensure that the Deaf community has an instrumental leadership role in the design of future tools that meet their needs.

Using your Brain for HCI

To further increase the bandwidth from humans to computers, we are investigating methods for sensing signals that users naturally give off while using a computer system. We use this data to augment the explicit input that the user provides through standard input devices.

Using a relatively new brain sensing tool called functional near-infrared spectroscopy (fNIRS), along with a more established brain sensing tool called electroencephalography (EEG), we can detect signals within the brain that indicate various cognitive states. These devices provide data on brain activity while remaining portable and non-invasive. The cognitive state information can be used as input to provide the user with a richer and more supportive environment, particularly in challenging or high workload situations such as management of unmanned aerial vehicles, driving, air traffic control, video games, health care, training, and anything involving information overload, interruptions or multitasking. It may also improve operation at the other end of the spectrum in highly automated systems that require little effort from the human, but that can result in boredom and low performance. In addition, while most of my research has focused on the broader population of healthy users, many of the results would benefit disabled users as well, by providing additional channels of communication in a lightweight manner.


Related Publications

  • R. Moradinezhad, E.T. Solovey. Integrating Brain and Physiological Sensing with Virtual Agents to Amplify Human Perception. In Proceedings of ACM CHI 2017 Workshop on Amplification and Augmentation of Human Perception.
  • J. Chan, P. Siangliulue, D. Q. McDonald, R. Liu, R. Moradinezhad, S. Aman, E.T. Solovey, K. Gajos & S.P. Dow. (2017). Semantically far inspirations considered harmful? Accounting for cognitive states in collaborative ideation. In Proceedings of 2017 ACM Conference on Creativity and Cognition. 
  • S. Keating, E. Walker, A. Motupali, E.T. Solovey. Toward Real-time Brain Sensing for Learning Assessment: Building a Rich Dataset. In Proc. ACM CHI 2016 Extended Abstracts. San Jose, CA. May, 2016.
  • D. Belyusar, B. Mehler, E.T. Solovey, & B. Reimer. The Impact of Repeated Exposure to a Multi-Level Working Memory Task on Physiological Arousal and Driving Performance, Transportation Research Record. Also appeared in (2015) Proceedings of The Transportation Research Board 94th Annual Meeting, Washington, DC.
  • M. Boyer, M.L. Cummings, L.B. Spence, E.T. Solovey. Investigating Mental Workload Changes in a Long Duration Supervisory Control Task, Interacting With Computers (2015). [link]
  • E.T. Solovey, D. Afergan, A. Venkat, D. Belyusar, B. Mehler, B. Reimer. “Enabling Adaptive Autonomy: Brain & Body Sensing for Adaptive Vehicles,” Proc. CHI 2015 Workshop on Autonomous Driving UX. (2015). [link]
  • E.T. Solovey, D. Afergan, E.M. Peck, S. Hincks, R.J.K. Jacob, Designing Implicit Interfaces for Physiological Computing: Guidelines and Lessons Learned using fNIRS. ACM Transactions on Computer-Human Interaction (TOCHI) Vol. 21, Iss. 6., 2015. [link]
  • C. Hoef, J. Davis, O. Shaer, E.T. Solovey, An In-Depth Look at the Benefits of Immersion Cues on Spatial 3D Problem Solving. ACM Symposium on Spatial User Interfaces (Poster), Honolulu, HI, October 4-5, 2014.
  • D. Belyusar, B. Reimer, B. Mehler, D. Afergan, J.F. Coughlin, E.T. Solovey, Utilizing functional near-infrared spectroscopy to identify cognitive processes contributing to workload in a dual-task environment. Society for Neuroscience Annual Meeting (Poster Presentation), Washington, D.C., November 18, 2014.
  • D. Afergan, E. Peck, E.T. Solovey, A. Jenkins, S. Hincks, E.T. Brown, R. Chang, R.J.K. Jacob. Dynamic Difficulty Using Brain Metrics of Workload. Proc. ACM Conference on Human Factors in Computing Systems CHI ’14, ACM Press (2014). Best Paper Nominee. [Acceptance Rate: 5%] [link]
  • A. Girouard, E.T. Solovey, and R.J.K. Jacob, Designing a Passive Brain Computer Interface using Real Time Classification of Functional Near-Infrared Spectroscopy. International Journal of Autonomous and Adaptive Communications Systems (2013). [link]
  • E.T. Solovey, Real-time fNIRS Brain Input for Adaptive Robot Autonomy. Proc. HRI Pioneers Workshop (2012). [Acceptance Rate: 23%]
  • E.T. Solovey. Real-time fNIRS Brain Input for Enhancing Interactive Systems. Ph.D. Dissertation, Computer Science Department, Tufts University, Medford, MA. (2012).
  • E.T. Solovey, P. Schermerhorn, M. Scheutz, A. Sassaroli, S. Fantini, R.J.K. Jacob, “Brainput: Enhancing Interactive Systems with Streaming fNIRS Brain Input,” Proc. ACM Conference on Human Factors in Computing Systems CHI’12, ACM Press (2012). Best Paper Nominee. [Acceptance Rate: 5%] [link]
  • E.T. Solovey, K. Chauncey, F. Lalooses, M. Parasi, D. Weaver, M. Scheutz, P. Schermerhorn, A. Sassaroli, S. Fantini, A. Girouard, R.J.K. Jacob, Sensing Cognitive Multitasking for a Brain-Based Adaptive User Interface. Proc. ACM Conference on Human Factors in Computing Systems CHI’11, ACM Press (2011). [Acceptance Rate: 26%] [link]
  • E.T. Solovey, R.J.K. Jacob, Meaningful Human-Computer Interaction Using fNIRS Brain Sensing. Proc. ACM CHI 2011 Workshop on Brain and Body Interfaces: Designing for Meaningful Interaction (2011). [link]
  • Peck, E.M., Solovey, E.T., Su, S., Jacob, R.J.K., and Chang, R. Near to the Brain: Functional Near-Infrared Spectroscopy as a Lightweight Brain Imaging Technique for Visualization. InfoVis 2011. (Poster Paper). [link] Best Poster Award.
  • A. Girouard, E.T. Solovey, R. Mandryk, D. Tan, L. Nacke, R.J.K. Jacob, Brain, Body and Bytes: Psychophysiological User Interaction. Proc. ACM CHI 2010 Extended Abstracts. (2010). [link]
  • Peck, E. and Solovey, E.T. Neuroscience and Computing. ACM XRDS: Crossroads Magazine 18, 1 (2011), 5-5. [link]
  • Peck, E. and Solovey, E.T. The sensorium. ACM XRDS: Crossroads Magazine 18, 1 (2011). 14-17. [link]
  • A. Girouard, E. Solovey, L. Hirshfield, E. Peck, K. Chauncey, A. Sassaroli, S. Fantini, and R. Jacob, From Brain Signals to Adaptive Interfaces: Using fNIRS in HCI, in Brain-Computer Interfaces: Applying our Minds to Human-Computer Interaction, ed. by A. Nijholt, Springer (2010). [link]
  • E.T. Solovey, R.J.K. Jacob, Using fNIRS to Support User Interfaces. fNIRS Conference, Cambridge, MA, Oct 15-17, 2010. (Poster Paper) [abstract] [poster]
  • E. Peck, K. Chauncey, A. Girouard, R. Gulotta, F. Lalooses, E.T. Solovey, D. Weaver, and R. Jacob, From Brains to Bytes. XRDS: Crossroads, The ACM Magazine for Students, vol. 16, no. 4, pp. 42-47 (2010). [link]
  • E.T. Solovey. Using Your Brain for Human-Computer Interaction, Doctoral Symposium, ACM UIST 2009 Symposium on User Interface Software and Technology, ACM Press (2009). [link]
  • E.T. Solovey, A. Girouard, K. Chauncey, L.M. Hirshfield, A. Sassaroli, F. Zheng, S. Fantini, and R.J.K. Jacob, Using fNIRS Brain Sensing in Realistic HCI Settings: Experiments and Guidelines. ACM UIST 2009 Symposium on User Interface Software and Technology, ACM Press (2009). [Acceptance Rate: 18%] [link]
  • L.M. Hirshfield, E.T. Solovey, A. Girouard, J. Kebinger, R.J.K. Jacob, A. Sassaroli, and S. Fantini, Brain Measurement for Usability Testing and Adaptive Interfaces: An Example of Uncovering Syntactic Workload with Functional Near Infrared Spectroscopy. Proc. ACM CHI 2009 Human Factors in Computing Systems Conference, ACM Press (2009). [Acceptance Rate: 24.5%] [link]
  • L.M. Hirshfield, E.T. Solovey, A. Girouard, J. Kebinger, M.S. Horn, O. Shaer, J. Zigelbaum, and R.J.K. Jacob, Using Brain Measurement to Evaluate Reality Based Interactions. Proc. ACM CHI 2009 Workshop on Challenges in Evaluating Usability and User Experience of Reality-Based Interaction (2009). [link]
  • A. Girouard, E.T. Solovey, L.M. Hirshfield, K. Chauncey, A. Sassaroli, S. Fantini, and R.J.K. Jacob, Distinguishing Difficulty Levels with Non-invasive Brain Activity Measurements. Proc. INTERACT 2009 Conference (2009). [Acceptance Rate: 29%] [link]
  • L. Hirshfield, K. Chauncey, R. Gulotta, A. Girouard, E. Solovey, R. Jacob, A. Sassaroli, and S. Fantini, Combining Electroencephalograph and Near Infrared Spectroscopy to Explore Users’ Instantaneous and Continuous Mental Workload States. HCI International 2009 13th International Conference on Human-Computer Interaction, Springer (2009). [link]
  • A. Sassaroli, F. Zheng, M. Coutts, L.M. Hirshfield, A. Girouard, E.T. Solovey, R.J.K. Jacob, Y. Tong, B. deB. Frederick, and S. Fantini, Application of near-infrared spectroscopy for discrimination of mental workloads. SPIE Proceedings 7174 (2009). [link]
  • A. Girouard, L.M. Hirshfield, E. Solovey, and R.J.K. Jacob, Using functional Near-Infrared Spectroscopy in HCI: Toward evaluation methods and adaptive interfaces. Proc. ACM CHI 2008 Workshop on Brain-Computer Interfaces for HCI and Games (2008). [link]
  • A. Sassaroli, F. Zheng, L.M. Hirshfield, A. Girouard, E.T. Solovey, R.J.K. Jacob, and S. Fantini, Discrimination of Mental Workload Levels in Human Subjects with Functional Near-Infrared Spectroscopy. Journal of Innovative Optical Health Sciences (2008). [link]
  • A. Sassaroli, Y. Tong, L.M. Hirshfield, A. Girouard, E.T. Solovey, R.J.K. Jacob, S. Fantini, Real-time assessment of mental workload with near infrared spectroscopy: potential for human-computer interaction. OSA topical meeting, BIOMED, BMD14 (2008). [link]
  • E.T. Solovey. Using your brain for human-computer interaction. Grace Hopper Celebration of Women in Computing. Poster Session (2008).
  • A. Girouard, E.T. Solovey, L.M. Hirshfield, K. Chauncey, A. Sassaroli, S. Fantini, and R.J.K. Jacob, Distinguishing Difficulty Levels with Non-invasive Brain Activity Measurements. Technical Report 2008-3, Department of Computer Science, Tufts University, Medford, Mass. (2008).
  • L.M. Hirshfield, A. Girouard, E.T. Solovey, R.J.K. Jacob, A. Sassaroli, Y. Tong, and S. Fantini, Human-Computer Interaction and Brain Measurement Using Functional Near-Infrared Spectroscopy. ACM UIST 2007 Symposium on User Interface Software and Technology, ACM Press, Poster paper (2007). [link]

News and Related Articles

  • Corriere della Sera. (May 25, 2012). Il computer che sa metterti a riposo [The computer that knows when to give you a rest] [link]
  • Discovery News. (May 21, 2012). Fix Stress Overload with a ‘Brainput’ System [link] [pdf]
  • Smartplanet Smart Takes. (May 20, 2012). Wearable brain sensor helps workers multitask [link][pdf]
  • IEEE Spectrum Tech Talk (May 17, 2012). Wearable Brain Scanner Tells Your Computer When You’re Overwhelmed [link] [pdf]
  • Haaretz (May 16, 2012). [link] [pdf] [Translated]
  • Wired UK. (May 15, 2012). MIT’s ‘Brainput’ offloads human multitasking to a computer [link] [pdf]
  • Engadget. (May 15, 2012). MIT’s Brainput reads your mind to make multitasking easier [link]  [pdf]
  • Technology Review. (May 14, 2012). A Computer Interface that Takes a Load Off Your Mind [link] [pdf]
  • Extremetech. (May 14, 2012). MIT’s Brainput boosts your brain power by offloading multitasking to a computer [link] [pdf]
  • Mind-reading Computers: It may sound like sci-fi, but one day a computer may sense when you’re stressed and tell you to take a break, Tufts Journal. Feb. 3, 2008. [link] [pdf]
  • Tufts researchers delve into the human brain with cutting-edge ‘light imaging’ technology. Tufts Daily. Oct. 18, 2007. [link] [pdf]
  • Mind controls computer. EE Times. Oct. 5, 2007. [link] [pdf]
  • Computer Can Tell How Hard You’re Working. Fox News. Oct. 3, 2007. [link] [pdf]
  • Mind-Reading Computers. BBC Focus Magazine. Oct. 3, 2007. [link] [pdf]
  • Building a computer that reads minds. MSNBC. Oct. 2, 2007. [link] [pdf]
  • Technology Could Enable Computers To ‘Read The Minds’ Of Users, Science Daily. Oct. 1, 2007. [link] [pdf]
  • J. C. Keller, Technology May Bridge Emotion Gap between Humans and Computers. Engineering eNews. May 2007. [link] [pdf]

Brain-Based Creativity Support Tools

A system that can accurately detect the user’s mental state could determine what type of inspiration to deliver, providing more effective support during the creative process.

We are exploring using fNIRS brain data to infer the user’s mental state in order to build a system that delivers a semantically far or near stimulus at the right time for increased idea quantity and quality.


Related Publications

J. Chan, P. Siangliulue, D. Qori McDonald, R. Liu, R. Moradinezhad, S. Aman, E.T. Solovey, K. Gajos & S.P. Dow. (2017). Semantically far inspirations considered harmful? Accounting for cognitive states in collaborative ideation. In Proceedings of 2017 ACM Conference on Creativity and Cognition.

Divert & Alert

This project aims to reduce the incidence rate of police and other emergency personnel being killed or injured when stopped along the roadside, due to collisions caused by oncoming drivers who are impaired or inattentive.

One aspect of the project focuses on developing projected visible cues to guide drivers around the stop site. The second focuses on developing machine vision algorithms to monitor the roadway and warn emergency personnel of oncoming drivers who appear inattentive or impaired.


Related Publications

  • E.T. Solovey, P. Powale, M.L. Cummings. A field study of multimodal alerts for an autonomous threat detection system. Proc. of HCI International 2017 19th International Conference on Human-Computer Interaction. 2017.
  • S. Teller, B.K. Horn, R. Finman, B. Wu, E. Solovey, B. Wang, J. Karraker, Divert and Alert: Mitigating and Warning of Traffic Threats to Police Stopped Along the Roadside. National Institute of Justice Conference. (Poster Presentation). Arlington, VA. June 18-20, 2012. [poster]
  • P. Powale. Design and Testing of a Roadside Traffic Threat Alerting Mechanism. M.Eng. Thesis, MIT Electrical Engineering and Computer Science, Cambridge, MA. 2013. (Thesis supervisor)

Press

MIT CSAIL. (Apr. 2, 2012). “CSAIL scientists, Mass State Police to tackle problem of roadside collisions between drivers and police vehicles”

Machine Learning for Human-Centered Computing

Our work uses machine learning approaches to build adaptive user interfaces that support the user’s changing cognitive state and context.


Drexel AIR Lab also participates in the Drexel REThink Research Experience for Teachers on Machine Learning to Enhance Human Centered Computing. STEM and Computer Science high school teachers and 2-year college faculty in the Philadelphia Area are encouraged to apply for this program running this summer. Deadline: March 31.


Related Publications

Human-Centered Computing & Health

Related Publications

STEM Education, Outreach, & HCI

WPI’s AIR Lab is interested in improving STEM education and broadening participation in STEM fields.

Improving STEM education and broadening participation in STEM fields is integrated into all of our work. See below for more information about our efforts in these areas, including several research experiences for undergraduates and teachers.

  • We are working to improve STEM personal learning environments by combining intelligent tutoring technologies with unintrusive sensing of brain activity using functional near-infrared spectroscopy (fNIRS). 
    • By developing a better understanding of when and how learning is occurring during pauses in tutoring system use, adaptive interventions within tutoring systems can be better personalized to the needs of the individual. Facilitating more effective math learning could help retain learners who otherwise may not follow through on STEM learning, due to prior distressing experiences with math, and the typically extensive time and effort required to take semesters of developmental math before enrolling in other STEM courses.
  • We are developing and evaluating technology to enable effective STEM concept learning in the Deaf community. 
    • The goal is to provide delivery of signed language (SL) educational content by allowing deaf signers to navigate, learn, and take assessments completely in their first language (L1). Resources that are truly accessible to deaf SL-signers have the power to build on stronger first language foundations, facilitate lifelong learning, improve access to educational content such as STEM topics, improve career opportunities, and allow SL-based organization of SL corpora, dictionaries, learning resources and assessments.
  • Dr. Solovey was co-director for the NSF-funded Research Experience for Teachers Site for Machine Learning to Enhance Human-Centered Computing at Drexel, a summer program for STEM and Computer Science high school teachers and 2-year college faculty in the Philadelphia area.
  • There are several opportunities for undergraduates to participate in research:
    • Our lab participates in the CRA-W/CDC Distributed Research Experience for Undergraduates (DREU), which allows promising undergraduates from other universities to spend the summer in our lab. The objective of DREU is to increase the number of women and students from underrepresented groups entering graduate studies in computer science and engineering. This highly selective program matches promising undergraduates from groups underrepresented in computing, including women, ethnic minorities, and persons with disabilities, with a faculty mentor for a summer research experience at the faculty member’s home institution. The deadline is usually in February.
    • We also participate in the CRA-W/CDC Collaborative Research Experience for Undergraduates. The objective of the CREU program is to increase the number of women and underrepresented groups entering graduate studies in the fields of computer science and computer engineering by exposing them to the joy and potential of research. Drexel students are encouraged to get in touch with Dr. Solovey if they are interested. Deadline is usually in February.
    • And of course, there are MQP, IQP, and ISP opportunities!
  • Past work has included developing and studying tangible programming languages in informal learning settings, in collaboration with the Boston Museum of Science.

Related Publications

Teamwork in Human-Agent Teams

In applications such as search and rescue, command and control, and air traffic control, operators in the future will likely need to work in teams together with robots. Our research addresses critical questions, such as determining how to construct the teams and how to design the information-sharing tools to promote performance and reduce workload in these settings.

Related Publications

Brain & Body Sensing in the Car

Drivers have numerous demands and distractions while navigating the vehicle, both on the road as well as from people and technology within the vehicle. As new interfaces and technologies are introduced into vehicles, it is critical to assess the cognitive workload that the driver is experiencing to ensure safe operation of the vehicle. An understanding of the changing cognitive state of a driver in real-time can inform the design of in-vehicle interfaces.

We propose using functional near-infrared spectroscopy (fNIRS) to measure brain activity during driving tasks. Functional NIRS is a relatively new brain sensing technology that is portable and non-invasive, making it possible to sense brain activity in environments that would not be possible using most traditional imaging techniques. This provides us with the opportunity to better understand changes in cognitive state during mobile tasks, such as driving. Our research aims to integrate fNIRS into an existing driving test bed and explore signal processing and classification algorithms to study the sensitivity of fNIRS brain sensing to changes in the driver’s workload level in real-time.
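As a rough illustration of the kind of sliding-window classification involved, the sketch below labels windows of a simulated oxygenated-hemoglobin (ΔHbO) trace as high or low workload. This is a toy example, not our actual algorithm: the window size, threshold, and signal values are invented, and a real classifier would be trained on labeled data across many fNIRS channels.

```python
# Toy sliding-window workload classifier for a single fNIRS channel.
# All parameters and signal values are illustrative only.

def window_features(signal, start, size):
    """Mean level and slope (second-half minus first-half mean) of one window."""
    w = signal[start:start + size]
    mean = sum(w) / len(w)
    half = len(w) // 2
    slope = sum(w[half:]) / (len(w) - half) - sum(w[:half]) / half
    return mean, slope

def classify_workload(signal, size=4, mean_thresh=0.5):
    """Label each non-overlapping window 'high' or 'low' by its mean ΔHbO."""
    labels = []
    for start in range(0, len(signal) - size + 1, size):
        mean, _slope = window_features(signal, start, size)
        labels.append('high' if mean > mean_thresh else 'low')
    return labels

# Simulated ΔHbO trace: low activation, a sustained rise under load, recovery.
hbo = [0.1, 0.0, 0.2, 0.1, 0.9, 1.1, 1.0, 1.2, 0.2, 0.1, 0.0, 0.1]
print(classify_workload(hbo))  # ['low', 'high', 'low']
```

In practice the hemodynamic response lags the task by several seconds, so window boundaries and labels must account for that delay, and filtering is needed to remove heartbeat, respiration, and motion artifacts before features are extracted.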


Related Publications

Boredom, Low Workload, & User Performance

Boredom and vigilance problems can be exacerbated by systems with high levels of automation, which leave human operators unengaged for prolonged periods.

As computing systems become increasingly capable and autonomous, the role and expectations of the operator in the system change and the operator no longer needs to directly control all low-level aspects of the system. On the one hand, these advancements could allow operators to work at a higher level and simultaneously supervise numerous tasks. On the other hand, the high levels of autonomy can also lead to long duration, low workload situations, where little is required of the supervisor, resulting in operator boredom and disengagement that can be detrimental when the operator is later called upon to perform a task. Our work investigates physiological sensing of attention states as well as intervention design to counter some of the detrimental aspects of boredom in human-computer interaction.


Related Publications

  • M. Boyer, M.L. Cummings, L.B. Spence, E.T. Solovey. “Investigating Mental Workload Changes in a Long Duration Supervisory Control Task,” Interacting With Computers (2015). [link]
  • A. Mkrtchyan, J. Macbeth, E.T. Solovey, J. Ryan, M. Cummings. “Using Variable-Rate Alerting to Counter Boredom in Human Supervisory Control,” Proc. Human Factors and Ergonomics Society Annual Meeting, 2012. [link]

Human Supervisory Control

In human supervisory control of autonomous systems, the human is on the loop rather than in the loop. This arrangement is common in domains such as aviation (e.g. autopilot), autonomous vehicles, and nuclear power plant supervision. It changes the role of the human in the system: the operator no longer needs to control the system or vehicle directly. With the human out of the direct loop, they supervise more aspects of the system, which can lead to high levels of workload (e.g. supervising multiple UAVs). At the other end of the spectrum are highly automated systems that require little of the human, leading to very low workload and boredom. Our research examines interface design, team composition, and measurement of workload with fNIRS and other physiological sensors in these contexts.


Related Publications

  • F. Gao, M.L. Cummings, E.T. Solovey. “Designing for Robust and Effective Teamwork in Human-Agent Teams,” in The Intersection of Robust Intelligence (RI) and Trust in Autonomous Systems, Ed.: W. Lawless, Springer.
  • M. Boyer, M.L. Cummings, L.B. Spence, E.T. Solovey. “Investigating Mental Workload Changes in a Long Duration Supervisory Control Task,” Interacting With Computers (2015). [link]
  • E.T. Solovey, D. Afergan, A. Venkat, D. Belyusar, B. Mehler, B. Reimer. “Enabling Adaptive Autonomy: Brain & Body Sensing for Adaptive Vehicles,” Proc. CHI 2015 Workshop on Autonomous Driving UX (2015). [link]
  • F. Gao, M.L. Cummings, E.T. Solovey, “Modeling Teamwork in Supervisory Control of Multiple Robots,” IEEE Transactions on Human-Machine Systems 44(4), 441-453. [link]
  • D. Afergan, E. Peck, E.T. Solovey, A. Jenkins, S. Hincks, E.T. Brown, R. Chang, R.J.K. Jacob. Dynamic Difficulty Using Brain Metrics of Workload. Proc. ACM Conference on Human Factors in Computing Systems CHI ’14, ACM Press (2014). Best Paper Award Honorable Mention. [Awarded to top 5%] [link]
  • A. Mkrtchyan, J. Macbeth, E.T. Solovey, J. Ryan, M. Cummings. “Using Variable-Rate Alerting to Counter Boredom in Human Supervisory Control,” Proc. Human Factors and Ergonomics Society Annual Meeting, 2012. [link]
  • E.T. Solovey, “Real-time fNIRS Brain Input for Adaptive Robot Autonomy,” Proc HRI Pioneers Workshop (2012). [Acceptance Rate: 23%]

Multitasking & fNIRS Brain Sensing

By detecting specific cognitive states that occur when multitasking, we can build user interfaces that better support task switching, interruption management and multitasking.

Multitasking has become an integral part of work environments, even though people are not well-equipped cognitively to handle numerous concurrent tasks effectively. Systems that support such multitasking may produce better performance and less frustration. However, without understanding the user’s internal processes, it is difficult to determine optimal strategies for adapting interfaces, since not all multitasking activity is identical. We conducted two experiments leading toward a system that detects cognitive multitasking processes and uses this information as input to an adaptive interface. Using functional near-infrared spectroscopy sensors, we differentiate four cognitive multitasking processes. These states cannot readily be distinguished using behavioral measures such as response time, accuracy, keystrokes or screen contents. We then developed a human-robot system as a proof-of-concept that uses real-time cognitive state information as input and adapts in response. This prototype system serves as a platform to study interfaces that enable better task switching, interruption management, and multitasking.


Related Publications

  • D. Afergan, E. Peck, E.T. Solovey, A. Jenkins, S. Hincks, E.T. Brown, R. Chang, R.J.K. Jacob. Dynamic Difficulty Using Brain Metrics of Workload. Proc. ACM Conference on Human Factors in Computing Systems CHI ’14, ACM Press (2014). Best Paper Award Honorable Mention. 
  • E.T. Solovey, P. Schermerhorn, M. Scheutz, A. Sassaroli, S. Fantini, R.J.K. Jacob, “Brainput: Enhancing Interactive Systems with Streaming fNIRS Brain Input,” Proc. ACM Conference on Human Factors in Computing Systems CHI’12, ACM Press (2012). Best Paper Award Honorable Mention. 
  • E.T. Solovey, K. Chauncey, F. Lalooses, M. Parasi, D. Weaver, M. Scheutz, P. Schermerhorn, A. Sassaroli, S. Fantini, A. Girouard, R.J.K. Jacob, “Sensing Cognitive Multitasking for a Brain-Based Adaptive User Interface,” Proc. ACM Conference on Human Factors in Computing Systems CHI’11, ACM Press (2011). [link]

Video: CHIstory

In this video, set one hundred years in the future, we playfully re-envision the early history of HCI.

How might the world view human-computer interaction a century from now? In this video, set one hundred years in the future, we playfully re-envision the early history of HCI. As the video opens, the Great Usability Cataclysm of 2068 has erased all previous knowledge of HCI. The world has been plunged into an age of darkness where terror, fear, and poor usability reign. Unearthing fragments of previously lost archival footage, a disembodied HCI historian (Jonathan Grudin) introduces a first attempt to reconstruct the history of our field. Pioneering systems like NLS and Sketchpad are reviewed alongside more recent work from CHI and related conferences. The results may surprise and perplex as much as they entertain, but most of all, we hope they inspire reflection on the past and future of our field.


Related Publications

  • Bernstein, M., André, P., Luther, K., Solovey, E. T., Poole, E. S., Paul, S. A., Kane, S. K., and Grudin, J. 2009. CHIstory. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI EA ’09. ACM, New York, NY, 3493-3494. [pdf]  
  • Golden Mouse award winner

fNIRS in HCI: Feasibility and Guidelines

We establish which physical behaviors inherent in computer usage interfere with accurate fNIRS sensing of cognitive state information, which can be corrected in data analysis, and which are acceptable.

Because functional near-infrared spectroscopy (fNIRS) eases many of the restrictions of other brain sensors, it has potential to open up new possibilities for HCI research. From our experience using fNIRS technology for HCI, we identify several considerations and provide guidelines for using fNIRS in realistic HCI laboratory settings. We empirically examine whether typical human behavior (e.g. head and facial movement) or computer interaction (e.g. keyboard and mouse usage) interfere with brain measurement using fNIRS. Based on the results of our study, we establish which physical behaviors inherent in computer usage interfere with accurate fNIRS sensing of cognitive state information, which can be corrected in data analysis, and which are acceptable. With these findings, we hope to facilitate further adoption of fNIRS brain sensing technology in HCI research.
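As one illustration of correcting motion-related interference in data analysis (not the specific procedure used in the paper), a simple moving-average filter can attenuate a sharp movement spike while preserving the slower hemodynamic response; the signal and spike below are synthetic:

```python
import numpy as np

def moving_average(signal, width=5):
    """Smooth a 1-D channel with a moving average, damping
    high-frequency motion spikes while keeping slow trends."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

# Toy fNIRS channel: a slow oscillation plus one sharp artifact,
# standing in for a head movement during recording.
t = np.linspace(0, 10, 200)
channel = np.sin(0.5 * t)
channel[100] += 5.0  # simulated movement artifact

cleaned = moving_average(channel)
```

Real fNIRS pipelines typically use more principled approaches (e.g. band-pass filtering matched to the hemodynamic frequency range), but the structure is the same: detect or suppress fast, movement-scale transients in post-processing.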


Related Publications

  • E.T. Solovey, A. Girouard, K. Chauncey, L.M. Hirshfield, A. Sassaroli, F. Zheng, S. Fantini, and R.J.K. Jacob, “Using fNIRS Brain Sensing in Realistic HCI Settings: Experiments and Guidelines,” ACM UIST 2009 Symposium on User Interface Software and Technology, ACM Press (2009). [Acceptance Rate: 18%] [link]

HCI & Safety-Critical Systems

With this work, we take steps toward making systems usable enough to operate safely, effectively, and consistently, and toward facilitating wider adoption of safety-critical technology.

Autonomous robots and vehicles can perform tasks that are unsafe or undesirable for humans to do themselves, such as investigate safety in nuclear reactors or assess structural damage to a building or bridge after an earthquake. In addition, improvements in autonomous modes of such vehicles are making it easier for minimally-trained individuals to operate the vehicles. As the autonomous capabilities advance, the user’s role shifts from a direct teleoperator to a supervisory control role. Since the human operator is often better suited to make decisions in uncertain situations, it is important for the human operator to have awareness of the environment in which the vehicle is operating in order to prevent collisions and damage to the vehicle as well as the structures and people in the vicinity. The Collision and Obstacle Detection and Alerting (CODA) display is a novel interface to enable safe piloting of a Micro Aerial Vehicle with a mobile device in real-world settings.


Related Publications

  • E.T. Solovey, K. Jackson, M.L. Cummings, “Collision Avoidance Interface for Safe Piloting of Unmanned Vehicles using a Mobile Device,” Adjunct Proc. of ACM UIST 2012 Symposium on User Interface Software and Technology, ACM Press (2012). [link] [poster]

Brainput: Real-Time fNIRS Input

Here, we take a different approach for brain-computer interfaces that augments traditional input devices such as the mouse and keyboard and that targets a wider group of users. We use brain sensor data as a passive, implicit input channel that expands the bandwidth between the human and computer by providing extra information about the user.

The Brainput system learns to identify brain activity patterns occurring during multitasking. It provides a continuous, supplemental input stream to an interactive human-robot system, which uses this information to modify its behavior to better support multitasking. We demonstrate that we can use non-invasive methods to detect signals coming from the brain that users naturally and effortlessly generate while using a computer system. If used with care, this additional information can lead to systems that respond appropriately to changes in the user’s state. Our experimental study shows that Brainput significantly improves several performance metrics, as well as the subjective NASA-Task Load Index scores in a dual task human-robot activity.
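The control flow of such a passive input channel can be sketched as follows. The state labels, the threshold "classifier," and the robot adaptation are illustrative placeholders, not Brainput's actual components:

```python
from enum import Enum

class CognitiveState(Enum):
    """Illustrative labels; Brainput distinguished multitasking states
    (e.g. branching vs. non-branching), simplified here to two."""
    NORMAL = 0
    OVERLOADED = 1

def classify_window(fnirs_window):
    """Placeholder for a trained classifier over one window of fNIRS
    data: flag overload whenever the mean signal exceeds a threshold."""
    mean = sum(fnirs_window) / len(fnirs_window)
    return CognitiveState.OVERLOADED if mean > 0.5 else CognitiveState.NORMAL

def adapt_robot(state, robot):
    """Passive adaptation: the user issues no explicit command; the
    system quietly raises robot autonomy when the user is overloaded."""
    robot["autonomy"] = "high" if state is CognitiveState.OVERLOADED else "shared"

robot = {"autonomy": "shared"}
stream = [[0.1, 0.2, 0.1], [0.7, 0.8, 0.9]]  # two toy fNIRS windows
for window in stream:
    adapt_robot(classify_window(window), robot)
```

The key design property is that the brain data is a supplemental stream alongside the mouse and keyboard: the user never deliberately "sends" a brain command, and the system degrades gracefully if the classifier is uncertain.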


Related Publications

  • E.T. Solovey, P. Schermerhorn, M. Scheutz, A. Sassaroli, S. Fantini, R.J.K. Jacob, “Brainput: Enhancing Interactive Systems with Streaming fNIRS Brain Input,” Proc. ACM Conference on Human Factors in Computing Systems CHI’12, ACM Press (2012). Best Paper Award Honorable Mention. [link]