Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems.
The full proceedings are available in the ACM Digital Library.
Click a paper title to access the full text in the ACM Digital Library. Papers are free to access for a one-year period, starting from the beginning of the CHI 2019 conference.
A Translational Science Model for HCI
Using scientific discoveries to inform design practice is an important, but difficult, objective in HCI. In this paper, we provide an overview of Translational Science in HCI by triangulating literature related to the research-practice gap with interview data from many parties engaged (or not) in translating HCI knowledge. We propose a model for Translational Science in HCI based on the concept of a continuum to describe how knowledge progresses (or stalls) through multiple steps and translations until it can influence design practice. The model offers a conceptual framework that can be used by researchers and practitioners to visualize and describe the progression of HCI knowledge through a sequence of translations. Additionally, the model may facilitate a precise identification of translational barriers, which allows devising more effective strategies to increase the use of scientific findings in design practice.
"They Don’t Leave Us Alone Anywhere We Go": Gender and Digital Abuse in South Asia
South Asia faces one of the largest gender gaps online globally, and online safety is one of the main barriers to gender-equitable Internet access [GSMA, 2015]. To better understand the gendered risks and coping practices online in South Asia, we present a qualitative study of the online abuse experiences and coping practices of 199 people who identified as women and 6 NGO staff from India, Pakistan, and Bangladesh, using a feminist analysis. We found that a majority of our participants regularly contended with online abuse, experiencing three major abuse types: cyberstalking, impersonation, and personal content leakages. Consequences of abuse included emotional harm, reputation damage, and physical and sexual violence. Participants coped through informal channels rather than through technological protections or law enforcement. Altogether, our findings point to opportunities for designs, policies, and algorithms to improve women’s safety online in South Asia.
Guidelines for Human-AI Interaction
Advances in artificial intelligence (AI) frame opportunities and challenges for user interface design. Principles for human-AI interaction have been discussed in the human-computer interaction community for over two decades, but more study and innovation are needed in light of advances in AI and the growing uses of AI technologies in human-facing applications. We propose 18 generally applicable design guidelines for human-AI interaction. These guidelines are validated through multiple rounds of evaluation including a user study with 49 design practitioners who tested the guidelines against 20 popular AI-infused products. The results verify the relevance of the guidelines over a spectrum of interaction scenarios and reveal gaps in our knowledge, highlighting opportunities for further research. Based on the evaluations, we believe the set of design guidelines can serve as a resource to practitioners working on the design of applications and features that harness AI technologies, and to researchers interested in the further development of human-AI interaction design principles.
Human-Centered Tools for Coping with Imperfect Algorithms During Medical Decision-Making
Machine learning (ML) is increasingly being used in image retrieval systems for medical decision making. One application of ML is to retrieve visually similar medical images from past patients (e.g. tissue from biopsies) to reference when making a medical decision with a new patient. However, no algorithm can perfectly capture an expert’s ideal notion of similarity for every case: an image that is algorithmically determined to be similar may not be medically relevant to a doctor’s specific diagnostic needs. In this paper, we identified the needs of pathologists when searching for similar images retrieved using a deep learning algorithm, and developed tools that empower users to cope with the search algorithm on-the-fly, communicating what types of similarity are most important at different moments in time. In two evaluations with pathologists, we found that these tools increased the diagnostic utility of images found and increased user trust in the algorithm. The tools were preferred over a traditional interface, without a loss in diagnostic accuracy. We also observed that users adopted new strategies when using refinement tools, re-purposing them to test and understand the underlying algorithm and to disambiguate ML errors from their own errors. Taken together, these findings inform future human-ML collaborative systems for expert decision-making.
Seeing with New Eyes: Designing for In-the-Wild Museum Gifting
This paper presents the GIFT smartphone app, an artist-led Research through Design project benefitting from a three-day in-the-wild deployment. The app takes as its premise the generative potential of combining the contexts of gifting and museum visits. Visitors explore the museum, searching for objects that would most appeal to the gift-receiver they have in mind, then photographing those objects and adding audio messages for their receivers describing the motivation for their choices. This paper charts the designers’ key aim of creating a new frame of mind using voice, and the most striking findings discovered during in-the-wild deployment in a museum — ‘seeing with new eyes’ and fostering personal connections. We discuss empathy, motivation, and bottom-up personalisation in the productive space revealed by this combination of contexts. We suggest that this work reveals opportunities for designers of gifting services as well as those working in cultural heritage.
Design and Plural Heritages: Composing Critical Futures
We make theoretical and methodological contributions to the CHI community by introducing comparisons between contemporary Critical Heritage research and some forms of experimental design practice. We begin by identifying three key approaches in contemporary heritage research (Critical Heritage, Plural Heritages, and Future Heritage) and introduce each in turn, exploring their significance for thinking about design, knowledge, and diversity. We discuss our efforts to apply ideas integrating Critical Heritage and design through the adoption of known Research through Design techniques in a research project in Istanbul, Turkey, describing the design of our study and how it was productive of sensory and speculative reflection on the past. Finally, we reflect on the usefulness of such methods in developing new interactive technologies in heritage contexts and propose a series of recommendations for a future Critical Heritage Design practice.
Connect-to-Connected Worlds: Piloting a Mobile, Data-Driven Reflection Tool for an Open-Ended Simulation at a Museum
Immersive open-ended museum exhibits promote ludic engagement and can be a powerful draw for visitors, but these qualities may also make learning more challenging. We describe our efforts to help visitors engage more deeply with an interactive exhibit’s content by giving them access to visualizations of data skimmed from their use of the exhibit. We report on the motivations and challenges in designing this reflective tool, which positions visitors as a “human in the loop” to understand and manage their engagement with the exhibit. We used an iterative design process and qualitative methods to explore how and if visitors could (1) access and (2) comprehend the data visualizations, (3) reflect on their prior engagement with the exhibit, (4) plan their future engagement with the exhibit, and (5) act on their plans. We further discuss the essential design challenges and the opportunities made possible for visitors through data-driven reflection tools.
Anchored Audio Sampling: A Seamless Method for Exploring Children’s Thoughts During Deployment Studies
Many traditional HCI methods, such as surveys and interviews, are of limited value when working with preschoolers. In this paper, we present anchored audio sampling (AAS), a remote data collection technique for extracting qualitative audio samples during field deployments with young children. AAS offers a developmentally sensitive way of understanding how children make sense of technology and situates their use in the larger context of daily life. AAS is defined by an anchor event, around which audio is collected. A sliding window surrounding this anchor captures both antecedent and ensuing recording, providing the researcher insight into the activities that led up to the event of interest as well as those that followed. We present themes from three deployments that leverage this technique. Based on our experiences using AAS, we have also developed a reusable open-source library for embedding AAS into any Android application.
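For readers curious how such anchored capture can be structured, here is a minimal Python sketch of the anchor-plus-sliding-window idea, assuming a simple in-memory ring buffer; the class name, durations, and sample rate are hypothetical illustrations, not the published Android library's API.

```python
from collections import deque

class AnchoredAudioBuffer:
    """Sketch of anchor-based capture: audio frames are kept in a ring buffer so
    that, when an anchor event fires, the clip already contains the moments
    leading up to it as well as what follows."""

    def __init__(self, sample_rate=16000, pre_seconds=30, post_seconds=30):
        self.pre_frames = deque(maxlen=sample_rate * pre_seconds)  # antecedent audio
        self.post_frames_needed = sample_rate * post_seconds       # ensuing audio
        self.capturing_post = False
        self.captured = []

    def push_samples(self, samples):
        """Feed a chunk of audio samples (e.g., from a microphone callback).
        Returns the full anchored clip once the post-anchor window is complete."""
        if self.capturing_post:
            remaining = self.post_frames_needed - len(self.captured)
            self.captured.extend(samples[:remaining])
            if len(self.captured) >= self.post_frames_needed:
                self.capturing_post = False
                return list(self.pre_frames) + self.captured
        else:
            self.pre_frames.extend(samples)
        return None

    def on_anchor_event(self):
        """Called when the anchor event of interest occurs."""
        self.capturing_post = True
        self.captured = []
```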
To Asymmetry and Beyond!: Improving Social Connectedness by Increasing Designed Interdependence in Cooperative Play
Social play can have numerous health benefits, but research has shown that not all multiplayer games are effective at promoting social engagement. Asymmetric cooperative games have shown promise in this regard, but the design and dynamics of this unique style of play are not yet well understood. To address this, we present the results of two player experience studies using our custom prototype game Beam Me ‘Round, Scotty! 2: the first comparing symmetric cooperative play (e.g., where players have the same interface, goals, mechanics, etc.) to asymmetric cooperative play (e.g., where players have differing roles, abilities, interfaces, etc.), and the second comparing the effect of increasing degrees of interdependence between play partners. Our results not only indicate that asymmetric cooperative games may enhance players’ perceptions of connectedness, social engagement, immersion, and comfort with a game’s controls, but also demonstrate how to further improve these outcomes via deliberate mechanical design changes, such as changes in cooperative action timing and direction of dependence.
DesignABILITY: Framework for the Design of Accessible Interactive Tools to Support Teaching to Children with Disabilities
Developing educational tools aimed at children with disabilities is a challenging process for designers and developers because existing methodologies or frameworks do not provide any pedagogical information and/or do not take into account the particular needs of users with some type of impairment. In this study, we propose a framework for the design of tools to support teaching to children with disabilities. The framework provides the necessary stages for the development of tools (hardware-based or software-based) and must be adapted for a specific disability and educational goal. For this study, the framework was adapted to support literacy teaching and contributes to the design of educational/interactive technology for deaf people while making them part of the design process and taking into account their particular needs. The experts’ evaluation of the framework shows that it is well structured and may be adapted for other types of disabilities.
Transcalibur: A Weight Shifting Virtual Reality Controller for 2D Shape Rendering based on Computational Perception Model
Humans can estimate the shape of a wielded object through the illusory sense of its mass properties obtained via their hands. Even though the shape of hand-held objects influences immersion and realism in virtual reality (VR), it is difficult to design VR controllers that render desired shapes by exploiting this illusory relationship between mass properties and shape perception. We propose Transcalibur, a hand-held VR controller that can render a 2D shape by changing its mass properties on a 2D planar area. We built a computational perception model using a data-driven approach from collected data pairs of mass properties and perceived shapes. This enables Transcalibur to easily and effectively provide convincing shape perception based on complex illusory effects. Our user study showed that the system succeeded in providing the perception of various desired shapes in a virtual environment.
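As a rough illustration of such data-driven modelling, the following Python sketch fits a regressor from controller mass configurations to reported shape parameters, then searches candidate configurations for the one whose predicted percept best matches a target shape. The stand-in random data, feature layout, and regressor choice are assumptions for illustration only, not the paper's pipeline or optimizer.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: each row pairs a controller configuration
# (e.g., positions of two movable weights on the 2D plane) with the shape
# participants reported perceiving (e.g., perceived length and width).
configs = np.random.rand(200, 4)      # stand-in for collected configurations
perceived = np.random.rand(200, 2)    # stand-in for reported shape parameters

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(configs, perceived)

def configuration_for(target_shape, candidates):
    """Pick the mass configuration whose predicted perceived shape is closest
    to the desired one (a simple search in place of a dedicated optimizer)."""
    preds = model.predict(candidates)
    errors = np.linalg.norm(preds - np.asarray(target_shape), axis=1)
    return candidates[np.argmin(errors)]

best = configuration_for(target_shape=[0.6, 0.3], candidates=np.random.rand(500, 4))
print(best)
```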
LightBee: A Self-Levitating Light Field Display for Hologrammatic Telepresence
LightBee is a novel “hologrammatic” telepresence system featuring a self-levitating light field display. It consists of a drone that flies a projection of a remote user’s head through 3D space. The movements of the drone are controlled by the remote user’s head movements, offering unique support for non-verbal cues, especially physical proxemics. The light field display is created by a retro-reflective sheet that is mounted on the cylindrical quadcopter. 45 smart projectors, one per 1.3 degrees, are mounted in a ring, each projecting a video stream rendered from a unique perspective onto the retroreflector. This creates a light field that naturally provides motion parallax and stereoscopy without requiring any headset or stereo glasses. LightBee allows multiple local users to experience their own unique and correct perspective of the remote user’s head. The system is currently one-directional: 2 small cameras mounted on the drone allow the remote user to observe the local scene.
TabletInVR: Exploring the Design Space for Using a Multi-Touch Tablet in Virtual Reality
Complex virtual reality (VR) tasks, like 3D solid modelling, are challenging with standard input controllers. We propose exploiting the affordances and input capabilities when using a 3D-tracked multi-touch tablet in an immersive VR environment. Observations gained during semi-structured interviews with general users, and those experienced with 3D software, are used to define a set of design dimensions and guidelines. These are used to develop a vocabulary of interaction techniques to demonstrate how a tablet’s precise touch input capability, physical shape, metaphorical associations, and natural compatibility with barehand mid-air input can be used in VR. For example, transforming objects with touch input, “cutting” objects by using the tablet as a physical “knife”, navigating in 3D by using the tablet as a viewport, and triggering commands by interleaving bare-hand input around the tablet. Key aspects of the vocabulary are evaluated with users, with results validating the approach.
RotoSwype: Word-Gesture Typing using a Ring
We propose RotoSwype, a technique for word-gesture typing using the orientation of a ring worn on the index finger. RotoSwype enables one-handed text-input without encumbering the hand with a device, a desirable quality in many scenarios, including virtual or augmented reality. The method is evaluated using two arm positions: with the hand raised up with the palm parallel to the ground; and with the hand resting at the side with the palm facing the body. A five-day study finds both hand positions achieved speeds of at least 14 words-per-minute (WPM) with uncorrected error rates near 1%, outperforming previous comparable techniques.
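One plausible way to drive a word-gesture cursor from ring orientation alone is to map yaw and pitch deviations from a calibrated neutral pose onto keyboard coordinates, as in the Python sketch below; the angular ranges and the linear mapping are assumptions, not RotoSwype's actual transfer function.

```python
import numpy as np

def orientation_to_cursor(yaw_deg, pitch_deg, kb_width=10, kb_height=3,
                          yaw_range=40.0, pitch_range=20.0):
    """Map ring yaw/pitch (relative to a calibrated neutral pose) onto
    keyboard-key coordinates for a word-gesture cursor. Ranges and the
    linear mapping are illustrative assumptions."""
    x = (np.clip(yaw_deg, -yaw_range, yaw_range) / (2 * yaw_range) + 0.5) * kb_width
    y = (np.clip(-pitch_deg, -pitch_range, pitch_range) / (2 * pitch_range) + 0.5) * kb_height
    return x, y

# Example: a small rightward yaw and slight downward pitch of the ring.
print(orientation_to_cursor(yaw_deg=12.0, pitch_deg=-4.0))
```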
BeamBand: Hand Gesture Sensing with Ultrasonic Beamforming
BeamBand is a wrist-worn system that uses ultrasonic beamforming for hand gesture sensing. Using an array of small transducers, arranged on the wrist, we can ensemble acoustic wavefronts to project acoustic energy at specified angles and focal lengths. This allows us to interrogate the surface geometry of the hand with inaudible sound in a raster-scan-like manner, from multiple viewpoints. We use the resulting, characteristic reflections to recognize hand pose at 8 FPS. In our user study, we found that BeamBand supports a six-class hand gesture set at 94.6% accuracy. Even across sessions, when the sensor is removed and reworn later, accuracy remains high: 89.4%. We describe our software and hardware, and future avenues for integration into devices such as smartwatches and VR controllers.
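To make the beamforming idea concrete, here is a brief delay-and-sum sketch in Python showing how per-transducer emission delays can steer acoustic energy toward a specified angle and focal length; the planar eight-element geometry and spacing are simplifying assumptions, not BeamBand's hardware.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air

def focusing_delays(transducer_xy, steer_angle_deg, focal_length_m):
    """Per-transducer emission delays so that wavefronts arrive in phase at a
    focal point defined by a steering angle and focal length (near-field
    delay-and-sum beamforming)."""
    theta = np.radians(steer_angle_deg)
    focal_point = focal_length_m * np.array([np.cos(theta), np.sin(theta)])
    distances = np.linalg.norm(transducer_xy - focal_point, axis=1)
    # Elements farther from the focus fire first; the closest fires last.
    return (distances.max() - distances) / SPEED_OF_SOUND

# Example: eight transducers spaced 5 mm apart along the wrist, focusing at
# 60 degrees and 8 cm away.
array_xy = np.column_stack([np.arange(8) * 0.005, np.zeros(8)])
print(focusing_delays(array_xy, steer_angle_deg=60, focal_length_m=0.08))
```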
Airport Accessibility and Navigation Assistance for People with Visual Impairments
People with visual impairments often have to rely on the assistance of sighted guides in airports, which prevents them from having an independent travel experience. To learn about their perspectives on current airport accessibility, we conducted two focus groups that discussed their needs and experiences in depth, as well as the potential role of assistive technologies. We found that independent navigation is a main challenge and severely impacts their overall experience. As a result, we equipped an airport with a Bluetooth Low Energy (BLE) beacon-based navigation system and performed a real-world study in which users navigated routes relevant to their travel experience. We found that, despite the challenging environment, participants were able to complete their itinerary independently, with few or no navigation errors and reasonable completion times. This study presents the first systematic evaluation of BLE technology as a strong approach to increasing the independence of visually impaired people in airports.
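As a toy illustration of how BLE beacon readings can anchor indoor guidance, the Python sketch below converts RSSI to an approximate range with a log-distance path-loss model and snaps to the nearest known landmark; the calibration constants and beacon names are hypothetical, and the deployed system's localization is not described at this level of detail in the abstract.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Log-distance path-loss estimate of range (metres) from a BLE beacon's
    RSSI; the calibration values here are assumptions, not from the paper."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def nearest_beacon(readings):
    """readings: {beacon_id: rssi}. Returns the beacon the user is closest to,
    which a navigation system can map to a known landmark along the route."""
    return min(readings, key=lambda b: rssi_to_distance(readings[b]))

# Hypothetical scan near a departure gate.
print(nearest_beacon({"gate_22": -68, "restroom": -80, "security": -74}))
```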
Effects of Moderation and Opinion Heterogeneity on Attitude towards the Online Deliberation Experience
Online deliberation offers a way for citizens to collectively discuss an issue and provide input for policymakers. The overall experience of online deliberation can be affected by multiple factors. We investigated the effects of moderation and opinion heterogeneity on the perceived deliberation experience by running the first online deliberation experiment in Singapore. Our study took place over three months in three phases. In phase 1, our 2,006 participants answered a survey that we used to create groups of different opinion heterogeneity. During the second phase, 510 participants discussed the population issue on the online platform we developed. We gathered data on their online deliberation experience during phase 3. We found that higher levels of moderation negatively impact the experience of deliberation in terms of perceived procedural fairness, validity claim, and policy legitimacy, and that high opinion heterogeneity is important in order to get a fair assessment of the deliberation experience.
MilliSonic: Pushing the Limits of Acoustic Motion Tracking
Recent years have seen growing interest in device tracking and localization using acoustic signals. State-of-the-art acoustic motion tracking systems, however, do not achieve millimeter accuracy and require large separation between microphones and speakers, and as a result do not meet the requirements of many VR/AR applications. Further, tracking multiple concurrent acoustic transmissions from VR devices today requires sacrificing accuracy or frame rate. We present MilliSonic, a novel system that pushes the limits of acoustic-based motion tracking. Our core contribution is a novel localization algorithm that can provably achieve sub-millimeter 1D tracking accuracy in the presence of multipath, while using only a single beacon with a small 4-microphone array. Further, MilliSonic enables concurrent tracking of up to four smartphones without reducing frame rate or accuracy. Our evaluation shows that MilliSonic achieves 0.7mm median 1D accuracy and 2.6mm median 3D accuracy for smartphones, which is 5x more accurate than state-of-the-art systems. MilliSonic enables two previously infeasible interaction applications: a) 3D tracking of VR headsets using the smartphone as a beacon and b) fine-grained 3D tracking for the Google Cardboard VR system using a small microphone array.
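For orientation, the Python sketch below shows baseline acoustic ranging by cross-correlation time-of-flight; this is explicitly not MilliSonic's algorithm, but it illustrates why sample-level time-of-flight alone is limited to roughly c/fs (about 7 mm at 48 kHz), the kind of barrier a more refined approach must overcome to reach sub-millimeter accuracy.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def distance_from_tof(transmitted, received, sample_rate):
    """Baseline acoustic ranging by cross-correlation time-of-flight.
    Resolution is limited to SPEED_OF_SOUND / sample_rate per sample."""
    correlation = np.correlate(received, transmitted, mode="full")
    lag = np.argmax(correlation) - (len(transmitted) - 1)
    return (lag / sample_rate) * SPEED_OF_SOUND

# Example: a simulated 5 ms near-ultrasonic sweep delayed by 100 samples,
# which corresponds to roughly 0.71 m at 48 kHz.
fs = 48000
t = np.arange(int(0.005 * fs)) / fs
sweep = np.sin(2 * np.pi * (18000 + 1e5 * t) * t)
received = np.concatenate([np.zeros(100), sweep, np.zeros(100)])
print(distance_from_tof(sweep, received, fs))
```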
Casual Microtasking: Embedding Microtasks in Facebook
Microtasks enable people with limited time and context to contribute to a larger task. In this paper we explore casual microtasking, where microtasks are embedded into other primary activities so that they are available to be completed when convenient. We present a casual microtasking experience that inserts writing microtasks from an existing microwriting tool into the user’s Facebook feed. From a two-week deployment of the system with nine people, we observe that casual microtasking enabled participants to get things done during their breaks, and that they tended to do so only after first engaging with Facebook’s social content. Participants were most likely to complete the writing microtasks during periods of the day associated with low focus, and would occasionally use them as a springboard to open the original document in Word. These findings suggest casual microtasking can help people leverage spare micromoments to achieve meaningful micro-goals, and even encourage them to return to work.
Making Sense of Art: Access for Gallery Visitors with Vision Impairments
While there is widespread recognition of the need to provide people with vision impairments (PVI) equitable access to cultural institutions such as art galleries, this is not easy. We present the results of a collaboration with a regional art gallery that wished to open its collection to PVIs in the local community. We describe a novel model that provides three different ways of accessing the gallery, depending upon visual acuity and mobility: virtual tours, self-guided tours, and guided tours. As far as possible, the model supports autonomous exploration by PVIs. It was informed by a value sensitive design exploration of the values and value conflicts of the primary stakeholders.
Co-Design Beyond Words: ‘Moments of Interaction’ with Minimally-Verbal Children on the Autism Spectrum
Existing co-design methods support verbal children on the autism spectrum in the design process, while their minimally-verbal peers are overlooked. We describe Co-Design Beyond Words (CDBW), an approach which merges existing co-design methods with practice-based methods from Speech and Language Therapy which are child-led and interests-based. These emphasise the rich detail that can be conveyed in the moment, through recognising occurrences of, for example, Joint Attention, Turn Taking and Imitation. We worked in an autism-specific primary school over 20 weeks with ten children, aged 5 to 8. We co-designed a playful prototype, the TangiBall, using the three iterative phases of CDBW; the Foundation Phase (preparation for interaction), the Interaction Phase (designing-and-reflecting in the moment) and the Reflection Phase (reflection-on-action). We contribute a novel co-design approach and present moments of interaction, the micro instances in design in which minimally-verbal children on the spectrum can convey meaning beyond words, through their actions, interactions, and attentional foci. These moments of interaction provide design insight, shape design direction, and reveal unique strengths, interests, and abilities.
Emotional Utility and Recall of the Facebook News Feed
We report a laboratory study (N=53) in which participants browsed their own Facebook news feeds for 10-15 minutes, choosing exactly when to quit, and later rated the overall emotional utility of the episode before attempting to recall threads. Finally, the emotional utility of each encountered thread was rated while looking over a recording of the interaction. We report that Facebook browsing was, overall, an emotionally positive experience; that recall of threads exhibited classic primacy and recency serial order effects; that recalled threads were both more positive and more valenced (less neutral) on average, than forgotten threads; and that overall emotional valence judgments were predicted, statistically, by the peak and end thread judgments. We find no evidence that local quit decisions were driven by the emotional utility of threads. In the light of these findings, we discuss the suggestion that emotional utility might partly explain the attractiveness of reading the news feed, and that an emotional memory bias might further increase the attractiveness of the newsfeed in prospect.
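To make the peak-end finding concrete, the small Python helper below extracts the two summary features (most intense thread rating and final thread rating) that the study reports as statistical predictors of overall valence; the -3 to +3 rating scale in the example is an assumption.

```python
import numpy as np

def peak_end_features(thread_ratings):
    """Peak-end summary of a browsing episode: the most extreme thread rating
    and the rating of the last thread encountered."""
    ratings = np.asarray(thread_ratings, dtype=float)
    peak = ratings[np.argmax(np.abs(ratings))]   # most intense moment
    end = ratings[-1]                            # final moment
    return peak, end

# Hypothetical episode: per-thread valence ratings on a -3..+3 scale.
print(peak_end_features([1, 0, 2, -1, 3, 0, 1]))
```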
The Breaking Hand: Skills, Care, and Sufferings of the Hands of an Electronic Waste Worker in Bangladesh
While repair work has recently been getting increasing attention in HCI, recycling practices have still remained relatively understudied, especially in the context of the Global South. To this end, building on our eight-month-long ethnography, this paper reports the electronic waste (‘e-waste’, henceforth) recycling practices among the e-waste recycler (‘bhangari’) communities in Dhaka, Bangladesh. In doing so, this paper offers the work of the bhangaris through an articulation of their hands and their uses. Drawing from a rich body of scholarly work in social science, we define and contextualize three characteristics of the hand of a bhangari: knowledge, care, and skills and collaboration. Our study also highlights the pains and sufferings involved in this profession. By explaining bhangari work through the hand, we also discuss its implications for design, and its connection to HCI’s broader interest in sustainability.
EarTouch: Facilitating Smartphone Use for Visually Impaired People in Mobile and Public Scenarios
Interacting with a smartphone using touch input and speech output is challenging for visually impaired people in mobile and public scenarios, where only one hand may be available for input (e.g., while holding a cane) and using the loudspeaker for speech output is constrained by environmental noise, privacy, and social concerns. To address these issues, we propose EarTouch, a one-handed interaction technique that allows the users to interact with a smartphone using the ear to perform gestures on the touchscreen. Users hold the phone to their ears and listen to speech output from the ear speaker privately. We report how the technique was designed, implemented, and evaluated through a series of studies. Results show that EarTouch is easy, efficient, fun and socially acceptable to use.
Diagnosing and Coping with Mode Errors in Korean-English Dual-language Keyboard
In countries where languages with non-Latin characters are prevalent, people use a keyboard with two language modes, namely the native language and English, and often experience mode errors. To diagnose the mode error problem, we conducted a field study and observed that 78% of the mode errors occurred immediately after application switching. We implemented four methods (Auto-switch, Preview, Smart-toggle, and Preview & Smart-toggle) based on three strategies to deal with the mode error problem and conducted field studies to verify their effectiveness. In the studies considering Korean-English dual input, Auto-switch was ineffective. In contrast, Preview significantly reduced mode errors from 75.1% to 41.3%, and Smart-toggle saved typing cost when recovering from mode errors. In Preview & Smart-toggle, Preview reduced mode errors and Smart-toggle handled 86.2% of the mode errors that slipped past Preview. These results suggest that Preview & Smart-toggle is a promising method for preventing mode errors in the Korean-English dual-input environment.
Impact of Contextual Factors on Snapchat Public Sharing
Public sharing is integral to online platforms. This includes the popular multimedia messaging application Snapchat, on which public sharing is relatively new and unexplored in prior research. In mobile-first applications, sharing contexts are dynamic. However, it is unclear how context impacts users’ sharing decisions. As platforms increasingly rely on user-generated content, it is important to also broadly understand user motivations and considerations in public sharing. We explored these aspects of content sharing through a survey of 1,515 Snapchat users. Our results indicate that users primarily have intrinsic motivations for publicly sharing Snaps, such as to share an experience with the world, but also have considerations related to audience and sensitivity of content. Additionally, we found that Snaps shared publicly were contextually different from those privately shared. Our findings suggest that content sharing systems can be designed to support sharing motivations, yet also be sensitive to private contexts.
Cluster Touch: Improving Touch Accuracy on Smartphones for People with Motor and Situational Impairments
We present Cluster Touch, a combined user-independent and user-specific touch offset model that improves the accuracy of touch input on smartphones for people with motor impairments, and for people experiencing situational impairments while walking. Cluster Touch combines touch examples from multiple users to create a shared user-independent touch model, which is then updated with touch examples provided by an individual user to make it user-specific. Owing to this combination, Cluster Touch allows people to quickly improve the accuracy of their smartphones by providing only 20 touch examples. In a user study with 12 people with motor impairments and 12 people without motor impairments, but who were walking, Cluster Touch improved touch accuracy by 14.65% for the former group and 6.81% for the latter group over the native touch sensor. Furthermore, in an offline analysis of existing mobile interfaces, Cluster Touch improved touch accuracy by 8.21% and 4.84% over the native touch sensor for the two user groups, respectively.
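The sketch below illustrates the general shape of such a combined offset model in Python: a shared set of touch examples is refined with a handful of user-specific ones, and corrections are interpolated with Gaussian distance weighting. The weighting scheme and the fixed upweighting of personal examples are assumptions for illustration, not the exact Cluster Touch formulation.

```python
import numpy as np

def predict_offset(touch_xy, example_xy, example_offsets, bandwidth=60.0):
    """Gaussian-weighted average of nearby example offsets (a simple stand-in
    for a touch offset model)."""
    d2 = np.sum((example_xy - touch_xy) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return (w[:, None] * example_offsets).sum(axis=0) / w.sum()

class CombinedOffsetModel:
    """Shared (user-independent) touch examples refined with ~20 user-specific
    ones, mirroring the combination described in the abstract."""

    def __init__(self, shared_xy, shared_offsets, user_weight=3):
        self.xy = np.asarray(shared_xy, dtype=float)
        self.offsets = np.asarray(shared_offsets, dtype=float)
        self.user_weight = user_weight  # assumption: upweight personal examples

    def add_user_examples(self, user_xy, user_offsets):
        self.xy = np.vstack([self.xy] + [user_xy] * self.user_weight)
        self.offsets = np.vstack([self.offsets] + [user_offsets] * self.user_weight)

    def correct(self, touch_xy):
        return np.asarray(touch_xy) + predict_offset(np.asarray(touch_xy), self.xy, self.offsets)

# Usage with stand-in data (screen coordinates in pixels, offsets in pixels).
shared_xy = np.random.rand(200, 2) * [1080, 1920]
shared_offsets = np.random.randn(200, 2) * 8
model = CombinedOffsetModel(shared_xy, shared_offsets)
model.add_user_examples(np.random.rand(20, 2) * [1080, 1920], np.random.randn(20, 2) * 8)
print(model.correct([540.0, 960.0]))
```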
MindDot: Supporting Effective Cognitive Behaviors in Concept Map-Based Learning Environments
While prior research has revealed the promising impact of concept mapping on learning, few studies have comprehensively modeled different cognitive behaviors during concept mapping. In addition, existing concept mapping tools lack effective feedback to support better learning behaviors. This work presents MindDot, a concept map-based learning environment that facilitates the cognitive process of comparing and integrating related concepts via two forms of support: hyperlink support and an expert template. Study results suggested that both types of support had a positive impact on the development of comparative strategies and that hyperlink support enhanced learning. We further evaluated the cognitive learning progress at a fine-grained level with two forms of visualizations. We then extracted several behavioral patterns that provided insights about the cognitive progress in learning. Lastly, we derive design recommendations that we hope will inspire future intelligent tutoring systems that automatically evaluate students’ learning behaviors and foster them in developing effective learning behaviors.
Changing Perspective: A Co-Design Approach to Explore Future Possibilities of Divergent Hearing
Conventional hearing aids frame hearing impairment almost exclusively as a problem. In the present paper, we took an alternative approach by focusing on positive future possibilities of ‘divergent hearing’. To this end, we developed a method to speculate simultaneously about not-yet-experienced positive meanings and not-yet-existing technology. First, we gathered already existing activities in which divergent hearing was experienced as an advantage rather than as a burden. These activities were then condensed into ‘Prompts of Positive Possibilities’ (PPP), such as ‘Creating a shelter to feel safe in’. In performative sessions, participants were given these PPPs and ‘Open Probes’ to enact novel everyday activities. This led to 26 possible meanings and corresponding devices, such as “Being able to listen back into the past with a rewinder”. The paper provides valuable insights into the interests and expectations of people with divergent hearing as well as a methodological contribution to possibility-driven design.
Failing with Style: Designing for Aesthetic Failure in Interactive Performance
Failure is a common artefact of challenging experiences, a fact of life for interactive systems but also a resource for aesthetic and improvisational performance. We present a study of how three professional pianists performed an interactive piano composition that included playing hidden codes within the music so as to control their path through the piece and trigger system actions. We reveal how apparent failures to play the codes occurred for diverse reasons including mistakes in their playing, limitations of the system, but also deliberate failures as a way of controlling the system, and how these failures provoked aesthetic and improvised responses from the performers. We propose that creative and performative interfaces should be designed to enable aesthetic failures and introduce a taxonomy that compares human approaches to failure with approaches to capable systems, revealing new creative design strategies of gaming, taming, riding and serving the system.
The Channel Matters: Self-disclosure, Reciprocity and Social Support in Online Cancer Support Groups
People with health concerns go to online health support groups to obtain help and advice. To do so, they frequently disclose personal details, many times in public. Although research in non-health settings suggests that people self-disclose less in public than in private, this pattern may not apply to health support groups where people want to get relevant help. Our work examines how the use of private and public channels influences members’ self-disclosure in an online cancer support group, and how channels moderate the influence of self-disclosure on reciprocity and receiving support. By automatically measuring people’s self-disclosure at scale, we found that members of cancer support groups revealed more negative self-disclosure in the public channels compared to the private channels. Although one’s self-disclosure leads others to self-disclose and to provide support, these effects were generally stronger in the private channel. These channel effects probably occur because the public channels are the primary venue for support exchange, while the private channels are mainly used for follow-up conversations. We discuss theoretical and practical implications of our work.
Design Goals for Playful Technology to Support Physical Activity Among Wheelchair Users
Playful technology has the potential to support physical activity (PA) among wheelchair users, but little is known about design considerations for this audience, who experience significant access barriers. In this paper, we leverage the Integrated Behavioural Model (IBM) to understand wheelchair users’ perspectives on PA, technology, and play. First, we present findings from an interview study with eight physically active wheelchair users. Second, we build on the interviews in a survey that received 44 responses from a broader group of wheelchair users. Results show that the anticipation of positive experiences was the strongest predictor of engagement with PA, and that accessibility concerns act as barriers both in terms of PA participation and technology use. We present four design goals – emphasizing enjoyment, involving others, building knowledge and enabling flexibility – to make our findings actionable for researchers and designers wishing to create accessible playful technology to support PA.
Designing ‘True Colors’: A Social Wearable that Affords Vulnerability
Vulnerability is a common experience in everyday life and is frequently perceived as a flaw to be excised in technology design. Yet, research indicates it is an essential aspect of wholehearted living among others. In this paper, we present the design and deployment of ‘True Colors’, a novel wearable device intended to support social interaction in a live action roleplay game (LARP) setting. We describe the Research-through-Design process that helped us to discover and articulate the possibility space of vulnerability in the design of social wearables, as support for producing a sense of social empowerment and connection among wearers within the LARP. We draw conclusions that may be of value to others designing wearables and related technologies aimed at supporting co-located social interaction in games/play.
Investigating Slowness as a Frame to Design Longer-Term Experiences with Personal Data: A Field Study of Olly
We describe the design and deployment of Olly, a domestic music player that enables people to re-experience digital music they listened to in the past. Olly uses its owner’s Last.FM listening history metadata archive to occasionally select a song from their past, but offers no user control over what is selected or when. We deployed Olly in 3 homes for 15 months to explore how its slow pace might support experiences of reflection and reminiscence. Findings revealed that Olly became highly integrated into participants’ lives, with sustained engagement over time. Participants drew on Olly to reflect on past life experiences, and their reactions indicated an increase in the perceived value of their Last.FM archive. Olly also provoked reflections on the temporalities of personal data and technology. Findings are interpreted to present opportunities for future HCI research and practice.
From HCI to HCI-Amusement: Strategies for Engaging what New Technology Makes Old
Notions of what counts as a contribution to HCI continue to be contested as our field expands to accommodate perspectives from the arts and humanities. This paper aims to advance the position of the arts and further contribute to these debates by actively exploring what a “non-contribution” would look like in HCI. We do this by taking inspiration from Fluxus, a collective of artists in the 1950s and 1960s who actively challenged and reworked the practices of fine arts institutions by producing radically accessible, ephemeral, and modest works of “art-amusement.” We use Fluxus to develop three analogous forms of “HCI-amusements,” each of which sheds light on dominant practices and values within HCI by refusing to fit into its logics.
Evaluating the Impact of a Mobile Neurofeedback App for Young Children at School and Home
About 18% of children in industrialized countries suffer from anxiety. We designed a mobile neurofeedback app, called Mind-Full, based on existing design guidelines. Our goal was for young children in lower socio-economic status schools to improve their ability to self-regulate anxiety by using Mind-Full. In this paper we report on quantitative outcomes from a sixteen-week field evaluation with 20 young children (aged 5 to 8). Our methodological contribution includes using a control group, validated measures of anxiety and stress, and assessing transfer and maintenance. Results from teacher and parent behavioral surveys indicated gains in children’s ability to self-regulate anxiety at school and home; a decrease in anxious behaviors at home; and cortisol tests showed variable improvement in physiological stress levels. We contribute to HCI for mental health with evidence that it is viable to use a mobile app in lower socio-economic status schools to improve children’s mental health.
Geodesy: Self-rising 2.5D Tiles by Printing along 2D Geodesic Closed Path
Thermoplastic and Fused Deposition Modeling (FDM) based 4D printing is rapidly expanding to allow space- and material-saving 2D printed sheets to morph into 3D shapes when heated. However, to our knowledge, all the known examples are either origami-based models with obvious folding hinges, or beam-based models with holes on the morphing surfaces. Morphing continuous double-curvature surfaces remains a challenge, both in terms of a tailored toolpath-planning strategy and a computational model that simulates it. Additionally, neither approach takes surface texture as a design parameter in its computational pipeline. To extend the design space of FDM-based 4D printing, in Geodesy we focus on the morphing of continuous double-curvature surfaces or surface textures. We suggest a unique toolpath: printing thermoplastics along 2D closed geodesic paths to form a surface with a single raised continuous double-curvature tile when exposed to heat. The design space is further extended to more complex geometries composed of a network of rising tiles (i.e., surface textures). Both the design components and the computational pipeline are explained in the paper, followed by several printed geometric examples.
In a Silent Way: Communication Between AI and Improvising Musicians Beyond Sound
Collaboration is built on trust, and establishing trust with a creative Artificial Intelligence is difficult when the decision process or internal state driving its behaviour isn’t exposed. When human musicians improvise together, a number of extra-musical cues are used to augment musical communication and expose mental or emotional states which affect musical decisions and the effectiveness of the collaboration. We developed a collaborative improvising AI drummer that communicates its confidence through an emoticon-based visualisation. The AI was trained on musical performance data, as well as real-time skin conductance, of musicians improvising with professional drummers, exposing both musical and extra-musical cues to inform its generative process. Uni- and bi-directional extra-musical communication with real and false values were tested by experienced improvising musicians. Each condition was evaluated using the FSS-2 questionnaire, as a proxy for musical engagement. The results show a positive correlation between extra-musical communication of machine internal state and human musical engagement.
Towards Collaboration Translucence: Giving Meaning to Multimodal Group Data
Collocated, face-to-face teamwork remains a pervasive mode of working, which is hard to replicate online. Team members’ embodied, multimodal interaction with each other and artefacts has been studied by researchers, but due to its complexity, has remained opaque to automated analysis. However, the ready availability of sensors makes it increasingly affordable to instrument work spaces to study teamwork and groupwork. The possibility of visualising key aspects of a collaboration has huge potential for both academic and professional learning, but a frontline challenge is the enrichment of quantitative data streams with the qualitative insights needed to make sense of them. In response, we introduce the concept of collaboration translucence, an approach to make visible selected features of group activity. This is grounded both theoretically (in the physical, epistemic, social and affective dimensions of group activity), and contextually (using domain-specific concepts). We illustrate the approach from the automated analysis of healthcare simulations to train nurses, generating four visual proxies that fuse multimodal data into higher order patterns.
At Your Service: Designing Voice Assistant Personalities to Improve Automotive User Interfaces
This paper investigates personalized voice characters for in-car speech interfaces. In particular, we report on how we designed different personalities for voice assistants and compared them in a real world driving study. Voice assistants have become important for a wide range of use cases, yet current interfaces are using the same style of auditory response in every situation, despite varying user needs and personalities. To close this gap, we designed four assistant personalities (Friend, Admirer, Aunt, and Butler) and compared them to a baseline (Default) in a between-subject study in real traffic conditions. Our results show higher likability and trust for assistants that correctly match the user’s personality while we observed lower likability, trust, satisfaction, and usefulness for incorrectly matched personalities, each in comparison with the Default character. We discuss design aspects for voice assistants in different automotive use cases.
Toward Algorithmic Accountability in Public Services: A Qualitative Study of Affected Community Perspectives on Algorithmic Decision-making in Child Welfare Services
Algorithmic decision-making systems are increasingly being adopted by government public service agencies. Researchers, policy experts, and civil rights groups have all voiced concerns that such systems are being deployed without adequate consideration of potential harms, disparate impacts, and public accountability practices. Yet little is known about the concerns of those most likely to be affected by these systems. We report on workshops conducted to learn about the concerns of affected communities in the context of child welfare services. The workshops involved 83 study participants including families involved in the child welfare system, employees of child welfare agencies, and service providers. Our findings indicate that general distrust in the existing system contributes significantly to low comfort in algorithmic decision-making. We identify strategies for improving comfort through greater transparency and improved communication strategies. We discuss the implications of our study for accountable algorithm design for child welfare applications.
ActiveInk: (Th)Inking with Data
During sensemaking, people annotate insights: underlining sentences in a document or circling regions on a map. They jot down their hypotheses: drawing correlation lines on scatterplots or creating personal legends to track patterns. We present ActiveInk, a system enabling people to seamlessly transition between exploring data and externalizing their thoughts using pen and touch. ActiveInk enables the natural use of pen for active reading behaviors, while supporting analytic actions by activating any of these ink strokes. Through a qualitative study with eight participants, we contribute observations of active reading behaviors during data exploration and design principles to support sensemaking.
Evaluating the Effect of Feedback from Different Computer Vision Processing Stages: A Comparative Lab Study
Computer vision and pattern recognition are increasingly being employed by smartphone and tablet applications targeted at lay-users. An open design challenge is to make such systems intelligible without requiring users to become technical experts. This paper reports a lab study examining the role of visual feedback. Our findings indicate that the stage of processing from which feedback is derived plays an important role in users’ ability to develop coherent and correct understandings of a system’s operation. Participants in our study showed a tendency to misunderstand the meaning being conveyed by the feedback, relating it to processing outcomes and higher level concepts, when in reality the feedback represented low level features. Drawing on the experimental results and the qualitative data collected, we discuss the challenges of designing interactions around pattern matching algorithms.
Smart and Fermented Cities: An Approach to Placemaking in Urban Informatics
What makes a city meaningful to its residents? What attracts people to live in a city and to care for it? Today, we might see such questions as concerns for HCI, given the emerging agendas of smart and connected cities, IoT, and ubiquitous computing: city residents’ perceptions of and attitudes towards smart city technologies will play a role in technology acceptance. Theories of “placemaking” from humanist geography and urban planning address themselves to such concerns, and they have been taken up in HCI and urban informatics research. This theory offers ideas for developing community attachment, heightening the legibility of the city, and intensifying lived experiences in the city. We add to this body of research with an analysis of several initiatives of City Yeast, a community-based design collective in Taiwan that proposes the metaphor of fermentation as an approach to placemaking. We unpack how this approach shapes their design practice and link its implications to urban informatics research in HCI. We suggest that smart cities can also be pursued by leveraging the knowledge of city residents and helping to facilitate their participation in acts of perceiving, envisioning, and improving their local communities, including but not limited to smart and connected technologies.
Smart Home Security Cameras and Shifting Lines of Creepiness: A Design-Led Inquiry
Through a design-led inquiry focused on smart home security cameras, this research develops three key concepts for research and design pertaining to new and emerging digital consumer technologies. Digital leakage names the propensity for digital information to be shared, stolen, and misused in ways unbeknownst or even harmful to those to whom the data pertains or belongs. Hole-and-corner applications are those functions connected to users’ data, devices, and interactions yet concealed from or downplayed to them, often because they are non-beneficial or harmful to them. Foot-in-the-door devices are products and services with functional offerings and affordances that work to normalize and integrate a technology, thus laying groundwork for future adoption of features that might have earlier been rejected as unacceptable or unnecessary. Developed and illustrated through a set of design studies and explorations, this paper shows how these concepts may be used analytically to investigate issues such as privacy and security, anticipatorily to speculate about the future of technology development and use, and generatively to synthesize design concepts and solutions.
Deaf and Hard-of-hearing Individuals’ Preferences for Wearable and Mobile Sound Awareness Technologies
To investigate preferences for mobile and wearable sound awareness systems, we conducted an online survey with 201 DHH participants. The survey explores how demographic factors affect perceptions of sound awareness technologies, gauges interest in specific sounds and sound characteristics, solicits reactions to three design scenarios (smartphone, smartwatch, head-mounted display) and two output modalities (visual, haptic), and probes issues related to the social context of use. While most participants were highly interested in being aware of sounds, this interest was modulated by communication preference, that is, a preference for sign or oral communication or both. Almost all participants wanted both visual and haptic feedback, and 75% preferred to have that feedback on separate devices (e.g., haptic on smartwatch, visual on head-mounted display). Other findings related to sound type, full captions vs. keywords, sound filtering, notification styles, and social context provide direct guidance for the design of future mobile and wearable sound awareness systems.
The Impact of User Characteristics and Preferences on Performance with an Unfamiliar Voice User Interface
Voice User Interfaces (VUIs) are increasing in popularity. However, their invisible nature, with no or limited visuals, makes it difficult for users to interact with unfamiliar VUIs. We analyze the impact of user characteristics and preferences on how users interact with a VUI-based calendar, DiscoverCal. While recent VUI studies analyze user behavior through self-reported data, we extend this research by analyzing both VUI usage data and self-reported data to observe correlations between the two data types. Results from our user study (n=50) led to four key findings: 1) programming experience did not have a widespread impact on performance metrics, while 2) assimilation bias did; 3) participants with more technical confidence exhibited a trial-and-error approach; and 4) desiring more guidance from our VUI correlated with performance metrics that indicate cautious users.
Pinpoint: A PCB Debugging Pipeline Using Interruptible Routing and Instrumentation
Difficulties in accessing, isolating, and iterating on the components and connections of a printed circuit board (PCB) create unique challenges in PCB debugging. Manual probing methods are slow and error prone, and even dedicated PCB testing equipment remains limited by its inability to modify the circuit during testing. We present Pinpoint, a tool that facilitates in-circuit PCB debugging through techniques such as programmatically probing signals, dynamically disconnecting components and subcircuits to test in isolation, and splicing in new elements to explore potential modifications. Pinpoint automatically instruments a PCB design and generates designs for a physical jig board that interfaces the user’s PCB to our custom testing hardware and to software tools. We evaluate Pinpoint’s ability to facilitate the debugging of various PCB issues by instrumenting and testing different classes of boards, as well as by characterizing its technical limitations and by soliciting feedback through a guided exploration with PCB designers.
A Practice-Led Account of the Conceptual Evolution of UX Knowledge
The contours of user experience (UX) design practice have been shaped by a diverse array of practitioners and disciplines, resulting in a diffuse and decentralized body of UX-specific disciplinary knowledge. The rapidly shifting space that UX knowledge occupies, in conjunction with a long-standing research-practice gap, presents unique challenges and opportunities to UX educators and aspiring UX designers. In this paper, we analyzed a corpus of question-and-answer communication on UX Stack Exchange using a practice-led approach, identifying and documenting practitioners’ conceptions of UX knowledge over a nine-year period. Specifically, we used natural language processing techniques and qualitative content analysis to identify a disciplinary vocabulary invoked by UX designers in this online community, as well as conceptual trajectories spanning those nine years, which could shed light on the evolution of UX practice. We further describe the implications of our findings for HCI research and UX education.
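As a simple illustration of surfacing a year-by-year disciplinary vocabulary from Q&A text, the Python sketch below ranks terms by TF-IDF within each year's posts; the three stand-in posts and the choice of TF-IDF are assumptions, not the paper's NLP pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Stand-in corpus: (year, post text) pairs in place of the UX Stack Exchange data.
posts = [
    (2010, "what is the difference between usability and user experience"),
    (2014, "best practices for persona creation in a lean ux process"),
    (2018, "how do design systems change the role of a ux designer"),
]

def top_terms_by_year(posts, k=5):
    """Rank terms by TF-IDF within each year's posts, one simple way to
    sketch how a disciplinary vocabulary shifts over time."""
    by_year = {}
    for year, text in posts:
        by_year.setdefault(year, []).append(text)
    vocab = {}
    for year, texts in by_year.items():
        vec = TfidfVectorizer(stop_words="english")
        scores = vec.fit_transform(texts).sum(axis=0).A1
        terms = vec.get_feature_names_out()
        vocab[year] = [t for t, _ in sorted(zip(terms, scores), key=lambda p: -p[1])[:k]]
    return vocab

print(top_terms_by_year(posts))
```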
Understanding Visual Cues in Visualizations Accompanied by Audio Narrations
It is often assumed that visual cues, which highlight specific parts of a visualization to guide the audience’s attention, facilitate visualization storytelling and presentation. This assumption has not been systematically studied. We present an in-lab experiment and a Mechanical Turk study examining the effects of integral and separable visual cues on the recall and comprehension of visualizations that are accompanied by audio narration. Eye-tracking data from the in-lab experiment confirm that cues helped viewers focus on relevant parts of the visualization faster. We found that, in general, visual cues did not have a significant effect on learning outcomes, but for specific cue techniques (e.g., glow) or specific chart types (e.g., heatmap), cues significantly improved comprehension. Based on these results, we discuss how presenters might select visual cues depending on the role of the cues and the visualization type.
Context-Informed Scheduling and Analysis: Improving Accuracy of Mobile Self-Reports
Mobile self-reports are a popular technique for collecting participant-labelled data in the wild. While the literature has focused on increasing participant compliance with self-report questionnaires, relatively little work has assessed response accuracy. In this paper, we investigate how participant context can affect response accuracy and help identify strategies to improve the accuracy of mobile self-report data. In a 3-week study, we collect over 2,500 questionnaires containing both verifiable and non-verifiable questions. We find that response accuracy is higher for questionnaires that arrive when the phone is not in ongoing or very recent use. Furthermore, our results show that long completion times are an indicator of lower accuracy. Using contextual mechanisms readily available on smartphones, we are able to explain up to 13% of the variance in participant accuracy. We offer actionable recommendations to assist researchers in future deployments of mobile self-report studies.
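In the spirit of the variance-explained analysis described above, the Python sketch below fits an illustrative logistic model relating readily available context signals to whether a verifiable answer was correct; the features and data are hypothetical stand-ins, not the study's dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per questionnaire: [phone recently in use (0/1),
# hour of day, completion time in seconds]; label: verifiable answer correct.
X = np.array([[1, 9, 45], [0, 14, 20], [0, 21, 18],
              [1, 23, 90], [0, 11, 25], [1, 16, 60]])
y = np.array([0, 1, 1, 0, 1, 0])

# Illustrative model relating context to response accuracy (the paper reports
# such context explains up to 13% of the variance in accuracy).
model = LogisticRegression().fit(X, y)
print(model.predict_proba([[0, 10, 22]])[0, 1])  # P(accurate) for a new prompt
```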
BBeep: A Sonic Collision Avoidance System for Blind Travellers and Nearby Pedestrians
We present an assistive suitcase system, BBeep, for supporting blind people when walking through crowded environments. BBeep uses pre-emptive sound notifications to help clear a path by alerting both the user and nearby pedestrians about the potential risk of collision. BBeep triggers notifications by tracking pedestrians and predicting their future positions in real time, emitting sound notifications only when it anticipates a future collision. We investigate how different types and timings of sound affect nearby pedestrian behavior. In our experiments, we found that sound emission timing has a significant impact on nearby pedestrian trajectories when compared to different sound types. Based on these findings, we performed a real-world user study at an international airport, where blind participants navigated with the suitcase in crowded areas. We observed that the proposed system significantly reduces the number of imminent collisions.
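A simplified stand-in for the collision anticipation described above is constant-velocity extrapolation with a safety radius, sketched in Python below; the look-ahead horizon, radius, and 2D positions are assumptions rather than the system's actual prediction model.

```python
import numpy as np

def anticipates_collision(ped_pos, ped_vel, user_pos, user_vel,
                          horizon_s=3.0, radius_m=0.8, step_s=0.1):
    """Extrapolate the pedestrian and the suitcase user at constant velocity
    and flag a future collision if their predicted paths come within a safety
    radius inside the look-ahead horizon."""
    for t in np.arange(0.0, horizon_s, step_s):
        p = np.asarray(ped_pos) + t * np.asarray(ped_vel)
        u = np.asarray(user_pos) + t * np.asarray(user_vel)
        if np.linalg.norm(p - u) < radius_m:
            return True, t   # collision anticipated t seconds from now
    return False, None

# Pedestrian approaching head-on at 1.2 m/s, user walking forward at 1.0 m/s.
print(anticipates_collision([4.0, 0.0], [-1.2, 0.0], [0.0, 0.0], [1.0, 0.0]))
```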
From Gender Biases to Gender-Inclusive Design: An Empirical Investigation
In recent years, research has revealed gender biases in numerous software products. But although some researchers have found ways to improve gender participation in specific software projects, general methods focus mainly on detecting gender biases — not fixing them. To help fill this gap, we investigated whether the GenderMag bias detection method can lead directly to designs with fewer gender biases. In our 3-step investigation, two HCI researchers analyzed an industrial software product using GenderMag; we derived design changes to the product using the biases they found; and ran an empirical study of participants using the original product versus the new version. The results showed that using the method in this way did improve the software’s inclusiveness: women succeeded more often in the new version than in the original; men’s success rates improved too; and the gender gap entirely disappeared.
How to Work in the Car of the Future?: A Neuroergonomical Study Assessing Concentration, Performance and Workload Based on Subjective, Behavioral and Neurophysiological Insights
Autonomous driving provides new opportunities for the use of time during a car ride. One such important scenario is working. We conducted a neuroergonomical study to compare three configurations of a car interior (based on lighting, visual stimulation, sound) regarding their potential to support productive work. We assessed participants’ concentration, performance and workload with subjective, behavioral and EEG measures while they carried out two different concentration tasks during simulated autonomous driving. Our results show that a configuration with a large-area, bright light with high blue components, and reduced visual and auditory stimuli promotes performance, quality, efficiency, increased concentration and lower cognitive workload. Increased visual and auditory stimulation paired with linear, darker light with very few blue components resulted in lower performance, reduced subjective concentration, and higher cognitive workload, but did not differ from a normal car configuration. Our multi-method approach thus reveals possible car interior configurations for an ideal workspace.
Sensing Posture-Aware Pen+Touch Interaction on Tablets
Many status-quo interfaces for tablets with pen + touch input capabilities force users to reach for device-centric UI widgets at fixed locations, rather than sensing and adapting to the user-centric posture. To address this problem, we propose sensing techniques that transition between various nuances of mobile and stationary use via postural awareness. These postural nuances include shifting hand grips, varying screen angle and orientation, planting the palm while writing or sketching, and detecting what direction the hands approach from. To achieve this, our system combines three sensing modalities: 1) raw capacitance touchscreen images, 2) inertial motion, and 3) electric field sensors around the screen bezel for grasp and hand proximity detection. We show how these sensors enable posture-aware pen+touch techniques that adapt interaction and morph user interface elements to suit fine-grained contexts of body-, arm-, hand-, and grip-centric frames of reference.
VirtualBricks: Exploring a Scalable, Modular Toolkit for Enabling Physical Manipulation in VR
Virtual Reality (VR) experiences are often limited by the design of standard controllers. This work aims to liberate the VR developer from these limitations in the physical realm to provide an expressive match to the limitless possibilities in the virtual realm. VirtualBricks is a LEGO-based toolkit that enables construction of a variety of physical-manipulation-enabled controllers for VR, by offering a set of feature bricks that emulate as well as extend the capabilities of default controllers. Based on the LEGO platform, the toolkit provides a modular, scalable solution for enabling passive haptics in VR. We demonstrate the versatility of our designs through a rich set of applications, including re-implementations of artifacts from recent research. We share a VR integration package for Unity and the CAD models for the feature bricks, for easy deployment of VirtualBricks within the community.
Painting with CATS: Camera-Aided Texture Synthesis
We present CATS, a digital painting system that synthesizes textures from live video in real-time, short-cutting the typical brush- and texture- gathering workflow. Through the use of boundary-aware texture synthesis, CATS produces strokes that are non-repeating and blend smoothly with each other. This allows CATS to produce paintings that would be difficult to create with traditional art supplies or existing software. We evaluated the effectiveness of CATS by asking artists to integrate the tool into their creative practice for two weeks; their paintings and feedback demonstrate that CATS is an expressive tool which can be used to create richly textured paintings.
A Comparison of Notification Techniques for Out-of-View Objects in Full-Coverage Displays
Full-coverage displays can place visual content anywhere on the interior surfaces of a room (e.g., a weather display near the coat stand). In these settings, digital artefacts can be located behind the user and out of their field of view – meaning that it can be difficult to notify the user when these artefacts need attention. Although much research has been carried out on notification, little is known about how best to direct people to the necessary location in room environments. We designed five diverse attention-guiding techniques for full-coverage display rooms, and evaluated them in a study where participants completed search tasks guided by the different techniques. Our study provides new results about notification in full-coverage displays: we showed benefits of persistent visualisations that could be followed all the way to the target and that indicate distance-to-target. Our findings provide useful information for improving the usability of interactive full-coverage environments.
An Evaluation of Radar Metaphors for Providing Directional Stimuli Using Non-Verbal Sound
We compared four audio-based radar metaphors for providing directional stimuli to users of AR headsets. The metaphors are clock face, compass, white noise, and scale. Each metaphor, or method, signals the movement of a virtual arm in a radar sweep. In a user study, statistically significant differences were observed for accuracy and response time. Beat-based methods (clock face, compass) elicited responses biased to the left of the stimulus location, and non-beat-based methods (white noise, scale) produced responses biased to the right of the stimulus location. The beat methods were more accurate than the non-beat methods. However, the non-beat methods elicited quicker responses. We also discuss how response accuracy varies along the radar sweep between methods. These observations contribute design insights for non-verbal, non-visual directional prompting.
Integrated Workflows: Generating Feedback Between Digital and Physical Realms
As design thinking shifted away from conventional methods with the rapid adoption of computer-aided design and fabrication technologies, architects have been seeking ways to initiate a comprehensive dialogue between the virtual and the material realms. Current methodologies do not offer embodied workflows that utilize the feedback obtained through a subsequent transition process between physical and digital design. Therefore, narrowing the separation between these two platforms remains a research problem. This literature review elaborates on the divide between physical and digital design, testing, and manufacturing techniques in the morphological process of architectural form. We first review the digital transformation in the architectural design discourse. Then, we proceed by introducing a variety of methods that integrate digital and physical workflows and by suggesting an alternative approach. Our work reveals a need for empirical research with a focus on integrated approaches to create intuitively embodied experiences for architectural designers.
Hackathons as Participatory Design: Iterating Feminist Utopias
Breastfeeding is not only a public health issue, but also a matter of economic and social justice. This paper presents an iteration of a participatory design process to create spaces for re-imagining products, services, systems, and policies that support breastfeeding in the United States. Our work contributes to a growing literature around making hackathons more inclusive and accessible, designing participatory processes that center marginalized voices, and incorporating systems- and relationship-based approaches to problem solving. By presenting an honest assessment of the successes and shortcomings of the first iteration of a hackathon, we explain how we re-structured the second “Make the Breast Pump Not Suck” hackathon in service of equity and systems design. Key to our re-imagining of conventional innovation structures is a focus on experience design, where joy and play serve as key strategies to help people and institutions build relationships across lines of difference. We conclude with a discussion of design principles applicable not only to designers of events, but to social movement researchers and HCI scholars trying to address oppression through the design of technologies and socio-technical systems.
Project Sidewalk: A Web-based Crowdsourcing Tool for Collecting Sidewalk Accessibility Data At Scale
We introduce Project Sidewalk, a new web-based tool that enables online crowdworkers to remotely label pedestrian-related accessibility problems by virtually walking through city streets in Google Street View. To train, engage, and sustain users, we apply basic game design principles such as interactive onboarding, mission-based tasks, and progress dashboards. In an 18-month deployment study, 797 online users contributed 205,385 labels and audited 2,941 miles of Washington DC streets. We compare behavioral and labeling quality differences between paid crowdworkers and volunteers, investigate the effects of label type, label severity, and majority vote on accuracy, and analyze common labeling errors. To complement these findings, we report on an interview study with three key stakeholder groups (N=14) soliciting reactions to our tool and methods. Our findings demonstrate the potential of virtually auditing urban accessibility and highlight tradeoffs between scalability and quality compared to traditional approaches.
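As a minimal illustration of the majority-vote aggregation examined in the accuracy analysis, the sketch below assigns each audited location the label chosen by more than half of its workers; the label names and data are invented for illustration.

    from collections import Counter

    def majority_vote(labels):
        """labels: label strings from different workers for one street location."""
        label, votes = Counter(labels).most_common(1)[0]
        return label if votes > len(labels) / 2 else None  # None = no majority

    print(majority_vote(["curb_ramp", "curb_ramp", "missing_ramp"]))  # curb_ramp
    print(majority_vote(["obstacle", "surface_problem"]))             # None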
HistoryTracker: Minimizing Human Interactions in Baseball Game Annotation
The sport data tracking systems available today are based on specialized hardware (high-definition cameras, speed radars, RFID) to detect and track targets on the field. While effective, implementing and maintaining these systems pose a number of challenges, including high cost and need for close human monitoring. On the other hand, the sports analytics community has been exploring human computation and crowdsourcing in order to produce tracking data that is trustworthy, cheaper and more accessible. However, state-of-the-art methods require a large number of users to perform the annotation, or place too much burden on a single user. We propose HistoryTracker, a methodology that facilitates the creation of tracking data for baseball games by warm-starting the annotation process using a vast collection of historical data. We show that HistoryTracker helps users to produce tracking data in a fast and reliable way.
Moments of Change: Analyzing Peer-Based Cognitive Support in Online Mental Health Forums
Clinical psychology literature indicates that reframing irrational thoughts can help bring positive cognitive change to those suffering from mental distress. Through data from an online mental health forum, we study how these cognitive processes play out in peer-to-peer conversations. Acknowledging the complexity of measuring cognitive change, we first provide an operational definition of a “moment of change” based on sentiment change in online conversations. Using this definition, we propose a predictive model that can identify whether a conversation thread or a post is associated with a moment of cognitive change. Consistent with the psychological literature, we find that markers of language associated with sentiment and affect are the most predictive. Further, cultural differences play an important role: predictive models trained on one country generalize poorly to others. To understand how a moment of change happens, we build a model that explicitly tracks topic and associated sentiment in a forum thread.
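To make the operational definition concrete, the toy sketch below flags a thread as containing a moment of change when the author's lexicon-based sentiment rises sufficiently between their first and last posts; the word lists, threshold, and example thread are placeholders, not the paper's model.

    POSITIVE = {"better", "hope", "thanks", "calm", "relieved"}
    NEGATIVE = {"hopeless", "anxious", "worthless", "alone", "afraid"}

    def sentiment(text):
        words = [w.strip(".,!?").lower() for w in text.split()]
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    def moment_of_change(author_posts, threshold=2):
        """Flag a thread if the author's sentiment rises by at least `threshold`."""
        if len(author_posts) < 2:
            return False
        return sentiment(author_posts[-1]) - sentiment(author_posts[0]) >= threshold

    thread = [
        "I feel hopeless and alone, everything seems worthless.",
        "Talking this through helps. I feel calm and have some hope, thanks.",
    ]
    print(moment_of_change(thread))  # True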
Increasing the Transparency of Research Papers with Explorable Multiverse Analyses
We present explorable multiverse analysis reports, a new approach to statistical reporting where readers of research papers can explore alternative analysis options by interacting with the paper itself. This approach draws from two recent ideas: i) multiverse analysis, a philosophy of statistical reporting where paper authors report the outcomes of many different statistical analyses in order to show how fragile or robust their findings are; and ii) explorable explanations, narratives that can be read as normal explanations but where the reader can also become active by dynamically changing some elements of the explanation. Based on five examples and a design space analysis, we show how combining those two ideas can complement existing reporting approaches and constitute a step towards more transparent research papers.
A Player-Centric Approach to Designing Spatial Skill Training Games
Certain video games show promise as tools for training spatial skills, one of the strongest predictors of future success in STEM. However, little is known about the gaming preferences of those who would benefit the most from such interventions: low spatial skill students. To provide guidance on how to design training games for this population, we conducted a survey of 350 participants from three populations: online college-age, students from a low SES high school, and students from a high SES high school. Participants took a timed test of spatial skills and then answered questions about their demographics, gameplay habits, preferences, and motivations. The only predictors of spatial skill were gender and population: female participants from online and low SES high school populations had the lowest spatial skill. In light of these findings, we provide design recommendations for game-based spatial skill interventions targeting low spatial skill students.
When Do People Trust Their Social Groups?
Trust facilitates cooperation and supports positive outcomes in social groups, including member satisfaction, information sharing, and task performance. Extensive prior research has examined individuals’ general propensity to trust, as well as the factors that contribute to their trust in specific groups. Here, we build on past work to present a comprehensive framework for predicting trust in groups. By surveying 6,383 Facebook Groups users about their trust attitudes and examining aggregated behavioral and demographic data for these individuals, we show that (1) an individual’s propensity to trust is associated with how they trust their groups, (2) smaller, closed, older, more exclusive, or more homogeneous groups are trusted more, and (3) a group’s overall friendship-network structure and an individual’s position within that structure can also predict trust. Last, we demonstrate how group trust predicts outcomes at both the individual and group levels, such as the formation of new friendship ties.
Concept-Driven Visual Analytics: an Exploratory Study of Model- and Hypothesis-Based Reasoning with Visualizations
Visualization tools facilitate exploratory data analysis, but fall short at supporting hypothesis-based reasoning. We conducted an exploratory study to investigate how visualizations might support a concept-driven analysis style, where users can optionally share their hypotheses and conceptual models in natural language, and receive customized plots depicting the fit of their models to the data. We report on how participants leveraged these unique affordances for visual analysis. We found that a majority of participants articulated meaningful models and predictions, utilizing them as entry points to sensemaking. We contribute an abstract typology representing the types of models participants held and externalized as data expectations. Our findings suggest ways for rearchitecting visual analytics tools to better support hypothesis- and model-based reasoning, in addition to their traditional role in exploratory analysis. We discuss the design implications and reflect on the potential benefits and challenges involved.
Pictorial System Usability Scale (P-SUS): Developing an Instrument for Measuring Perceived Usability
We have developed a pictorial multi-item scale, called P-SUS (Pictorial System Usability Scale), which aims to measure the perceived usability of mobile devices. The scale is based on the established verbal usability questionnaire SUS (System Usability Scale). A user-centred design process was employed to develop and refine its 10 pictorial items. The scale was tested in a first validation study (N=60) using student participants. Psychometric properties (convergent validity, criterion-related validity, sensitivity, and reliability), as well as the motivation to fill in the scale were assessed. The results indicated satisfactory convergent validity for about two-thirds of the items. Furthermore, strong correlations were obtained for the sum scores between verbal and pictorial SUS, and the pictorial scale was perceived as more motivating than the verbal questionnaire. The P-SUS represents a first attempt to provide a pictorial usability scale for the evaluation of (mobile) devices.
Designing Second-Screening Experiences for Social Co-Selection and Critical Co-Viewing of Reality TV
Public commentary related to reality TV can be overwhelmed by thoughtless reactions and negative sentiments, which often problematically reinforce the cultural stereotyping typically employed in such media. We describe the design, and month-long evaluation, of a mobile “second-screening” application, Screenr, which uses co-voting and live textual tagging to encourage more critical co-viewing in these contexts. Our findings highlight how Screenr supported interrogation of the production qualities and claims of shows, promoted critical discourse around the motivations of programmes, and engaged participants in reflecting on their own assumptions and views. We situate our results within the context of existing second-screening co-viewing work, discuss implications for such technologies to support critical engagement with socio-political media, and provide design implications for future digital technologies in this domain.
TORC: A Virtual Reality Controller for In-Hand High-Dexterity Finger Interaction
Recent hand-held controllers have explored a variety of haptic feedback sensations for users in virtual reality by producing both kinesthetic and cutaneous feedback from virtual objects. These controllers are grounded to the user’s hand and can only manipulate objects through arm and wrist motions, not using the dexterity of their fingers as they would in real life. In this paper, we present TORC, a rigid haptic controller that renders virtual object characteristics and behaviors such as texture and compliance. Users hold and squeeze TORC using their thumb and two fingers and interact with virtual objects by sliding their thumb on TORC’s trackpad. During the interaction, vibrotactile motors produce sensations to each finger that represent the haptic feel of squeezing, shearing or turning an object. Our evaluation showed that using TORC, participants could manipulate virtual objects more precisely (e.g., position and rotate objects in 3D) than when using a conventional VR controller.
Threats, Abuses, Flirting, and Blackmail: Gender Inequity in Social Media Voice Forums
HCI4D researchers and practitioners have leveraged voice forums to enable people with literacy, socioeconomic, and connectivity barriers to access, report, and share information. Although voice forums have received impassioned usage from low-income, low-literate, rural, tribal, and disabled communities in diverse HCI4D contexts, the participation of women in these services is almost non-existent. In this paper, we investigate the reasons for the low participation of women in social media voice forums by examining the use of Sangeet Swara in India and Baang in Pakistan by marginalized women and men. Our mixed-methods approach, spanning content analysis of audio posts, quantitative analysis of interactions between users, and qualitative interviews with users, indicates gender inequity due to deep-rooted patriarchal values. We found that women on these forums faced systemic discrimination and encountered abusive content, flirts, threats, and harassment. We discuss design recommendations to create social media voice forums that foster gender equity in the use of these services.
Laughing is Scary, but Farting is Cute: A Conceptual Model of Children’s Perspectives of Creepy Technologies
In HCI, adult concerns about technologies for children have been studied extensively. However, less is known about what children themselves find concerning in everyday technologies. We examine children’s technology-related fears by probing their use of the colloquial term “creepy.” To understand children’s perceptions of “creepy technologies,” we conducted four participatory design sessions with children (ages 7 – 11) to design and evaluate creepy technologies, followed by interviews with the same children. We found that children’s fear reactions emphasized physical harm and threats to their relationships (particularly with attachment figures). The creepy signals from technology that the children described include: deception, lack of control, mimicry, ominous physical appearance, and unpredictability. Children acknowledged that trusted adults will mediate the relationship between creepy technology signals and fear responses. Our work contributes a close examination of what children mean when they say a technology is “creepy.” By treating these concerns as principal design considerations, developers can build systems that are more transparent about the risks they produce and more sensitive to the fears they may unintentionally raise.
Causeway: Scaling Situated Learning with Micro-Role Hierarchies
While educational technologies such as MOOCs have helped scale content-based learning, scaling situated learning is still challenging. The time it takes to define a real-world project and to mentor learners is often prohibitive, especially given the limited contributions that novices are able to make. This paper introduces micro-role hierarchies, a form of coordination that integrates workflows and hierarchies to help short-term novices predictably contribute to complex projects. Individuals contribute through micro-roles, small experiential assignments taking roughly 2 hours. These micro-roles support execution of the desired work process, but also sequence into learning pathways, resulting in a learning dynamic similar to moving up an organizational hierarchy. We demonstrate micro-role hierarchies through Causeway, a platform for learning web development while building websites for nonprofits. We carry out a proof-of-concept study in which learners built static websites for refugee resettlement agencies in 2-hour-long roles.
Modeling Mobile Interface Tappability Using Crowdsourcing and Deep Learning
Tapping is an immensely important gesture in mobile touchscreen interfaces, yet people still frequently are required to learn which elements are tappable through trial and error. Predicting human behavior for this everyday gesture can help mobile app designers understand an important aspect of the usability of their apps without having to run a user study. In this paper, we present an approach for modeling tappability of mobile interfaces at scale. We conducted large-scale data collection of interface tappability over a rich set of mobile apps using crowdsourcing and computationally investigated a variety of signifiers that people use to distinguish tappable versus not-tappable elements. Based on the dataset, we developed and trained a deep neural network that predicts how likely a user will perceive an interface element as tappable versus not tappable. Using the trained tappability model, we developed TapShoe, a tool that automatically diagnoses mismatches between the tappability of each element as perceived by a human user (predicted by our model) and the intended or actual tappable state of the element specified by the developer or designer. Our model achieved reasonable accuracy: a mean precision of 90.2% and recall of 87.0% in matching human perception on identifying tappable UI elements. The tappability model and TapShoe were well received by designers in an informal evaluation with 7 professional interaction designers.
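As a much-simplified stand-in for the deep tappability model described above, the sketch below trains a logistic-regression classifier over a few hand-picked element features and flags a TapShoe-style mismatch; the features, data, and threshold are invented for illustration and do not reflect the paper's network or dataset.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Per-element features: [has_button_styling, text_contrast, has_icon, relative_size]
    X = np.array([
        [1, 0.9, 1, 0.10],
        [0, 0.4, 0, 0.30],
        [1, 0.8, 0, 0.05],
        [0, 0.2, 0, 0.60],
        [1, 0.7, 1, 0.08],
        [0, 0.3, 0, 0.50],
    ])
    y = np.array([1, 0, 1, 0, 1, 0])  # crowdsourced "perceived tappable" labels

    model = LogisticRegression().fit(X, y)

    new_element = np.array([[1, 0.85, 1, 0.12]])
    p_tappable = model.predict_proba(new_element)[0, 1]

    # A TapShoe-style check compares perception with the declared clickable flag.
    intended_tappable = False
    if (p_tappable >= 0.5) != intended_tappable:
        print(f"mismatch: users will likely perceive this element as tappable "
              f"(p = {p_tappable:.2f}) but it is not clickable")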
CodeGazer: Making Code Navigation Easy and Natural With Gaze Input
Navigating source code, an activity common in software development, is time consuming and in need of improvement. We present CodeGazer, a prototype for source code navigation using eye gaze for common navigation functions. These functions include actions such as “Go to Definition” and “Find All Usages” of an identifier, navigating to files and methods, moving back and forth between visited points in code, and scrolling. We present user study results showing that many users liked and even preferred the gaze-based navigation, in particular the “Go to Definition” function. Gaze-based navigation also held up well in completion time when compared to traditional methods. We discuss how eye gaze can be integrated into traditional mouse & keyboard applications in order to make “look up” tasks more natural.
Analyzing Value Discovery in Design Decisions Through Ethicography
HCI scholarship is increasingly concerned with the ethical impact of socio-technical systems. Current theoretically driven approaches that engage with ethics generally prescribe only abstract approaches by which designers might consider values in the design process. However, there is little guidance on methods that promote value discovery, which might lead to more specific examples of relevant values in specific design contexts. In this paper, we elaborate a method for value discovery, identifying how values impact the designer’s decision making. We demonstrate the use of this method, called Ethicography, in describing value discovery and use throughout the design process. We present analysis of design activity by user experience (UX) design students in two lab protocol conditions, describing specific human values that designers considered for each task, and visualizing the interplay of these values. We identify opportunities for further research, using the Ethicography method to illustrate value discovery and translation into design solutions.
A Field Study of Computer-Security Perceptions Using Anti-Virus Customer-Support Chats
Understanding users’ perceptions of suspected computer-security problems can help us tailor technology to better protect users. To this end, we conducted a field study of users’ perceptions using 189,272 problem descriptions sent to the customer-support desk of a large anti-virus vendor from 2015 to 2018. Using qualitative methods, we analyzed 650 problem descriptions to study the security issues users faced and the symptoms that led users to their own diagnoses. Subsequently, we investigated to what extent and for what types of issues user diagnoses matched those of experts. We found, for example, that users and experts were likely to agree for most issues, but not for attacks (e.g., malware infections), for which they agreed only in 44% of the cases. Our findings inform several user-security improvements, including how to automate interactions with users to resolve issues and to better communicate issues to users.
DataSelfie: Empowering People to Design Personalized Visuals to Represent Their Data
Many personal informatics systems allow people to collect and manage personal data and reflect more deeply about themselves. However, these tools rarely offer ways to customize how the data is visualized. In this work, we investigate the question of how to enable people to determine the representation of their data. We analyzed the Dear Data project to gain insights into the design elements of personal visualizations. We developed DataSelfie, a novel system that allows individuals to gather personal data and design custom visuals to represent the collected data. We conducted a user study to evaluate the usability of the system as well as its potential for individual and collaborative sensemaking of the data.
PlaneVR: Social Acceptability of Virtual Reality for Aeroplane Passengers
Virtual reality (VR) headsets allow wearers to escape their physical surroundings, immersing themselves in a virtual world. Although escape may not be realistic or acceptable in many everyday situations, air travel is one context where early adoption of VR could be very attractive. While travelling, passengers are seated in restricted spaces for long durations, reliant on limited seat-back displays or mobile devices. This paper explores the social acceptability and usability of VR for in-flight entertainment. In an initial survey, we captured respondents’ attitudes towards the social acceptability of VR headsets during air travel. Based on the survey results, we developed a VR in-flight entertainment prototype and evaluated this in a focus group study. Our results discuss methods for improving the acceptability of VR in-flight, including using mixed reality to help users transition between virtual and physical environments and supporting interruption from other co-located people.
B-Script: Transcript-based B-roll Video Editing with Recommendations
In video production, inserting B-roll is a widely used technique to enrich the story and make a video more engaging. However, determining the right content and positions of B-roll and actually inserting it within the main footage can be challenging, and novice producers often struggle to get both timing and content right. We present B-Script, a system that supports B-roll video editing via interactive transcripts. B-Script has a built-in recommendation system trained on expert-annotated data, recommending B-roll positions and content to users. To evaluate the system, we conducted a within-subject user study with 110 participants, and compared three interface variations: a timeline-based editor, a transcript-based editor, and a transcript-based editor with recommendations. Users found it easier and were faster to insert B-roll using the transcript-based interface, and they created more engaging videos when recommendations were provided.
A Rough Sketch of the Freehand Drawing Process: Blending the Line between Action and Artifact
Dynamic elements of the drawing process (e.g., order of compilation, speed, length, and pressure of strokes) are considered important because they can reveal the technique, process, and emotions of the artist. To explore how sensing, visualizing, and sharing these aspects of the creative process might shape art making and art viewing experiences, we designed a research probe which unobtrusively tracks and visualizes the movement and pressure of the artist’s pencil on an easel. Using our probe, we conducted studies with artists and art viewers, which reveal digital and physical representations of creative process as a means of reflecting on a multitude of factors about the finished artwork, including technique, style, and the emotions of the artists. We conclude by discussing future directions for HCI systems that sense and visualize aspects of the creative process in digitally-mediated arts, as well as the social considerations of sharing and curating intimate process information.
What Can Gestures Tell?: Detecting Motor Impairment in Early Parkinson’s from Common Touch Gestural Interactions
Parkinson’s disease (PD) is a chronic neurological disorder causing progressive disability that severely affects patients’ quality of life. Although early interventions can provide significant benefits, PD diagnosis is often delayed due to both the mildness of early signs and the high requirements imposed by traditional screening and diagnosis methods. In this paper, we explore the feasibility and accuracy of detecting motor impairment in early PD via sensing and analyzing users’ common touch gestural interactions on smartphones. We investigate four types of common gestures, including flick, drag, pinch, and handwriting gestures, and propose a set of features to capture PD motor signs. Through a 102-subject (35 early PD subjects and 67 age-matched controls) study, our approach achieved an AUC of 0.95 and 0.89/0.88 sensitivity/specificity in discriminating early PD subjects from healthy controls. Our work constitutes an important step towards unobtrusive, implicit, and convenient early PD detection from routine smartphone interactions.
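To suggest what gesture-derived motor features can look like, the sketch below computes duration, mean speed, and speed variability from a timestamped flick trace; these generic features are illustrative examples only and are not the paper's exact feature set or classifier.

    import math
    from statistics import mean, pstdev

    def gesture_features(trace):
        """trace: list of (t, x, y) samples from a flick or drag gesture."""
        speeds = []
        for (t0, x0, y0), (t1, x1, y1) in zip(trace, trace[1:]):
            dt = t1 - t0
            if dt > 0:
                speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
        return {
            "duration_s": trace[-1][0] - trace[0][0],
            "mean_speed_px_per_s": mean(speeds),
            "speed_variability": pstdev(speeds),
        }

    flick = [(0.00, 10, 400), (0.02, 14, 380), (0.04, 22, 350), (0.06, 35, 310)]
    print(gesture_features(flick))  # features like these could feed a classifier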
Understanding the Boundaries between Policymaking and HCI
There is a growing body of literature in HCI examining the intersection between policymaking and technology research. However, what it means to engage in policymaking in our field, or the ways in which evidence from HCI studies is translated into policy, is not well understood. We report on interviews with 11 participants working at the intersection of technology research and policymaking. Analysis of this data highlights how evidence is understood and made sense of in policymaking processes, what forms of evidence are privileged over others, and the work that researchers engage in to meaningfully communicate their work to policymaking audiences. We discuss how our findings pose challenges for certain traditions of research in HCI, yet also open up new policy opportunities for those engaging in more speculative research practices. We conclude by discussing three ways forward that the HCI community can explore to increase engagement with policymaking contexts.
Towards an Effective Digital Literacy Intervention to Assist Returning Citizens with Job Search
Returning citizens (formerly incarcerated individuals) face great challenges finding employment, and these are exacerbated by the need for digital literacy in modern job search. Through 23 semi-structured interviews and a pilot digital literacy course with returning citizens in the Greater Detroit area, we explore tactics and needs with respect to job search and digital technology. Returning citizens exhibit great diversity, but overall, we find our participants to have striking gaps in digital literacy upon release, even as they are quickly introduced to smartphones by friends and family. They tend to have employable skills and ability to use offline social networks to find opportunities, but have little understanding of formal job search processes, online or offline. They mostly mirror mainstream use of mobile technology, but they have various reasons to avoid social media. These and other findings lead to recommendations for digital literacy programs for returning citizens.
Comparing Data from Chatbot and Web Surveys: Effects of Platform and Conversational Style on Survey Response Quality
This study aims to explore the feasibility of a text-based virtual agent as a new survey method to overcome the web survey’s common response quality problems, which are caused by respondents’ inattention. To this end, we conducted a 2 (platform: web vs. chatbot) × 2 (conversational style: formal vs. casual) experiment. We used satisficing theory to compare the responses’ data quality. We found that the participants in the chatbot survey, as compared to those in the web survey, were more likely to produce differentiated responses and were less likely to satisfice; the chatbot survey thus resulted in higher-quality data. Moreover, when a casual conversational style was used, the participants were less likely to satisfice, although such effects were only found in the chatbot condition. These results imply that conversational interactivity occurs when a chat interface is accompanied by messages with an effective tone. Based on an analysis of the qualitative responses, we also showed that a chatbot could perform part of a human interviewer’s role by applying effective communication strategies.
Direct Finger Manipulation of 3D Object Image with Ultrasound Haptic Feedback
In this study, we prototype and examine a system that allows a user to manipulate a 3D virtual object with multiple fingers without wearing any device. An autostereoscopic display produces a 3D image and a depth sensor measures the movement of the fingers. When a user touches a virtual object, haptic feedback is provided by ultrasound phased arrays. By estimating the cross section of the finger in contact with the virtual object and by creating a force pattern around it, it is possible for the user to recognize the position of the surface relative to the finger. To evaluate our system, we conducted two experiments to show that the proposed feedback method is effective in recognizing the object surface and thereby enables the user to grasp the object quickly without seeing it.
ExerCube vs. Personal Trainer: Evaluating a Holistic, Immersive, and Adaptive Fitness Game Setup
Today’s spectrum of playful fitness solutions features systems that are clearly game-first or fitness-first in design; hardly any sufficiently incorporate both areas. Consequently, existing applications and evaluations often lack focus on attractiveness and effectiveness, which should be addressed at the levels of body, controller, and game scenario following a holistic design approach. To contribute to this topic and as a proof of concept, we designed the ExerCube, an adaptive fitness game setup. We evaluated participants’ multi-sensory and bodily experiences with a non-adaptive and an adaptive ExerCube version and compared them with personal training to reveal insights to inform the next iteration of the ExerCube. Regarding flow, enjoyment and motivation, the ExerCube is on par with personal training. Results further reveal differences in perception of exertion, types and quality of movement, social factors, feedback, and audio experiences. Finally, we derive considerations for future research and development directions in holistic fitness game setups.
Tough Times at Transitional Homeless Shelters: Considering the Impact of Financial Insecurity on Digital Security and Privacy
Addressing digital security and privacy issues can be particularly difficult for users who face challenging circumstances. We performed semi-structured interviews with residents and staff at 4 transitional homeless shelters in the U.S. San Francisco Bay Area (n=15 residents, 3 staff) to explore their digital security and privacy challenges. Based on these interviews, we outline four tough times themes — challenges experienced by our financially insecure participants that impacted their digital security and privacy — which included: (1) limited financial resources, (2) limited access to reliable devices and Internet, (3) untrusted relationships, and (4) ongoing stress. We provide examples of how each theme impacts digital security and privacy practices and needs. We then use these themes to provide a framework outlining opportunities for technology creators to better support users facing security and privacy challenges related to financial insecurity.
The Effect of Audiences on the User Experience with Conversational Interfaces in Physical Spaces
How does the presence of an audience influence the social interaction with a conversational system in a physical space? To answer this question, we analyzed data from an art exhibit where visitors interacted in natural language with three chatbots representing characters from a book. We performed two studies to explore the influence of audiences. In Study 1, we did fieldwork cross-analyzing the reported perception of the social interaction, the audience conditions (visitor is alone, visitor is observed by acquaintances and/or strangers), and control variables such as the visitor’s familiarity with the book and gender. In Study 2, we analyzed over 5,000 conversation logs and video recordings, identifying dialogue patterns and how they correlated with the audience conditions. Some significant effects were found, suggesting that conversational systems in physical spaces should be designed based on whether other people observe the user or not.
Unobtrusively Enhancing Reflection-in-Action of Teachers through Spatially Distributed Ambient Information
Reflecting on their performance during classroom teaching is an important competence for teachers. Such reflection-in-action (RiA) enables them to optimize teaching on the spot. But RiA is also challenging, demanding extra thinking in teachers’ already intensive routines. Little is known about how HCI systems can facilitate teachers’ RiA during classroom teaching. To fill this gap, we evaluate ClassBeacons, a system that uses spatially distributed lamps to depict teachers’ ongoing performance in terms of how they have divided their time and attention over students in the classroom. Empirical qualitative data from eleven teachers in 22 class periods show that this ambient information facilitated teachers’ RiA without burdening teaching in progress. Based on our theoretical grounding and field evaluation, we contribute empirical knowledge about how an HCI system enhanced teachers’ process of RiA as well as a set of design principles for unobtrusively supporting RiA.
Towards Effective Foraging by Data Scientists to Find Past Analysis Choices
Data scientists are responsible for the analysis decisions they make, but it is hard for them to track the process by which they achieved a result. Even when data scientists keep logs, it is onerous to make sense of the resulting large number of history records full of overlapping variants of code, output, plots, etc. We developed algorithmic and visualization techniques for notebook code environments to help data scientists forage for information in their history. To test these interventions, we conducted a think-aloud evaluation with 15 data scientists, where participants were asked to find specific information from the history of another person’s data science project. The participants succeeded on a median of 80% of the tasks they performed. The quantitative results suggest promising aspects of our design, while qualitative results motivated a number of design improvements. The resulting system, called Verdant, is released as an open-source extension for JupyterLab.
I Don’t Even Have to Bother Them!: Using Social Media to Automate the Authentication Ceremony in Secure Messaging
The privacy guaranteed by secure messaging applications relies on users completing an authentication ceremony to verify they are using the proper encryption keys. We examine the feasibility of social authentication, which partially automates the ceremony using social media accounts. We implemented social authentication in Signal and conducted a within-subject user study with 42 participants to compare this with existing methods. To generalize our results, we conducted a Mechanical Turk survey involving 421 respondents. Our results show that users found social authentication to be convenient and fast. They particularly liked verifying keys asynchronously, and viewing social media profiles naturally coincided with how participants thought of verification. However, some participants reacted negatively to integrating social media with Signal, primarily because they distrust social media services. Overall, automating the authentication ceremony and distributing trust with additional service providers is promising, but this infrastructure needs to be more trusted than social media companies.
Exploring Sound Awareness in the Home for People who are Deaf or Hard of Hearing
The home is filled with a rich diversity of sounds, from mundane beeps and whirs to dog barks and children’s shouts. In this paper, we examine how deaf and hard of hearing (DHH) people think about and relate to sounds in the home, solicit feedback and reactions to initial domestic sound awareness systems, and explore potential concerns. We present findings from two qualitative studies: in Study 1, 12 DHH participants discussed their perceptions of and experiences with sound in the home and provided feedback on initial sound awareness mockups. Informed by Study 1, we designed three tablet-based sound awareness prototypes, which we evaluated with 10 DHH participants using a Wizard-of-Oz approach. Together, our findings suggest a general interest in smarthome-based sound awareness systems, particularly for displaying contextually aware, personalized and glanceable visualizations, but key concerns arose related to privacy, activity tracking, cognitive overload, and trust.
Implicit Communication of Actionable Information in Human-AI teams
Humans expect their collaborators to look beyond the explicit interpretation of their words. Implicature is a common form of implicit communication that arises in natural language discourse when an utterance leverages context to imply information beyond what the words literally convey. Whereas computational methods have been proposed for interpreting and using different forms of implicature, its role in human and artificial agent collaboration has not yet been explored in a concrete domain. The results of this paper provide insights into how artificial agents should be structured to facilitate natural and efficient communication of actionable information with humans. We investigated implicature by implementing two strategies for playing Hanabi, a cooperative card game that relies heavily on communication of actionable implicit information to achieve a shared goal. In a user study with 904 completed games and 246 completed surveys, human players randomly paired with an implicature AI were 71% more likely to think their partner was human than players paired with a non-implicature AI. These teams demonstrated game performance similar to other state-of-the-art approaches.
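As a toy example of one implicature rule in Hanabi, the sketch below treats a rank hint as implicating "this card is playable now" whenever some firework stack needs that rank next; the data structures and rule are simplified assumptions and do not reflect the study's actual agent.

    def implied_playable(hinted_rank, fireworks):
        """fireworks: colour -> highest rank played so far.
        A rank hint implicates playability if some stack needs that rank next."""
        return any(top + 1 == hinted_rank for top in fireworks.values())

    fireworks = {"red": 2, "blue": 0, "green": 4}
    print(implied_playable(3, fireworks))  # True: the red stack needs a 3
    print(implied_playable(2, fireworks))  # False: no stack needs a 2 next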
StreetWise: Smart Speakers vs Human Help in Public Slum Settings
This paper explores the use of conversational speech question and answer systems in the challenging context of public spaces in slums. A major part of this work is a comparison of the source and speed of the given responses; that is, either machine-powered and instant or human-powered and delayed. We examine these dimensions via a two-stage, multi-sited deployment. We report on a pilot deployment that helped refine the system, and a second deployment involving the installation of nine of each type of system within a large Mumbai slum for a 40-day period, resulting in over 12,000 queries. We present the findings from a detailed analysis and comparison of the two question-answer corpora; discuss how these insights might help improve machine-powered smart speakers; and, highlight the potential benefits of multi-sited public speech installations within slum environments.
Ways of Knowing When Research Subjects Care
This paper investigates a hidden dimension of research with real-world stakes: research subjects who care — sometimes deeply — about the topic of the research in which they participate. They manifest this care, we show, by managing how they are represented in the research process, by exercising politics in shaping knowledge production, and sometimes by experiencing trauma in the process. We draw on first-hand reflections on participation in diversity research on Wikipedia, transforming participants from objects of study into active negotiators of the research process. We depict how care, vulnerability, harm, and emotions shape ethnographic and qualitative data. We argue that, especially in reflexive cultures, research subjects are active agents with agendas, accountabilities, and political projects of their own. We propose ethics of care and collaboration to open up new possibilities for knowledge production and socio-technical intervention in HCI.
Design and Evaluation of Service Robot’s Proactivity in Decision-Making Support Process
As service robots are envisioned to provide decision-making support (DMS) in public places, it is becoming essential to design the robot’s manner of offering assistance. For example, robot shop assistants that proactively or reactively give product recommendations may impact customers’ shopping experience. In this paper, we propose an anticipation-autonomy policy framework that models three levels of proactivity (high, medium and low) of service robots in DMS contexts. We conduct a within-subject experiment with 36 participants to evaluate the effects of the DMS robot’s proactivity on user perceptions and interaction behaviors. Results show that a highly proactive robot is deemed inappropriate even though people can get rich information from it. A robot with medium proactivity helps reduce the decision space while maintaining users’ sense of engagement. The least proactive robot grants users more control but may not realize its full capability. We conclude the paper with design considerations for a service robot’s manner.
Empowerment on the Margins: The Online Experiences of Community Health Workers
Research in Human-Computer Interaction for Development (HCI4D) routinely relies on and engages with the increasing penetration of smartphones and the internet. We examine the mobile, internet, and social media practices of women community health workers, for whom internet access has newly become possible. These workers are uniquely positioned at the intersections of various communities of practice—their familial units, workplaces, networks of health workers, larger communities, and the online world. However, they remain at the margins of each, on account of difference in gender, class, literacies, professional expertise, and more. Our findings unpack the legitimate peripheral participation of these workers; examining how they appropriate smartphones and the internet to move away from the peripheries to fully participate in these communities. We discuss how their activities are motivated by moves towards empowerment, digitization, and improved healthcare provision. We consider how future work might support, leverage, and extend their efforts.
Collaborative Practices with Structured Data: Do Tools Support What Users Need?
Collaborative work with data is increasingly common and spans a broad range of activities – from creating or analysing data in a team, to sharing it with others, to reusing someone else’s data in a new context. In this paper, we explore collaboration practices around structured data and how they are supported by current technology. We present the results of an interview study with twenty data practitioners, from which we derive four high-level user needs for tool support. We compare them against the capabilities of twenty systems that are commonly associated with data activities, including data publishing software, wikis, web-based collaboration tools, and online community platforms. Our findings suggest that data-centric collaborative work would benefit from: structured documentation of data and its lifecycle; advanced affordances for conversations among collaborators; better change control; and custom data access. The findings help us formalise practices around data teamwork and build a better understanding of people’s motivations and barriers when working with structured data.
RayCursor: A 3D Pointing Facilitation Technique based on Raycasting
Raycasting is the most common target pointing technique in virtual reality environments. However, performance on small and distant targets is impacted by the accuracy of the pointing device and the user’s motor skills. Pointing facilitation techniques are currently only applied in the context of the virtual hand, i.e. for targets within reach. We propose enhancements to Raycasting: filtering the ray, and adding a controllable cursor on the ray to select the nearest target. We describe a series of studies for the design of the visual feedforward and filtering technique, as well as a comparative study between different 3D pointing techniques. Our results show that highlighting the nearest target is one of the most efficient visual feedforward techniques. We also show that filtering the ray drastically reduces the error rate. Finally, we show the benefits of RayCursor compared to Raycasting and another technique from the literature.
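For intuition, the sketch below combines two RayCursor-style ideas, low-pass filtering of the ray direction and selecting the target nearest to a cursor placed along the ray; the simple exponential filter, the geometry helpers, and the example values are illustrative assumptions, not the paper's exact techniques.

    import math

    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    def smooth_direction(prev_dir, raw_dir, alpha=0.2):
        """Low-pass filter the ray direction (smaller alpha = more smoothing)."""
        blended = tuple(p + alpha * (r - p) for p, r in zip(prev_dir, raw_dir))
        return normalize(blended)

    def nearest_target(origin, direction, cursor_distance, targets):
        """Return the target closest to a cursor positioned along the ray."""
        cursor = tuple(o + cursor_distance * d for o, d in zip(origin, direction))
        return min(targets, key=lambda t: math.dist(cursor, t))

    direction = smooth_direction((0, 0, 1), (0.05, 0.02, 1.0))
    targets = [(0.1, 0.0, 2.0), (0.5, 0.5, 2.0), (-0.3, 0.1, 1.5)]
    print(nearest_target((0, 0, 0), direction, 2.0, targets))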
Finding Information on Non-Rectangular Interfaces
With upcoming breakthroughs in free-form display technologies, new user interface design challenges have emerged. Here, we investigate a question, which has been widely explored on traditional GUIs but unexplored on non-rectangular interfaces: what are the user strategies in terms of visual search when information is not presented in a traditional rectangular layout? To achieve this, we present two complementary studies investigating eye movements in different visual search tasks. Our results unveil which areas are seen first according to different visual structures. By doing so we address the question of where to place relevant content for the UI designers of non-rectangular displays.
Augmentation not Duplication: Considerations for the Design of Digitally-Augmented Comic Books
Digital augmentation of print media can provide contextually relevant audio, visual, or haptic content to supplement the static text and images. The design of such augmentation – its medium, quantity, frequency, content, and access technique – can have a significant impact on the reading experience. In the worst case, such as where children are learning to read, the print medium can become a proxy for accessing digital content only, and the textual content is avoided. In this work, we examine how augmented content can change the reader’s behaviour with a comic book. We first report on the usage of a commercially available augmented comic for children, providing evidence that a third of all readers converted to simply viewing the digital media when printed content is duplicated. Second, we explore the design space for digital content augmentation in print media. Third, we report a user study with 136 children that examined the impact of both content length and presentation in a digitally-augmented comic book. From this, we report a series of design guidelines to assist designers and editors in the development of digitally-augmented print media.
Swire: Sketch-based User Interface Retrieval
Sketches and real-world user interface examples are frequently used in multiple stages of the user interface design process. Unfortunately, finding relevant user interface examples, especially in large-scale datasets, is a highly challenging task because user interfaces have aesthetic and functional properties that are only indirectly reflected by their corresponding pixel data and meta-data. This paper introduces Swire, a sketch-based neural-network-driven technique for retrieving user interfaces. We collect the first large-scale user interface sketch dataset from the development of Swire that researchers can use to develop new sketch-based data-driven design interfaces and applications. Swire achieves high performance for querying user interfaces: for a known validation task, it retrieves the most relevant example within the top-10 results for over 60% of queries. With this technique, for the first time designers can accurately retrieve relevant user interface examples with free-form sketches natural to their design workflows. We demonstrate several novel applications driven by Swire that could greatly augment the user interface design process.
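As a sketch of the retrieval step used in embedding-based systems of this kind, the code below ranks user interfaces by cosine similarity between a query-sketch embedding and pre-computed UI embeddings; the embeddings here are made-up placeholders, whereas Swire learns its embeddings with a neural network.

    import math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    def rank_uis(query_embedding, ui_embeddings, k=10):
        """Return the ids of the k user interfaces most similar to the query sketch."""
        ranked = sorted(ui_embeddings,
                        key=lambda ui: cosine(query_embedding, ui_embeddings[ui]),
                        reverse=True)
        return ranked[:k]

    ui_embeddings = {"login_screen": [0.9, 0.1, 0.0],
                     "settings": [0.1, 0.8, 0.3],
                     "news_feed": [0.2, 0.3, 0.9]}
    print(rank_uis([0.85, 0.2, 0.05], ui_embeddings, k=2))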
DataToon: Drawing Dynamic Network Comics With Pen + Touch Interaction
Comics are an entertaining and familiar medium for presenting compelling stories about data. However, existing visualization authoring tools do not leverage this expressive medium. In this paper, we seek to incorporate elements of comics into the construction of data-driven stories about dynamic networks. We contribute DataToon, a flexible data comic storyboarding tool that blends analysis and presentation with pen and touch interactions. A storyteller can use DataToon to rapidly generate visualization panels, annotate them, and position them within a canvas to produce a visually compelling narrative. In a user study, participants quickly learned to use DataToon for producing data comics.
‘I make up a silly name’: Understanding Children’s Perception of Privacy Risks Online
Children under 11 are often regarded as too young to comprehend the implications of online privacy. Perhaps as a result, little research has focused on younger kids’ risk recognition and coping. Such knowledge is, however, critical for designing efficient safeguarding mechanisms for this age group. Through 12 focus group studies with 29 children aged 6-10 from UK schools, we examined how children described privacy risks related to their use of tablet computers and what information was used by them to identify threats. We found that children could identify and articulate certain privacy risks well, such as information oversharing or revealing real identities online; however, they had less awareness with respect to other risks, such as online tracking or game promotions. Our findings offer promising directions for supporting children’s awareness of cyber risks and the ability to protect themselves online.
Supporting Coping with Parkinson’s Disease Through Self Tracking
Self-tracking can help people understand their medical condition and the factors that influence their symptoms. However, it is unclear how tracking technologies should be tailored to help people cope with the progression of a degenerative disease. To understand how smartphone apps and other tracking technologies can support people in coping with an incurable illness, we interviewed both people with Parkinson’s Disease (n=17) and care partners (n=6) who help people with Parkinson’s manage their lives. We describe how symptom trackers can help people identify and solve problems to improve their quality of life, the role symptom trackers can play in helping people combat their own tendencies towards avoidance and denial, and the complex role of care partners in defining and tracking ambiguous symptoms. Our findings yield insights that can guide the design of tracking technologies to help people with Parkinson’s Disease accept and plan for their condition.
What.Hack: Engaging Anti-Phishing Training Through a Role-playing Phishing Simulation Game
Phishing attacks are a major problem, as evidenced by the DNC hackings during the 2016 US presidential election, in which staff were tricked into sharing passwords by fake Google security emails, granting access to confidential information. Vulnerabilities such as these are due in part to insufficient and tiresome user training in cybersecurity. Ideally, we would have more engaging training methods that teach cybersecurity in an active and entertaining way. To address this need, we introduce the game What.Hack, which not only teaches phishing concepts but also simulates actual phishing attacks in a role-playing game to encourage the player to practice defending themselves. Our user study shows that our game design is more engaging and effective in improving performance than a standard form of training and a competing training game design (which does not simulate phishing attempts through role-playing).
Position Exchange Workshops: A Method to Design for Each Other in Families
Existing methods for researching and designing to support relationships between parents and their adult children tend to lead to designs that respect the differences between them. We conducted 14 Position Exchange Workshops with parents and their adult children who had left home in recent years, aiming to explicate and confront their positions in creative and supportive ways. We designed three co-design methods (Card Sort for Me & You, Would I Lie to You? and A Magic Machine for You) to support participants in exploring, understanding, empathizing with, and designing for each other. The findings show that the methods facilitated understanding, renegotiating, and reimagining participants’ current positions. We discuss how positions can help designers consider both perspectives in the design process. This paper contributes (1) an account of how the notion of positions enables generating understandings of the relationship, and (2) a set of methods, influenced by position exchange, empathy, and playful engagement, that help explore human relationships.
Behavioural Biometrics in VR: Identifying People from Body Motion and Relations in Virtual Reality
Every person is unique, with individual behavioural characteristics: how one moves, coordinates, and uses their body. In this paper we investigate body motion as a behavioural biometric for virtual reality. In particular, we examine which behaviours are suitable for identifying a user. This is valuable in situations where multiple people use a virtual reality environment in parallel, for example in the context of authentication or of adapting the VR environment to users’ preferences. We present a user study (N=22) in which people performed controlled VR tasks (pointing, grabbing, walking, typing) while we recorded their head, hand, and eye motion data over two sessions. These body segments can be arbitrarily combined into body relations, and we found that these movements and their combinations lead to characteristic behavioural patterns. Using classification methods, we present an extensive analysis of which motions and relations are useful for identifying users in which tasks. Our findings are beneficial for researchers and practitioners alike who aim to build novel adaptive and secure user interfaces in virtual reality.
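A hedged sketch of the kind of per-task identification analysis described above, assuming hypothetical motion features (e.g., summary statistics of head, hand, and eye trajectories) and a generic scikit-learn classifier rather than the authors' exact pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical data: one row per task repetition, columns are motion features
# (e.g., mean/std of head, hand, and eye velocities); labels are user IDs.
rng = np.random.default_rng(42)
n_users, reps_per_user, n_features = 22, 20, 12
X = rng.normal(size=(n_users * reps_per_user, n_features))
y = np.repeat(np.arange(n_users), reps_per_user)

# Identify users from their motion features; accuracy well above chance
# (1/22) would indicate characteristic behavioural patterns.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean identification accuracy: {scores.mean():.2f}")
```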
SeeingVR: A Set of Tools to Make Virtual Reality More Accessible to People with Low Vision
Current virtual reality applications do not support people who have low vision, i.e., vision loss that falls short of complete blindness but is not correctable by glasses. We present SeeingVR, a set of 14 tools that enhance a VR application for people with low vision by providing visual and audio augmentations. A user can select, adjust, and combine different tools based on their preferences. Nine of our tools modify an existing VR application post hoc via a plugin without developer effort. The rest require simple inputs from developers using a Unity toolkit we created that allows integrating all 14 of our low vision support tools during development. Our evaluation with 11 participants with low vision showed that SeeingVR enabled users to better enjoy VR and complete tasks more quickly and accurately. Developers also found our Unity toolkit easy and convenient to use.
The Magic Machine Workshops: Making Personal Design Knowledge
New technologies emerge into an increasingly complex everyday life. How can we engage users further into material practices that explore ideas and notions of these new things? This paper proposes a set of qualities for short, intense, workshop-like experiences, created to generate strong individual commitments, and expose underlying personal desires as drivers for ideas. By making use of open-ended making to engage participants in the imagination of new things, we aim to allow a broad range of knowledge to materialise, focused on the making of work that is about technology, rather than of technology.
Thermporal: An Easy-To-Deploy Temporal Thermographic Sensor System to Support Residential Energy Audits
Underperforming, degraded, and missing insulation in US residential buildings is common. Detecting these issues, however, can be difficult. Using thermal cameras during energy audits can aid in locating potential insulation issues, but prior work indicates it is challenging to determine their severity using thermal imagery alone. In this work, we present an easy-to-deploy, temporal thermographic sensor system designed to support residential energy audits through quantitative analysis of building envelope performance. We then offer an evaluation of the system through two studies: (i) a one-week, in-home field study in five homes and (ii) a semi-structured interview study with five professional energy auditors. Our results show our system helps raise awareness, improves homeowners’ ability to gauge the severity of issues, and provides opportunities for new interactions between homeowners, building data, and professional auditors.
Our Friends Electric: Reflections on Advocacy and Design Research for the Voice Enabled Internet
Emerging technologies—such as the voice enabled internet—present many opportunities and challenges for HCI research and society as a whole. Advocating for better, healthier implementations of these technologies will require us to communicate abstract values, such as trust, to an audience that ranges from the general public to technologists and even policymakers. In this paper, we show how a combination of film-making and product design can help to illustrate these abstract values. Working as part of a wider international advocacy campaign, Our Friends Electric focuses on the voice enabled internet, translating abstract notions of Internet Health into comprehensible digital futures for the relationship between our voice and the internet. We conclude with a call for designers of physical things to be more involved with the development of trust, privacy and security in this powerful emerging technological landscape.
(Re-)Framing Menopause Experiences for HCI and Design
Informed by considerations from medicine and wellness research, experience design, investigations of new and emerging technologies, and sociopolitical critique, HCI researchers have demonstrated that women’s health is a complex and rich topic. Turning these research outputs into productive interventions, however, is difficult. We argue that design is well positioned to address such a challenge thanks to its methodological traditions of problem setting and framing situated in synthetic (rather than analytic) knowledge production. In this paper, we focus on designing for experiences of menopause. Building on our prior empirical work on menopause and our commitment to pursue design informed by women’s lived experience, we iteratively generated dozens of design frames and accompanying design crits. We document the unfolding of our design reasoning, showing how good-seeming insights nonetheless often lead to bad designs, while working progressively towards stronger insights and design constructs. The latter we offer as a contribution to researchers and practitioners who work at the intersections of women’s health and design.
Playing Blind: Revealing the World of Gamers with Visual Impairment
Previous research on games for people with visual impairment (PVI) has focused on co-designing or evaluating specific games – mostly under controlled conditions. In this research, we follow a game-agnostic, “in-the-wild” approach, investigating the habits, opinions and concerns of PVI regarding digital games. To explore these issues, we conducted an online survey and follow-up interviews with gamers with VI (GVI). Dominant themes from our analysis include the particular appeal of digital games to GVI, the importance of social trajectories and histories of gameplay, the need to balance complexity and accessibility in both games targeted to PVI and mainstream games, opinions about the state of the gaming industry, and accessibility concerns around new and emerging technologies such as VR and AR. Our study gives voice to an underrepresented group in the gaming community. Understanding the practices, experiences and motivations of GVI provides a valuable foundation for informing development of more inclusive games.
On the Internet, Nobody Knows You’re a Dog… Unless You’re Another Dog
How humans use computers has evolved from human-machine interfaces to human-human computer-mediated communication. Whilst the field of animal-computer interaction has roots in HCI, technology developed in this area currently only supports animal-computer communication. This design fiction paper presents animal-animal connected interfaces, using dogs as an example. Through a co-design workshop, we created six proposals. The designs focused on what a dog internet could look like and how interactions might be presented. Analysis of the narratives and conceived designs indicated that participants’ concerns focused on asymmetries within the interaction. This resulted in the use of objects seen as familiar to dogs. This was conjoined with interest in how to initiate and end interactions, which was often achieved through notification systems. This paper builds upon HCI methods for unconventional users, and applies a design fiction approach to uncover key questions towards the creation of animal-to-animal interfaces.
More than the Sum of Makers: The Complex Dynamics of Diverse Practices at Maker Faire
Human-Computer Interaction has developed great interest in the Maker Movement. Previous work has explored it from various perspectives, focusing either on its potentials or its issues. However, as these are only fragmented portrayals, this paper takes a broader perspective and interconnects some of the fragments. We conducted a qualitative study in the context of two Maker Faires to gain a better understanding of the complex dynamics that makers operate in. We captured the voices of different stakeholders and explored how their respective agendas relate to each other. The findings illustrate how the event is co-created at the nexus of different technological, social and economic interests while leaving space for diverse practices. The paper contributes a first focused analysis of Maker Faire, probes it as a site for research and discusses how holistic perspectives on the Maker Movement could create new research opportunities.
“My blood sugar is higher on the weekends”: Finding a Role for Context and Context-Awareness in the Design of Health Self-Management Technology
Tools for self-care of chronic conditions often do not fit the contexts in which self-care happens, because the influence of context on self-care practices is unclear. We conducted a diary study with 15 adolescents with Type 1 Diabetes and their caregivers to understand how context affects self-care. We observed different contextual settings, which we call contextual frames, in which diabetes self-management varied depending on certain factors – physical activity, food, emotional state, insulin, people, and attitudes. The relative prevalence of these factors across contextual frames impacts self-care, necessitating different types of support. We show that contextual frames, as phenomenological abstractions of context, can help designers of context-aware systems systematically explore and model the relation of context with behavior and with technology supporting behavior. Lastly, considering contextual frames as sensitizing concepts, we provide design directions for using context in technology design.
Modeling Fully and Partially Constrained Lasso Movements in a Grid of Icons
Lassoing objects is a basic function in illustration software and presentation tools. Yet, for many common object arrangements lassoing is sometimes time-consuming to perform and requires precise pen operation. In this work, we studied lassoing movements in a grid of objects similar to icons. We propose a quantitative model to predict the time to lasso such objects depending on the margins between icons, their sizes, and layout, which all affect the number of stopping and crossing movements. Results of two experiments showed that our models predict fully and partially constrained movements with high accuracy. We also analyzed the speed profiles and pen stroke trajectories and identified deeper insights into user behaviors, such as that an unconstrained area can induce higher movement speeds even in preceding path segments.
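The abstract does not give the model's actual form or fitted parameters; as a loose illustration of how stopping and crossing movements could be combined into a completion-time prediction, the sketch below uses a Fitts'-law-style term for constrained stops and a crossing-law-style term for pass-throughs, with made-up coefficients that are assumptions rather than the paper's values.

```python
import math

def predicted_lasso_time(segments, a_stop=0.1, b_stop=0.15, a_cross=0.05, b_cross=0.1):
    """Very rough completion-time sketch for a lasso path.

    `segments` is a list of (kind, distance, width) tuples, where kind is
    'stop' for a fully constrained corner and 'cross' for a pass between
    icons. Coefficients are placeholders, not the paper's fitted values.
    """
    total = 0.0
    for kind, distance, width in segments:
        index_of_difficulty = math.log2(distance / width + 1)
        if kind == "stop":
            total += a_stop + b_stop * index_of_difficulty
        else:  # crossing movement
            total += a_cross + b_cross * index_of_difficulty
    return total

# Hypothetical path around a 2x2 icon group: four corners, four straight passes
path = [("stop", 80, 10), ("cross", 120, 10)] * 4
print(f"Predicted time: {predicted_lasso_time(path):.2f} s")
```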
Sampling Strategy for Ultrasonic Mid-Air Haptics
Mid-air tactile stimulation using ultrasonics has been used in a variety of human-computer interfaces, in the form of prototypes as well as products. When generating tactile patterns with mid-air ultrasonic displays, the common approach has been to sample the patterns using the hardware update rate to its full extent. In the current study we show that the hardware update rate can impact perception, but unexpectedly we find that higher update rates do not improve pattern perception. In a first user study, we highlight the effect of update rate on the perceived strength of a pattern, especially for patterns rendered at a slow rate of less than 10 Hz. In a second user study, we identify how the optimal update rate evolves with variations in pattern size. Our main results show that update rate should be treated as an additional parameter of tactile patterns. We also discuss how the relationships defined in the current study can be implemented into designer tools so that designers are shielded from this additional complexity.
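A small sketch of what "sampling a pattern at a given update rate" amounts to: computing the discrete focal-point positions a device would render for one revolution of a circular pattern. The device parameters and pattern size below are placeholders, not the study's hardware settings.

```python
import math

def sample_circle(radius_mm=20.0, pattern_rate_hz=5.0, update_rate_hz=200.0):
    """Return the focal-point positions for one revolution of a circular pattern.

    The pattern is drawn pattern_rate_hz times per second; the device updates
    the focal point update_rate_hz times per second, so one revolution uses
    update_rate_hz / pattern_rate_hz discrete samples.
    """
    samples_per_revolution = int(update_rate_hz / pattern_rate_hz)
    return [(radius_mm * math.cos(2 * math.pi * i / samples_per_revolution),
             radius_mm * math.sin(2 * math.pi * i / samples_per_revolution))
            for i in range(samples_per_revolution)]

# Lower update rates yield coarser renderings of the same circle.
print(len(sample_circle(update_rate_hz=200.0)))  # 40 points per revolution
print(len(sample_circle(update_rate_hz=40.0)))   # 8 points per revolution
```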
Mapping the Margins: Navigating the Ecologies of Domestic Violence Service Provision
Work addressing the negative impacts of domestic violence on victim-survivors and service providers has slowly been contributing to the HCI discourse. However, work discussing the necessary, pre-emptive steps for researchers to enter these spaces sensitively and considerately largely remains opaque. Heavily politicised specialisms that are imbued with conflicting values and practices, such as domestic violence service delivery, can be especially difficult to navigate. In this paper, we report on a mixed-methods study consisting of interviews, a design dialogue and an ideation workshop with domestic violence service providers to explore the potential of an online service directory to support their work. Through this three-stage research process, we were able to characterise this unique service delivery landscape and identify tensions in services’ access, understandings of technologies and working practices. Drawing from our findings, we discuss opportunities for researchers to work with and sustain complex information ecologies in sensitive settings.
Evaluating the Impact of Pseudo-Colour and Coordinate System on the Detection of Medication-induced ECG Changes
The electrocardiogram (ECG), a graphical representation of the heart’s electrical activity, is used for detecting cardiac pathologies. Certain medications can produce a complication known as ‘long QT syndrome’, shown on the ECG as an increased gap between two parts of the waveform. Self-monitoring for this could be lifesaving, as the syndrome can result in sudden death, but detecting it on the ECG is difficult. Here we evaluate whether using pseudo-colour to highlight wave length and changing the coordinate system can support lay people in identifying increases in the QT interval. The results show that introducing colour significantly improves accuracy, and that whilst it is easier to detect a difference without colour with Cartesian coordinates, the greatest accuracy is achieved when Polar coordinates are combined with colour. The results show that applying simple visualisation techniques has the potential to improve ECG interpretation accuracy, and support people in monitoring their own ECG.
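As an informal illustration of the visual encodings compared above, the code below renders a synthetic ECG-like trace in both Cartesian and Polar coordinates and pseudo-colours the samples by elapsed time within the beat. The waveform, colour scale, and layout are assumptions for demonstration, not the study's stimuli.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic one-beat "ECG" trace (placeholder waveform, not clinical data)
t = np.linspace(0, 1, 500)                              # normalised time within the beat
signal = (np.exp(-((t - 0.3) ** 2) / 0.0005)            # R-like spike
          + 0.2 * np.exp(-((t - 0.65) ** 2) / 0.004))   # T-like wave

fig = plt.figure(figsize=(9, 4))
ax_cart = fig.add_subplot(1, 2, 1)
ax_polar = fig.add_subplot(1, 2, 2, projection="polar")

# Cartesian view, pseudo-coloured by time within the beat
ax_cart.scatter(t, signal, c=t, cmap="viridis", s=4)
ax_cart.set_title("Cartesian + pseudo-colour")

# Polar view: angle encodes time within the beat, radius encodes amplitude
ax_polar.scatter(2 * np.pi * t, 1 + signal, c=t, cmap="viridis", s=4)
ax_polar.set_title("Polar + pseudo-colour")

plt.tight_layout()
plt.show()
```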
Discovering Alternative Treatments for Opioid Use Recovery Using Social Media
Opioid use disorder (OUD) poses substantial risks to personal well-being and public health. In online communities, users support those seeking recovery, in part by promoting clinically grounded treatments. However, some communities also promote clinically unverified OUD treatments, such as unregulated and untested drugs. Little research exists on which alternative treatments people use, whether these treatments are effective for recovery, or whether they cause negative side effects. We provide the first large-scale social media study of clinically unverified, alternative treatments in OUD recovery on Reddit, partnering with an addiction research scientist. We adopt transfer learning across 63 subreddits to precisely identify posts related to opioid recovery. Then, we quantitatively discover potential alternative treatments and contextualize their effectiveness. Our work benefits health research and practice by identifying undiscovered recovery strategies. We also discuss the impacts on online communities dealing with stigmatized behavior, and on research ethics.
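A minimal stand-in for the post-identification step: train a text classifier on labelled posts from one community and apply it to posts from another. This simplification swaps a plain TF-IDF scikit-learn pipeline for the paper's transfer-learning setup, and all data below are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder labelled posts (1 = recovery-related, 0 = not) from a "source" community
source_posts = [
    "day 30 without opioids, tapering went better than expected",
    "looking for advice on withdrawal symptoms and recovery",
    "selling my old bike, local pickup only",
    "what is a good laptop for programming",
]
source_labels = [1, 1, 0, 0]

# Train on the source community, then screen posts from other subreddits.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(source_posts, source_labels)

target_posts = ["anyone tried kratom to ease opioid withdrawal?"]
print(clf.predict(target_posts))  # 1 would flag the post for closer analysis
```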
Shape Changing Surfaces and Structures: Design Tools and Methods for Electroactive Polymers
Electroactive polymers (EAP) are a promising material for shape changing interfaces, soft robotics and other novel design explorations. However, the uptake of EAP prototyping in design, art and architecture has been slow due to limited commercial availability, challenging high voltage electronics and lack of simple fabrication techniques. This paper introduces DIY tools for building and activating EAP prototypes, together with design methods for making novel shape-changing surfaces and structures, outside of material science labs. We present iterations of our methods and tools, their use and evaluation in participatory workshops and public installations and how they affect the design outcomes. We discuss unique aesthetic and interactive experiences enabled by the organic and subtle movement of semi-transparent EAP membranes. Finally, we summarise the potential of design tools and methods to facilitate increased exploration of interactive EAP prototypes and outline future steps.
How Data Science Workers Work with Data: Discovery, Capture, Curation, Design, Creation
With the rise of big data, there has been an increasing need for practitioners in this space and an increasing opportunity for researchers to understand their workflows and design new tools to improve them. Data science is often described as data-driven, comprising unambiguous data and proceeding through regularized steps of analysis. However, this view focuses more on abstract processes, pipelines, and workflows, and less on how data science workers engage with the data. In this paper, we build on the work of other CSCW and HCI researchers in describing the ways that scientists, scholars, engineers, and others work with their data, through analyses of interviews with 21 data science professionals. We set five approaches to data along a dimension of intervention: data as given; as captured; as curated; as designed; and as created. Data science workers develop an intuitive sense of their data and processes, and actively shape their data. We propose new ways to apply these interventions analytically, to make sense of the complex activities around data practices.
An Autonomy-Perspective on the Design of Assistive Technology Experiences of People with Multiple Sclerosis
In HCI and Assistive Technology design, autonomy is regularly equated with independence. This is a shortcut and leaves out design opportunities by omitting a more nuanced idea of autonomy. To improve our understanding of how people with severe physical disabilities experience autonomy, particularly in the context of Assistive Technologies, we engaged in in-depth fieldwork with 15 people with Multiple Sclerosis who were used to assistive devices. We constructed a grounded theory from a series of interviews, focus groups and observations, pointing to strategies in which participants sought autonomy either in the short-term (managing their daily energy reserve) or in the long-term (making future plans). The theory shows how factors like enabling technologies, capital (human, social, psychological resources), and compatibility with daily practices facilitated a sense of being in control for our participants. Moreover, we show how over-ambitious or bad design (e.g., paternalism) can lead to opposite results and restrict autonomy.
VizML: A Machine Learning Approach to Visualization Recommendation
Visualization recommender systems aim to lower the barrier to exploring basic visualizations by automatically generating results for analysts to search and select, rather than manually specify. Here, we demonstrate a novel machine learning-based approach to visualization recommendation that learns visualization design choices from a large corpus of datasets and associated visualizations. First, we identify five key design choices made by analysts while creating visualizations, such as selecting a visualization type and choosing to encode a column along the X- or Y-axis. We train models to predict these design choices using one million dataset-visualization pairs collected from a popular online visualization platform. Neural networks predict these design choices with high accuracy compared to baseline models. We report and interpret feature importances from one of these baseline models. To evaluate the generalizability and uncertainty of our approach, we benchmark with a crowdsourced test set, and show that the performance of our model is comparable to human performance when predicting consensus visualization type, and exceeds that of other visualization recommender systems.
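A hedged sketch of the prediction task described above, assuming hand-made dataset-level features (column count, fraction numeric, and so on) and a small scikit-learn neural network in place of the paper's models; the feature names, labels, and data are illustrative only.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical dataset-level features:
# [number of columns, fraction numeric, fraction temporal, mean cardinality]
X = np.array([
    [2, 1.0, 0.0, 50],    # two numeric columns
    [2, 0.5, 0.5, 300],   # one numeric + one temporal column
    [3, 0.33, 0.0, 8],    # mostly categorical
    [2, 1.0, 0.0, 40],
    [2, 0.5, 0.5, 250],
    [3, 0.33, 0.0, 10],
])
y = ["scatter", "line", "bar", "scatter", "line", "bar"]  # chosen visualization type

# Predict the visualization type an analyst would likely choose for a new dataset.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([[2, 1.0, 0.0, 60]]))  # likely "scatter"
```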
"This Girl is on Fire": Sensemaking in an Online Health Community for Vulvodynia
Online health communities (OHCs) allow people living with a shared diagnosis or medical condition to connect with peers for social support and advice. OHCs have been well studied in conditions like diabetes and cancer, but less is known about their role in enigmatic diseases with unknown or complex causal mechanisms. In this paper, we study one such condition: Vulvodynia, a chronic pain syndrome of the vulvar region. Through observations of and interviews with members of a vulvodynia Facebook group, we found that while the interaction types are broadly similar to those found in other OHCs, the women spent more time seeking basic information and building individualized management plans. They also encounter significant emotional and interpersonal challenges, which they discuss with each other. We use this study to extend the field’s understanding of OHCs, and to propose implications for the design of self-tracking tools to support sensemaking in enigmatic conditions.
Dynamic Network Plaid: A Tool for the Analysis of Dynamic Networks
Network data that changes over time can be very useful for studying a wide range of important phenomena, from how social network connections change to epidemiology. However, such data is challenging to analyze, especially if it has many actors or connections, or if the covered timespan is large with rapidly changing links (e.g., months of data with changes at second resolution). In these analyses one would often like to compare many periods of time to others, without having to look at the full timeline. To support this kind of analysis we designed and implemented a technique and system to visualize this dynamic data. The Dynamic Network Plaid (DNP) is designed for large displays and is based on user-generated interactive timeslicing of the dynamic graph attributes and on linked provenance-preserving representations. We present the technique and interface, along with their design and evaluation with a group of public health researchers investigating non-suicidal self-harm picture sharing on Instagram.
Self-Control in Cyberspace: Applying Dual Systems Theory to a Review of Digital Self-Control Tools
Many people struggle to control their use of digital devices. However, our understanding of the design mechanisms that support user self-control remains limited. In this paper, we make two contributions to HCI research in this space: first, we analyse 367 apps and browser extensions from the Google Play, Chrome Web, and Apple App stores to identify common core design features and intervention strategies afforded by current tools for digital self-control. Second, we adapt and apply an integrative dual systems model of self-regulation as a framework for organising and evaluating the design features found. Our analysis aims to help the design of better tools in two ways: (i) by identifying how, through a well-established model of self-regulation, current tools overlap and differ in how they support self-control; and (ii) by using the model to reveal underexplored cognitive mechanisms that could aid the design of new tools.
I’m Sensing in the Rain: Spatial Incongruity in Visual-Tactile Mid-Air Stimulation Can Elicit Ownership in VR Users
Major virtual reality (VR) companies are trying to enhance the sense of immersion in virtual environments by implementing haptic feedback in their systems (e.g., Oculus Touch). It is known that tactile stimulation adds realism to a virtual environment. In addition, when users are not limited by wearing any attachments (e.g., gloves), it is possible to create even more immersive experiences. Mid-air haptic technology provides contactless haptic feedback and offers the potential for creating such immersive VR experiences. However, one of the limitations of mid-air haptics resides in the need for freehand tracking systems (e.g., Leap Motion) to deliver tactile feedback to the user’s hand. These tracking systems are not accurate, limiting designers’ capability of delivering spatially precise tactile stimulation. Here, we investigated an alternative way to convey incongruent visual-tactile stimulation that can be used to create the illusion of a congruent visual-tactile experience, while participants experience the phenomenon of the rubber hand illusion in VR.
Visually Encoding the Lived Experience of Bipolar Disorder
Issues of social identity, attitudes towards self-disclosure, and potentially biased approaches to what is considered “typical” or “normal” are critical factors when designing visualizations for personal informatics systems. This is particularly true when working with vulnerable populations like those who self-track to manage serious mental illnesses like bipolar disorder (BD). We worked with individuals diagnosed with BD to 1) better understand sense-making challenges related to the representation and interpretation of personal data and 2) probe the benefits, risks, and limitations of participatory approaches to designing personal data visualizations that better reflect their lived experiences. We describe our co-design process, present a series of emergent visual encoding schemas resulting from these activities, and report on the assessment of these speculative designs by participants. We conclude by summarizing important considerations and implications for designing personal data visualizations for (and with) people who self-track to manage serious mental illness.
Methodological Gaps in Predicting Mental Health States from Social Media: Triangulating Diagnostic Signals
A growing body of research is combining social media data with machine learning to predict mental health states of individuals. An implication of this research lies in informing evidence-based diagnosis and treatment. However, obtaining clinically valid diagnostic information from sensitive patient populations is challenging. Consequently, researchers have operationalized characteristic online behaviors as “proxy diagnostic signals” for building these models. This paper posits a challenge in using these diagnostic signals, purported to support clinical decision-making. Focusing on three commonly used proxy diagnostic signals derived from social media, we find that predictive models built on these data, although they offer strong internal validity, suffer from poor external validity when tested on mental health patients. A deeper dive reveals issues of population and sampling bias, as well as uncertainty in the construct validity inherent in these proxies. We discuss the methodological and clinical implications of these gaps and provide remedial guidelines for future research.
Putting the Value in VR: How to Systematically and Iteratively Develop a Value-Based VR Application with a Complex Target Group
In the development, implementation and evaluation of eHealth, it is essential to account for stakeholders’ perspectives, opinions and values, which are statements that specify what stakeholders want to achieve or improve via a technology. The use of values enables developers to systematically include stakeholders’ perspectives and the context of use in an eHealth development process. However, there are relatively few papers that explain how to use values in technology development. Consequently, in this paper we show how we formulated values during the multi-method, interdisciplinary and iterative development process of a VR application for a complex setting: forensic mental healthcare. We report the main foundations for these values: the outcomes of an online questionnaire with patients, therapists and other stakeholders (n=146) and interviews with patients and therapists (n=18). We show how a multidisciplinary project team used these qualitative results to formulate and adapt values and create lo-fi prototypes of a VR application. We discuss the importance of a systematic development process with multiple formative evaluations for eHealth and reflect on the role of values within this process.
Navigating Ride-Sharing Regulations: How Regulations Changed the ‘Gig’ of Ride-Sharing for Drivers in Taiwan
Ride-sharing platforms have rapidly spread and disrupted ride hailing markets, resulting in conflicts between ride-sharing and taxi drivers. Taxi drivers claim that their counterparts have unfair advantages in terms of lower prices and a more stable customer base, making it difficult to earn a living. Local government entities have dealt with this disruption and conflict in different ways, often looking towards some form of regulation. While there have been discussions about what the regulation should be, there has been less work looking at what impacts regulations have on ride-sharing drivers and their usage of the platforms. In this paper we present our interview study of ride-sharing drivers in Taiwan, who have gone through three distinct phases of regulation. Drivers felt that regulations legitimized their work, while having to navigate consequences related to regulated access to platforms and fundamental changes to the “gig” of ride-sharing.
What Happens After Disclosing Stigmatized Experiences on Identified Social Media: Individual, Dyadic, and Social/Network Outcomes
Disclosing stigmatized experiences or identity facets on identified social media (e.g., Facebook) can be risky, inhibited, yet beneficial for the discloser. I investigate such disclosures’ outcomes when they do happen on identified social media as perceived by the individuals who perform them. I draw on interviews with women who have experienced pregnancy loss and are social media users in the U.S. I document outcomes at the social/network, individual, and dyad levels. I highlight the powerful role of connecting with others with a similar experience within networks of known ties, how disclosures lead to relationship changes, how disclosers take on new social roles as mentors and support sources, and how helpful connections following disclosures originate from various kinds of ties via diverse communication channels. I emphasize reciprocal disclosures as an outcome contributing to further outcomes (e.g., destigmatizing pregnancy loss). I provide design implications related to facilitating being a support source and mentor, helpful reciprocal disclosures, and finding similar others within networks of known ties.
How Guiding Questions Facilitate Feedback Exchange in Project-Based Learning
Peer feedback is essential for learning in project-based disciplines. However, students often need guidance when acting as either a feedback provider or a feedback receiver, both to gain from peer feedback and to criticize their peers’ work. This paper explores how to more effectively scaffold this exchange such that peers more deeply engage in the feedback process. Within a game design course, we introduced different processes for feedback receivers to write questions to guide peer feedback. Feedback receivers wrote four main types of guiding questions: improve, share, brainstorm, critique. We found that “improve” questions tended to lead to better feedback (more specific, critical, and actionable) than other question types, but feedback receivers wrote improve questions least often. We offer insights on how best to scaffold the question-writing process to facilitate peer feedback exchange.
Exploring the Plurality of Black Women’s Gameplay Experiences
Few gender-focused studies of video games explore the gameplay experiences of women of color, and those that do tend to emphasize only negative phenomena (i.e., racial or gender discrimination). In this paper, we conduct an exploratory case study attending to the motivations and gaming practices of Black college women. Questionnaire responses and focus group discussion illuminate the plurality of gameplay experiences for this specific population of Black college women. Sixty-five percent of this population enjoy the ubiquity of mobile games, with casual and puzzle games being the most popular genres. However, academic responsibilities and competing recreational interests inhibit frequent gameplay. Consequently, this population of Black college women represents two types of casual gamers who report positive gameplay experiences, providing insights into creating a more inclusive gaming subculture.
"If you want, I can store the encrypted password": A Password-Storage Field Study with Freelance Developers
In 2017 and 2018, Naiakshina et al. (CCS’17, SOUPS’18) studied in a lab setting whether computer science students need to be told to write code that stores passwords securely. The authors’ results showed that, without explicit prompting, none of the students implemented secure password storage. When asked about this oversight, a common answer was that they would have implemented secure storage – if they were creating code for a company. To shed light on this possible confusion, we conducted a mixed-methods field study with developers. We hired freelance developers online and gave them a similar password storage task followed by a questionnaire to gain additional insights into their work. From our research, we offer two contributions. First of all, we reveal that, similar to the students, freelancers do not store passwords securely unless prompted, they have misconceptions about secure password storage, and they use outdated methods. Secondly, we discuss the methodological implications of using freelancers and students in developer studies.
Virtual Showdown: An Accessible Virtual Reality Game with Scaffolds for Youth with Visual Impairments
Virtual Reality (VR) is a growing source of entertainment, but people who are visually impaired have not been effectively included. Audio cues are motivated as a complement to visuals, making experiences more immersive, but are not a primary cue. To address this, we implemented a VR game called Virtual Showdown. We based Virtual Showdown on an accessible real-world game called Showdown, where people use their hearing to locate and hit a ball against an opponent. Further, we developed Verbal and Verbal/Vibration Scaffolds to teach people how to play Virtual Showdown. We assessed the acceptability of Virtual Showdown and compared our scaffolds in an empirical study with 34 youth who are visually impaired. Thirty-three participants wanted to play Virtual Showdown again, and we learned that participants scored higher with the Verbal Scaffold or if they had prior Showdown experience. Our empirical findings inform the design of future accessible VR experiences.
Moderation Practices as Emotional Labor in Sustaining Online Communities: The Case of AAPI Identity Work on Reddit
We examine how and why Asian American and Pacific Islander (AAPI) moderators on Reddit shape the norms of their online communities through the analytic lens of emotional labor. We conduct interviews with 21 moderators who facilitate identity work discourse in AAPI subreddits and present a thematic analysis of their moderation practices. We report on their challenges to sustaining moderation, which include burning out from volunteer work, navigating hierarchical structures, and balancing unfulfilled expectations. We then describe strategies that moderators employ to manage emotional labor, which involve distancing away from drama, building solidarity from shared struggles, and integrating an ecology of tools for self-organized moderation. We provide recommendations for improving moderation in online communities centered around identity work and discuss implications of emotional labor in the design of Reddit and similar platforms.
Multilayer Haptic Feedback for Pen-Based Tablet Interaction
We present a novel, multilayer interaction approach that enables state transitions between spatially above-screen and 2D on-screen feedback layers. This approach supports the exploration of haptic features that are hard to simulate using rigid 2D screens. We accomplish this by adding a haptic layer above the screen that can be actuated and interacted with (pressed on) while the user interacts with on-screen content using pen input. The haptic layer provides variable firmness and contour feedback, while its membrane functionality affords additional tactile cues like texture feedback. Through two user studies, we look at how users can use the layer in haptic exploration tasks, showing that users can discriminate well between different firmness levels, and can perceive object contour characteristics. Demonstrated also through an art application, the results show the potential of multilayer feedback to extend on-screen feedback with additional widget, tool and surface properties, and for user guidance.
In UX We Trust: Investigation of Aesthetics and Usability of Driver-Vehicle Interfaces and Their Impact on the Perception of Automated Driving
In the evolution of technical systems, freedom from error and early adoption play a major role in market success and in maintaining competitiveness. In the case of automated driving, we see that faulty systems are put into operation and users trust these systems, often without any restrictions. Trust and use are often associated with users’ experience of the driver-vehicle interfaces and interior design. In this work, we present the results of our investigations into factors that influence the perception of automated driving. In a simulator study, N=48 participants drove a SAE level 2 vehicle with either a perfect or a faulty driving function. As a secondary activity, participants had to solve tasks on an infotainment system with varying aesthetics and usability (2×2). Results reveal that the interaction of conditions significantly influences trust in and UX of the vehicle system. Our conclusion is that all aspects of vehicle design cumulate into system and trust perception.
Sharing Economy Design Cards
Sharing economy services have become increasingly popular. In addition to various well-known for-profit activities in this space (e.g., ride and apartment sharing), many community groups and non-profit organizations offer collections of shared things (e.g., books, tools) that explicitly aim to benefit local communities. We expect that both non-profit and for-profit approaches will see an increased use in the future. To support designers in devising new sharing economy services, we developed the Sharing Economy Design Cards, a design toolkit in the form of a card deck. We present two deployments of the cards: (1) in individual interviews with 16 designers and sharing economy domain experts; and (2) in two workshops with 5 participants each. Our findings show that the use of the cards not only facilitates the creation of future sharing platforms and services in a collaborative setting, but also helps to evaluate existing sharing economy services as an individual activity.
SottoVoce: An Ultrasound Imaging-Based Silent Speech Interaction Using Deep Neural Networks
The availability of digital devices operated by voice is expanding rapidly. However, the applications of voice interfaces are still restricted. For example, speaking in public places becomes an annoyance to the surrounding people, and secret information should not be uttered aloud. Environmental noise may also reduce the accuracy of speech recognition. To address these limitations, we propose a system that detects a user’s unvoiced utterances. From internal articulatory information observed by an ultrasonic imaging sensor attached to the underside of the jaw, our system recognizes the content of an utterance without the user vocalizing it. Our proposed deep neural network model is used to obtain acoustic features from a sequence of ultrasound images. We confirmed that audio signals generated by our system can control existing smart speakers. We also observed that users can adjust their oral movements to learn and improve the recognition accuracy of their silent speech.
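A loose PyTorch sketch of the image-to-acoustic-feature mapping described above; the architecture, input size, and feature dimensionality are assumptions for illustration, not the authors' published model.

```python
import torch
import torch.nn as nn

class UltrasoundToAcoustic(nn.Module):
    """Maps a short sequence of ultrasound frames to an acoustic feature vector.

    Illustrative architecture only: a small CNN encoder per frame followed by
    a GRU over time; all sizes are placeholders.
    """
    def __init__(self, n_acoustic_features=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.temporal = nn.GRU(input_size=32, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, n_acoustic_features)

    def forward(self, frames):                    # frames: (batch, time, 1, H, W)
        b, t = frames.shape[:2]
        per_frame = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        _, hidden = self.temporal(per_frame)
        return self.head(hidden[-1])              # (batch, n_acoustic_features)

# Hypothetical batch: 2 clips of 8 ultrasound frames, 64x64 pixels each
model = UltrasoundToAcoustic()
print(model(torch.randn(2, 8, 1, 64, 64)).shape)  # torch.Size([2, 64])
```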
Assessing the Accuracy of Point & Teleport Locomotion with Orientation Indication for Virtual Reality using Curved Trajectories
Room-scale Virtual Reality (VR) systems have arrived in users’ homes, where tracked environments are set up in limited physical spaces. As most Virtual Environments (VEs) are larger than the tracked physical space, locomotion techniques are used to navigate in VEs. In recent VR games, point & teleport is currently the most popular locomotion technique. However, it only allows users to select the position of the teleportation and not the orientation they will face after the teleport. This results in users having to manually correct their orientation after teleporting and possibly getting entangled in the cable of the headset. In this paper, we introduce and evaluate three different point & teleport techniques that enable users to specify the target orientation while teleporting. The results show that, although the three teleportation techniques with orientation indication increase the average teleportation time, they lead to a decreased need for correcting the orientation after teleportation.
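To illustrate the curved-trajectory targeting that point & teleport techniques share, the sketch below samples a simple parabolic arc from the controller and returns the ground hit point plus a separately chosen facing direction. The physics constants and the way orientation is indicated are assumptions, not the paper's implementation.

```python
import math

def teleport_target(origin, pitch_deg, yaw_deg, speed=8.0, gravity=9.81, step=0.02):
    """Sample a parabolic arc from `origin` until it hits the ground plane (y = 0)."""
    pitch, yaw = math.radians(pitch_deg), math.radians(yaw_deg)
    vx = speed * math.cos(pitch) * math.sin(yaw)
    vy = speed * math.sin(pitch)
    vz = speed * math.cos(pitch) * math.cos(yaw)
    t = 0.0
    x, y, z = origin
    while y > 0.0:                                 # step along the arc until it lands
        t += step
        x = origin[0] + vx * t
        y = origin[1] + vy * t - 0.5 * gravity * t * t
        z = origin[2] + vz * t
    return (x, 0.0, z)

def facing_after_teleport(target_yaw_deg):
    """Unit forward vector on the ground plane for a user-specified target yaw."""
    yaw = math.radians(target_yaw_deg)
    return (math.sin(yaw), 0.0, math.cos(yaw))

# Hypothetical cast from head height, aimed 30 degrees up, landing facing "east"
print(teleport_target(origin=(0.0, 1.6, 0.0), pitch_deg=30, yaw_deg=0))
print(facing_after_teleport(90))
```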
From Director’s Cut to User’s Cut: to Watch a Brain-Controlled Film is to Edit it
Introducing interactivity to films has proven a longstanding and difficult challenge due to their narrative-driven, linear and theatre-based nature. Previous research has suggested that Brain-Computer Interfaces (BCI) may be a promising approach but also revealed a tension between being immersed in the film and thinking about control. We report a performance-led and in-the-wild study of a BCI film called The MOMENT covering its design rationale and how it was experienced by the public as controllers, non-controllers and repeat viewers. Our findings suggest that BCI movies should be designed to be credibly controllable, generate personal versions, be watchable as linear films, encourage repeat viewing and fit the medium of cinema. They also reveal how viewers appreciated the sense of editing their own personal cuts, suggesting a new stance on introducing interactivity into lean-back media in which filmmakers release editorial control to users to make their own versions.
The Invisible Potential of Facial Electromyography: A Comparison of EMG and Computer Vision when Distinguishing Posed from Spontaneous Smiles
Positive experiences are a success metric in product and service design. Quantifying smiles is a method of assessing them continuously. Smiles are usually a cue of positive affect, but they can also be fabricated voluntarily. Automatic detection is a promising complement to human perception in terms of identifying the differences between smile types. Computer vision (CV) and facial distal electromyography (EMG) have been proven successful in this task. This is the first study to use a wearable EMG that does not obstruct the face to compare the performance of CV and EMG measurements in the task of distinguishing between posed and spontaneous smiles. The results showed that EMG has the advantage of being able to identify covert behavior not available through vision. Moreover, CV appears to be able to identify visible dynamic features that human judges cannot account for. This sheds light on the role of non-observable behavior in distinguishing affect-related smiles from polite positive affect displays.
"Enable or Disable Gamification?": Analyzing the Impact of Choice in a Gamified Image Tagging Task
This paper investigates a simple form of customization: giving users the choice to enable or disable gamification. We present a study (N=77) in the context of image tagging, in which a gamification approach was shown to be effective in previous work. In our case, some participants could enable or disable gamification after they had experienced the task with and without it. Other participants had no choice and did the task with or without game elements. The results indicate that those who are not attracted by the elements can be motivated to tag more through this choice. In contrast, those that like the elements are not affected by it. This suggests that systems should provide the option to disable gamification in the absence of more sophisticated tailoring.
“Pretty Close to a Must-Have”: Balancing Usability Desire and Security Concern in Biometric Adoption
We report on a qualitative inquiry among security-expert and non-expert mobile device users about the adoption of biometric authentication, using semi-structured interviews (n=38, 19/19 expert/non-expert). Security experts more readily adopted biometrics than non-experts but also harbored greater distrust towards their use for sensitive transactions, feared biometric signature compromise, and in some cases distrusted newer facial recognition methods. Both groups harbored misconceptions, such as misunderstanding of the functional role of biometrics in authentication, and were about equally likely to have stopped using biometrics due to usability. Implications include the need for tailored training for security-informed advocates, better design for device sharing and co-registration, and consideration of usability needs in work environments. Refinement of these features would remove perceived obstacles to ubiquitous computing among the growing population of mobile technology users sensitized to security risk.
r/science: Challenges and Opportunities in Online Science Communication
Online discussion websites, such as Reddit’s r/science forum, have the potential to foster science communication between researchers and the general public. However, little is known about who participates, what is discussed, and whether such websites are successful in achieving meaningful science discussions. To find out, we conducted a mixed-methods study analyzing 11,859 r/science posts and conducting interviews with 18 community members. Our results show that r/science facilitates rich information exchange and that the comments section provides a unique science communication document that guides engagement with scientific research. However, this community-sourced science communication comes largely from a knowledgeable public. We conclude with design suggestions for a number of critical problems that we uncovered: addressing the problem of topic newsworthiness and balancing broader participation and rigor.
Multi-plié: A Linear Foldable and Flattenable Interactive Display to Support Efficiency, Safety and Collaboration
We present the design concept of an accordion-fold interactive display to address the limits of touch-based interaction in airliner cockpits. Based on an analysis of pilot activity, tangible design principles for this design concept are identified. Two resulting functional prototypes are explored during participatory workshops with pilots, using activity scenarios. This exploration validated the design concept by revealing its ability to match pilot responsibilities in terms of safety, efficiency and collaboration. It provides an efficient visual perception of the system for real-time collaborative operations, and tangible interaction to strengthen the perception of action and to manage safety through anticipation and awareness. The design work and insights enabled us to further specify our needs regarding flexible screens. They also helped to better characterize the design concept as based on the continuity of a developed surface, the predictability of aligned folds and pleat face roles, embodied interactive properties, and flexibility through affordable reconfigurations.
Apprise: Supporting the Critical-Agency of Victims of Human Trafficking in Thailand
Human trafficking and forced labor are global issues affecting millions of people around the world. This paper describes an initiative that we are currently undertaking to understand the role technology can play to support the critical-agency of migrant workers in these situations of severe exploitation. Building on five consultations with more than 170 direct and indirect stakeholders in Thailand, the paper presents the co-design, development, and evaluation of Apprise, a mobile app to support the identification of victims of human trafficking using a Value Sensitive Design approach. It also provides a critical reflection on the use of digital technology in the initial screening of potential victims of human trafficking, to understand in what ways Apprise can support the critical agency of migrant workers in vulnerable situations.
JigFab: Computational Fabrication of Constraints to Facilitate Woodworking with Power Tools
We present JigFab, an integrated end-to-end system that supports casual makers in designing and fabricating constructions with power tools. Starting from a digital version of the construction, JigFab achieves this by generating various types of constraints that configure and physically guide the movement of a power tool. Constraints are generated for every operation and are custom to the workpiece. They are laser cut, assembled together with predefined parts to reduce waste, and used according to an interactive step-by-step manual. JigFab internalizes all the required domain knowledge for designing and building intricate structures consisting of various types of finger joints, tenon & mortise joints, grooves, and dowels. Building such structures is normally reserved for artisans or automated with advanced CNC machinery.
Teachers’ Expected and Perceived Gains of Participation in Classroom Based Design Activities
This paper explores teachers’ expected and perceived gains from classroom participation in design projects. The results indicate that teachers hope the experience will be fun for the children, and that it will increase both the children’s and their own knowledge about technology. Although they consider learning goals important, these do not necessarily have to be communicated to the children, since the teachers find that the children are learning several skills anyway. However, early involvement in the definition of learning goals could make participation more beneficial. The teachers also see several gains from participation for themselves, especially related to using a design approach in the classroom. We discuss the implications of these findings and suggest a way to increase the gains for both children and teachers by considering the opportunity to use classroom participation as a way to support teachers’ competence development, thereby fulfilling the promise of mutual learning as advocated in Participatory Design.
Virtual Performance Augmentation in an Immersive Jump & Run Exergame
Human performance augmentation through technology has been a recurring theme in science and culture, aiming to increase human capabilities and accessibility. We investigate a related concept: virtual performance augmentation (VPA), using VR to give users the illusion of greater capabilities than they actually have. We propose a method for VPA of running and jumping, based on in-place movements, and studied its effects in a VR exergame. We found that in-place running and jumping in VR can be used to create a somewhat natural experience and can elicit medium to high physical exertion in an immersive and intrinsically motivating manner. We also found that virtually augmenting running and jumping can increase intrinsic motivation, perceived competence and flow, and may also increase motivation for physical activity in general. We discuss implications of VPA for safety and accessibility, with initial evidence suggesting that VPA may help users with physical impairments enjoy the benefits of exergaming.
WhatFutures: Designing Large-Scale Engagements on WhatsApp
WhatsApp, as the world’s most popular messaging application, offers significant opportunities for improving the reach and effectiveness of engagement projects. In collaboration with the International Federation of Red Cross and Red Crescent Societies (IFRC) we designed WhatFutures, a collaborative future forecasting engagement for global youth using WhatsApp. WhatFutures was successfully deployed with 487 players across 5 countries (Kenya, Bulgaria, Finland, Australia and Hong Kong) to inform strategic change within the IFRC. Based on our analysis of the activity – including 16,100 messages, 95 multimedia artifacts, and a post-engagement survey – we present a reflection upon the design decisions underpinning WhatFutures and identify how decisions made around group structures, processes and externalization of outputs influenced engagement and data quality. We conclude with the wider implications of our findings for the design of engagements that best utilize the affordances of existing messaging applications.
Volunteer Moderators in Twitch Micro Communities: How They Get Involved, the Roles They Play, and the Emotional Labor They Experience
The ability to engage in real-time text conversations is an important feature on live streaming platforms. The moderation of this text content relies heavily on the work of unpaid volunteers. This study reports on interviews with 20 people who moderate for Twitch micro communities, defined as channels built around a single streamer or a group of streamers, rather than the broadcast of an event. The study identifies how people become moderators, their different styles of moderating, and the difficulties that come with the job. In addition to the hardships of dealing with negative content, moderators also have complex interpersonal relationships with the streamers and viewers, where the boundaries between emotional labor, physical labor, and fun are intertwined.
Communicating Uncertainty in Fertility Prognosis
Communicating uncertainty has been shown to provide positive effects on user understanding and decision-making. Surprisingly, however, most personal health tracking applications fail to disclose the accuracy of their measurements and predictions. In the case of fertility tracking applications (FTAs), inaccurate predictions have already led to numerous unwanted pregnancies and lawsuits. However, integrating uncertainty into FTAs is challenging: Prediction accuracy is hard to understand and communicate, and its effect on users’ trust and behavior is not well understood. We created a prototype for uncertainty visualizations for FTAs and evaluated it in a four-week field study with real users and their own data (N=9). Our results uncover far-reaching effects of communicating uncertainty: For example, users interpreted prediction accuracy as a proxy for their cycle health and as a security indicator for contraception. Displaying predicted and detected fertile phases next to each other helped users to understand uncertainty without negative emotional effects.
Paragraph-based Faded Text Facilitates Reading Comprehension
We propose a new text layout that facilitates reading comprehension. By sequentially fading out characters sentence-by-sentence from the beginning of each paragraph, we highlight the paragraph structure of the entire text and the relative positions of the sentences. To evaluate the effectiveness of the paragraph-based faded text for reading comprehension, we measure comprehension, eye movements, and recognition for both the proposed method and a conventional standard layout. In the proposed method, rates of correct answers to text comprehension questions are improved. Moreover, the proposed method leads to slower reading speeds and better recognition rates for the first sentences of paragraphs, which are displayed relatively darker. With the paragraph-based faded text, the reader is naturally led to pay attention to the first sentence of each paragraph, suggesting that this reading style could result in more accurate text comprehension.
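As a rough illustration of the layout idea only (not the authors’ implementation), the following Python sketch assigns progressively lower opacity to later sentences of a paragraph when rendering to HTML; the sentence splitter and the linear opacity schedule are simplifying assumptions.

# Hypothetical sketch of paragraph-based fading: earlier sentences are
# rendered darker, later ones progressively lighter. Not the paper's code.
import re

def fade_paragraph(paragraph, min_opacity=0.35):
    sentences = re.split(r'(?<=[.!?])\s+', paragraph.strip())
    n = max(len(sentences), 1)
    spans = []
    for i, s in enumerate(sentences):
        # Linear fade from full opacity (first sentence) to min_opacity (last).
        opacity = 1.0 - (1.0 - min_opacity) * (i / max(n - 1, 1))
        spans.append(f'<span style="opacity:{opacity:.2f}">{s}</span>')
    return ' '.join(spans)

print(fade_paragraph("First point. Supporting detail. Final remark."))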
Towards Understanding the Link Between Age and Smartphone Authentication
While previous work on smartphone (un)locking has revealed real world usage patterns, several aspects still need to be explored. In this paper, we fill one of these knowledge gaps: the interplay between age and smartphone authentication behavior. To do this, we performed a two-month long field study (N = 134). Our results indicate that there are indeed significant differences across age. For instance, younger participants were more likely to use biometric unlocking mechanisms and older participants relied more on auto locks.
Coding for Outdoor Play: a Coding Platform for Children to Invent and Enhance Outdoor Play Experiences
Outdoor play is in decline, and with it the benefits it brings to children’s development. Coding, a typically indoor, screen-based activity, can potentially enrich outdoor play by serving as a rule-making medium. We present a coding platform that controls a programmable hardware device, enabling children to technologically enhance their outdoor play experiences by inventing game ideas, coding them, and playing their games together with their friends. In the evaluation study, 24 children used the system to invent and play outdoor games. Results show that children are able to bridge the different domains of coding and outdoor play. They used the system to modify traditional games and invent new ones, enriching their outdoor experience. Children merged computational concepts with physical game elements, integrated physical outdoor properties as variables in their code, and were excited to see their code come to life. We conclude that children can use coding to express their ideas by creating technologically enhanced outdoor play experiences.
HotStrokes: Word-Gesture Shortcuts on a Trackpad
Expert interaction techniques like hotkeys are efficient, but poorly adopted because they are hard to learn. HotStrokes removes the need for learning arbitrary mappings of commands to hotkeys. A user enters a HotStroke by holding a modifier key, then gesture typing a command name on a laptop trackpad as if on an imaginary virtual keyboard. The gestures are recognized using an adaptation of the SHARK2 algorithm with a new spatial model and a refined method for dynamic suggestions. A controlled experiment shows HotStrokes effectively augments the existing “menu and hotkey” command activation paradigm. Results show the method is efficient, reducing command activation time by 43% compared to linear menus. The method is also easy to learn, with a high adoption rate, replacing 91% of linear menu usage. Finally, combining linear menus, hotkeys, and HotStrokes leads to 24% faster command activation overall.
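For readers unfamiliar with word-gesture recognition, the core of SHARK2-style matching is comparing the drawn trace against the ideal trace of each command name on the keyboard layout. The sketch below shows only that generic shape-matching step (resampling plus mean point-to-point distance); HotStrokes’ adapted spatial model and dynamic-suggestion method are not reproduced here.

# Hedged sketch of the generic shape channel in SHARK2-style matching:
# resample both traces to n points and compare mean point-to-point distance.
import math

def resample(points, n=32):
    # points: list of (x, y); returns n points evenly spaced along the path.
    dists = [math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)]
    total = sum(dists) or 1e-9
    step = total / (n - 1)
    out, acc, i = [points[0]], 0.0, 0
    while len(out) < n and i < len(dists):
        if acc + dists[i] >= step * len(out):
            # Interpolate within segment i to hit the next target arc length.
            t = (step * len(out) - acc) / (dists[i] or 1e-9)
            (x0, y0), (x1, y1) = points[i], points[i + 1]
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        else:
            acc += dists[i]
            i += 1
    while len(out) < n:          # guard against floating-point shortfall
        out.append(points[-1])
    return out

def shape_distance(trace, template, n=32):
    a, b = resample(trace, n), resample(template, n)
    return sum(math.dist(p, q) for p, q in zip(a, b)) / n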
How Do One’s Peers on a Leaderboard Affect Oneself?
Leaderboards are a workhorse of the gamification literature. While the effect of a leaderboard has been well studied, there is much less evidence of how one’s peer group affects the treatment effect of a leaderboard. Through a pre-registered field experiment involving more than 1000 users of an online movie recommender website, we expose users to leaderboards, but different sets of users are exposed to different peer groups. Contrary to what a standard behavioral model would predict, we find that a user’s contribution increases when their peers’ scores are more dispersed. We also find that decreasing average peer contributions motivates a user to contribute more. Moreover, these effects are themselves mediated by group size. This sheds new light on existing theories of motivation and demotivation with regards to leaderboards, and also illustrates the potential of using personalized leaderboards to increase contributions.
App Usage Predicts Cognitive Ability in Older Adults
We have limited understanding of how older adults use smartphones, how their usage differs from younger users, and the causes for those differences. As a result, researchers and developers may miss promising opportunities to support older adults or offer solutions to unimportant problems. To characterize smartphone usage among older adults, we collected iPhone usage data from 84 healthy older adults over three months. We find that older adults use fewer apps, take longer to complete tasks, and send fewer messages. We use cognitive test results from these same older adults to then show that up to 79% of these differences can be explained by cognitive decline, and that we can predict cognitive test performance from smartphone usage with an ROC AUC of 83%. While older adults differ from younger adults in app usage behavior, the “cognitively young” older adults use smartphones much like their younger counterparts. Our study suggests that to better support all older adults, researchers and developers should consider the full spectrum of cognitive function.
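The reported ROC AUC is a standard measure of how well a continuous score separates two classes. The sketch below shows the generic cross-validated evaluation pattern with scikit-learn on synthetic stand-in data; the features, model, and labels are illustrative assumptions, not the study’s.

# Generic illustration of evaluating a classifier with ROC AUC.
# Feature names, labels, and model choice are hypothetical, not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(84, 3))                         # e.g., apps used, task time, messages sent
y = (X[:, 0] + rng.normal(size=84) > 0).astype(int)  # stand-in binary cognitive label

scores = cross_val_predict(LogisticRegression(), X, y, cv=5,
                           method="predict_proba")[:, 1]
print("ROC AUC:", roc_auc_score(y, scores))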
ReCall: Crowdsourcing on Basic Phones to Financially Sustain Voice Forums
Although voice forums are widely used to enable marginalized communities to produce, consume, and share information, their financial sustainability is a key concern among HCI4D researchers and practitioners. We present ReCall, a crowdsourcing marketplace accessible via phone calls where low-income rural residents vocally transcribe audio files to gain free airtime to participate in voice forums as well as to earn money. We conducted a series of experimental and usability evaluations with 28 low-income people in rural India to examine the effect of phone types, channel types, and review modes on speech transcription performance. We then deployed ReCall for two weeks to 24 low-income rural residents who placed 5,879 phone calls, completed 29,000 micro tasks to yield transcriptions with 85% accuracy, and earned INR 20,500. Our mixed-methods analysis indicates that each minute of crowd work on ReCall gives users eight minutes of free airtime on another voice forum, and thus illustrates a way to address the financial sustainability of voice forums.
ThermalBracelet: Exploring Thermal Haptic Feedback Around the Wrist
Smartwatches make the wrist an ideal location for always-available haptic notifications, as they are worn constantly and in direct contact with the skin. With the wrist strap, haptic feedback can be extended to the full space around the wrist to provide more spatial and enriched feedback. With ThermalBracelet, we investigate thermal feedback as a haptic feedback modality around the wrist. We present three studies that lead to the development of a smartwatch-integratable thermal bracelet that stimulates six locations around the wrist. Our initial evaluation reports on the selection of the thermal module configuration. Secondly, with the selected six-module configuration, we explore its usability in real-world scenarios such as walking and reading. Thirdly, we investigate its capability of providing spatio-temporal feedback while users are engaged in distracting tasks. Finally, we present application scenarios that demonstrate its usability.
HaptiVec: Presenting Haptic Feedback Vectors in Handheld Controllers using Embedded Tactile Pin Arrays
HaptiVec is a new haptic feedback paradigm for handheld controllers which allows users to feel directional haptic pressure vectors on their fingers and hands while interacting with virtual environments. We embed a 3 by 5 tactile pin array (with an average pin spacing of 25 mm) into the handles of two custom VR type controllers. By presenting directional pressure vectors in eight cardinal directions (N, NE, E, SE, S, SW, W, NW) to users without prior training, they were able to distinguish the correct direction with an accuracy of at least 79%. We illustrate two applications where our device enhances virtual experiences over traditional vibrotactile feedback. In the first application, through the classic first-person shooter Doom, we demonstrate that users can receive directional pressure feedback corresponding to the direction of incident enemy projectiles. In the second application, we demonstrate how our controller can create a more immersive experience by allowing the user to feel their virtual climate by randomizing the directional vectors and presenting the user with “haptic rain” which adapts with the intensity of the rainfall.
VisiBlends: A Flexible Workflow for Visual Blends
Visual blends are an advanced graphic design technique to draw attention to a message. They combine two objects in a way that is novel and useful in conveying a message symbolically. This paper presents VisiBlends, a flexible workflow for creating visual blends that follows the iterative design process. We introduce a design pattern for blending symbols based on principles of human visual object recognition. Our workflow decomposes the process into both computational techniques and human microtasks. It allows users to collaboratively generate visual blends with steps involving brainstorming, synthesis, and iteration. An evaluation of the workflow shows that decentralized groups can generate blends in independent microtasks, co-located groups can collaboratively make visual blends for their own messages, and VisiBlends improves novices’ ability to make visual blends.
Evaluating the Combination of Visual Communication Cues for HMD-based Mixed Reality Remote Collaboration
Many researchers have studied various visual communication cues (e.g. pointer, sketching, and hand gesture) in Mixed Reality remote collaboration systems for real-world tasks. However, the effect of combining them has not been so well explored. We studied the effect of these cues in four combinations: hand only, hand + pointer, hand + sketch, and hand + pointer + sketch, with three problem tasks: Lego, Tangram, and Origami. The study results showed that the participants completed the task significantly faster and felt a significantly higher level of usability when the sketch cue was added to the hand gesture cue, but not when the pointer cue was added. Participants also preferred the combinations including hand and sketch cues over the other combinations. However, using additional cues (pointer or sketch) increased the perceived mental effort and did not improve the feeling of co-presence. We discuss the implications of these results and future research directions.
GymSoles: Improving Squats and Dead-Lifts by Visualizing the User’s Center of Pressure
The correct execution of exercises, such as squats and dead-lifts, is essential to prevent various bodily injuries. Existing solutions either rely on expensive motion tracking or on multiple Inertial Measurement Unit (IMU) systems, which require an extensive set-up and individual calibration. This paper introduces a proof of concept, GymSoles, an insole prototype that provides feedback on the Centre of Pressure (CoP) at the feet to assist users with maintaining the correct body posture while performing squats and dead-lifts. GymSoles was evaluated with 13 users in three conditions: 1) no feedback, 2) vibrotactile feedback, and 3) visual feedback. The study showed that solely providing feedback on the current CoP results in a significantly improved body posture.
Usability of Gamified Knowledge Learning in VR and Desktop-3D
Affine Transformations (ATs) are often hard to grasp intuitively due to their high complexity. Therefore, we developed GEtiT, which directly encodes ATs in its game mechanics and scales the knowledge’s level of abstraction. This results in an intuitive application as well as audiovisual presentation of ATs and hence in effective knowledge learning. We also developed a specific Virtual Reality (VR) version to explore the effects of immersive VR on the learning outcomes. This paper presents our approach of directly encoding abstract knowledge in game mechanics, the conceptual design of GEtiT, and its technical implementation. Both versions are compared with regard to their usability in a user study. The results show that both GEtiT versions induce a high degree of flow and elicit good intuitive use. They validate the effectiveness of the design and the resulting knowledge application requirements. Participants favored GEtiT VR, suggesting a potentially higher learning quality when using VR.
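For context, the affine transformations at the heart of GEtiT combine a linear map with a translation; in homogeneous coordinates, a 2D example can be written as

\mathbf{y} = A\mathbf{x} + \mathbf{t}, \qquad \begin{pmatrix} y_1 \\ y_2 \\ 1 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & t_1 \\ a_{21} & a_{22} & t_2 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ 1 \end{pmatrix},

where A encodes rotation, scaling, and shearing, and \mathbf{t} a translation. This is standard background, not material taken from the paper itself.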
Mobi3DSketch: 3D Sketching in Mobile AR
Mid-air 3D sketching has mainly been explored in Virtual Reality (VR) and typically requires special hardware for motion capture and immersive, stereoscopic displays. Recently developed motion tracking algorithms allow real-time tracking of mobile devices and have enabled a few mobile applications for 3D sketching in Augmented Reality (AR). However, these are suitable only for simple drawings, since they do not address the particular challenges of mobile AR 3D sketching, including the lack of stereo display, the narrow field of view, and the coupling of 2D input, 3D input, and display. To address these issues, we present Mobi3DSketch, which integrates multiple sources of input with tools, mainly different versions of 3D snapping and planar/curved surface proxies. Our multimodal interface supports both absolute and relative drawing, allowing easy creation of 3D concept designs in situ. The effectiveness and expressiveness of Mobi3DSketch are demonstrated via a pilot study.
VirtualComponent: A Mixed-Reality Tool for Designing and Tuning Breadboarded Circuits
Prototyping electronic circuits is an increasingly popular activity, supported by researchers, who develop toolkits to improve the design, debugging, and fabrication of electronics. Although past work mainly dealt with circuit topology, in this paper we propose a system for determining or tuning the values of the circuit components. Based on the results of a formative study with seventeen makers, we designed VirtualComponent, a mixed-reality tool that allows users to digitally place electronic components on a real breadboard, tune their values in software, and see these changes applied to the physical circuit in real-time. VirtualComponent is composed of a set of plug-and-play modules containing banks of components, and a custom breadboard managing the connections and components’ values. Through demonstrations and the results of an informal study with twelve makers, we show that VirtualComponent is easy to use and allows users to test components’ value configurations with little effort.
Ethical Mediation in UX Practice
HCI scholars have become increasingly interested in describing the complex nature of UX practice. In parallel, HCI and STS scholars have sought to describe the ethical and value-laden relationship between designers and design outcomes. However, little research describes the ethical engagement of UX practitioners as a form of design complexity, including the multiple mediating factors that impact ethical awareness and decision-making. In this paper, we use a practice-led approach to describe ethical complexity, presenting three varied cases of UX practitioners based on in situ observations and interviews. In each case, we describe salient factors relating to ethical mediation, including organizational practices, self-driven ethical principles, and unique characteristics of specific projects the practitioner is engaged in. Using the concept of mediation from activity theory, we provide a rich account of practitioners’ ethical decision making. We propose future work on ethical awareness and design education based on the concept of ethical mediation.
Comparing Apples and Oranges: Taxonomy and Design of Pairwise Comparisons within Tabular Data
Asking pairwise comparison questions is common. Yet, we often find ourselves comparing apples and oranges — the two entities of interest are not readily comparable. To understand how technologies can extend our capabilities to conduct pairwise comparisons during data analysis, we analyzed pairwise comparison questions collected from crowd workers and propose a taxonomy of pairwise comparisons. We demonstrate how the taxonomy can be adopted by incorporating pairwise comparison capabilities into Duo, a spreadsheet application that supports comparing two groups of records in a data table. Duo decomposes a pairwise comparison question into rules and showcases sloppy rules, a query technique for specifying pairwise comparisons. We conducted a user study comparing sloppy rules and natural language. The findings suggest that for easier pairwise comparison tasks, the two techniques are comparable in efficiency and preference and that for more difficult pairwise comparison tasks, sloppy rules allow faster specification and are more preferable.
“Everyone Has Some Personal Stuff”: Designing to Support Digital Privacy with Shared Mobile Phone Use in Bangladesh
People in South Asia frequently share a single device among multiple individuals, resulting in digital privacy challenges. This paper explores a design concept that aims to mitigate some of these challenges through a ‘tiered’ privacy model. Using this model, a person creates a ‘shared’ account that contains data they are willing to share and that is assigned a password that will be shared. Simultaneously, they create a separate ‘secret’ account that contains data they prefer to keep secret and that uses a password they do not share with anyone. When a friend or family member asks to check their device, the user can tell them the password for their shared account, with their private data secure in the secret account that the other person is unaware of. We explore the benefits and trade-offs of our design through a three-week deployment with 21 participants in Bangladesh, presenting findings that show how our work aids digital privacy while also exposing the challenges that remain.
Accessing a New Land: Designing for a Social Conceptualisation of Access
This paper presents a study of mobile phone use by people settling in a new land to access state provided digital services. It shows that digital literacy and access to technology are not the only resources and capabilities needed to successfully access digital services and do not guarantee a straightforward resettlement process. Using creative engagement methods, the research involved 132 “newcomers” seeking to settle in Sweden. Ribot and Peluso’s theory of access (2003) was employed to examine the complex web of access experienced by our participants. We uncover that when communities are dealing with high levels of precarity, their primary concerns are related to accessing the benefits of a service, rather than controlling access. Broadening the HCI framework, the paper concludes that a sociotechnical model of access needs to connect access control and access benefit to facilitate the design of an effective digital service.
Gamified Ads: Bridging the Gap Between User Enjoyment and the Effectiveness of Online Ads
While the use of ad blockers prevents negative impacts of advertising on user experience, it poses a serious threat to the business model of commercial web services and freely available content on the web. As an alternative, we investigate the user enjoyment and the advertising effectiveness of playfully deactivating online ads. We created eight game concepts, performed a pre-study assessing the users’ perception of them (N=50) and implemented three well-perceived ones. In a lab study (N=72), we found that these game concepts are more enjoyable than deactivating ads without game elements. Additionally, one game concept was even preferred over using an ad blocker. Notably, playfully deactivating ads was shown to have a positive impact on users’ brand and product memory, enhancing the advertising effectiveness. Thus, our results indicate that playfully deactivating ads is a promising way of bridging the gap between user enjoyment and effective advertising.
The Role of Physical Props in VR Climbing Environments
Dealing with fear of falling is a challenge in sport climbing. Virtual reality (VR) research suggests that using physical and reality-based interaction increases presence in VR. In this paper, we present a study that investigates the influence of physical props on presence, stress, and anxiety in a VR climbing environment involving whole-body movement. To help climbers overcome fear of falling, we compared three different conditions: climbing in reality at 10 m height, physical climbing in VR (with props attached to the climbing wall), and virtual climbing in VR using game controllers. From subjective reports and biosignals, our results show that climbing with props in VR increases anxiety and the sense of realism in VR for sport climbing. This suggests that VR in combination with physical props is an effective simulation setup to induce the sense of height.
Digital Fabrication of Soft Actuated Objects by Machine Knitting
With recent interest in shape-changing interfaces, material-driven design, wearable technologies, and soft robotics, digital fabrication of soft actuatable material is increasingly in demand. Much of this research focuses on elastomers or non-stretchy air bladders. Computationally-controlled machine knitting offers an alternative fabrication technology which can rapidly produce soft textile objects that have a very different character: breathable, lightweight, and pleasant to the touch. These machines are well established and optimized for the mass production of garments, but compared to other digital fabrication techniques such as CNC machining or 3D printing, they have received much less attention as general purpose fabrication devices. In this work, we explore new ways to employ machine knitting for the creation of actuated soft objects. We describe the basic operation of this type of machine, then show new techniques for knitting tendon-based actuation into objects. We explore a series of design strategies for integrating tendons with shaping and anisotropic texture design. Finally, we investigate different knit material properties, including considerations for motor control and sensing.
Sketching NLP: A Case Study of Exploring the Right Things To Design with Language Intelligence
This paper investigates how to sketch NLP-powered user experiences. Sketching is a cornerstone of design innovation. When sketching, designers rapidly experiment with a number of abstract ideas using simple, tangible instruments such as drawings and paper prototypes. Sketching NLP-powered experiences, however, presents challenges: How to visualize abstract language interaction? How to ideate a broad range of technically feasible intelligent functionalities? As a first step towards understanding these challenges, we present a first-person account of our sketching process when designing intelligent writing assistance. We detail the challenges we encountered and emergent solutions, such as a new wireframe format for sketching language interactions and a new wizard-of-oz-based NLP rapid prototyping method. Drawing on these findings, we discuss the importance of abstraction in sketching and other implications.
Engagement with Mental Health Screening on Mobile Devices: Results from an Antenatal Feasibility Study
Perinatal depression (PND) affects up to 15% of women within the United Kingdom and has a lasting impact on a woman’s quality of life, birth outcomes and her child’s development. Suicide is the leading cause of maternal mortality. However, it is estimated that at least 50% of PND cases go undiagnosed. This paper presents the results of the first feasibility study to examine the potential of mobile devices to engage women in antenatal mental health screening. Using a mobile application, 254 women attending 14 National Health Service midwifery clinics provided 2,280 momentary and retrospective reports of their wellbeing over a 9-month period. Women spoke positively of the experience, installing and engaging with this technology regardless of age, education, wellbeing, number of children, marital or employment status, or past diagnosis of depression. 39 women reported a risk of depression, self-harm or suicide; two-thirds of whom were not identified by screening in-clinic.
Automating the Intentional Encoding of Human-Designable Markers
Recent work established that it is possible for human artists to encode information into hand-drawn markers, but it is difficult to do when simultaneously maintaining aesthetic quality. We present two methods for relieving the mental burden associated with encoding, while allowing an artist to draw as freely as possible. A ‘Helper Overlay’ guides the artist with real-time feedback indicating where visual features should be added or removed, and an ‘Autocomplete Tool’ directly adds necessary features to the drawing for the artist to touch up. Both methods are enabled by a two-part algorithm that uses a tree-search for finding ‘major’ changes and a dynamic programming method for finding the minimum number of ‘minor’ changes. A 24-person study demonstrates that a majority of participants prefer both tools over previous methods of manual encoding, with the Helper Overlay being the more popular of the two.
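The “minimum number of ‘minor’ changes” computed by the two-part algorithm is, in spirit, a minimum-edit problem. As a generic illustration only (the paper’s tree-search over ‘major’ changes and its visual feature representation are not reproduced), the following dynamic program computes the minimum number of add/remove/substitute operations between two abstract feature sequences.

# Generic minimum-edit dynamic program, shown purely as an illustration of
# this class of computation; feature sequences here are plain strings.
def min_edits(current, target):
    m, n = len(current), len(target)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                              # remove remaining features
    for j in range(n + 1):
        dp[0][j] = j                              # add missing features
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if current[i - 1] == target[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # remove a feature
                           dp[i][j - 1] + 1,      # add a feature
                           dp[i - 1][j - 1] + cost)  # substitute a feature
    return dp[m][n]

print(min_edits("ABAB", "AABB"))  # -> 2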
Ethical Dimensions of Visualization Research
Visualizations have a potentially enormous influence on how data are used to make decisions across all areas of human endeavor. However, it is not clear how this power connects to ethical duties: what obligations do we have when it comes to visualizations and visual analytics systems, beyond our duties as scientists and engineers? Drawing on historical and contemporary examples, I address the moral components of the design and use of visualizations, identify some ongoing areas of visualization research with ethical dilemmas, and propose a set of additional moral obligations that we have as designers, builders, and researchers of visualizations.
Investigating the Effect of Orientation and Visual Style on Touchscreen Slider Performance
Sliders are one of the most fundamental components used in touchscreen user interfaces (UIs). When entering data using a slider, errors occur, e.g., due to limits of visual perception, resulting in inputs that do not match what the user intended. However, it is unclear whether the errors occur uniformly across the full range of the slider or whether there are systematic offsets. We conducted a study to assess the errors occurring when entering values with horizontal and vertical sliders as well as two common visual styles. Our results reveal significant effects of slider orientation and style on the precision of the entered values. Furthermore, we identify systematic offsets that depend on the visual style and the target value. As the errors are partially systematic, they can be compensated for to improve users’ precision. Our findings provide UI designers with data to optimize user experiences in the wide variety of application areas where slider-based touchscreen input is used.
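Because the reported offsets are systematic rather than random, they can in principle be modeled from calibration data and subtracted at input time. The sketch below is one hypothetical way to do this: fit a low-order polynomial of offset versus value for a given orientation/style and use it to correct raw slider input. The data and the linear offset model are assumptions, not the paper’s measurements.

# Hedged sketch of compensating systematic slider offsets: fit the mean
# offset as a function of target value (per orientation/style) and subtract it.
import numpy as np

targets = np.array([10, 25, 40, 55, 70, 85], dtype=float)
entered = targets + np.array([1.8, 1.2, 0.5, -0.4, -1.1, -1.9])  # toy offsets

coeffs = np.polyfit(targets, entered - targets, deg=1)  # linear offset model

def compensate(raw_value):
    # Uses the raw entered value as a proxy for the intended target value.
    return raw_value - np.polyval(coeffs, raw_value)

print(round(compensate(71.0), 2))  # corrected estimate of the intended value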
Detecting Perception of Smartphone Notifications Using Skin Conductance Responses
Today’s smartphone notification systems are incapable of determining whether a notification has been successfully perceived without explicit interaction from the user. If the system incorrectly assumes that a notification has not been perceived, it may repeat it redundantly, disrupting the user and others (e.g., phone ringing). Or, if it incorrectly assumes that a notification was perceived, and therefore fails to repeat it, the notification will be missed altogether (e.g., text message). Results from a laboratory study confirm, for the first time, that both vibrotactile and auditory smartphone notifications induce skin conductance responses (SCR), that the induced responses differ from that of arbitrary stimuli, and that they could be employed to predict perception of smartphone notifications after their presentation using wearable sensors.
Exploring the Opportunities for Technologies to Enhance Quality of Life with People who have Experienced Vision Loss
Research predicts that 196 million people will be diagnosed with Age-Related Macular Degeneration (AMD) by 2020. People who experience AMD and other forms of vision loss face barriers that affect their Quality of Life (QoL). People experience only modest improvement from technologies (e.g., screen readers, CCTV), tools (e.g., magnifying glasses, tactile buttons), and human help (e.g., friends, blindness organizations). Further, there are issues with accessing these resources depending on one’s place of residence. To explore these challenges and determine design implications to support people who have experienced vision loss (PVL), we conducted a qualitative semi-structured interview study exploring QoL with 10 PVL. We uncovered themes of supporting creative work, recognizing the impact of living in a non-urban setting on QoL, and increasing efficiency at accomplishing tasks. We motivate the inclusion of PVL in the design process because they learned skills while sighted and are now low vision or blind.
Ranked-List Visualization: A Graphical Perception Study
Visualization of ranked lists is a common occurrence, but many in-the-wild solutions fly in the face of vision science and visualization wisdom. For example, treemaps and bubble charts are commonly used for this purpose, despite the fact that the data is not hierarchical and that length is easier to perceive than area. Furthermore, several new visual representations have recently been suggested in this area, including wrapped bars, packed bars, piled bars, and Zvinca plots. To quantify the differences and trade-offs for these ranked-list visualizations, we here report on a crowdsourced graphical perception study involving six such visual representations, including the ubiquitous scrolled barchart, in three tasks: ranking (assessing a single item), comparison (two items), and average (assessing global distribution). Results show that wrapped bars may be the best choice for visualizing ranked lists, and that treemaps are surprisingly accurate despite the use of area rather than length to represent value.
A Lie Reveals the Truth: Quasimodes for Task-Aligned Data Presentation
Designers are often discouraged from creating data visualizations that omit or distort information, because they can easily be misleading. However, the same representations that could be used to deceive can provide benefits when chosen to appropriately align with user tasks. We present an interaction technique, Perceptual Glimpses, which allows for the transparent presentation of so-called ‘deceptive’ views of information that are made temporary using quasimodes. When presented using Perceptual Glimpses, message-level exaggeration caused by a truncated axis on a bar chart was reduced under some conditions, but users require guidance to avoid errors, and view presentation order may affect trust. When Perceptual Glimpses was extended to display a range of views that might otherwise be deceptive or difficult to understand if shown out of context, users were able to understand and leverage these transformations to perform a range of low-level tasks. Design recommendations and examples suggest extensions of the technique.
You ‘Might’ Be Affected: An Empirical Analysis of Readability and Usability Issues in Data Breach Notifications
Data breaches place affected individuals at significant risk of identity theft. Yet, prior studies have shown that many consumers do not take protective actions after receiving a data breach notification from a company. We analyzed 161 data breach notifications sent to consumers with respect to their readability, structure, risk communication, and presentation of potential actions. We find that notifications are long and require advanced reading skills. Many companies downplay or obscure the likelihood of the receiver being affected by the breach and associated risks. Moreover, potential actions and offered compensations are frequently described in lengthy paragraphs instead of clearly listed. Little information is provided regarding an action’s urgency and effectiveness; little guidance is provided on which actions to prioritize. Based on our findings, we provide recommendations for designing more usable and informative data breach notifications that could help consumers better mitigate the consequences of being affected by a data breach.
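As background for the finding that notifications “require advanced reading skills”: readability is commonly estimated with formulas such as the Flesch Reading Ease score (the paper’s exact metric is not stated here),

\mathrm{FRE} = 206.835 - 1.015 \cdot \frac{\text{total words}}{\text{total sentences}} - 84.6 \cdot \frac{\text{total syllables}}{\text{total words}},

where lower scores indicate harder text and scores below roughly 50 correspond to college-level reading.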
Understanding Trust, Transportation, and Accessibility through Ridesharing
Relatively few studies of accessibility and transportation for people with vision impairments have investigated forms of transportation besides public transportation and walking. To develop a more nuanced understanding of this context, we turn to ridesharing, an increasingly used mode of transportation. We interviewed 16 visually-impaired individuals about their active use of ridesharing services like Uber and Lyft. Our findings show that, while people with vision impairments value independence, ridesharing involves building trust across a complex network of stakeholders and technologies. This data is used to start a discussion on how other systems can facilitate trust for people with vision impairments by considering the role of conversation, affordances of system incentives, and increased agency.
Experimental Analysis of Barehand Mid-air Mode-Switching Techniques in Virtual Reality
We present an empirical comparison of eleven bare hand, mid-air mode-switching techniques suitable for virtual reality in two experiments. The first evaluates seven techniques spanning dominant and non-dominant hand actions. Techniques represent common classes of actions selected by a methodical examination of 56 examples of prior art. The standard “subtraction method” protocol is adapted for 3D interfaces, with two baseline selection methods, bare hand pinch and device controller button. A second experiment with four techniques explores more subtle dominant-hand techniques and the effect of using a dominant hand device for selection. Results provide guidance to practitioners when choosing bare hand, mid-air mode-switching techniques, and for researchers when designing new mode-switching methods in VR.
Designing Interactive 3D Printed Models with Teachers of the Visually Impaired
Students with visual impairments struggle to learn various concepts in the academic curriculum because diagrams, images, and other visuals are not accessible to them. To address this, researchers have designed interactive 3D printed models (I3Ms) that provide audio descriptions when a user touches components of a model. In prior work, I3Ms were designed on an ad hoc basis, and it is currently unknown what general guidelines produce effective I3M designs. To address this gap, we conducted two studies with Teachers of the Visually Impaired (TVIs). First, we led two design workshops with 35 TVIs, who modified sample models and added interactive elements to them. Second, we worked with three TVIs to design three I3Ms in an iterative instructional design process. At the end of this process, the TVIs used the I3Ms we designed to teach their students. We conclude that I3Ms should (1) have effective tactile features (e.g., distinctive patterns between components), (2) contain both auditory and visual content (e.g., explanatory animations), and (3) consider pedagogical methods (e.g., overview before details).
Defending My Castle: A Co-Design Study of Privacy Mechanisms for Smart Homes
Home is a person’s castle, a private and protected space. Internet-connected devices such as locks, cameras, and speakers might make a home “smarter” but also raise privacy issues because these devices may constantly and inconspicuously collect, infer or even share information about people in the home. To explore user-centered privacy designs for smart homes, we conducted a co-design study in which we worked closely with diverse groups of participants in creating new designs. This study helps fill the gap in the literature between studying users’ privacy concerns and designing privacy tools only by experts. Our participants’ privacy designs often relied on simple strategies, such as data localization, disconnection from the Internet, and a private mode. From these designs, we identified six key design factors: data transparency and control, security, safety, usability and user experience, system intelligence, and system modality. We discuss how these factors can guide design for smart home privacy.
Understanding Perceptions of Problematic Facebook Use: When People Experience Negative Life Impact and a Lack of Control
While many people use social network sites to connect with friends and family, some feel that their use is problematic, seriously affecting their sleep, work, or life. Pairing a survey of 20,000 Facebook users measuring perceptions of problematic use with behavioral and demographic data, we examined Facebook activities associated with problematic use as well as the kinds of people most likely to experience it. People who feel their use is problematic are more likely to be younger, male, and going through a major life event such as a breakup. They spend more time on the platform, particularly at night, and spend proportionally more time looking at profiles and less time browsing their News Feeds. They also message their friends more frequently. While they are more likely to respond to notifications, they are also more likely to deactivate their accounts, perhaps in an effort to better manage their time. Further, they are more likely to have seen content about social media or phone addiction. Notably, people reporting problematic use rate the site as more valuable to them, highlighting the complex relationship between technology use and well-being. A better understanding of problematic Facebook use can inform the design of context-appropriate and supportive tools to help people become more in control.
Typing on Split Keyboards with Peripheral Vision
Split keyboards are widely used on hand-held touchscreen devices (e.g., tablets). However, typing on a split keyboard often requires eye movement and attention switching between the two halves of the keyboard, which slows users down and increases fatigue. We explore peripheral typing, a superior typing mode in which a user focuses her visual attention on the output text and keeps the split keyboard in peripheral vision. Our investigation showed that peripheral typing reduced attention switching, enhanced the user experience, and increased overall performance (27 WPM, 28% faster) over the typical eyes-on typing mode. This typing mode can be well supported by accounting for the typing behavior in statistical decoding. Based on our study results, we designed GlanceType, a text entry system that supports both peripheral and eyes-on typing modes for real typing scenarios. Our evaluation showed that peripheral typing not only coexisted well with the existing eyes-on typing, but also substantially improved text entry performance. Overall, peripheral typing is a promising typing mode, and supporting it would significantly improve text entry performance on a split keyboard.
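Statistical decoding of noisy touch input typically combines a spatial likelihood (how far each tap lands from each key) with a language-model prior over words. The sketch below shows that generic Bayesian combination with a Gaussian touch model and a toy unigram prior; it is an illustration of the approach in general, not GlanceType’s actual decoder, and all geometry and probabilities are made up.

# Generic Bayesian touch-decoding sketch (hypothetical layout and lexicon).
import math

KEY_CENTERS = {"c": (3.0, 2.0), "a": (0.5, 1.0), "t": (4.5, 0.0),
               "r": (3.5, 0.0)}                      # toy keyboard geometry
LEXICON = {"cat": 0.7, "car": 0.3}                   # toy unigram prior
SIGMA = 0.6                                          # touch-noise std. dev.

def log_score(taps, word):
    # Language-model prior plus Gaussian spatial likelihood per tap.
    score = math.log(LEXICON[word])
    for (x, y), ch in zip(taps, word):
        kx, ky = KEY_CENTERS[ch]
        score += -((x - kx) ** 2 + (y - ky) ** 2) / (2 * SIGMA ** 2)
    return score

taps = [(3.1, 1.9), (0.6, 1.1), (4.2, 0.1)]          # noisy touch points
print(max(LEXICON, key=lambda w: log_score(taps, w)))  # -> likely "cat"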
Mixed Reality Remote Collaboration Combining 360 Video and 3D Reconstruction
Remote collaboration using Virtual Reality (VR) and Augmented Reality (AR) has recently become a popular way for people in different places to work together. Local workers can collaborate with remote helpers by sharing 360-degree live video or a 3D virtual reconstruction of their surroundings. However, each of these techniques has benefits and drawbacks. In this paper, we explore mixing 360 video and 3D reconstruction for remote collaboration, preserving the benefits of both while reducing the drawbacks of each. We developed a hybrid prototype and conducted a user study to compare the benefits and problems of using 360 or 3D alone, to clarify the need for mixing the two, and to evaluate the prototype system. We found that participants performed significantly better on collaborative search tasks in 360 and felt higher social presence, yet 3D also showed potential to complement it. Participant feedback collected after trying our hybrid system provided directions for improvement.
Decision-Making Under Uncertainty in Research Synthesis: Designing for the Garden of Forking Paths
To make evidence-based recommendations to decision-makers, researchers conducting systematic reviews and meta-analyses must navigate a garden of forking paths: a series of analytical decision-points, each of which has the potential to influence findings. To identify challenges and opportunities related to designing systems to help researchers manage uncertainty around which of multiple analyses is best, we interviewed 11 professional researchers who conduct research synthesis to inform decision-making within three organizations. We conducted a qualitative analysis identifying 480 analytical decisions made by researchers throughout the scientific process. We present descriptions of current practices in applied research synthesis and corresponding design challenges: making it more feasible for researchers to try and compare analyses, shifting researchers’ attention from rationales for decisions to impacts on results, and supporting communication techniques that acknowledge decision-makers’ aversions to uncertainty. We identify opportunities to design systems which help researchers explore, reason about, and communicate uncertainty in decision-making about possible analyses in research synthesis.
ReType: Quick Text Editing with Keyboard and Gaze
When a user needs to reposition the cursor during text editing, this is often done using the mouse. For experienced typists especially, the switch between keyboard and mouse can slow down the keyboard editing workflow considerably. To address this we propose ReType, a new gaze-assisted positioning technique combining keyboard with gaze input based on a new ‘patching’ metaphor. ReType allows users to perform some common editing operations while keeping their hands on the keyboard. We present the result of two studies. A free-use study indicated that ReType enhances the user experience of text editing. ReType was liked by many participants, regardless of their typing skills. A comparative user study showed that ReType is able to match or even beat the speed of mouse-based interaction for small text edits. We conclude that the gaze-augmented user interface can make common interactions more fluent, especially for professional keyboard users.
Desktop Electrospinning: A Single Extruder 3D Printer for Producing Rigid Plastic and Electrospun Textiles
We present a new type of 3D printer that combines rigid plastic printing with melt electrospinning, a technique that uses electrostatic forces to create thin fibers from a molten polymer. Our printer enables custom-shaped textile sheets (similar in feel to wool felt) to be produced alongside rigid plastic using a single material (i.e., PLA) in a single process. We contribute open-source firmware, hardware specifications, and printing parameters to achieve melt electrospinning. Our approach offers new opportunities for fabricating interactive objects and sensors that blend the flexibility, absorbency, and softness of the produced electrospun textiles with the structure and rigidity of hard plastic for actuation, sensing, and tactile experiences.
Reading Face, Reading Health: Exploring Face Reading Technologies for Everyday Health
With the recent advancement in computer vision, Artificial Intelligence (AI), and mobile technologies, it has become technically feasible for computerized Face Reading Technologies (FRTs) to learn about one’s health in everyday settings. However, how to design FRT-based applications for everyday health practices remains unexplored. This paper presents a design study with a technology probe called Faced, a mobile health checkup application based on the facial diagnosis method from Traditional Chinese Medicine (TCM). A field trial of Faced with 10 participants suggests potential usage modes and highlights a number of critical design issues in the use of FRTs for everyday health, including adaptability, practicality, sensitivity, and trustworthiness. We end by discussing design implications to address the unique challenges of fully integrating FRTs into everyday health practices.
Editing Spatial Layouts through Tactile Templates for People with Visual Impairments
Spatial layout is a key component in graphic design. While people who are blind or visually impaired (BVI) can use screen readers or magnifiers to access digital content, these tools fail to fully communicate the content’s graphic design information. Through semi-structured interviews and contextual inquiries, we identify the lack of this information and feedback as major challenges in understanding and editing layouts. Guided by these insights and a co-design process with a blind hobbyist web developer, we developed an interactive, multimodal authoring tool that lets blind people understand spatial relationships between elements and modify layout templates. Our tool automatically generates tactile print-outs of a web page’s layout, which users overlay on top of a tablet that runs our self-voicing digital design tool. We conclude with design considerations grounded in user feedback for improving the accessibility of spatially encoded information and developing tools for BVI authors.
The Effect of Stereo Display Deficiencies on Virtual Hand Pointing
The limitations of stereo display systems affect depth perception, e.g., due to the vergence-accommodation conflict or diplopia. We performed three studies to understand how stereo display deficiencies impact 3D pointing for targets in front of a screen and close to the user, i.e., in peripersonal space. Our first two experiments compare movements with and without a change in visual depth for virtual and physical targets, respectively. Results indicate that selecting targets along the depth axis is slower and has lower throughput for virtual targets, while physical pointing demonstrates the opposite result. We then propose a new 3D extension for Fitts’ law that models the effect of stereo display deficiencies. Next, our third experiment verifies the model and measures more broadly how the change in visual depth between targets affects pointing performance in peripersonal space, confirming significant effects on time and throughput. Finally, we discuss implications for 3D user interface design.
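For reference, the standard (Shannon) formulation of Fitts’ law and the throughput measure commonly reported in pointing studies are

MT = a + b \log_2\!\left(\frac{D}{W} + 1\right), \qquad \mathrm{TP} = \frac{\log_2\!\left(\frac{D}{W} + 1\right)}{MT},

where D is the movement amplitude, W the target width, and a, b empirically fitted constants (throughput is often computed with effective rather than nominal distance and width). The paper’s own 3D extension, which models the effect of visual depth change, is not reproduced here.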
Effects of Local Latency on Game Pointing Devices and Game Pointing Tasks
Studies have shown certain game tasks such as targeting to be negatively and significantly affected by latencies as low as 41ms. Therefore it is important to understand the relationship between local latency – delays between an input action and resulting change in the display – and common gaming tasks such as targeting and tracking. In addition, games now use a variety of input devices, including touchscreens, mice, tablets and controllers. These devices provide very different combinations of direct/indirect input, absolute/relative movement, and position/rate control, and are likely to be affected by latency in different ways. We performed a study evaluating and comparing the effects of latency across four devices (touchscreen, mouse, controller and drawing tablet) on targeting and interception tasks. We analyze both throughput and path characteristics, identify differences between devices, and provide design considerations for game designers.
Chatbots, Humbots, and the Quest for Artificial General Intelligence
What began as a quest for artificial general intelligence branched into several pursuits, including intelligent assistants developed by tech companies and task-oriented chatbots that deliver more information or services in specific domains. Progress quickened with the spread of low-latency networking, then accelerated dramatically a few years ago. In 2016, task-focused chatbots became a centerpiece of machine intelligence, promising interfaces that are more engaging than robotic answering systems and that can accommodate our increasingly phone-based information needs. Hundreds of thousands were built. Creating successful non-trivial chatbots proved more difficult than anticipated. Some developers now design for human-chatbot (humbot) teams, with people handling difficult queries. This paper describes the conversational agent space, difficulties in meeting user expectations, potential new design approaches, uses of human-bot hybrids, and implications for the ultimate goal of creating software with general intelligence.
Haptic Navigation Cues on the Steering Wheel
Haptic feedback is used in cars to reduce visual inattention. While tactile feedback like vibration can be influenced by the car’s movement, thermal and cutaneous push feedback should be independent of such interference. This paper presents two driving simulator studies investigating novel tactile feedback on the steering wheel for navigation. In the first, devices on one side of the steering wheel were warmed to indicate the turning direction, while those on the other side were cooled. This thermal feedback was compared to audio. The thermal navigation led to 94.2% correct recognitions of warnings 200 m before the turn and to 91.7% correct turns. Speech had perfect recognition for both. In the second experiment, only the destination side was indicated thermally, and this design was compared to cutaneous push feedback. The simplified thermal feedback design did not increase recognition, but cutaneous push feedback had high recognition rates (100% for 200 m warnings, 98% for turns).
Drag:on: A Virtual Reality Controller Providing Haptic Feedback Based on Drag and Weight Shift
Standard controllers for virtual reality (VR) lack sophisticated means to convey a realistic, kinesthetic impression of size, resistance or inertia. We present the concept and implementation of Drag:on, an ungrounded shape-changing VR controller that provides dynamic passive haptic feedback based on drag, i.e. air resistance, and weight shift. Drag:on leverages the airflow occurring at the controller during interaction. By dynamically adjusting its surface area, the controller changes the drag and rotational inertia felt by the user. In a user study, we found that Drag:on can provide distinguishable levels of haptic feedback. Our prototype increases the haptic realism in VR compared to standard controllers and when rotated or swung improves the perception of virtual resistance. By this, Drag:on provides haptic feedback suitable for rendering different virtual mechanical resistances, virtual gas streams, and virtual objects differing in scale, material and fill state.
ForceRay: Extending Thumb Reach via Force Input Stabilizes Device Grip for Mobile Touch Input
Smartphones are used predominantly one-handed, using the thumb for input. Many smartphones, however, have grown beyond 5″. Users cannot tap everywhere on these screens without destabilizing their grip. ForceRay (FR) lets users aim at an out-of-reach target by applying a force touch at a comfortable thumb location, casting a virtual ray towards the target. Varying pressure moves a cursor along the ray. When reaching the target, quickly lifting the thumb selects it. In a first study, FR was 195 ms slower and had a 3% higher selection error than the best existing technique, BezelCursor (BC), but FR caused significantly less device movement than all other techniques, letting users maintain a steady grip and removing their concerns about device drops. A second study showed that an hour of training speeds up both BC and FR, and that both are equally fast for targets at the screen border.
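The described interaction maps thumb pressure to a position along a ray cast from the thumb toward the out-of-reach region. The sketch below is a minimal, hypothetical version of such a mapping (linear transfer function, made-up dead zone and reach parameters); ForceRay’s actual transfer function and parameters are not reproduced.

# Hypothetical sketch of mapping normalized force to a cursor position along
# a ray from the thumb toward an out-of-reach screen region.
def cursor_on_ray(thumb_xy, direction_xy, force, max_reach_px=900.0,
                  dead_zone=0.05):
    # force in [0, 1]; below the dead zone the cursor stays at the thumb.
    f = max(0.0, min(1.0, force))
    t = 0.0 if f < dead_zone else (f - dead_zone) / (1.0 - dead_zone)
    dx, dy = direction_xy
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0        # normalize the ray direction
    return (thumb_xy[0] + t * max_reach_px * dx / norm,
            thumb_xy[1] + t * max_reach_px * dy / norm)

print(cursor_on_ray((900, 1700), (-1.0, -1.5), force=0.6))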
Understanding and Modeling User-Perceived Brand Personality from Mobile Application UIs
Designers strive to make their mobile apps stand out in a competitive market by creating a distinctive brand personality. However, it is unclear whether users can form a consistent impression of brand personality by looking at a few user interface (UI) screenshots in the app store, and if this process can be modeled computationally. To bridge this gap, we first collect crowd assessment on brand personalities depicted by the UIs of 318 applications, and statistically confirm that users can reach substantial agreement. To further model how users process mobile UI visually, we compute UI descriptors including Color, Organization, and Texture at both element and page levels. We feed these descriptors to a computational model, achieving a high accuracy of predicting perceived brand personality (MSE = 0.035 and R^2 = 0.78). This work could benefit designers by highlighting contributing visual factors to brand personality creation and providing quick, low-cost design feedback.
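The reported MSE and R^2 are standard regression metrics; the sketch below shows the generic pattern of regressing perceived-personality ratings on UI descriptors and evaluating with cross-validation. The synthetic data and the off-the-shelf model are stand-ins, not the paper’s descriptors or model.

# Generic regression-evaluation sketch (synthetic data; not the paper's model).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
X = rng.uniform(size=(318, 6))   # stand-ins for color/organization/texture descriptors
y = X @ rng.uniform(size=6) + rng.normal(scale=0.1, size=318)  # toy ratings

pred = cross_val_predict(RandomForestRegressor(random_state=0), X, y, cv=5)
print("MSE:", round(mean_squared_error(y, pred), 3),
      "R^2:", round(r2_score(y, pred), 2))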
Underneath the Skin: An Analysis of YouTube Videos to Understand Insertable Device Interaction
During the last decade, people have started to experiment with insertable technology such as RFID or NFC chips and to use them, e.g., for identification. However, little is known about how people actually interact with and adapt insertables. We conducted a video analysis of 122 YouTube videos to gain insight into interaction with insertables, and complemented this data with an online survey. Our findings show that there are many opportunities for interaction with insertables, both for task-oriented and creative purposes. However, there are also multiple challenges and obstacles, as well as side effects and health concerns. We conclude that the current infrastructure is not yet ready to support the use of insertables, and we discuss the implications of this.
Stroke-Gesture Input for People with Motor Impairments: Empirical Results & Research Roadmap
We examine the articulation characteristics of stroke-gestures produced by people with upper body motor impairments on touchscreens as well as the accuracy rates of popular classification techniques, such as the $-family, to recognize those gestures. Our results on a dataset of 9,681 gestures collected from 70 participants reveal that stroke-gestures produced by people with motor impairments are recognized less accurately than the same gesture types produced by people without impairments, yet still accurately enough (93.0%) for practical purposes; are similar in terms of geometrical criteria to the gestures produced by people without impairments; but take considerably more time to produce (3.4s vs. 1.7s) and exhibit lower consistency (-49.7%). We outline a research roadmap for accessible gesture input on touchscreens for users with upper body motor impairments, and we make our large gesture dataset publicly available in the community.
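The $-family recognizers mentioned here are lightweight template matchers. As a generic illustration (omitting the rotation search and assuming gestures have already been resampled to a fixed number of points), the sketch below normalizes scale and position and classifies a gesture by its nearest template; it does not reproduce the paper’s recognizers or evaluation setup.

# Hedged sketch of a $1-style normalization and matching step over gesture
# points that share a fixed length (rotation search omitted for brevity).
import math

def normalize(points, size=250.0):
    # Scale to a reference bounding box, then translate the centroid to the origin.
    xs, ys = zip(*points)
    w = (max(xs) - min(xs)) or 1e-9
    h = (max(ys) - min(ys)) or 1e-9
    scaled = [(x * size / w, y * size / h) for x, y in points]
    cx = sum(p[0] for p in scaled) / len(scaled)
    cy = sum(p[1] for p in scaled) / len(scaled)
    return [(x - cx, y - cy) for x, y in scaled]

def classify(gesture, templates):
    # templates: dict mapping gesture name to a point list of the same length.
    g = normalize(gesture)
    def dist(points):
        t = normalize(points)
        return sum(math.dist(p, q) for p, q in zip(g, t)) / len(g)
    return min(templates, key=lambda name: dist(templates[name]))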
Developing Accessible Services: Understanding Current Knowledge and Areas for Future Support
When creating digital artefacts, it is important to ensure that the product being made is accessible to as much of the population as possible. Many guidelines and supporting tools exist to assist in reaching this goal. However, little is known about developers’ understanding of accessible practice and the methods that are used to implement it. We present findings from an accessibility design workshop that was carried out with a mixture of 197 developers and digital technology students. We discuss perceptions of accessibility, techniques that are used when designing accessible products, and what areas of accessibility development participants believed were important. We show that there are gaps in the knowledge needed to develop accessible products despite the effort to promote accessible design. Our participants are themselves aware of where these gaps are and have suggested a number of areas where tools, techniques and guidance would improve their practice.
Touchstone2: An Interactive Environment for Exploring Trade-offs in HCI Experiment Design
Touchstone2 offers a direct-manipulation interface for generating and examining trade-offs in experiment designs. Based on interviews with experienced researchers, we developed an interactive environment for manipulating experiment design parameters, revealing patterns in trial tables, and estimating and comparing statistical power. We also developed TSL, a declarative language that precisely represents experiment designs. In two studies, experienced HCI researchers successfully used Touchstone2 to evaluate design trade-offs and calculate how many participants are required for particular effect sizes. We discuss Touchstone2’s benefits and limitations, as well as directions for future research.
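Touchstone2's power estimates are not reproduced here, but the following hedged sketch illustrates the general idea of simulation-based power estimation for a two-condition within-subjects design; the effect size, variability, and choice of a paired test are assumptions made for illustration, not part of Touchstone2 or TSL.

```python
# Rough sketch of simulation-based power estimation for a two-condition
# within-subjects design (not Touchstone2's implementation or TSL).
import numpy as np
from scipy import stats

def simulated_power(n_participants, effect_ms=100, sd_ms=150,
                    alpha=0.05, n_sims=2000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        # Per-participant difference between conditions (e.g., time in ms).
        diffs = rng.normal(effect_ms, sd_ms, n_participants)
        _, p = stats.ttest_1samp(diffs, 0.0)
        hits += p < alpha
    return hits / n_sims

for n in (8, 12, 16, 24):
    print(n, "participants -> power ~", round(simulated_power(n), 2))
```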
Socio-technical Dynamics: Cooperation of Emergent and Established Organisations in Crises and Disasters
The increasing ubiquity of information and communication technology is influencing crisis and disaster management. New media enable citizens to rapidly self-organise in emergent groups. A theoretical framing of their interactions with established organisations is lacking. To address this, we conduct a thematic analysis on qualitative data from the European migration crisis of 2015. We draw on context-rich material from both emergent groups and established organisations. To represent our findings, we introduce the notion of socio-technical dynamics. We derive implications for computer supported cooperative work in crises and disasters. These insights contribute to the efficient involvement of emergent groups in established systems.
Interpreting the Diversity in Subjective Judgments
In a CHI paper from 10 years ago, entitled “Accounting for Diversity in Subjective Judgments”, an interesting dichotomy was reported between, on the one hand, the increased use of idiosyncratic constructs when judging the user experience of diverse products and, on the other hand, the statistical methods available to analyze such data. That paper more specifically proposed a method to extract diverse perspectives (called views) from experimental data. The current paper provides three improvements to this existing method by: 1) showing that a little-known approach for clustering attributes, called VARCLUS, can be applied and extended to provide a better algorithm, 2) showing how the VARCLUS method can be applied to perform both within- and across-subject analysis, and 3) providing access to the VARCLUS method by incorporating it in ILLMO, a user-friendly and freely available program for interactive statistics.
ElasticVR: Providing Multilevel Continuously-Changing Resistive Force and Instant Impact Using Elasticity for VR
Resistive force (e.g., due to object elasticity) and impact (e.g., due to recoil) are common effects in our daily life. However, resistive force changes continuously with users’ movements, while impact occurs instantly when an event triggers it. Such feedback is still not realistically provided by current VR haptic methods. In this paper, we propose ElasticVR, a wearable device consisting of an elastic band, servo motors, and mechanical brakes, that provides continuously-changing resistive force and instantly-occurring impact on the user’s hand to enhance VR realism. By changing two physical properties of the elastic band, its length and extension distance, ElasticVR provides multilevel resistive force with no delay and impact with little delay, respectively, for realistic and versatile VR applications. A force perception study was performed to observe users’ ability to distinguish levels of resistive force and impact, and the prototype was built based on its results. A VR experience study further shows that the resistive force and impact from ElasticVR both outperform those from current approaches in realism. Applications using ElasticVR are also demonstrated.
Detecting Personality Traits Using Eye-Tracking Data
Personality is an established domain of research in psychology, and individual differences in various traits are linked to a variety of real-life outcomes and behaviours. Personality detection is an intricate task that typically requires humans to fill out lengthy questionnaires assessing specific personality traits. The outcomes of this, however, may be unreliable or biased if the respondents do not fully understand or are not willing to honestly answer the questions. To this end, we propose a framework for objective personality detection that leverages humans’ physiological responses to external stimuli. We exemplify and evaluate the framework in a case study, where we expose subjects to affective image and video stimuli, and capture their physiological responses using a commercial-grade eye-tracking sensor. These responses are then processed and fed into a classifier capable of accurately predicting a range of personality traits. Our work yields notably high predictive accuracy, suggesting the applicability of the proposed framework for robust personality detection.
Patient Perspectives on Self-Management Technologies for Chronic Fatigue Syndrome
Chronic Fatigue Syndrome (CFS) is a debilitating medical condition that is characterized by a range of physical, cognitive and social impairments. This paper investigates CFS patients’ perspectives on the potential for technological support for self-management of their symptoms. We report findings from three studies in which people living with CFS 1) prioritized symptoms that they would like technologies to address, 2) articulated their current approaches to self-management alongside challenges they face, and 3) reflected on their experiences with three commercial smartphone apps related to symptom management. We contribute an understanding of the specific needs of the ME/CFS population and the ways in which they currently engage in self-management using technology. The paper ends by describing five high-level design recommendations for ME/CFS self-management technologies.
The Role of Gaming During Difficult Life Experiences
HCI has become increasingly interested in the use of technology during difficult life experiences. Yet despite considerable popularity, little is known about how and why people engage with games in times of personal difficulty. Based on a qualitative analysis of an online survey (N=95), our findings indicate that games offered players much needed respite from stress, supported them in dealing with their feelings, facilitated social connections, stimulated personal change and growth, and provided a lifeline in times of existential doubt. However, despite an emphasis on gaming as being able to support coping in ways other activities did not, participants also referred to games as unproductive and as an obstacle to living well. We discuss these findings in relation to both coping process and outcome, while considering tensions around the potential benefits and perceived value of gaming.
The Dissimilarity-Consensus Approach to Agreement Analysis in Gesture Elicitation Studies
We introduce the dissimilarity-consensus method, a new approach to computing objective measures of consensus between users’ gesture preferences to support data analysis in end-user gesture elicitation studies. Our method models and quantifies the relationship between users’ consensus over gesture articulation and numerical measures of gesture dissimilarity, e.g., Dynamic Time Warping or Hausdorff distances, by employing growth curves and logistic functions. We exemplify our method on 1,312 whole-body gestures elicited from 30 children, ages 3 to 6 years, and we report the first empirical results in the literature on the consensus between whole-body gestures produced by children this young. We provide C# and R software implementations of our method and make our gesture dataset publicly available.
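The following is a simplified, hypothetical sketch of a dissimilarity-consensus style analysis (not the authors' exact formulation): pairwise DTW distances are computed among gestures for one referent, and a logistic function is fitted to the resulting consensus-versus-tolerance curve. The gestures here are random stand-ins.

```python
# Illustrative sketch of a dissimilarity-consensus style analysis
# (simplified; not the authors' exact method).
import numpy as np
from scipy.optimize import curve_fit

def dtw(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

def logistic(x, k, x0):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical gestures (sequences of 2D points) elicited from several users.
rng = np.random.default_rng(1)
gestures = [np.cumsum(rng.normal(size=(20, 2)), axis=0) for _ in range(8)]
d = np.sort([dtw(gestures[i], gestures[j])
             for i in range(len(gestures)) for j in range(i + 1, len(gestures))])

# Consensus at tolerance t = fraction of gesture pairs within dissimilarity t.
tol = np.linspace(0, d.max(), 50)
consensus = [(d <= t).mean() for t in tol]
(k, x0), _ = curve_fit(logistic, tol, consensus,
                       p0=[0.1, np.median(d)], maxfev=10000)
print(f"fitted logistic growth curve: k={k:.3f}, midpoint={x0:.1f}")
```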
A Framework for the Experience of Meaning in Human-Computer Interaction
The view of quality in human-computer interaction continuously develops, having in past decades included consistency, transparency, usability, and positive emotions. Recently, meaning is receiving increased interest in the user experience literature and in industry, referring to the end, purpose or significance of interaction with computers. However, the notion of meaning remains elusive and a bewildering number of senses are in use. We present a framework of meaning in interaction, based on a synthesis of psychological meaning research. The framework outlines five distinct senses of the experience of meaning: connectedness, purpose, coherence, resonance, and significance. We illustrate the usefulness of the framework by analyzing a selection of recent papers at the CHI conference and by raising a series of open research questions about the interplay of meaning, user experience, reflection, and well-being.
Sustainabot – Exploring the Use of Everyday Foodstuffs as Output and Input for and with Emergent Users
Mainstream digital interactions are spread over a plethora of devices and form factors, from mobiles to laptops and from printouts to large screens. For emergent users, however, such an abundance of choice is rarely accessible or affordable. In particular, viewing mobile content on a larger screen, or printing out copies, is often not an option. In this paper we present Sustainabot – a small robot printer that uses everyday materials to print shapes and patterns from mobile phones. Sustainabot was proposed and developed by and with emergent users through a series of co-creation workshops. We begin by discussing this process, then detail the open-source mobile printer prototype. We carried out two evaluations of Sustainabot, the first focused on printing with materials in situ, and the second on the understandability of its output. We present these results, and discuss opportunities and challenges for similar developments. We conclude by highlighting where and how similar devices could be used in future.
"Everything’s the Phone": Understanding the Phone’s Supercharged Role in Parent-Teen Relationships
Through focus groups (n=61) and surveys (n=2,083) of parents and teens, we investigated how parents and their teen children experience their own and each other’s phone use in the context of parent-teen relationships. Both expressed a lack of agency in their own and each other’s phone use, feeling overly reliant on their own phone and displaced by the other’s phone. In a classic example of the fundamental attribution error, each party placed primary blame on the other, and rationalized their own behavior with legitimizing excuses. We present a conceptual model showing how parents’ and teens’ relationships to their phones and perceptions of each other’s phone use are inextricably linked, and how, together, they contribute to parent-teen tensions and disconnections. We use the model to consider how the phone might play a less highly charged role in family life and contribute to positive connections between parents and their teen children.
On the Shoulder of the Giant: A Multi-Scale Mixed Reality Collaboration with 360 Video Sharing and Tangible Interaction
We propose a multi-scale Mixed Reality (MR) collaboration between the Giant, a local Augmented Reality user, and the Miniature, a remote Virtual Reality user, in Giant-Miniature Collaboration (GMC). The Miniature is immersed in a 360-video shared by the Giant, who can physically manipulate the Miniature through a tangible interface, a 360-camera combined with a 6 DOF tracker. We implemented a prototype system as a proof of concept and conducted a user study (n=24) comprising four parts that compared: A) two types of virtual representations, B) three levels of Miniature control, C) three levels of 360-video view dependencies, and D) four 360-camera placement positions on the Giant. The results show users prefer a shoulder-mounted camera view, while a view frustum with a complementary avatar is a good visualization for the Miniature virtual representation. From the results, we give design recommendations and demonstrate an example Giant-Miniature Interaction.
“I feel it is my responsibility to stream”: Streaming and Engaging with Intangible Cultural Heritage through Livestreaming
Globalization has led to the destruction of many cultural practices, expressions, and knowledge found within local communities. These practices, defined by UNESCO as Intangible Cultural Heritage (ICH), have been identified, promoted, and safeguarded by nations, academia, organizations and local communities to varying degrees. Despite such efforts, many practices are still in danger of being lost or forgotten forever. With the increased popularity of livestreaming in China, some streamers have begun to use livestreaming to showcase and promote ICH activities. To better understand the practices, opportunities, and challenges inherent in sharing and safeguarding ICH through livestreaming, we interviewed 10 streamers and 8 viewers from China. Through our qualitative investigation, we found that ICH streamers had altruistic motivations and engaged with viewers using multiple modalities beyond livestreams. We also found that livestreaming encouraged real-time interaction and sociality, while non-live curated videos attracted attention from a broader audience and assisted in the archiving of knowledge.
AILA: Attentive Interactive Labeling Assistant for Document Classification through Attention-Based Deep Neural Networks
Document labeling is a critical step in building various machine learning applications. However, the step can be time-consuming and arduous, requiring a significant amount of human effort. To support an efficient document labeling environment, we present a system called Attentive Interactive Labeling Assistant (AILA). At its core, AILA uses the Interactive Attention Module (IAM), a novel module that visually highlights words in a document that labelers may pay attention to when labeling it. IAM utilizes attention-based Deep Neural Networks, which not only predict which words to highlight but also enable labelers to indicate words that should be assigned a high attention weight while labeling, improving the quality of future word predictions. We evaluated labeling efficiency and accuracy by comparing the conditions with and without IAM in our study. The results showed that participants’ labeling efficiency increased significantly in the condition with IAM compared to the condition without it, while the two conditions maintained roughly the same labeling accuracy.
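As a toy illustration of how attention weights can drive word highlighting (not AILA's actual model), the snippet below computes softmax attention over stand-in word embeddings and prints per-word weights that a labeling interface could render as highlights.

```python
# Toy illustration of attention-based word highlighting (not AILA itself).
# Embeddings and the relevance query are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
words = "the refund was never issued despite repeated emails".split()
E = rng.normal(size=(len(words), 16))       # stand-in word embeddings
query = rng.normal(size=16)                 # stand-in "label relevance" query

scores = E @ query / np.sqrt(E.shape[1])    # scaled dot-product scores
weights = np.exp(scores - scores.max())
weights /= weights.sum()                    # softmax attention weights

for w, a in sorted(zip(words, weights), key=lambda p: -p[1]):
    print(f"{w:10s} {a:.2f}")               # top-weighted words get highlighted
```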
Multi-Modal Approaches for Post-Editing Machine Translation
Current advances in machine translation increase the need for translators to switch from traditional translation to post-editing (PE) of machine-translated text, a process that saves time and improves quality. This affects the design of translation interfaces, as the task changes from mainly generating text to correcting errors within otherwise helpful translation proposals. The results of our elicitation study with professional translators indicate that a combination of pen, touch, and speech could well support common PE tasks, and this combination received high subjective ratings from our participants. Therefore, we argue that future translation environment research should focus more strongly on these modalities in addition to mouse- and keyboard-based approaches. On the other hand, eye tracking and gesture modalities seem less important. An additional interview regarding interface design revealed that most translators would also see value in automatically receiving additional resources when a high cognitive load is detected during PE.
Neighborhood Perception in Bar Charts
In this paper, we report three user experiments that investigate to what extent the perception of a bar in a bar chart changes based on the height of its neighboring bars. We hypothesized, for instance, that the perception of the very same bar might differ when it is surrounded by the highest vs. the lowest bars in the chart. Our results show that such neighborhood effects exist: a target bar surrounded by high neighboring bars is perceived to be lower than the same bar surrounded by low neighbors. Yet, the size of this neighborhood effect is small compared to other data-inherent effects: judgment accuracy largely depends on the target bar’s rank, the number of data items, and other characteristics of the dataset. Based on these findings, we discuss design implications for perceptually optimizing bar charts.
Talking about Chat at Work in the Global South: An Ethnographic Study of Chat Use in India and Kenya
In this paper, we examine how two chat apps fit into the communication ecosystem of six large distributed enterprises in India and Kenya. From the perspective of management, these chat apps promised to foster greater communication and awareness between workers in the field, and between fieldworkers and the enterprises’ administration and management centres. Each organisation had multiple types of chat groups, characterised by the types of content and interaction patterns they mediate, and the different organisational functions they fulfil. Examining the interplay between chat and existing local practices for coordination, collaboration and knowledge-sharing, we discuss how chat manifests in the distributed workplace and how it fits — or otherwise — alongside the rhythms of both local and remote work. We contribute to understandings of chat apps for workplace communication and provide insights for shaping their ongoing development.
"What’s Happening at that Hip?": Evaluating an On-body Projection based Augmented Reality System for Physiotherapy Classroom
We present two studies that discuss the design, usability analysis, and educational outcomes resulting from our system, Augmented Body, in the physiotherapy classroom. We build on prior user-centric design work that investigates existing teaching methods and discusses opportunities for intervention. We present the design and implementation of a hybrid system for physiotherapy education combining on-body projection based virtual anatomy with pen-based tablets for creating real-time annotations. We conducted a usability evaluation of this system, comparing it with projection-only and traditional teaching conditions. Finally, we focus on a comparative study to evaluate learning outcomes among students in actual classroom settings. Our studies showed increased use of visual representation techniques in students’ note-taking behavior and statistically significant improvement in some learning aspects. We discuss challenges for designing augmented reality systems for education, including minimizing attention split, addressing text-entry issues, and supporting digital annotations on a moving physical body.
Optimising Encoding for Vibrotactile Skin Reading
This paper proposes methods of optimising alphabet encoding for skin reading in order to avoid perception errors. First, a user study with 16 participants using two body locations serves to identify issues in recognition of both individual letters and words. To avoid such issues, a two-step optimisation method of the symbol encoding is proposed and validated in a second user study with eight participants using the optimised encoding with a seven vibromotor wearable layout on the back of the hand. The results show significant improvements in the recognition accuracy of letters (97%) and words (97%) when compared to the non-optimised encoding.
Bring the Outside In: Providing Accessible Experiences Through VR for People with Dementia in Locked Psychiatric Hospitals
Many people with dementia (PWD) residing in long-term care may face barriers in accessing experiences beyond their physical premises; this may be due to location, mobility constraints, legal mental health act restrictions, or offence-related restrictions. In recent years, there has been research interest in designing non-pharmacological interventions aiming to improve the Quality of Life (QoL) for PWD within long-term care. We explored the use of Virtual Reality (VR) as a tool to provide 360°-video based experiences for individuals with moderate to severe dementia residing in a locked psychiatric hospital. We discuss in depth the appeal of using VR for PWD and the observed impact of such interaction. We also present design opportunities, pitfalls, and recommendations for future deployment in healthcare services. This paper demonstrates the potential of VR as a virtual alternative to experiences that may be difficult to reach for PWD residing within locked settings.
Clairbuoyance: Improving Directional Perception for Swimmers
While we usually have no trouble with orientation, our sense of direction frequently fails in the absence of a frame of reference. Open-water swimmers raise their heads to look for a reference point, since disorientation might result in exhaustion or even drowning. In this paper, we report on Clairbuoyance – a system that provides feedback about the swimmer’s orientation through lights mounted on swimming goggles. We conducted an experiment with two versions of Clairbuoyance: Discrete signals relative to a chosen direction, and continuous signals providing a sense of absolute direction. Participants swam to a series of targets. Proficient swimmers preferred the discrete mode; novice users the continuous one. We determined that both versions of Clairbuoyance enabled reaching the target faster than without the help of the system, although the discrete mode increased error. Based on the results, we contribute insights for designing directional guidance feedback for swimmers.
Unremarkable AI: Fitting Intelligent Decision Support into Critical, Clinical Decision-Making Processes
Clinical decision support tools (DST) promise improved healthcare outcomes by offering data-driven insights. While effective in lab settings, almost all DSTs have failed in practice. Empirical research has diagnosed poor contextual fit as the cause. This paper describes the design and field evaluation of a radically new form of DST. It automatically generates slides for clinicians’ decision meetings with subtly embedded machine prognostics. This design took inspiration from the notion of Unremarkable Computing: by augmenting users’ routines, technology/AI can be significant to users yet remain unobtrusive. Our field evaluation suggests clinicians are more likely to encounter and embrace such a DST. Drawing on their responses, we discuss the importance and intricacies of finding the right level of unremarkableness in DST design, and share lessons learned in prototyping critical AI systems as a situated experience.
AI-Mediated Communication: How the Perception that Profile Text was Written by AI Affects Trustworthiness
We are entering an era of AI-Mediated Communication (AI-MC) in which interpersonal communication is not only mediated by technology, but is optimized, augmented, or generated by artificial intelligence. Our study takes a first look at the potential impact of AI-MC on online self-presentation. In three experiments, we test whether people find Airbnb hosts less trustworthy if they believe their profiles have been written by AI. We observe a new phenomenon that we term the Replicant Effect: only when participants thought they saw a mixed set of AI- and human-written profiles did they mistrust hosts whose profiles were labeled as, or suspected to be, written by AI. Our findings have implications for the design of systems that involve AI technologies in online self-presentation and chart a direction for future work that may upend or augment key aspects of Computer-Mediated Communication theory.
Magnetact: Magnetic-sheet-based Haptic Interfaces for Touch Devices
We describe a method for rapid prototyping of haptic interfaces for touch devices. A sheet-like touch interface is constructed from magnetic rubber sheets and conductive materials. The magnetic sheet is thin, and the capacitive sensor of the touch device can still detect the user’s finger above the sheet because of the rubber’s dielectric nature. Furthermore, tactile feedback can be customized with ease by using our magnetizing toolkit to change the magnetic patterns. Using the magnetizing toolkit, we investigated the appropriate size and thickness of haptic interfaces and demonstrated several interfaces such as buttons, sliders, switches, and dials. Our method is an easy and convenient way to customize the size, shape, and haptic feedback of a wide variety of interfaces.
Virtual Hubs: Understanding Relational Aspects and Remediating Incubation
We have recently seen the emergence of new platforms that aim to provide remotely located entrepreneurs and startup companies with support analogous to that found within traditional incubation or acceleration spaces. This paper offers an understanding of these ‘virtual hubs’, and the inherently socio-technical interactions that occur between their members. Our study analyzes a sample of existing virtual hubs in two stages. First, we contribute broader insight into the current landscape of virtual hubs by documenting and categorizing 25 hubs regarding their form, support offered and a selection of further qualities. Second, we contribute detailed insight into the operation and experience of such hubs, from an analysis of 10 semi-structured interviews with organizers and participants of virtual hubs. We conclude by analyzing our findings in terms of relational aspects of non-virtual hubs from the literature and remediation theory, and propose opportunities for advancing the design of such platforms.
Impulse Buying: Design Practices and Consumer Needs
E-commerce sites have an incentive to encourage impulse buying, even when not in the consumer’s best interest. This study investigates what features e-commerce sites use to encourage impulse buying and what tools consumers desire to curb their online spending. We present two studies: (1) a systematic content analysis of 200 top e-commerce websites in the U.S. and (2) a survey of online impulse buyers (N=151). From Study 1, we find that e-commerce sites contain multiple features that encourage impulsive buying, including those that lower perceived risks, leverage social influence, and enhance perceived proximity to the product. Conversely, from Study 2 we find that online impulse buyers want tools that (a) encourage deliberation and avoidance, (b) enforce spending limits and postponement, (c) increase checkout effort, (d) make costs more salient, and (e) reduce product desire. These findings inform the design of “friction” technologies that help users make more deliberative consumer choices.
Communication Breakdowns Between Families and Alexa
We investigate how families repair communication breakdowns with digital home assistants. We recruited 10 diverse families to use an Amazon Echo Dot in their homes for four weeks. All families had at least one child between four and 17 years old. Each family participated in pre- and post-deployment interviews. Their interactions with the Echo Dot (Alexa) were audio recorded throughout the study. We analyzed 59 communication breakdown interactions between family members and Alexa, framing our analysis with concepts from HCI and speech-language pathology. Our findings indicate that family members collaborate using discourse scaffolding (supportive communication guidance) and a variety of speech and language modifications in their attempts to repair communication breakdowns with Alexa. Alexa’s responses also influence the repair strategies that families use. Designers can relieve the communication repair burden that primarily rests with families by increasing digital home assistants’ abilities to collaborate with users to repair communication breakdowns.
Data is Personal: Attitudes and Perceptions of Data Visualization in Rural Pennsylvania
Many of the guidelines that inform how designers create data visualizations originate in studies that unintentionally exclude populations that are most likely to be among the ‘data poor’. In this paper, we explore which factors may drive attention and trust in rural populations with diverse economic and educational backgrounds – a segment that is largely underrepresented in the data visualization literature. In 42 semi-structured interviews in rural Pennsylvania (USA), we find that a complex set of factors intermix to inform attitudes and perceptions about data visualization – including educational background, political affiliation, and personal experience. The data and materials for this research can be found at https://osf.io/uxwts/
HCI and Affective Health: Taking stock of a decade of studies and charting future research directions
In the last decade, the number of articles on HCI and health has increased dramatically. We extracted 139 papers on depression, anxiety, and bipolar health issues from 10 years of SIGCHI conference proceedings; 72 of these were published in the last two years. A systematic analysis of this growing body of literature revealed that most innovation happens in automated diagnosis and self-tracking, although there are innovative ideas in tangible interfaces. We noted an overemphasis on data production without consideration of how it leads to fruitful interventions. Moreover, we see a need to promote ethical practices for the involvement of people living with affective disorders. Finally, although only 16 studies evaluate technologies in a clinical context, several forms of support and intervention illustrate how rich insights are gained from evaluations with real patients. Our findings highlight potential for growth in the design space of affective health technologies.
An Evaluation of Touch Input at the Edge of a Table
Tables, desks, and counters are often nearby, motivating their use as interactive surfaces. However, they are typically cluttered. As an alternative, we explore touch input along the ‘edge’ of table-like surfaces. The performance of tapping, crossing, and dragging is tested along the two ridges and front face of a table edge. Results show top ridge movement time is comparable to the top face when tapping or dragging. When crossing, both ridges are at least 11% faster than the top face. Effective width analysis is used to model performance and provide recommended target sizes. Based on observed user behaviour, variations of top and bottom ridge crossing are explored in a second study, and design recommendations with example applications are provided.
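For context, the sketch below shows the standard effective width computation (in the spirit of ISO 9241-9) that such an analysis typically relies on; the endpoint and timing data are invented for illustration and are not from this study.

```python
# Sketch of the standard effective-width / throughput analysis.
# Endpoint positions and movement times below are made-up values.
import numpy as np

A = 120.0                                  # nominal movement amplitude (mm)
endpoints = np.array([118.2, 121.5, 119.0, 123.1, 117.4, 120.8])  # hit positions (mm)
movement_times = np.array([0.42, 0.45, 0.40, 0.48, 0.39, 0.44])   # seconds

We = 4.133 * endpoints.std(ddof=1)         # effective target width
IDe = np.log2(A / We + 1)                  # effective index of difficulty (bits)
TP = IDe / movement_times.mean()           # throughput (bits/s)
print(f"We={We:.1f} mm  IDe={IDe:.2f} bits  TP={TP:.2f} bit/s")
```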
Examining and Enhancing the Illusory Touch Perception in Virtual Reality Using Non-Invasive Brain Stimulation
Virtual reality (VR) can be immersive to such a degree that users sometimes report feeling tactile sensations based on visualization of a touch, without any actual physical contact. This effect is not only interesting for studies of human perception, but can also be leveraged to improve the quality of VR by evoking tactile sensations without the use of specialized equipment. The aim of this paper is to study brain processing of the illusory touch and its enhancement for purposes of exploitation in VR scene design. To amplify the illusory touch, transcranial direct current stimulation (tDCS) was used. Participants attended two sessions with blinded stimulation and interacted with a virtual ball using tracked hands in VR. The effects were studied using electroencephalography (EEG), which allowed us to examine stimulation-induced changes in the processing of the illusory touch in the brain, as well as to identify its neural correlates. Results confirm enhanced processing of the illusory touch after the stimulation, and some of these changes were correlated with subjective ratings of its magnitude.
Pay Attention, Please: Formal Language Improves Attention in Volunteer and Paid Online Experiments
Participant engagement in online studies is key to collecting reliable data, yet achieving it remains an often discussed challenge in the research community. One factor that might impact engagement is the formality of language used to communicate with participants throughout the study. Prior work has found that language formality can convey social cues and power hierarchies, affecting people’s responses and actions. We explore how formality influences engagement, measured by attention, dropout, time spent on the study and participant performance, in an online study with 369 participants on Mechanical Turk (paid) and LabintheWild (volunteer). Formal language improves participant attention compared to using casual language in both paid and volunteer conditions, but does not affect dropout, time spent, or participant performance. We suggest using more formal language in studies containing complex tasks where fully reading instructions is especially important. We also highlight trade-offs that different recruitment incentives provide in online experimentation.
Enhancing Texture Perception in Virtual Reality Using 3D-Printed Hair Structures
Experiencing materials in virtual reality (VR) is enhanced by combining visual and haptic feedback. While VR easily allows changes to visual appearances, modifying haptic impressions remains challenging. Existing passive haptic techniques require access to a large set of tangible proxies. To reduce the number of physical representations, we look towards fabrication to create more versatile counterparts. In a user study, 3D-printed hairs with length varying in steps of 2.5 mm were used to influence the feeling of roughness and hardness. By overlaying fabricated hair with visual textures, the resolution of the user’s haptic perception increased. As changing haptic sensations are able to elicit perceptual switches, our approach can extend a limited set of textures to a much broader set of material impressions. Our results give insights into the effectiveness of 3D-printed hair for enhancing texture perception in VR.
The Design of Social Drones: A Review of Studies on Autonomous Flyers in Inhabited Environments
The design space of social drones, where autonomous flyers operate in close proximity to human users or bystanders, is distinct from use cases involving a remote human operator and/or an uninhabited environment; and warrants foregrounding human-centered design concerns. Recently, research on social drones has followed a trend of rapid growth. This paper consolidates the current state of the art in human-centered design knowledge about social drones through a review of relevant studies, scaffolded by a descriptive framework of design knowledge creation. Our analysis identified three high-level themes that sketch out knowledge clusters in the literature, and twelve design concerns which unpack how various dimensions of drone aesthetics and behavior relate to pertinent human responses. These results have the potential to inform and expedite future research and practice, by supporting readers in defining and situating their future contributions. The materials and results of our analysis are also published in an open online repository that intends to serve as a living hub for a community of researchers and designers working with social drones.
Recipes for Programmable Money
This paper presents a qualitative study of the recent integration of a UK-based, digital-first mobile banking app – Monzo – with the web automation service IFTTT (If This Then That). Through analysis of 113 unique IFTTT ‘recipes’ shared by Monzo users on public community forums, we illustrate the potentially diverse functions of these recipes, and how they are achieved through different kinds of automation. Beyond achieving more convenient and efficient financial management, we note many playful and expressive applications of conditionality and automation that far extend traditional functions of banking applications and infrastructure. We use these findings to map opportunities, challenges and areas of future research in the development of ‘programmable money’ and related financial technologies. Specifically, we present design implications for the extension of native digital banking applications; novel uses of banking data; the applicability of blockchains and smart contracts; and future forms of financial autonomy.
Crowdsourcing Interface Feature Design with Bayesian Optimization
Designing novel interfaces is challenging. Designers typically rely on experience or subjective judgment in the absence of analytical or objective means for selecting interface parameters. We demonstrate Bayesian optimization as an efficient tool for objective interface feature refinement. Specifically, we show that crowdsourcing paired with Bayesian optimization can rapidly and effectively assist interface design across diverse deployment environments. Experiment 1 evaluates the approach on a familiar 2D interface design problem: a map search and review use case. Adding a degree of complexity, Experiment 2 extends Experiment 1 by switching the deployment environment to mobile-based virtual reality. The approach is then demonstrated as a case study for a fundamentally new and unfamiliar interaction design problem: web-based augmented reality. Finally, we show how the model generated as an outcome of the refinement process can be used for user simulation and queried to deliver various design insights.
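A hedged sketch of such an optimization loop, assuming the scikit-optimize library is available; the interface parameters and the crowd_rating objective are invented placeholders for ratings that would actually come from crowd workers, and this is not the paper's system.

```python
# Hedged sketch of Bayesian optimization over interface parameters,
# assuming scikit-optimize (skopt). crowd_rating stands in for crowd feedback.
import numpy as np
from skopt import gp_minimize

def crowd_rating(params):
    font_size, contrast = params
    # Placeholder: pretend crowds prefer ~16 pt fonts and high contrast.
    noise = np.random.normal(scale=0.1)
    return (font_size - 16) ** 2 / 100 + (1 - contrast) + noise  # lower is better

result = gp_minimize(
    crowd_rating,
    dimensions=[(8.0, 32.0),    # font size (pt)
                (0.2, 1.0)],    # contrast, normalized
    n_calls=25, random_state=0)

print("suggested design:", result.x, "estimated cost:", round(result.fun, 3))
```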
Comparing Effectiveness and Engagement of Data Comics and Infographics
This paper compares the effectiveness of data comics and infographics for data-driven storytelling. While infographics are widely used, comics are increasingly popular for explaining complex and scientific concepts. However, empirical evidence comparing the effectiveness and engagement of infographics, comics and illustrated texts is still lacking. We report on the results of two complementary studies, one in a controlled setting and one in the wild. Our results suggest participants largely prefer data comics in terms of enjoyment, focus, and overall engagement and that comics improve understanding and recall of information in the stories. Our findings help to understand the respective roles of the investigated formats as well as inform the design of more effective data comics and infographics.
Resilient Chatbots: Repair Strategy Preferences for Conversational Breakdowns
Text-based conversational systems, also referred to as chatbots, have grown widely popular. Current natural language understanding technologies are not yet ready to tackle the complexities in conversational interactions. Breakdowns are common, leading to negative user experiences. Guided by communication theories, we explore user preferences for eight repair strategies, including ones that are common in commercially-deployed chatbots (e.g., confirmation, providing options), as well as novel strategies that explain characteristics of the underlying machine learning algorithms. We conducted a scenario-based study to compare repair strategies with Mechanical Turk workers (N=203). We found that providing options and explanations were generally favored, as they manifest initiative from the chatbot and are actionable to recover from breakdowns. Through detailed analysis of participants’ responses, we provide a nuanced understanding on the strengths and weaknesses of each repair strategy.
Crowdlicit: A System for Conducting Distributed End-User Elicitation and Identification Studies
End-user elicitation studies are a popular design method. Currently, such studies are usually confined to a lab, limiting the number and diversity of participants, and therefore the representativeness of their results. Furthermore, the quality of the results from such studies generally lacks any formal means of evaluation. In this paper, we address some of the limitations of elicitation studies through the creation of the Crowdlicit system along with the introduction of end-user identification studies, which are the reverse of elicitation studies. Crowdlicit is a new web-based system that enables researchers to conduct online and in-lab elicitation and identification studies. We used Crowdlicit to run a crowd-powered elicitation study based on Morris’s “Web on the Wall” study (2012) with 78 participants, arriving at a set of symbols that included six new symbols different from Morris’s. We evaluated the effectiveness of 49 symbols (43 from Morris and six from Crowdlicit) by conducting a crowd-powered identification study. We show that the Crowdlicit elicitation study resulted in a set of symbols that was significantly more identifiable than Morris’s.
Thinking Too Classically: Research Topics in Human-Quantum Computer Interaction
Quantum computing is a fundamentally different way of performing computation than classical computing. Many problems that are considered hard for classical computers may have efficient solutions using quantum computers. Recently, technology companies including IBM, Microsoft, and Google have invested in developing both quantum computing hardware and software to explore the potential of quantum computing. Because of the radical shift in computing paradigms that quantum represents, we see an opportunity to study the unique needs people have when interacting with quantum systems, what we call Quantum HCI (QHCI). Based on interviews with experts in quantum computing, we identify four areas in which HCI researchers can contribute to the field of quantum computing. These areas include understanding current and future quantum users, tools for programming and debugging quantum algorithms, visualizations of quantum states, and educational materials to train the first generation of “quantum native” programmers.
Predicting Cognitive Load in Future Code Puzzles
Code puzzles are an increasingly popular way to introduce youth to programming. Yet our knowledge about how to maximize learning from puzzles is incomplete. We conducted a data collection study and trained a model that predicts cognitive load, the mental effort necessary to complete a task, on a future puzzle. Controlling cognitive load can lead to more effective learning. Our model suggests that it is possible to predict cognitive load on future problems; the model could correctly distinguish the more difficult puzzle within a pair 71%-79% of the time. Further, studying the model itself provides new insights into the sources of puzzle difficulty, the factors that contribute to cognitive load, and their inter-relationships. Finally, the ability to predict cognitive load on a future puzzle is an important step towards the creation of adaptive code puzzle systems.
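As an illustrative sketch (not the authors' model), the snippet below trains a pairwise classifier on synthetic puzzle features to predict which puzzle in a pair induces higher cognitive load, mirroring the pairwise accuracy framing reported above; the features and load values are made up.

```python
# Illustrative pairwise "which puzzle is harder" sketch on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_puzzles = 400
# Hypothetical features: number of blocks, nesting depth, new concepts.
feats = rng.random((n_puzzles, 3)) * np.array([20, 5, 4])
load = feats @ np.array([0.3, 1.2, 0.8]) + rng.normal(scale=1.0, size=n_puzzles)

# Build pairs; label = 1 if the first puzzle has higher load.
i, j = rng.integers(0, n_puzzles, (2, 3000))
X = feats[i] - feats[j]
y = (load[i] > load[j]).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("pairwise accuracy:", round(clf.score(X_te, y_te), 2))
```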
Hey Google, Can I Ask You Something in Private?
Modern-day voice-activated virtual assistants allow users to share and ask for information that could be considered personal through different input modalities and devices. Using Google Assistant, this study examined whether differences in modality (i.e., voice vs. text) and device (i.e., smartphone vs. smart home device) affect user perceptions when users attempt to retrieve sensitive health information from voice assistants. Major findings from this study suggest that voice (vs. text) interaction significantly enhanced the perceived social presence of the voice assistant, but only when users solicited less sensitive health-related information. Furthermore, when individuals reported fewer privacy concerns, voice (vs. text) interaction elicited positive attitudes toward the voice assistant via increased social presence, but only in the low (vs. high) information sensitivity condition. Contrary to modality, the device difference did not exert any significant impact on attitudes toward the voice assistant, regardless of the sensitivity level of the health information being asked about or the level of individuals’ privacy concerns.
Influencers in Multiplayer Online Shooters: Evidence of Social Contagion in Playtime and Social Play
In a wide range of social networks, people’s behavior is influenced by social contagion: we do what our network does. Networks often feature particularly influential individuals, commonly called “influencers.” Existing work suggests that in-game social networks in online games are similar to real-life social networks in many respects. However, we do not know whether there are in-game equivalents of influencers. We therefore applied standard social network features used to identify influencers to the online multiplayer shooter Tom Clancy’s The Division. Results show that network feature-defined influencers indeed had an outsized impact on the playtime and social play of players joining their in-game network.
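To make the "standard social network features" concrete, here is a minimal sketch (with a made-up in-game network, not the paper's data) that computes common centrality measures with networkx and ranks candidate influencers.

```python
# Minimal sketch of centrality-based influencer identification.
# The edge list is an invented in-game friendship/matchmaking network.
import networkx as nx

edges = [("ana", "bo"), ("ana", "cy"), ("ana", "dee"), ("bo", "cy"),
         ("dee", "eli"), ("eli", "fay"), ("fay", "gus"), ("ana", "eli")]
G = nx.Graph(edges)

deg = nx.degree_centrality(G)
btw = nx.betweenness_centrality(G)
eig = nx.eigenvector_centrality(G, max_iter=1000)

# Rank players by a simple composite score; top-ranked players would then be
# checked for an outsized effect on their neighbours' playtime and social play.
score = {n: deg[n] + btw[n] + eig[n] for n in G.nodes}
for player in sorted(score, key=score.get, reverse=True)[:3]:
    print(f"{player}: degree={deg[player]:.2f} "
          f"betweenness={btw[player]:.2f} eigenvector={eig[player]:.2f}")
```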
Digital Financial Needs of Micro-entrepreneur Women in Pakistan: Is Mobile Money The Answer?
This paper studies the use of Digital Financial Services (DFS) as a solution to women’s financial inclusion in deeply patriarchal, resource constrained communities. Through a qualitative, empirical study we map the financial life cycles of 20 women micro-entrepreneurs in different cities in Pakistan and the challenges they face. We explore how technology is currently influencing these women’s businesses and personal lives and reveal how mobile money is not tuned to the problems they face and their financial needs. We present alternate design directions for meeting the technological and financial needs of these women, circumnavigating the patriarchal structures that constrain them.
Understanding Mass-Market Mobile TV Behaviors in the Streaming Era
Despite claims of Mobile TV’s mainstream arrival in 2010, it took until 2017 for watching professionally-produced television content on mobile phones to truly become a mass-market phenomenon in America, with half of all TV content expected to be watched on mobile phones by 2020. But what professionally produced content are people watching on their phones and when are they watching it? Are there any clusters of behavior that emerge in the broader population when it comes to watching TV on the phone? We set out to answer these questions through two surveys deployed to representative samples of online Americans. We discuss our findings on the mass-market arrival of mobile TV viewing and differences from how the HCI community has previously envisioned mobile video. We conclude with implications for the design of future mobile TV systems.
Bringing Design to the Privacy Table: Broadening “Design” in “Privacy by Design” Through the Lens of HCI
In calls for privacy by design (PBD), regulators and privacy scholars have investigated the richness of the concept of “privacy.” In contrast, “design” in HCI is comprised of rich and complex concepts and practices, but has received much less attention in the PBD context. Conducting a literature review of HCI publications discussing privacy and design, this paper articulates a set of dimensions along which design relates to privacy, including: the purpose of design, which actors do design work in these settings, and the envisioned beneficiaries of design work. We suggest new roles for HCI and design in PBD research and practice: utilizing values- and critically-oriented design approaches to foreground social values and help define privacy problem spaces. We argue such approaches, in addition to current “design to solve privacy problems” efforts, are essential to the full realization of PBD, while noting the politics involved when choosing design to address privacy.
Practitioners Teaching Data Science in Industry and Academia: Expectations, Workflows, and Challenges
Data science has been growing in prominence across both academia and industry, but there is still little formal consensus about how to teach it. Many people who currently teach data science are practitioners such as computational researchers in academia or data scientists in industry. To understand how these practitioner-instructors pass their knowledge onto novices and how that contrasts with teaching more traditional forms of programming, we interviewed 20 data scientists who teach in settings ranging from small-group workshops to large online courses. We found that: 1) they must empathize with a diverse array of student backgrounds and expectations, 2) they teach technical workflows that integrate authentic practices surrounding code, data, and communication, 3) they face challenges involving authenticity versus abstraction in software setup, finding and curating pedagogically-relevant datasets, and acclimating students to live with uncertainty in data analysis. These findings can point the way toward better tools for data science education and help bring data literacy to more people around the world.
An Exploration of Speech-Based Productivity Support in the Car
In-car intelligent assistants offer the opportunity to help drivers productively use previously unclaimed time during their commute. However, engaging in secondary tasks can reduce attention on driving and thus may affect road safety. Any interface used while driving, even if speech-based, cannot consider non-driving tasks in isolation of driving—alerts for safer driving and timing of the non-driving tasks are crucial to maintaining safety. In this work, we explore experiences with a speech-based assistant that attempts to help drivers safely complete complex productivity tasks. Via a controlled simulator study, we look at how level of support and road context alerts from the assistant influence a driver’s ability to drive safely while writing a document or creating slides via speech. Our results suggest ways to support speech-based productivity interactions and how speech-based road context alerts may influence driver behavior.
Everyday Experiences: Small Stories and Mental Illness on Instagram
Despite historical precedence and modern prevalence, mental illness and associated disorders are frequently aligned with notions of deviance and, by association, abnormality. The view that mental illness deviates from an implicit social norm permeates the CHI community, impacting how scholars approach research in this space. In this paper, we challenge community and societal norms aligning mental illness with deviance. We combine semi-structured interviews with digital ethnography of public Instagram accounts to examine how Instagram users express mental illness. Drawing on small stories research, we find that individuals situate mental illness within their everyday lives and negotiate their tellings of experience due to the influence of various social control structures. We discuss implications for incorporating ‘the everyday’ into the design of technological solutions for marginalized communities and the ways in which researchers and designers may inadvertently perpetuate and instantiate stigma related to mental illness.
Blockchain Assemblages: Whiteboxing Technology and Transforming Infrastructural Imaginaries
In this paper we unpack empirical data from two domains within the Blockchain information infrastructure: the cryptocurrency trading domain and the energy domain. Through these accounts we introduce the relational concepts of Blockchain Assemblages and Whiteboxing. Blockchain assemblages comprise configurations of digital and analogue artefacts that are entangled with imaginaries about the current and future state of the Blockchain information infrastructure. Rather than being a black box, Blockchain assemblages alternate between being dynamic and stable entities. We propose Whiteboxing as the sociomaterial process that drives blockchain assemblages in their dynamic state to be (re)configured, while related artefacts and imaginaries are simultaneously transformed, creating dynamic representations. Whiteboxing is triggered during disconfirming events, when representations are discovered to be problematic. Complementing existing historical accounts of technologies in the making, this paper contributes whiteboxing as an analytical concept that allows us to unpack how contemporary technologies are created through entrepreneurial activities.
Risk vs. Restriction: The Tension between Providing a Sense of Normalcy and Keeping Foster Teens Safe Online
Foster youth are particularly vulnerable to offline risks; yet, little is known about their online risk experiences or how foster parents mediate technology use in the home. We conducted 29 interviews with foster parents of 42 teens (ages 13-17) who were part of the child welfare system. Foster parents faced significant challenges relating to technology mediation in the home. Based on parental accounts, over half of the foster teens encountered high-risk situations that involved interacting with unsafe people online, resulting in rape, sex trafficking, and/or psychological harm. Overall, foster parents were at a loss for how to balance online safety with technology access in a way that engendered positive relationships with their foster teens. Instead, parents often resorted to outright restriction. Our research highlights the importance of considering the unique needs of foster families and designing technologies to address the challenges faced by this vulnerable population of teens and parents.
Who’s In Control?: Interactions In Multi-User Smart Homes
Adoption of commercial smart home devices is rapidly increasing, allowing in-situ research in people’s homes. As these technologies are deployed in shared spaces, we seek to understand interactions among multiple people and devices in a smart home. We conducted a mixed-methods study with 18 participants (primarily people who drive smart device adoption in their homes) living in multi-user smart homes, combining semi-structured interviews and experience sampling. Our findings surface tensions and cooperation among users in several phases of smart device use: device selection and installation, ordinary use, when the smart home does not work as expected, and over longer term use. We observe an outsized role of the person who installs devices in terms of selecting, controlling, and fixing them; negotiations between parents and children; and minimally voiced privacy concerns among co-occupants, possibly due to participant sampling. We make design recommendations for supporting long-term smart homes and non-expert household members.
GameViews: Understanding and Supporting Data-driven Sports Storytelling
Various stakeholders in the sports domain rely on the analysis and presentation of sports data to derive insights. In particular, sportswriters construct game stories using statistical information; fans share their viewpoints based on the real-time stats while watching the game. In this paper, we explore how these stakeholders construct data-driven sports stories. We began by observing a sportswriter, then analyzed published sports stories, and characterized 1500 fan comments about particular sporting events. We found that their story needs were similar in some respects while quite different in others. Based on the findings, we implemented two exploratory prototypes: GameViews-Writers for sportswriters to quickly extract key game information and GameViews-Fans to support a real-time data-driven game-viewing experience for fans. We report insights from two user studies conducted with four professional sportswriters and eight sports fans, respectively. We discuss the results of these studies and present several avenues for future work.
Managing Messes in Computational Notebooks
Data analysts use computational notebooks to write code for analyzing and visualizing data. Notebooks help analysts iteratively write analysis code by letting them interleave code with output, and selectively execute cells. However, as analysis progresses, analysts leave behind old code and outputs, and overwrite important code, producing cluttered and inconsistent notebooks. This paper introduces code gathering tools, extensions to computational notebooks that help analysts find, clean, recover, and compare versions of code in cluttered, inconsistent notebooks. The tools archive all versions of code outputs, allowing analysts to review these versions and recover the subsets of code that produced them. These subsets can serve as succinct summaries of analysis activity or starting points for new analyses. In a qualitative usability study, 12 professional analysts found the tools useful for cleaning notebooks and writing analysis code, and discovered new ways to use them, like generating personal documentation and lightweight versioning.
Nurturing Constructive Disagreement – Agonistic Design with Neurodiverse Children
Participatory design (PD) with heterogeneous groups poses particular challenges, requiring spaces in which different agendas or visions can be negotiated. In this paper we report on our PD work with two groups of neurodiverse children to design technologies that support co-located, social play. The heterogeneity in the groups in terms of abilities, conceptions of play, motivations to be involved and individual preferences has challenged us to think of the design process and its outcomes as spaces for continuous negotiation. Drawing on the notion of agonistic PD, we sought not necessarily to reconcile all views, but to foster constructive disagreement as a resource for and possible outcome of design. Using our project work as a case study, we report on controversies, big and small, and how they manifested themselves in the processes and outcomes. Reflecting on our experiences, we discuss possible implications for the notion of democratising technology innovation.
An Exploratory Study of the Use of Drones for Assisting Firefighters During Emergency Situations
In the near future, emergency services within Canada will be supporting new technologies for 9-1-1 call centres and firefighters to learn about an emergency situation. One such technology is drones. To understand the benefits and challenges of using drones within emergency response, we conducted a study with citizens who have called 9-1-1 and firefighters who respond to a range of everyday emergencies. Our results show that drones offer numerous benefits to both firefighters and 9-1-1 callers, including context awareness and social support; callers felt reassured that help was on the way. Privacy was largely not an issue, though safety issues arose, especially for complex uses of drones such as indoor flying. Our results point to opportunities for designing drone systems that help people develop a sense of trust in emergency response drones, and mitigate privacy and safety concerns with more complex drone systems.
PickCells: A Physically Reconfigurable Cell-composed Touchscreen
Touchscreens are the predominant medium for interactions with digital services; however, their current fixed form factor narrows the scope for rich physical interactions by limiting interaction possibilities to a single, planar surface. In this paper we introduce PickCells, a fully re-configurable device concept composed of cells that breaks the mould of rigid screens and explores a modular system affording rich sets of tangible interactions and novel across-device relationships. Through a series of co-design activities involving HCI experts and potential end-users of such systems, we synthesised a design space aimed at inspiring future research, giving researchers and designers a framework in which to explore modular screen interactions. The design space we propose unifies existing works on modular touch surfaces under a general framework and broadens horizons by opening up unexplored spaces that provide new interaction possibilities. In this paper, we present the PickCells concept, a design space of modular touch surfaces, and propose a toolkit for quick scenario prototyping.
Active Edge: Designing Squeeze Gestures for the Google Pixel 2
Active Edge is a feature of Google Pixel 2 smartphone devices that creates a force-sensitive interaction surface along their sides, allowing users to perform gestures by holding and squeezing their device. Supported by strain gauge elements adhered to the inner sidewalls of the device chassis, these gestures can be more natural and ergonomic than on-screen (touch) counterparts. Developing these interactions is an integration of several components: (1) an insight and understanding of the user experiences that benefit from squeeze gestures; (2) hardware with the sensitivity and reliability to sense a user’s squeeze in any operating environment; (3) a gesture design that discriminates intentional squeezes from innocuous handling; and (4) an interaction design to promote a discoverable and satisfying user experience. This paper describes the design and evaluation of Active Edge in these areas as part of the product’s development and engineering.
Clench Interface: Novel Biting Input Techniques
People eat every day and biting is one of the most fundamental and natural actions that they perform on a daily basis. Existing work has explored tooth click location and jaw movement as input techniques; however, clenching has the potential to add control to this input channel. We propose clench interaction, which leverages clenching as an actively controlled physiological signal that can facilitate interactions. We conducted a user study to investigate users’ ability to control their clench force. We found that users can easily discriminate three force levels, and that they can quickly confirm actions by unclenching (quick release). We developed a design space for clench interaction based on the results and investigated the usability of the clench interface. Participants preferred the clench interface over baselines and indicated a willingness to use clench-based interactions. This novel technique can provide an additional input method in cases where users’ eyes or hands are busy, augment immersive experiences such as virtual/augmented reality, and assist individuals with disabilities.
Interferi: Gesture Sensing using On-Body Acoustic Interferometry
Interferi is an on-body gesture sensing technique using acoustic interferometry. We use ultrasonic transducers resting on the skin to create acoustic interference patterns inside the wearer’s body, which interact with anatomical features in complex, yet characteristic ways. We focus on two areas of the body with great expressive power: the hands and face. For each, we built and tested a series of worn sensor configurations, which we used to identify useful transducer arrangements and machine learning features. We created final prototypes for the hand and face, which our study results show can support eleven- and nine-class gesture sets at 93.4% and 89.0% accuracy, respectively. We also evaluated our system in four continuous tracking tasks, including smile intensity and weight estimation, which never exceeded 9.5% error. We believe these results show great promise and illuminate an interesting sensing technique for HCI applications.
Cultivating Care through Ambiguity: Lessons from a Service Learning Course
Given the focus of professional graduate ICT programs on technical and managerial skills, pedagogical engagement with external organizations tends to be transactional and artifact-centered. This inhibits the students’ ability to understand social, technical and ethical issues in context, or to develop affective relationships with users and other stakeholders. To address this, we designed a service learning course that partnered students with non-profit organizations to help with their technology challenges. The service project was deliberately left open-ended to force students (and partners) to tackle important questions around project scoping and impact. By drawing parallels to soil care practices, we explore how “care time” emerged in this context, and how the incorporation of ambiguity galvanized students, community, and faculty to make time to navigate it. This led to non-tangible yet vital outcomes such as overcoming social limitations, building symbiotic relationships, and enacting acts of care necessary for more ethical orchestration of technology.
"Beautiful Seams": Strategic Revelations and Concealments
This paper tracks a debate that occurred, first, within the field of Ubiquitous Computing but quickly spread to CHI and beyond, in which design scholars argued that seamlessness had long been an implicit and privileged design virtue, often at the expense of seamfulness. Seamless design emphasizes clarity, simplicity, ease of use, and consistency to facilitate technological interaction. Seamful design emphasizes configurability, user appropriation, and revelation of complexity, ambiguity or inconsistency. Here we review these literatures together and argue that, rather than rival approaches, seamful and seamless design are complements, each emphasizing different aspects of downstream user agency. Ultimately, we situate this debate within the larger, perennial discussion about the strategic revelation and concealment of human and technological operations, and therein the role of design.
Understanding the Effect of Accuracy on Trust in Machine Learning Models
We address a relatively under-explored aspect of human-computer interaction: people’s abilities to understand the relationship between a machine learning model’s stated performance on held-out data and its expected performance post deployment. We conduct large-scale, randomized human-subject experiments to examine whether laypeople’s trust in a model, measured in terms of both the frequency with which they revise their predictions to match those of the model and their self-reported levels of trust in the model, varies depending on the model’s stated accuracy on held-out data and on its observed accuracy in practice. We find that people’s trust in a model is affected by both its stated accuracy and its observed accuracy, and that the effect of stated accuracy can change depending on the observed accuracy. Our work relates to recent research on interpretable machine learning, but moves beyond the typical focus on model internals, exploring a different component of the machine learning pipeline.
Engaging Gentrification as a Social Justice Issue in HCI
Gentrification, the spatial expression of economic inequality, is fundamentally a matter of social justice. Yet, even as work outside of HCI has begun to discuss how computing can enable or challenge gentrification, HCI’s growing social justice agenda has not engaged with this issue. This omission creates an opportunity for HCI to develop a research and design agenda at the intersection of computing, social justice, and gentrification. We begin this work by outlining existing scholarship describing how the consumption-side dynamics of gentrification are mediated by contemporary socio-technical systems. Subsequently, we build on the social justice framework introduced by Dombrowski, Harmon, and Fox to discuss how HCI may resist or counter such forces. We offer six modes of research that HCI scholars can pursue to engage gentrification.
Exploring Virtual Agents for Augmented Reality
Prior work has shown that embodiment can benefit virtual agents, such as increasing rapport and conveying non-verbal information. However, it is unclear if users prefer an embodied agent to a speech-only agent for augmented reality (AR) headsets that are designed to assist users in completing real-world tasks. We conducted a study to examine users’ perceptions and behaviors when interacting with virtual agents in AR. We asked 24 adults to wear the Microsoft HoloLens and find objects in a hidden object game while interacting with an agent that would offer assistance. We presented participants with four different agents: voice-only, non-human, full-size embodied, and a miniature embodied agent. Overall, users preferred the miniature embodied agent due to the novelty of its size and its reduced uncanniness compared with the larger agent. From our results, we draw conclusions about how agent representation matters and derive guidelines on designing agents for AR headsets.
Privacy, Power, and Invisible Labor on Amazon Mechanical Turk
Tasks on crowdsourcing platforms such as Amazon Mechanical Turk often request workers’ personal information, raising privacy risks that may be exacerbated by requester-worker power dynamics. We interviewed 14 workers to understand how they navigate these risks. We found that Turkers’ decisions to provide personal information during tasks were based on evaluations of the pay rate, the requester, the purpose, and the perceived sensitivity of the request. Participants also engaged in multiple privacy-protective behaviors, such as abandoning tasks or providing inaccurate data, though there were costs associated with these behaviors, such as wasted time and risk of rejection. Finally, their privacy concerns and practices evolved as they learned about both the platform and worker-designed tools and forums. These findings deepen our understanding of both privacy decision-making and invisible labor in paid crowdsourcing, and emphasize a general need to understand how privacy stances change over time.
Beyond Schematic Capture: Meaningful Abstractions for Better Electronics Design Tools
Printed Circuit Board (PCB) design tools are critical in helping users build non-trivial electronics devices. While recent work recognizes deficiencies with current tools and explores novel methods, little has been done to understand modern designers and their needs. To gain better insight into their practices, we interview fifteen electronics designers from a variety of backgrounds. Our open-ended, semi-structured interviews examine both overarching design flows and details of individual steps. One major finding was that most creative engineering work happens during system architecture, yet current tools operate at lower abstraction levels and create significant tedious work for designers. From that insight, we conceptualize abstractions and primitives for higher-level tools and elicit feedback from our participants on clickthrough mockups of design flows through an example project. We close with our observations on opportunities for improving board design tools and discuss the generalizability of our findings beyond the electronics domain.
TutoriVR: A Video-Based Tutorial System for Design Applications in Virtual Reality
Virtual Reality painting is a form of 3D painting done in a Virtual Reality (VR) space. As a relatively new kind of art form, it has attracted growing interest within the creative practices community. Currently, most users learn from community-posted 2D videos on the internet, which are screencast recordings of the painting process by an instructor. While such an approach may suffice for teaching 2D software tools, these videos by themselves fail to deliver crucial details that the user requires to understand actions in a VR space. We conduct a formative study to identify challenges faced by users in learning to VR-paint using such video-based tutorials. Informed by the results of this study, we develop a VR-embedded tutorial system that supplements video tutorials with 3D and contextual aids directly in the user’s VR environment. An exploratory evaluation showed users were positive about the system and were able to use it to recreate painting tasks in VR.
Can Mobile Augmented Reality Stimulate a Honeypot Effect?: Observations from Santa’s Lil Helper
In HCI, the honeypot effect describes a form of audience engagement in which a person’s interaction with a technology stimulates passers-by to observe, approach and engage in an interaction themselves. In this paper we explore the potential for honeypot effects to arise in the use of mobile augmented reality (AR) applications in urban spaces. We present an observational study of Santa’s Lil Helper, a mobile AR game that created a Christmas-themed treasure hunt in a metropolitan area. Our study supports a consideration of three factors that may impede the honeypot effect: the presence of people in relation to the game and its interactive components; the visibility of gameplay in urban space; and the extent to which the game permits a shared experience. We consider how these factors can inform the design of future AR experiences that are capable of stimulating honeypot effects in public space.
Infrastructuring the Imaginary: How Sea-Level Rise Comes to Matter in the San Francisco Bay Area
Information infrastructures have become integral components of policy debates related to climate change and sustainability. To better understand this relationship, we studied the tools used to forecast and respond to sea-level rise in the San Francisco Bay Area, where active debates on how best to prepare for this issue are underway and will have important consequences for the future of the region. Drawing on 18 months of qualitative research, we argue that competing visions of the problem are intimately intertwined with different elements of information infrastructure and beliefs about the role of data in policymaking. Current infrastructure in the region, far from being a neutral actor in these debates, exhibits an infrastructural bias, privileging some approaches over others. We identify some of the tactics that community organizations deploy to subvert the claims of sea-level rise experts and advance their own perspective, which prioritizes considerations of justice over technical expertise.
The Right to the Sustainable Smart City
Environmental concerns have driven an interest in sustainable smart cities, through the monitoring and optimisation of networked infrastructures. At the same time, there are concerns about who these interventions and services are for, and who benefits. HCI researchers and designers interested in civic life have started to call for the democratisation of urban space through resistance and political action to challenge state and corporate claims. This paper contributes to an emerging body of work that seeks to involve citizens in the design of sustainable smart cities, particularly in the context of marginalised and culturally diverse urban communities. We present a study involving co-designing Internet of Things with urban agricultural communities and discuss three ways in which design can participate in the right to the sustainable smart city through designing for the commons, care, and biocultural diversity.
A Place to Play: The (Dis)Abled Embodied Experience for Autistic Children in Online Spaces
Play is the work of children, but access to play is not equal from child to child. Having access to a place to play is a challenge for marginalized children, such as children with disabilities. For autistic children, playing with other children in the physical world may be uncomfortable or even painful. Yet, practice in the social skills that play provides is essential for childhood development. In this ethnographic work, I explore how one community uses the sense of place and the digital embodied experience in a virtual world specifically to give autistic children access to play with their peers. The contribution of this work is twofold. First, I demonstrate how various physical and virtual spaces work together to make play possible. Second, I demonstrate that these spaces, though some of them are digital, are no more or less “real” than the physical spaces making up a schoolyard or playground.
‘Think secure from the beginning’: A Survey with Software Developers
Vulnerabilities persist despite existing software security initiatives and best practices. This paper focuses on the human factors of software security, including human behaviour and motivation. We conducted an online survey to explore the interplay between developers and software security processes, e.g., we looked into how developers influence and are influenced by these processes. Our data included responses from 123 software developers currently employed in North America who work on various types of software applications. Whereas developers are often held responsible for security vulnerabilities, our analysis shows that the real issues frequently stem from a lack of organizational or process support to handle security throughout development tasks. Our participants are self-motivated towards software security, and the majority did not dismiss it but identified obstacles to achieving secure code. Our work highlights the need to look beyond the individual, and take a holistic approach to investigate organizational issues influencing software security.
"Wait, Do I Know This Person?": Understanding Misdirected Email
Email is an essential tool for communication and social interaction. It also functions as a broadcast medium connecting businesses with their customers, as an authentication mechanism, and as a vector for scams and security threats. These uses are enabled by the fact that the only barrier to reaching someone by email is knowing his or her email address. This feature has given rise to the spam email industry but also has another side-effect that is becoming increasingly common: misdirected email, or legitimate emails that are intended for somebody else but are sent to the wrong recipient. In this paper we present findings from an interview study and survey focusing on the characteristics of misdirected email messages, possible reasons why they happen, and how people manage these messages when they receive them. Misdirected email arises because signifiers (usernames) that people select for social and self-representation purposes are also used by machines for addressing. Because there is no mechanism for dealing with misdirected emails in a systematic way, individual recipients must choose whether to take action and how much effort to put forth to prevent potential negative consequences for themselves and others.
Effects of Locomotion and Visual Overview on Spatial Memory when Interacting with Wall Displays
Wall displays support people in interacting with large information spaces in two ways: On the one hand, the physical space in front of such displays enables them to navigate information spaces physically. On the other hand, the visual overview of the information space on the display may promote the formation of spatial memory; from studies of desktop computers we know this can boost performance. However, it remains unclear how the benefits of locomotion and overviews relate and whether one is more important than the other. We study this question through a wall display adaptation of the classic Data Mountain system to separate the effects of locomotion and visual overview. Our findings suggest that overview improves recall and that the combination of overview and locomotion outperforms all other combinations of factors.
Crowdsourcing Multi-label Audio Annotation Tasks with Citizen Scientists
Annotating rich audio data is an essential aspect of training and evaluating machine listening systems. We approach this task in the context of temporally-complex urban soundscapes, which require multiple labels to identify overlapping sound sources. Typically this work is crowdsourced, and previous studies have shown that workers can quickly label audio with binary annotation for single classes. However, this approach can be difficult to scale when multiple passes with different focus classes are required to annotate data with multiple labels. In citizen science, where tasks are often image-based, annotation efforts typically label multiple classes simultaneously in a single pass. This paper describes our data collection on the Zooniverse citizen science platform, comparing the efficiencies of different audio annotation strategies. We compared multiple-pass binary annotation, single-pass multi-label annotation, and a hybrid approach: hierarchical multi-pass multi-label annotation. We discuss our findings, which support using multi-label annotation, with reference to volunteer citizen scientists’ motivations.
Interactive Repair of Tables Extracted from PDF Documents on Mobile Devices
PDF documents often contain rich data tables that offer opportunities for dynamic reuse in new interactive applications. We describe a pipeline for extracting, analyzing, and parsing PDF tables based on existing machine learning and rule-based techniques. Implementing and deploying this pipeline on a corpus of 447 documents with 1,171 tables results in only 11 tables that are correctly extracted and parsed. To improve the results of automatic table analysis, we first present a taxonomy of errors that arise in the analysis pipeline and discuss the implications of cascading errors on the user experience. We then contribute a system with two sets of lightweight interaction techniques (gesture and toolbar), for viewing and repairing extraction errors in PDF tables on mobile devices. In an evaluation with 17 users involving both a phone and a tablet, participants effectively repaired common errors in 10 tables, with an average time of about 2 minutes per table.
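To make the flavour of such repairs concrete, here is a hypothetical sketch of one lightweight repair operation on an extracted table represented as a list of rows; the merge_rows helper and the example data are illustrative and not taken from the paper.

def merge_rows(table, i):
    """Merge row i+1 into row i, joining cell text column by column."""
    merged = [(a + " " + b).strip() for a, b in zip(table[i], table[i + 1])]
    return table[:i] + [merged] + table[i + 2:]

# A common cascading error: a cell wraps across lines in the PDF, so the
# extractor splits one logical row into two.
extracted = [["Region", "Description"],
             ["North", "Quarterly"],
             ["", "total"],
             ["South", "Annual total"]]
print(merge_rows(extracted, 1))
# [['Region', 'Description'], ['North', 'Quarterly total'], ['South', 'Annual total']]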
Frequency-Based Design of Smart Textiles
Despite the increasing number of smart textile design practitioners, the methods and tools commonly available have not progressed to the same scale. Most smart textile interaction designs today rely on detecting changes in resistance. The tools and sensors for this are generally limited to DC-voltage-divider based sensors and multimeters. Furthermore, the textiles and the materials used in smart textile design can exhibit behaviour that makes it difficult to identify even simple interactions by those means. For instance, steel-based textiles exhibit intrinsic semiconductive properties that are difficult to identify with current methods. In this paper, we show an alternative way to measure interaction with smart textiles. By relying on a visualisation known as Lissajous figures and on frequency-based signals, we can detect even subtle and varied forms of interaction with smart textiles. We also show an approach to measuring frequency-based signals and present an Arduino-based system called Teksig to support this type of textile practice.
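As a rough illustration of the measurement idea (not the Teksig implementation), the following sketch plots a sinusoidal drive signal against two hypothetical sensed responses; a phase-shifted response traces an ellipse rather than a line, which is the kind of shape change a Lissajous figure makes visible.

import numpy as np
import matplotlib.pyplot as plt

fs, f = 10_000, 50                      # sample rate (Hz), drive frequency (Hz)
t = np.arange(0, 0.2, 1 / fs)
drive = np.sin(2 * np.pi * f * t)

# Hypothetical responses: a purely resistive contact vs. one with a capacitive
# component (phase shift), such as a hand resting on the textile.
resistive = 0.8 * drive
capacitive = 0.8 * np.sin(2 * np.pi * f * t - np.pi / 4)

plt.plot(drive, resistive, label="resistive (line)")
plt.plot(drive, capacitive, label="capacitive (ellipse)")
plt.xlabel("drive signal")
plt.ylabel("sensed signal")
plt.legend()
plt.show()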
3D Pen + 3D Printer: Exploring the Role of Humans and Fabrication Machines in Creative Making
The emergence of the 3D pen brings 3D modeling from screen-based computer-aided design (CAD) systems and 3D printing to direct and rapid crafting by 3D doodling. However, 3D doodling remains challenging, requiring craft skills to rapidly express an idea, which is critical in creative making. We explore a new process of 3D modeling using 3D pen + 3D printer. Our pilot study shows that users need support that reduces the number of non-creative tasks so they can explore a wide range of design strategies. With the opportunity to invent a new 3D modeling process that incorporates both a pen and a printer, we propose techniques and a system that empower users to print while doodling, letting them focus on creative exploration. Our user study shows that users can create diverse 3D models using a pen and printer. We discuss the roles of the human and the fabrication machine for the future of fabrication.
Metaphoria: An Algorithmic Companion for Metaphor Creation
Creative writing, from poetry to journalism, is at the crux of human ingenuity and social interaction. Existing creative writing support tools produce entire passages or fully formed sentences, but these approaches fail to adapt to the writer’s own ideas and intentions. Instead, we propose building tools that generate ideas coherent with the writer’s context and encourage writers to produce divergent outcomes. To explore this, we focus on supporting metaphor creation. We present Metaphoria, an interactive system that generates metaphorical connections based on an input word from the writer. Our studies show that Metaphoria provides more coherent suggestions than existing systems, and supports the expression of writers’ unique intentions. We discuss the complex issue of ownership in human-machine collaboration and how to build adaptive creativity support tools in other domains.
RePlay: Contextually Presenting Learning Videos Across Software Applications
Complex activities often require people to work across multiple software applications. However, people frequently lack valuable knowledge about at least one application, especially as software changes and new software emerges. Existing help systems either lack contextual knowledge or are tightly coupled to a single application. We introduce an application-independent approach for contextually presenting video learning resources and demonstrate it through the RePlay system. RePlay uses accessibility APIs to gather context about the user’s activity. It leverages an existing search engine to present relevant videos and highlights key segments within them using video captions. We report on a week-long field study (n=7) and a lab study (n=24) showing that contextual assistance helps people spend less time away from their task than web video search and replaces current video navigation strategies. Our findings highlight challenges with representing and using context across applications.
The Promise of Empathy: Design, Disability, and Knowing the “Other”
This paper examines the promise of empathy, the name commonly given to the initial phase of the human-centered design process in which designers seek to understand their intended users in order to inform technology development. By analyzing popular empathy activities aimed at understanding people with disabilities, we examine the ways empathy works to both powerfully and problematically align designers with the values of people who may use their products. Drawing on disability studies and feminist theorizing, we describe how acts of empathy building may further distance people with disabilities from the processes designers intend to draw them into. We end by reimagining empathy as guided by the lived experiences of people with disabilities who are traditionally positioned as those to be empathized with.
Just Give Me What I Want: How People Use and Evaluate Music Search
Music-streaming platforms offer users a large amount of content for consumption. Finding the right music can be challenging and users often need to search through extensive catalogs provided by these platforms. Prior research has focused on general-domain web search, which is designed to meet a broad range of user goals. Here, we study search in the domain of music, seeking to understand how and why people use search and how they evaluate their search experiences on a music-streaming platform. Over two studies, we conducted semi-structured interviews with 27 participants, asking about their search habits and preferences, and observing their behavior while searching for music. Analysis revealed participants evaluated their search experiences along two dimensions: success and effort. Importantly, how participants perceived success and effort differed by their mindset, or the way they assessed the results of their query. We conclude with recommendations to improve the user experience of music search.
Towards Enabling Blind People to Independently Write on Printed Forms
Filling out printed forms (e.g., checks) independently is currently impossible for blind people, since they cannot pinpoint the locations of the form fields, and quite often, they cannot even figure out what fields (e.g., name) are present in the form. Hence, they always depend on sighted people to write on their behalf, and help them affix their signatures. Extant assistive technologies have exclusively focused on reading, with no support for writing. In this paper, we introduce WiYG, a Write-it-Yourself guide that directs a blind user to the different form fields, so that she can independently fill out these fields without seeking assistance from a sighted person. Specifically, WiYG uses a pocket-sized custom 3D printed smartphone attachment, and well-established computer vision algorithms to dynamically generate audio instructions that guide the user to the different form fields. A user study with 13 blind participants showed that with WiYG, users could correctly fill out the form fields at the right locations with an accuracy as high as 89.5%.
Social, Cultural and Systematic Frustrations Motivating the Formation of a DIY Hearing Loss Hacking Community
Research on attitudes to assistive technology (AT) has shown both the positive and negative impact of these technologies on quality of life. Building on this research, we examine the sociocultural and technological frustrations with hearing loss (HL) technologies that motivate personal approaches to solving these issues. Drawing on meet-up observations and contextual interview data, we detail participants’ experiences of and attitudes towards hearing AT that influences hacking hearing loss. Hearing AT is misunderstood as a solution to the impairment, influencing one-to-one interactions, cultural norms, and systematic frustrations. Participants’ exasperation with the slow development of top-down solutions has led some members to design and develop their own personalised solutions. Beyond capturing a segment of the growing DIY health and wellbeing phenomenon, our findings extend beyond implications for design to present recommendations for the hearing loss industry, policy makers, and importantly, for researchers engaging with grassroots DIY health movements.
My Naturewatch Camera: Disseminating Practice Research with a Cheap and Easy DIY Design
My Naturewatch Camera is an inexpensive wildlife camera that we designed for people to make themselves as a way of promoting engagement with nature and digital making. We aligned its development to the interests of the BBC’s Natural History Unit as part of an orchestrated engagement strategy also involving our project website and outreach to social media. Since June 2018, when the BBC featured the camera on one of their Springwatch 2018 broadcasts, over 1000 My Naturewatch Cameras have been constructed using instructions and software from our project website and commercially available components, without direct contact with our studio. In this paper, we describe the project and outcomes with a focus on its success in promoting engagement with nature, engagement with digital making, and the effectiveness of this strategy for sharing research products outside traditional commercial channels.
Social Media TestDrive: Real-World Social Media Education for the Next Generation
Social media sites are where life happens for many of today’s young people, so it is important to teach them to use these sites safely and effectively. Many youth receive classroom education on digital literacy topics, but have few chances to build actual skills. Social Media TestDrive, an interactive social media simulation, fills a gap in digital literacy education by combining experiential learning in a realistic and safe social media environment with educator-facilitated classroom lessons. The tool was piloted with 12 educators and over 200 students, and formative evaluation data suggest that TestDrive achieved high levels of engagement with both groups. Students reported the modules enhanced their understanding of digital citizenship issues, and educators noted that students were engaging in meaningful classroom conversations. Finally, we discuss the importance of involving multiple stakeholder groups (e.g., researchers, youth, educators, curriculum developers) in designing educational technology.
Investigating the Impact of a Real-time, Multimodal Student Engagement Analytics Technology in Authentic Classrooms
We developed a real-time, multimodal Student Engagement Analytics Technology so that teachers can provide just-in-time personalized support to students who risk disengagement. To investigate the impact of the technology, we ran an exploratory semester-long study with a teacher in two classrooms. We used a multi-method approach consisting of a quasi-experimental design to evaluate the impact of the technology and a case study design to understand the environmental and social factors surrounding the classroom setting. The results show that the technology had a significant impact on the teacher’s classroom practices (i.e., increased scaffolding to the students) and student engagement (i.e., less boredom). These results suggest that the technology has the potential to support teachers’ role of being a coach in technology-mediated learning environments.
How do People Sort by Ratings?
Sorting items by user rating is a fundamental interaction pattern of the modern Web, used to rank products (Amazon), posts (Reddit), businesses (Yelp), movies (YouTube), and more. To implement this pattern, designers must take in a distribution of ratings for each item and define a sensible total ordering over them. This is a challenging problem, since each distribution is drawn from a distinct sample population, rendering the most straightforward method of sorting — comparing averages — unreliable when the samples are small or of different sizes. Several statistical orderings for binary ratings have been proposed in the literature (e.g., based on the Wilson score, or Laplace smoothing), each attempting to account for the uncertainty introduced by sampling. In this paper, we study this uncertainty through the lens of human perception, and ask “How do people sort by ratings?” In an online study, we collected 48,000 item-ranking pairs from 4,000 crowd workers along with 4,800 rationales, and analyzed the results to understand how users make decisions when comparing rated items. Our results shed light on the cognitive models users employ to choose between rating distributions, which sorts of comparisons are most contentious, and how the presentation of rating information affects users’ preferences.
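For readers unfamiliar with the two statistical orderings named above, the sketch below shows their standard definitions for binary (up/down) ratings; it is illustrative only and not taken from the study's materials.

import math

def laplace_smoothed(ups, downs, alpha=1):
    """Posterior mean of the 'up' probability with add-alpha smoothing."""
    return (ups + alpha) / (ups + downs + 2 * alpha)

def wilson_lower_bound(ups, downs, z=1.96):
    """Lower bound of the Wilson score interval (95% by default)."""
    n = ups + downs
    if n == 0:
        return 0.0
    p = ups / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    return (centre - margin) / denom

# A 4-up/0-down item vs. a 40-up/8-down item: comparing averages favours the
# first (1.0 vs 0.83), while the Wilson bound favours the second (about 0.51
# vs 0.70) because its larger sample carries less uncertainty.
print(wilson_lower_bound(4, 0), wilson_lower_bound(40, 8))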
The Smartphone as a Pacifier and its Consequences: Young adults’ smartphone usage in moments of solitude and correlations to self-reflection
The smartphone plays a dominant role in everyday life. Among young adults, the average daily usage time is almost four hours. The present study [N=399] examines the specific psychological role of smartphone usage during alone time (e.g., in the subway, while waiting, in bed). In particular, we explore its role in coping with negative emotions in the sense of an “attachment object”, providing comfort like a pacifier for infants. Results underlined the pacifying role of smartphone usage in coping with negative emotions in moments of alone time. Moreover, particular personality dispositions (e.g., high need to belong, high proneness to boredom) were associated with more extensive self-reported smartphone usage, an association mediated by the perception of the smartphone as an attachment object. Finally, smartphone usage was negatively correlated with self-insight, possibly substituting for intense inner debates or self-realizations during alone time. Implications for HCI research and practice are discussed.
Privacy and Security Considerations For Digital Technology Use in Elementary Schools
Elementary school educators increasingly use digital technologies to teach students, manage classrooms, and complete everyday tasks. Prior work has considered the educational and pedagogical implications of technology use, but little research has examined how educators consider privacy and security in relation to classroom technology use. To better understand what privacy and security mean to elementary school educators, we conducted nine focus groups with 25 educators across three metropolitan regions in the northeast U.S. Our findings suggest that technology use is an integral part of elementary school classrooms, that educators consider digital privacy and security through the lens of curricular and classroom management goals, and that lessons to teach children about digital privacy and security are rare. Using Bronfenbrenner’s ecological systems theory, we identify design opportunities to help educators integrate privacy and security into decisions about digital technology use and to help children learn about digital privacy and security.
Relations are more than Bytes: Re-thinking the Benefits of Smart Services through People and Things
Critical approaches to smart technologies have emerged in HCI that question the conditions necessary for smart technologies to benefit people. Smart services rely on a relation of trust and a sense of security between people and technology, requiring a more expansive definition of security. Using established design methods, we worked with two residents’ groups to critically explore and rethink smart services in the home and city. From our data analysis, we derive insights about perceptions and understandings of trust, privacy and security of smart devices, and identify how technological security needs to work in concert with social and relational forms of security for smart services to be effective. We conclude with an orientation for HCI that focuses on designing services for and with smart people and things.
VoiceAssist: Guiding Users to High-Quality Voice Recordings
Voice recording is a challenging task with many pitfalls due to sub-par recording environments, mistakes in recording setup, microphone quality, etc. Newcomers to voice recording often have difficulty recording their voice, leading to recordings with low sound quality. Many amateur recordings of poor quality have two key problems: too much reverberation (echo), and too much background noise (e.g. fans, electronics, street noise). We present VoiceAssist, a system that helps inexperienced users produce high quality recordings by providing real-time visual feedback on audio quality. We integrate modern audio quality measures into an interactive human-machine feedback loop, so that the audio quality can be maximized at capture-time. We demonstrate the utility of this feedback for improving the recording quality with a user study. When presented with visual feedback about recording quality, users produced recordings that were strongly preferred by third-party listeners, when compared to recordings made without this feedback.
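As one hedged example of a capture-time quality measure (not necessarily the measures VoiceAssist uses), the sketch below estimates a signal-to-noise ratio by comparing the loudest frames of a recording against its quietest frames; the frame length and warning threshold are assumptions.

import numpy as np

def frame_rms(audio, frame_len=2048):
    """Root-mean-square level of each non-overlapping frame."""
    n = len(audio) // frame_len * frame_len
    frames = audio[:n].reshape(-1, frame_len)
    return np.sqrt(np.mean(frames ** 2, axis=1) + 1e-12)

def snr_estimate_db(audio):
    rms = np.sort(frame_rms(audio))
    noise = np.mean(rms[: max(1, len(rms) // 10)])     # quietest 10% of frames
    speech = np.mean(rms[-max(1, len(rms) // 10):])    # loudest 10% of frames
    return 20 * np.log10(speech / noise)

# Hypothetical feedback rule: warn the user if the estimated SNR drops
# below roughly 20 dB (threshold chosen for illustration, not from the paper).
audio = np.random.randn(48_000) * 0.01                 # stand-in for a recording
print(f"estimated SNR: {snr_estimate_db(audio):.1f} dB")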
On the Usability of HTTPS Deployment
HTTPS and TLS are the backbone of Internet security; however, setting up web servers to run these protocols is a notoriously difficult process. In this paper, we perform two usability studies with live subjects on the deployment of HTTPS in a real-world setting. Study 1 is a within-subjects comparison between traditional HTTPS configuration (purchasing a certificate and installing it on a server) and Let’s Encrypt, which automates much of the process. Study 2 is a between-subjects study looking at the same two systems, examining why users encounter usability issues. Overall, we confirm past results that HTTPS is difficult to deploy, and we find some evidence suggesting that Let’s Encrypt is an easier, more efficient method for deploying HTTPS.
PledgeWork: Online Volunteering through Crowdwork
In this paper, we explore an alternative form of volunteer work, PledgeWork, where individuals, rather than working directly for a charity, make indirect donations by completing tasks provided by a third-party task provider. PledgeWork poses novel research questions on issues of user acceptance of online volunteerism, on the quality and quantity of work performed as a volunteer, and on the benefits low-barrier volunteerism might provide to charities. To evaluate these questions, we conduct a mixed-methods study that compares the quality and quantity of work between volunteer and paid workers and examines user attitudes toward PledgeWork, including perceived benefits and drawbacks. We find that PledgeWork can improve the quality of simple tasks and that the vast majority of our participants expressed interest in using our PledgeWork platform to contribute to a charity. Our interviews also reveal current problems with volunteering and online donations, thus highlighting additional strengths of PledgeWork.
Modeling the Engagement-Disengagement Cycle of Compulsive Phone Use
Many smartphone users engage in compulsive and habitual phone checking they find frustrating, yet our understanding of how this phenomenon is experienced is limited. We conducted a semi-structured interview, a think-aloud phone-use demonstration, and a sketching exercise with 39 smartphone users (ages 14-64) to probe their experiences with compulsive phone checking. Their insights revealed a small taxonomy of common triggers that lead up to instances of compulsive phone use and a second set that end compulsive phone use sessions. Though participants expressed frustration with their lack of self-control, they also reported that the activities they engage in during these sessions can be meaningful, which they defined as transcending the current instance of use. Participants said they periodically reflect on their compulsive use and delete apps that drive compulsive checking without providing sufficient meaning. We use these findings to create a descriptive model of the cycle of compulsive checking, and we call on designers to craft experiences that meet users’ definition of meaningfulness rather than creating lock-out mechanisms to help them police their own use.
Social Reflections on Fitness Tracking Data: A Study with Families in Low-SES Neighborhoods
Wearable activity trackers can encourage physical activity (PA), a behavior critical for preventing obesity and reducing the risks of chronic diseases. However, prior work has rarely explored how these tools can leverage family support or help people think about strategies for being active, two factors necessary for achieving regular PA. In this 2-month qualitative study, we investigated PA tracking practices amongst 14 families living in low-income neighborhoods, where obesity is prevalent. We characterize how social discussions of PA data rarely extended beyond the early stages of experiential learning, thus limiting the utility of PA trackers. Caregivers and children rarely analyzed their experiences to derive insights about the meaning of their PA data for their wellbeing. Those who engaged in these higher-order learning processes were often influenced by parenting beliefs shaped by personal health experiences. We contribute recommendations for how technology can more effectively support family experiential learning using PA tracking data.
"Occupational Therapy is Making": Clinical Rapid Prototyping and Digital Fabrication
Consumer-fabrication technologies potentially improve the effectiveness and adoption of assistive technology (AT) by engaging AT users in AT creation. However, little is known about the role of clinicians in this revolution. We investigate clinical AT fabrication by working as expert fabricators for clinicians over a four-month period. We observed and co-designed AT with four occupational therapists at two clinics: a free clinic for uninsured clients, and a Veterans Affairs hospital. We find that existing fabrication processes, particularly with respect to rapid prototyping, do not align with clinical practice and its do-no-harm ethos. We recommend software solutions that would integrate into client care by amplifying clinicians’ expertise, revealing appropriate fabrication opportunities, and supporting adaptable fabrication.
Communicating Hurricane Risks: Multi-Method Examination of Risk Imagery Diffusion
Conveying uncertainty in information artifacts is difficult; the challenge only grows as the demand for mass communication through multiple channels expands. In particular, as natural hazards increase with changing global conditions, including hurricanes which threaten coastal areas, we need better means of communicating uncertainty around risks that empower people to make good decisions. We examine how people share and respond to a range of visual representations of risk from authoritative sources during hurricane events. Because these images are now shared widely on social media platforms, Twitter provides the means to study them on a large scale as close to in vivo as possible. Using mixed methods, this study analyzes diffusion of and reactions to forecast and other risk imagery during the highly damaging 2017 Atlantic hurricane season to describe the collective response to visual representations of risk.
“Collective Wisdom”: Inquiring into Collective Homes as a Site for HCI Design
The home has been a major focus of the HCI community for over two decades. Despite this body of research, nascent works have argued that HCI’s characterization of ‘the home’ remains narrow and requires more diverse accounts of domestic configurations. Our work contributes to this area through a four-month ethnography of three collective homes in Vancouver, Canada. Collective homes represent an alternative housing model that offers agency to individual members and the collective group by sharing values, resources, labour, space and memory. Our paper offers two contributions. First, we offer an in-depth design ethnography of three collective homes, attending to the values, ownership models, practices, and everyday interactions observed in the ongoing making of these domestic settings. Second, we interpret and synthesize our findings to provide new opportunities for expanding the way we conceptualize and design for ‘the home’ in HCI.
Symbiotic Encounters: HCI and Sustainable Agriculture
Recent sustainable HCI research has advocated “working with nature” as a potentially efficacious alternative to human efforts to control it; yet it is less clear how to do so. We contribute to the theoretical aspect of this research by presenting an ethnographic study of alternative farming practices, in which the farm is not so much a system as an assemblage characterized by multiple systems or rationalities that are always evolving and changing. In these assemblages, relationships among species alternate between mutually beneficial in one moment (or season) and harmful in the next. If HCI is to participate in and to support working with nature, we believe that it will have to situate itself within such assemblages and temporalities. In this work, we look into nontraditional users (e.g., nonhumans) and emerging forms of use (e.g., interactions between humans and other species) to help open a design space for technological interventions. We offer three ethnographic accounts in which farmers, and ourselves as researchers, learn to notice, respond, and engage in symbiotic encounters with companion species and the living soil itself.
Freedom to Personalize My Digital Classroom: Understanding Teachers’ Practices and Motivations
Although modern classrooms are increasingly moving towards digital immersion and personalized learning, we have few insights into K-12 teachers’ current practices, motivations, and barriers in setting up their digital classroom ecosystems. We interviewed 20 teachers on their process of discovering and integrating a vast range of productivity software and educational platforms in their classrooms, with a particular focus on how they personalize the UI and content of these tools (e.g., with plugins, templates, or option menus). We found that teachers largely depended on their own experimentation and professional circles to find, personalize, and troubleshoot software tools to support student needs or their own preferences. Teachers were often hesitant to attempt more advanced personalizations due to concerns over student confusion and increased troubleshooting load. We derive several design implications for HCI to better support teachers in sharing their personalized setups and helping their students benefit from digital immersion.
Healthy Lies: The Effects of Misrepresenting Player Health Data on Experience, Behavior, and Performance
Game designers use a variety of techniques that mislead players with the goal of inducing a particular play experience. For example, designers may manipulate data displays of player health, showing players they have less health than they actually do, to induce tension. While such manipulations are commonly used, players make decisions based on in-game data displays, raising the question of how misrepresentations impact behavior and performance, and whether this might have unintended consequences. To provide a better understanding of how data misrepresentation impacts play, we compare two versions of a game: one that displays health accurately and one that misrepresents health. Our results suggest that even subtle manipulations to data displays can have a measurable effect on behavior and performance, and that these changes can help explain differences in experience. We show that data misrepresentations need to be designed carefully to avoid unintended effects. Our work provides new directions for research into the design of misrepresentation in games.
Pseudo-Haptic Weight: Changing the Perceived Weight of Virtual Objects By Manipulating Control-Display Ratio
In virtual reality, the lack of kinesthetic feedback often prevents users from experiencing the weight of virtual objects. Control-to-display (C/D) ratio manipulation has been proposed as a method to induce weight perception without kinesthetic feedback. Based on the fact that lighter (heavier) objects are easier (harder) to move, this method induces an illusory perception of weight by manipulating the rendered position of users’ hands—increasing or decreasing their displayed movements. In a series of experiments we demonstrate that C/D-ratio induces a genuine perception of weight, while preserving ownership over the virtual hand. This means that such a manipulation can be easily introduced in current VR experiences without disrupting the sense of presence. We discuss these findings in terms of estimation of physical work needed to lift an object. Our findings provide the first quantification of the range of C/D-ratio that can be used to simulate weight in virtual reality.
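The core manipulation can be stated in a few lines: the rendered hand is displaced from the grab point by only a fraction of the tracked hand's displacement. The sketch below is illustrative; the positions, frame of reference, and ratio values are assumptions rather than the paper's implementation.

def rendered_hand_position(anchor, tracked, cd_ratio):
    """Scale the hand's displacement from the grab point by the C/D ratio."""
    return [a + cd_ratio * (t - a) for a, t in zip(anchor, tracked)]

grab_point = [0.0, 1.0, 0.0]        # where the object was grabbed (metres)
tracked    = [0.0, 1.3, 0.0]        # real hand, 30 cm above the grab point

print(rendered_hand_position(grab_point, tracked, cd_ratio=1.0))   # neutral
print(rendered_hand_position(grab_point, tracked, cd_ratio=0.7))   # feels heavier
# -> [0.0, 1.3, 0.0] and [0.0, 1.21, 0.0]: with C/D = 0.7 the virtual hand
#    rises only 21 cm for a 30 cm physical lift, so more physical work is
#    needed to lift the object, which reads as added weight.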
EnhancedTouchX: Smart Bracelets for Augmenting Interpersonal Touch Interactions
EnhancedTouchX, a bracelet-type interpersonal body area network device, not only detects but also quantifies interpersonal hand-to-hand touch interactions. Without any wired connection, it can identify the direction and gestures of a touch. The device can connect to an external device via Bluetooth Low Energy for monitoring and logging where, when, how long, with whom, and how touch interactions occurred. Augmenting daily touch interactions with such contextual information could enable a variety of applications that facilitate social interaction. Our experiment, conducted with several pairs of participants, demonstrates that the devices can identify the direction of a touch (from the person initiating the touch (active touch) to the person being touched (passive touch)) with 95% accuracy. In addition, the devices are capable of identifying four types of touch gestures with 85% accuracy using a simple threshold classifier.
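A simple threshold classifier of the kind mentioned above can be sketched as follows; the features, thresholds, and gesture labels here are hypothetical, since the abstract does not specify them.

def classify_touch(duration_s, peak_level, tap_max_s=0.3, high_level=0.6):
    """Map two thresholded features to one of four coarse gesture labels."""
    short = duration_s <= tap_max_s
    strong = peak_level >= high_level
    if short and not strong:
        return "light tap"
    if short and strong:
        return "firm tap"
    if not short and not strong:
        return "touch-and-hold"
    return "grip"

print(classify_touch(0.15, 0.2))   # light tap
print(classify_touch(1.20, 0.8))   # grip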
Older People Inventing their Personal Internet of Things with the IoT Un-Kit Experience
We introduce the IoT Un-Kit Experience, a co-design approach that engages people in exploring, designing and generating personally meaningful IoT applications and that also serves as a means to explore IoT kit design through in-home workshops. Un-Kit represents a seemingly incomplete set of sensors, actuators and media elements that have a decontextualized appearance: unfinished state, undefined purpose and unboxed form. The approach emphasises users contemplating and experiencing the IoT elements in their familiar space through detailed and layered conversation with researchers, rather than focusing on connecting up the kit itself; thus their ideas are not constrained by the kit or their competence with it. We illustrate the approach through in-home workshops with older adults, envisioned users of IoT who have had limited voice in its conception. The Un-Kit approach supported participants in leading the process and in imagining new, artfully integrated designs with personally legible interactions and aesthetic qualities that fit their desires. We offer insights for a more situated and responsive approach to the design of the IoT and its constituent kits.
Brick: Toward A Model for Designing Synchronous Colocated Augmented Reality Games
Augmented reality (AR) games have been growing in popularity in recent years. However, current AR games offer limited opportunities for a synchronous multiplayer experience. This paper introduces a model for designing AR experiences in which players inhabit a shared, real-time augmented environment and can engage in synchronous and collaborative interactions with other players. We explored the development of this model through the creation of Brick, a two-player mobile AR game at the room scale. We refined Brick over multiple rounds of iteration, and we used our playtests to investigate a range of issues involved in designing shared-world AR games. Our findings suggest that there are five major categories of interactions in a shared-world AR system: single-player, intrapersonal, multiplayer, interpersonal, and environmental. We believe that this model can support the development of collaborative AR games and new forms of social gameplay.
Instrumenting and Analyzing Fabrication Activities, Users, and Expertise
The recent proliferation of fabrication and making activities has introduced a large number of users to a variety of tools and equipment. Monitored, reactive and adaptive fabrication spaces are needed to provide personalized information, feedback and assistance to users. This paper explores the sensorization of making and fabrication activities, where the environment, tools, and users were considered to be separate entities that could be instrumented for data collection. From this exploration, we present the design of a modular system that can capture data from the varied sensors and infer contextual information. Using this system, we collected data from fourteen participants with varying levels of expertise as they performed seven representative making tasks. From the collected data, we predict which activities are being performed, which users are performing the activities, and what expertise the users have. We present several use cases of this contextual information for future interactive fabrication spaces.
Scaptics and Highlight-Planes: Immersive Interaction Techniques for Finding Occluded Features in 3D Scatterplots
Three-dimensional scatterplots suffer from well-known perception and usability problems. In particular, overplotting and occlusion, mainly due to density and noise, prevent users from properly perceiving the data. Thanks to accurate head and hand tracking, immersive Virtual Reality (VR) setups provide new ways to interact and navigate with 3D scatterplots. VR also supports additional sensory modalities such as haptic feedback. Inspired by methods commonly used in Scientific Visualisation to visually explore volumes, we propose two techniques that leverage the immersive aspects of VR: first, a density-based haptic vibration technique (Scaptics), which provides feedback through the controller; and second, an adaptation of a cutting plane for 3D scatterplots (Highlight-Plane). We evaluated both techniques in a controlled study with two tasks involving density (finding high- and low-density areas). Overall, Scaptics was the most time-efficient and accurate technique; however, in some conditions it was outperformed by Highlight-Plane.
ChewIt. An Intraoral Interface for Discreet Interactions
Sensing interfaces relying on head or facial gestures provide effective solutions for hands-free scenarios. Most of these interfaces utilize sensors attached to the face or placed inside the mouth, making them either obtrusive or limited in input bandwidth. In this paper, we propose ChewIt, a novel intraoral input interface. ChewIt resembles an edible object that allows users to perform various hands-free input operations, both simply and discreetly. Our design is informed by a series of studies investigating the implications of shape, size, and location for comfort, discreetness, maneuverability, and obstructiveness. Additionally, we evaluated potential gestures that users could use to interact with such an intraoral interface.
The Scale and Structure of Personal File Collections
Although many challenges of managing computer files have been identified in past studies, and many alternative prototypes built, the scale and structure of personal file collections remain relatively unknown. We studied 348 such collections and found they are typically considerably larger in scale (30,000-190,000 files) and structure (folder trees twice as tall and many times wider) than previously thought, which suggests files and folders are used more now than ever despite advances in Web storage, desktop search, and tagging. Data along many measures within and across collections were log-normally distributed, indicating that personal collections resemble imbalanced, group-made collections and confirming the intuition that personal information management behaviour varies greatly. Directions for the generation of test collections and other future research are discussed.
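The study's measures are not reproduced here; as a sketch of how one might check a collection-level measure for log-normality, the following fits a log-normal distribution to synthetic file counts (standing in for the 348 collections) and applies a goodness-of-fit test.

```python
# Sketch: fit a log-normal to a collection-level measure and test the fit.
# The data are simulated, not the study's 348 collections.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
file_counts = rng.lognormal(mean=10.5, sigma=1.0, size=348)  # synthetic

shape, loc, scale = stats.lognorm.fit(file_counts, floc=0)
ks_stat, p_value = stats.kstest(file_counts, "lognorm", args=(shape, loc, scale))
print(f"estimated median ~ {scale:,.0f} files, KS = {ks_stat:.3f}, p = {p_value:.3f}")
```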
The Effects of Interruption Timings on Autonomous Height-Adjustable Desks that Respond to Task Changes
Actuated furniture, such as electric adjustable sit-stand desks, helps users vary their posture and contributes to comfort and health. However, studies found that users rarely initiate height changes. Therefore, in this paper, we look into furniture that adjusts itself to the user’s needs. A situated interview study indicated task-changing as an opportune moment for automatic height adjustment. We then performed a Wizard of Oz study to find the best timing for changing desk height to minimize interruption and discomfort. The results are in line with prior work on task interruption in graphical user interfaces and show that the table should change height during a task change. However, results also indicate that until users build trust in the system, they prefer actuation after a task change to experience the impact of the adjustment. Based on the results, we discuss design guidelines for interactive desks with agency.
SurfaceSight: A New Spin on Touch, User, and Object Sensing for IoT Experiences
IoT appliances are gaining consumer traction, from smart thermostats to smart speakers. These devices generally have limited user interfaces, most often small buttons and touchscreens, or rely on voice control. Further, these devices know little about their surroundings, unaware of the objects, people and activities around them. Consequently, interactions with these “smart” devices can be cumbersome and limited. We describe SurfaceSight, an approach that enriches IoT experiences with rich touch and object sensing, offering a complementary input channel and increased contextual awareness. For sensing, we incorporate LIDAR into the base of IoT devices, providing an expansive, ad hoc plane of sensing just above the surface on which devices rest. We can recognize and track a wide array of objects, including finger input and hand gestures. We can also track people and estimate which way they are facing. We evaluate the accuracy of these new capabilities and illustrate how they can be used to power novel and contextually-aware interactive experiences.
Conservation of Procrastination: Do Productivity Interventions Save Time Or Just Redistribute It?
Productivity behavior change systems help us reduce our time on unproductive activities. However, is that time actually saved, or is it just redirected to other unproductive activities? We report an experiment using HabitLab, a behavior change browser extension and phone application, that manipulated the frequency of interventions on a focal goal and measured the effects on time spent on other applications and platforms. We find that, when intervention frequency increases on the focal goal, time spent on other applications is held constant or even reduced. Likewise, we find that time is not redistributed across platforms from browser to mobile phone or vice versa. These results suggest that any conservation of procrastination effect is minimal, and that behavior change designers may target individual productivity goals without causing substantial negative second-order effects.
Understanding Law Enforcement Strategies and Needs for Combating Human Trafficking
In working to rescue victims of human trafficking, law enforcement officers face a host of challenges. Working in complex, layered organizational structures, they face challenges of collaboration and communication. Online information is central to every phase of a human-trafficking investigation. With terabytes of available data such as sex work ads, policing is increasingly a big-data research problem. In this study, we interview sixteen law enforcement officers working to rescue victims of human trafficking to try to understand their computational needs. We highlight three major areas where future work in human-computer interaction can help. First, combating human trafficking requires advances in information visualization of large, complex, geospatial data, as victims are frequently forcibly moved across jurisdictions. Second, the need for unified information databases raises critical research issues of usable security and privacy. Finally, the archaic nature of information systems available to law enforcement raises policy issues regarding resource allocation for software development.
Vocal Shortcuts for Creative Experts
Vocal shortcuts, short spoken phrases to control interfaces, have the potential to reduce the cognitive and physical costs of interactions. They may benefit expert users of creative applications (e.g., designers, illustrators) by helping them maintain creative focus. To aid the design of vocal shortcuts and gather use cases and design guidelines for speech interaction, we interviewed ten creative experts. Based on our findings, we built VoiceCuts, a prototype implementation of vocal shortcuts in the context of an existing creative application. In contrast to other speech interfaces, VoiceCuts targets experts’ unique needs by handling short and partial commands and by leveraging the document model and application context to disambiguate user utterances. We report on the viability and limitations of our approach based on feedback from creative experts.
Rumors and Collective Sensemaking: Managing Ambiguity in an Informal Marketplace
Rumors are an enduring form of communication across socio-cultural landscapes globally. Counter to their typical negative association, rumors play a nuanced role, helping people collectively deal with problems by constructing a representation of an uncertain situation. Drawing on unstructured interviews and participant observation in a technology goods marketplace in Bangalore, India, we study the circulation of rumors related to the government’s recent policy of demonetization and the entry of online marketplaces and digital wallets, all of which disrupted existing market practices. These rumors emerge as attempts at sensemaking when a community is faced with ambiguity. By highlighting the relationship of institutional trust with rumors, the paper argues that the study of rumors can help us identify the concerns of a community in the face of differential power relations. Further, rumors are a form of social bonding that helps communities make sense of their place in society and shape existing practices.
Temporal Rhythms and Patterns of Electronic Documentation in Time-Critical Medical Work
We examine nursing documentation on a newly implemented electronic flowsheet in medical resuscitations to identify the temporal patterns of documentation and how the recorded information supported time-critical teamwork. To determine when information was documented, we compared timestamps from 58 flowsheet logs to those of verbal communications derived from video review. We also drew on observations of 95 resuscitations to understand the behaviors of nurse documenters. We found that only 8% of verbal reports were documented in near real-time (within one minute of the verbal report), while 42% of reports were not documented in the electronic flowsheet at all. In addition, 38% were documented early (before the verbal report) and 12% were documented with a delay ranging from one to 58 minutes after the report. Our study showed that the electronic flowsheet design posed many challenges for real-time documentation, leading to paper-based workarounds and the use of free-text fields on the flowsheet to visualize and keep track of time, and to communicate temporal information to the team. These findings suggest that documenters shape the temporal rhythms not only of their own work but also of the electronic record and the medical process. We discuss the implications of these rhythms for EHR redesign to support real-time documentation in high-risk, safety-critical settings.
Personas and Identity: Looking at Multiple Identities to Inform the Construction of Personas
Personas are valuable tools to help designers get to know their users and adopt their perspectives. Yet people are complex, and multiple identities have to be considered in their interplay to account for a comprehensive representation; otherwise, personas might be superficial and prone to activating stereotypes. Therefore, the way users’ identities are presented in a limited set of personas is crucial to account for diversity and to highlight facets which would otherwise go unnoticed. In this paper, we introduce an approach to the development of personas informed by social identity theory. The effectiveness of this approach is investigated in a qualitative study in the context of the design process for an e-learning platform for women in tech. The results suggest that considering multiple identities in the construction of personas adds value when designing technologies.
Hands Holding Clues for Object Recognition in Teachable Machines
Camera manipulation confounds the use of object recognition applications by blind people. This is exacerbated when photos from this population are also used to train models, as with teachable machines, where out-of-frame or partially included objects against cluttered backgrounds degrade performance. Leveraging prior evidence on the ability of blind people to coordinate hand movements using proprioception, we propose a deep learning system that jointly models hand segmentation and object localization for object classification. We investigate the utility of hands as a natural interface for including and indicating the object of interest in the camera frame. We confirm the potential of this approach by analyzing existing datasets from people with visual impairments for object recognition. With a new publicly available egocentric dataset and an extensive error analysis, we provide insights into this approach in the context of teachable recognizers.
Friending to Flame: How Social Features Affect Player Behaviours in an Online Collectible Card Game
Online Collectible Card Games (OCCGs) are enormously popular worldwide. Previous studies found that the social aspects of physical CCGs are crucial for player engagement. However, we know little about the different types of sociability that OCCGs afford, or the extent to which they influence players’ social experiences. This mixed-method online survey study focuses on a representative OCCG, Hearthstone [24], to 1) identify and define social design features and examine the extent to which players’ use of these features predicts their sense of community; and 2) investigate participants’ attitudes towards and experiences with the game community. The results show that players rarely use social features, and that these features contribute differently to predicting players’ sense of community. We also found emergent toxic behaviors afforded by the social features. Findings can inform best practices and principles in the design of OCCGs, and contribute to our understanding of players’ perceptions of OCCG communities.
Sensing Fine-Grained Hand Activity with Smartwatches
Capturing fine-grained hand activity could make computational experiences more powerful and contextually aware. Indeed, philosopher Immanuel Kant argued, “the hand is the visible part of the brain.” However, most prior work has focused on detecting whole-body activities, such as walking, running and bicycling. In this work, we explore the feasibility of sensing hand activities from commodity smartwatches, which are the most practical vehicle for achieving this vision. Our investigations started with a 50-participant, in-the-wild study, which captured hand activity labels over nearly 1000 worn hours. We then studied this data to scope our research goals and inform our technical approach. We conclude with a second, in-lab study that evaluates our classification stack, demonstrating 95.2% accuracy across 25 hand activities. Our work highlights an underutilized, yet highly complementary, contextual channel that could unlock a wide range of promising applications.
Human-Computer Insurrection: Notes on an Anarchist HCI
The HCI community has worked to expand and improve our consideration of the societal implications of our work and our corresponding responsibilities. Despite this increased engagement, HCI continues to lack an explicitly articulated politic, which we argue re-inscribes and amplifies systemic oppression. In this paper, we set out an explicit political vision of an HCI grounded in emancipatory autonomy, an anarchist HCI aimed at dismantling all oppressive systems by mandating suspicion of, and a reckoning with, imbalanced distributions of power. We outline some of the principles and accountability mechanisms that constitute an anarchist HCI. We offer a potential framework for radically reorienting the field towards creating prefigurative counterpower: systems and spaces that exemplify the world we wish to see, as we go about building the revolution in increments.
Beyond “One-Size-Fits-All”: Understanding the Diversity in How Software Newcomers Discover and Make Use of Help Resources
For most modern feature-rich software, considerable external help and learning resources are available on the web (e.g., documentation, tutorials, videos, Q&A forums). But, how do users new to an application discover and make use of such resources? We conducted in-lab and diary studies with 26 software newcomers from a variety of different backgrounds who were all using Fusion 360, a 3D modeling application, for the first time. Our results illustrate newcomers’ diverse needs, perceptions, and help-seeking behaviors. We found a number of distinctions in how technical and non-technical users approached help-seeking, including: when and how they initiated the help-seeking process, their struggles in recognizing relevant help, the degree to which they made coordinated use of the application and different resources, and in how they perceived the utility of different help formats. We discuss implications for moving beyond “one-size-fits-all” help resources towards more structured, personalized, and curated help and learning materials.
City Explorer: The Design and Evaluation of a Location-Based Community Information System
Many working professionals commute via public transit, yet they have limited tools for learning about their urban neighborhoods and fellow commuters. We designed a location-based game called City Explorer to investigate how transit commuters capture, share, and view community information that is specifically tied to locations. Through a four-week field study, we found that participants valued the increased awareness of their personal travel routines that they gained through City Explorer. When viewing community information, they preferred information that was factual rather than opinion-based and was presented at the start and end of their commutes. Participants found less value in connecting with other transit riders because transit rides were often seen as opportunities to disengage from others. We discuss how location-based technologies can be designed to display factual community information before, during, and at the end of transit commutes.
Dynamics of Visual Attention in Multiparty Collaborative Problem Solving using Multidimensional Recurrence Quantification Analysis
Multiparty collaborative problem solving, an increasingly important context in the 21st century workforce, suffers from a degradation of social and behavioral signals when attempted remotely, resulting in suboptimal outcomes. We investigate teams’ multidimensional patterns of visual attention during a collaborative problem-solving task with an eye toward leveraging insights to improve collaborative interfaces. Fifty-seven novices (forming 19 triads) engaged in a challenging programming task (Minecraft Hour of Code) using videoconferencing software with screen sharing. To discover patterns of individual-level gaze-UI coupling (coordination of a teammate’s attention with respect to changes in the user interface) and team-level gaze-UI regularity (dynamics of teams’ collective attention in context with changes in the user interface), we applied cross- and multidimensional recurrence quantification analyses, respectively. Individuals’ eye gaze was significantly coupled with the ongoing screen activity, whereas teams displayed significant patterns of gaze regularity, suggesting repetitive patterns in teams’ attention. These measures predicted expert-coded collaborative processes of constructing shared knowledge and negotiation and coordination (but not maintaining team function) and correlated with task score (r = .425). They also predicted individually assessed subjective perceptions of team performance and the collaboration process, but not individuals’ learning or teams’ task scores. We discuss implications of our findings for the design of intelligent collaborative interfaces.
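The analysis pipeline is not included in the abstract; the core quantity behind (cross-)recurrence quantification can be sketched as a recurrence rate between two time series, shown below with synthetic gaze-angle traces and an arbitrary radius rather than the study's data or parameter settings.

```python
# Sketch: cross-recurrence rate between two 1-D gaze-angle series.
# Two samples count as recurrent when their values fall within a radius.
# Series and radius are illustrative, not the study's data or settings.
import numpy as np

def cross_recurrence_rate(x: np.ndarray, y: np.ndarray, radius: float) -> float:
    dists = np.abs(x[:, None] - y[None, :])   # pairwise distance matrix
    return float((dists <= radius).mean())    # fraction of recurrent pairs

rng = np.random.default_rng(1)
gaze_a = np.cumsum(rng.normal(0, 1, 500))     # synthetic gaze-angle trace
gaze_b = gaze_a + rng.normal(0, 2, 500)       # a loosely coupled teammate
print(f"recurrence rate: {cross_recurrence_rate(gaze_a, gaze_b, radius=2.0):.3f}")
```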
Upside and Downside Risk in Online Security for Older Adults with Mild Cognitive Impairment
Older adults are rapidly increasing their use of online services such as banking, social media, and email: services that come with subtle and serious security and privacy risks. Older adults with mild cognitive impairment (MCI) are particularly vulnerable to these risks because MCI can reduce their ability to recognize scams such as email phishing, follow recommended password guidelines, and consider the implications of sharing personal information. Older adults with MCI often cope with their impairments with the help of caregivers, including partners, children, and professional health personnel, when using and managing online services. Yet this too carries security and privacy risks: sharing personal information with caregivers can create issues of agency and autonomy, and even risks embarrassment and information leakage; caregivers also do not always act in their charges’ best interest. Through a series of interviews conducted in the US, we identify a spectrum of safeguarding strategies used and consider them through the lens of ‘upside and downside risk’, where there are tradeoffs between reduced privacy and maintaining older adults’ autonomy and access to online services.
Seekers, Providers, Welcomers, and Storytellers: Modeling Social Roles in Online Health Communities
Participants in online communities often enact different roles when participating in their communities. For example, some in cancer support communities specialize in providing disease-related information or socializing new members. This work clusters the behavioral patterns of users of a cancer support community into specific functional roles. Based on a series of quantitative and qualitative evaluations, this research identified eleven roles that members occupy, such as welcomer and story sharer. We investigated role dynamics, including how roles change over members’ lifecycles, and how roles predict long-term participation in the community. We found that members frequently change roles over their history, from ones that seek resources to ones offering help, while the distribution of roles is stable over the community’s history. Adopting certain roles early on predicts members’ continued participation in the community. Our methodology will be useful for facilitating better use of members’ skills and interests in support of community-building efforts.
AdaCAD: Crafting Software For Smart Textiles Design
Woven smart textiles are useful in creating flexible electronics because they integrate circuitry into the structure of the fabric itself. However, there do not yet exist tools that support the specific needs of smart textiles weavers. This paper describes the process and development of AdaCAD, an application for composing smart textile weave drafts. By augmenting traditional weaving drafts, AdaCAD allows weavers to design woven structures and circuitry in tandem and offers specific support for common smart textiles techniques. We describe these techniques and how our tool supports them, alongside feedback from smart textiles weavers. We conclude with a reflection on smart textiles practice more broadly and suggest that the metaphor of coproduction can be fruitful in creating effective tools and envisioning future applications in this space.
Trust and Recall of Information across Varying Degrees of Title-Visualization Misalignment
Visualizations are emerging as a means of spreading digital misinformation. Prior work has shown that visualization interpretation can be manipulated through slanted titles that favor only one side of the visual story, yet people still think the visualization is impartial. In this work, we study whether such effects continue to exist when titles and visualizations exhibit greater degrees of misalignment: titles whose message differs from the visually cued message in the visualization, and titles whose message contradicts the visualization. We found that although titles with a contradictory slant triggered more people to identify bias compared to titles with a miscued slant, visualizations were persistently perceived as impartial by the majority. Further, people’s recall of the visualization’s message more frequently aligned with the titles than the visualization. Based on these results, we discuss the potential of leveraging textual components to detect and combat visual-based misinformation with text-based slants.
RealityCheck: Blending Virtual Environments with Situated Physical Reality
Today’s virtual reality (VR) systems offer chaperone rendering techniques that prevent the user from colliding with physical objects. Without a detailed geometric model of the physical world, these techniques offer limited possibilities for more advanced compositing between the real world and the virtual one. We explore this using a real-time 3D reconstruction of the real world that can be combined with a virtual environment. RealityCheck allows users to freely move, manipulate, observe, and communicate with people and objects situated in their physical space without losing the sense of immersion or presence inside their virtual world. We demonstrate RealityCheck with seven existing VR titles, and describe compositing approaches that address the potential conflicts when rendering the real world and a virtual environment together. A study with frequent VR users demonstrates the affordances provided by our system and how it can be used to enhance current VR experiences.
Lost in Style: Gaze-driven Adaptive Aid for VR Navigation
A key challenge for virtual reality level designers is striking a balance between maintaining the immersiveness of VR and providing users with on-screen aids after designing a virtual experience. These aids are often necessary for wayfinding in virtual environments with complex paths. We introduce a novel adaptive aid that maintains the effectiveness of traditional aids while giving designers and users control over how often help is displayed. Our adaptive aid uses gaze patterns to predict a user’s need for navigation aid in VR and displays mini-maps or arrows accordingly. Using a dataset of gaze-angle sequences from users navigating a VR environment, together with markers of when users requested aid, we trained an LSTM to classify a user’s gaze sequence as indicating a need for navigation help and to display an aid accordingly. We validated the efficacy of the adaptive aid for wayfinding compared to other commonly used wayfinding aids.
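The trained model is described only as an LSTM over gaze-angle sequences; a minimal PyTorch sketch of such a binary classifier, with made-up dimensions and random tensors standing in for the dataset, might look like this.

```python
# Sketch: LSTM that classifies a gaze-angle sequence as "needs aid" or not.
# Architecture sizes, features, and data are placeholders, not the paper's.
import torch
import torch.nn as nn

class GazeAidClassifier(nn.Module):
    def __init__(self, n_features: int = 2, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # logit for "needs navigation aid"

    def forward(self, x):                     # x: (batch, time, n_features)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1]).squeeze(-1)

model = GazeAidClassifier()
batch = torch.randn(8, 120, 2)                # 8 sequences of 120 gaze samples
print(torch.sigmoid(model(batch)))            # probability of needing an aid
```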
“If It’s Important It Will Be A Headline”: Cybersecurity Information Seeking in Older Adults
Older adults are increasingly vulnerable to cybersecurity attacks and scams. Yet we know relatively little about their understanding of cybersecurity, their information-seeking behaviours, and their trusted sources of information and advice in this domain. We conducted 22 semi-structured interviews with community-dwelling older adults in order to explore their cybersecurity information-seeking behaviours. Following a thematic analysis of these interviews, we developed a cybersecurity information access framework that highlights shortcomings in older adults’ choice of information resources. Specifically, we find that older users prioritise social resources based on availability rather than cybersecurity expertise, and that they avoid using the Internet for cybersecurity information searches despite using it in other domains. Finally, we discuss the design of cybersecurity information dissemination strategies for older users, incorporating favoured sources such as TV adverts and radio programming.
Engaging Lived and Virtual Realities
We examined the integration of VR into informal and less-structured learning environments in Atlanta (USA) and Mumbai (India) through a process of co-design, co-creation, and co-learning with students and teachers where students learned to use VR to engage with their economic, social, and cultural realities. Using qualitative methods, we engaged students and teachers at both sites in VR content creation activities; through these activities, we attempt to uncover a deeper understanding of the challenges and opportunities of introducing low-cost mobile VR for content generation, consumption, and sharing in underserved learning contexts. We also motivate future work that looks at integrating VR in new contexts, using flexible methods, across borders. The larger vision of our research is to advance us towards greater accessibility and inclusivity of VR across diverse learning environments.
Alternative Avenues for IoT: Designing with Non-Stereotypical Homes
We report on the findings of a co-speculative design inquiry that investigates alternative visions of the Internet of Things (IoT) for the home. We worked with 16 people living in non-stereotypical homes to develop situated and personal concepts attuned to their home. As a prompt for co-speculation and discussion, we created handmade booklets where we took turns overlaying sketched design concepts on top of photos taken with participants in their homes. Our findings reveal new avenues for the design of IoT systems such as: acknowledging porous boundaries of the home, exposing neighborly relations, exploring diverse timescales, revisiting agency, and embracing imaginary and potential uses. We invite human-computer interaction and design researchers to use these avenues as starting points to broaden current assumptions embedded in design and research practices for domestic technologies. We conclude by highlighting the value of examining divergent perspectives and surfacing the unseen.
Collaborative Futures: Co-Designing Research Methods for Younger People Living with Dementia
Designing new technologies to support the lived experience of dementia is of increasing interest within HCI. While there is guidance on qualitative research methods to use in areas such as dementia, there is a need for more appropriate ways to conduct research with the younger demographic. In Younger Onset Dementia (YOD), the circumstances and experiences are markedly different from dementia later in life, requiring a different approach. This paper presents insights into the methods and approaches used in fieldwork with five people living with YOD, who engaged as co-researchers in a co-directed inquiry into their lived experiences. Through this, we make a number of methodological contributions to HCI and Participatory Action Research (PAR) for research in the YOD setting. This includes productive approaches that are sensitive, respectful and empowering to the participants. It also extends current approaches to using probes in HCI and dementia research.
"I was really, really nervous posting it": Communicating about Invisible Chronic Illnesses across Social Media Platforms
People with invisible chronic illnesses (ICIs) can use social media to seek both informational and emotional support, but these individuals also face social and health-related challenges in posting about their often-stigmatized conditions online. To understand how they evaluate different platforms for disclosure, we interviewed 19 people with ICIs who post about their illnesses on general social media platforms such as Facebook, Instagram, and Twitter. We present a cross-platform analysis of how platforms varied in their suitability for achieving participants’ goals, as well as the challenges posed by each platform. We also found that as participants’ ICIs progressed, their goals, challenges, and social media use evolved over time. Our findings highlight how people with ICIs select platforms from a broader ecology of social media and suggest a general need to understand shifts in social media use for populations with chronic but changing health concerns.
The Effect of Field-of-View Restriction on Sex Bias in VR Sickness and Spatial Navigation Performance
Recent studies show that women are more susceptible to visually-induced VR sickness, which might explain the low adoption rate of VR technology among women. Reducing field-of-view (FOV) during locomotion is already a widely used strategy to reduce VR sickness, as it blocks peripheral optical flow perception and mitigates visual/vestibular conflict. Prior studies show that men are more adept at 3D spatial navigation than women, though this sex bias can be minimized by providing women with a larger FOV. Our study provides insight into the relationship between sex and FOV restriction with respect to VR sickness and spatial navigation performance, two areas in which these findings seem to conflict. We find the use of an FOV restrictor to be effective in mitigating VR sickness in both sexes, and we did not find a negative effect of FOV restriction on spatial navigation performance.
Implementing Multi-Touch Gestures with Touch Groups and Cross Events
Multi-touch gestures can be very difficult to program correctly because they require that developers build high-level abstractions from low-level touch events. In this paper, we introduce programming primitives that enable programmers to implement multi-touch gestures in a more understandable way by helping them build these abstractions. Our design of these primitives was guided by a formative study, in which we observed developers’ natural implementations of custom gestures. Touch groups provide summaries of multiple fingers rather than requiring that programmers track them manually. Cross events allow programmers to summarize the movement of one or a group of fingers. We implemented these two primitives in two environments: a declarative programming system and in a standard imperative programming language. We found that these primitives are capable of defining nuanced multi-touch gestures, which we illustrate through a series of examples. Further, in two user evaluations of these programming primitives, we found that multi-touch behaviors implemented in these programming primitives are more understandable than those implemented with standard touch events.
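The primitives themselves are only named in the abstract; as a hedged illustration of the kind of summary a 'touch group' could provide, the Python sketch below aggregates individual finger positions into a centroid-and-spread summary instead of tracking fingers one by one. The names and fields are hypothetical, not the paper's API.

```python
# Illustrative "touch group"-style summary: several fingers collapse into
# one centroid plus a spread value. Not the primitives from the paper.
from dataclasses import dataclass
from math import hypot
from statistics import mean

@dataclass
class TouchGroupSummary:
    centroid: tuple
    spread: float        # mean distance of fingers from the centroid
    finger_count: int

def summarize(fingers: list) -> TouchGroupSummary:
    cx = mean(x for x, _ in fingers)
    cy = mean(y for _, y in fingers)
    spread = mean(hypot(x - cx, y - cy) for x, y in fingers)
    return TouchGroupSummary((cx, cy), spread, len(fingers))

# A pinch, for example, could be detected from how `spread` shrinks between
# successive summaries rather than from raw per-finger events.
print(summarize([(10.0, 10.0), (30.0, 20.0)]))
```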
Witchcraft and HCI: Morality, Modernity, and Postcolonial Computing in Rural Bangladesh
While Human-Computer Interaction (HCI) research on health and well-being is increasingly becoming more aware and inclusive of its social and political dimensions, spiritual practices are still largely overlooked there. For a large number of people around the world, especially in the global south, witchcraft, sorcery, and other occult practices are the primary means of achieving health, wealth, satisfaction, and happiness. Building on an eight-month-long ethnography in six villages in Jessore, Bangladesh, this paper explores the knowledge, materials, and politics involved in the local witchcraft practices there. By drawing from a rich body of anthropological work on witchcraft, this paper discusses how those findings contribute to broader issues in HCI around morality, modernity, and postcolonial computing. This paper concludes by recommending ways to smoothly integrate traditional occult practices with HCI through design and policy. We argue for occult practices as an under-appreciated site for HCI to learn how to combat ideological hegemony.
QuizBot: A Dialogue-based Adaptive Learning System for Factual Knowledge
Advances in conversational AI have the potential to enable more engaging and effective ways to teach factual knowledge. To investigate this hypothesis, we created QuizBot, a dialogue-based agent that helps students learn factual knowledge in science, safety, and English vocabulary. We evaluated QuizBot with 76 students through two within-subject studies against a flashcard app, the traditional medium for learning factual knowledge. Though both systems used the same algorithm for sequencing materials, QuizBot led to students recognizing (and recalling) over 20% more correct answers than when they used the flashcard app. A conversational agent is more time-consuming to practice with, but in a second study, students of their own volition spent 2.6x more time learning with QuizBot than with flashcards and reported strongly preferring it for casual learning. Our results in this second study showed that QuizBot yielded improved learning gains over flashcards on recall. These results suggest that educational chatbot systems may have beneficial uses, particularly for learning outside of traditional settings.
The Adventures of Older Authors: Exploring Futures through Co-Design Fictions
This paper presents co-design fiction as an approach to engaging users in imagining, envisioning and speculating not just on future technology but on future life through co-created fictional works. Design fiction in research is often created or written by researchers. There is relatively little critical discussion of how to co-create design fictions with end-users, with the concomitant opportunities and challenges this poses. To fill this gap in knowledge, we conducted co-design fiction workshops with nine older creative writers, utilising prompts to inspire discussion and engage their imaginative writing about the trend towards tracking and monitoring older people. Their stories revealed futures of neither dystopia nor utopia but of social and moral dilemmas, narrating not just a wish to “maintain their independence” but a palpable desire for adventure and very nuanced senses of how they wish to take control. We discuss inherent tensions in the control of the co-design fiction process; balancing the author’s need for freedom and creativity with the researcher’s desire to guide the process toward the design investigation at hand.
Beyond The Force: Using Quadcopters to Appropriate Objects and the Environment for Haptics in Virtual Reality
Quadcopters have been used as hovering encountered-type haptic devices in virtual reality. We suggest that quadcopters can facilitate rich haptic interactions beyond force feedback by appropriating physical objects and the environment. We present HoverHaptics, an autonomous safe-to-touch quadcopter and its integration with a virtual shopping experience. HoverHaptics highlights three affordances of quadcopters that enable these rich haptic interactions: (1) dynamic positioning of passive haptics, (2) texture mapping, and (3) animating passive props. We identify inherent challenges of hovering encountered-type haptic devices, such as their limited speed, inadequate control accuracy, and safety concerns. We then detail our approach for tackling these challenges, including the use of display techniques, visuo-haptic illusions, and collision avoidance. We conclude by describing a preliminary study (n = 9) to better understand the subjective user experience when interacting with a quadcopter in virtual reality using these techniques.
Using Presence Questionnaires in Virtual Reality
Virtual Reality (VR) is gaining increasing importance in science, education, and entertainment. A fundamental characteristic of VR is creating presence, the experience of ‘being’ or ‘acting’ when physically situated in another place. Measuring presence is vital for VR research and development. It is typically assessed repeatedly through questionnaires completed after leaving a VR scene. Requiring participants to leave and re-enter the VR costs time and can cause disorientation. In this paper, we investigate the effect of completing presence questionnaires directly in VR. Thirty-six participants experienced two immersion levels and completed three standardized presence questionnaires either in the real world or in VR. We found no effect on the questionnaires’ mean scores; however, we found that the variance of those measures significantly depends on the realism of the virtual scene and on whether the subjects had left the VR. The results indicate that, besides reducing a study’s duration and reducing disorientation, completing questionnaires in VR does not change the measured presence but can increase the consistency of the measures.
Understanding the Social Acceptability of Mobile Devices using the Stereotype Content Model
Understanding social perception is important for designing mobile devices that are socially acceptable. Previous work not only investigated the social acceptability of mobile devices and interaction techniques but also provided tools to measure social acceptance. However, we lack a robust model that explains the underlying factors that make devices socially acceptable. In this paper, we consider mobile devices as social objects and investigate if the stereotype content model (SCM) can be applied to those devices. Through a study that assesses combinations of mobile devices and group stereotypes, we show that mobile devices have a systematic effect on the stereotypes’ warmth and competence. Supported by a second study, which combined mobile devices without a specific stereotypical user, our result suggests that mobile devices are perceived stereotypically by themselves. Our combined results highlight mobile devices as social objects and the importance of considering stereotypes when assessing social acceptance of mobile devices.
Anchoring Effects and Troublesome Asymmetric Transfer in Subjective Ratings
Within-subjects experiments are prone to asymmetric transfer, which confounds results interpretation. While HCI researchers routinely test asymmetric transfer in objective data, doing so for subjective data is rare. Yet literature suggests that anchoring effects should make subjective measures particularly susceptible to asymmetric transfer. We report on four analyses of NASA-TLX data from four previously published HCI papers, with four main findings. First, asymmetric transfer is common, occurring in 42% of tests analysed. Second, the data conforms to predictions of anchoring effects. Third, the magnitude of the anchor’s effect correlates with the magnitude of the difference between the interface ratings — that is, the anchor’s ‘pull’ correlates with the anchoring stimulus. Fourth, several of the previously published findings are changed when data are reanalysed using between-subjects treatment. We urge caution when analysing within-subjects subjective measures and recommend that researchers test for and report the occurrence of asymmetric transfer.
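The test procedure itself is not reproduced in the abstract; one common way to check for asymmetric transfer in a two-condition within-subjects design is to test whether the per-participant condition effect differs by presentation order (an order-by-condition interaction), sketched below with synthetic TLX-like ratings.

```python
# Sketch: test for asymmetric transfer by comparing the condition effect
# (rating A minus rating B) across the two presentation orders.
# The ratings are synthetic, not data from the reanalysed papers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 20
effect_a_first = rng.normal(5, 8, n)    # rating(A) - rating(B), A presented first
effect_b_first = rng.normal(12, 8, n)   # rating(A) - rating(B), B presented first

t, p = stats.ttest_ind(effect_a_first, effect_b_first)
print(f"order x condition interaction: t = {t:.2f}, p = {p:.3f}")
# A significant difference suggests asymmetric transfer: the measured
# condition effect depends on which interface anchored participants first.
```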
Aggregated Visualization of Playtesting Data
Playtesting is a key component of the game development process, aimed at improving the quality of games through the collection of gameplay data and the identification of design issues. Visualization techniques are currently being employed to help integrate quantitative and qualitative data. Despite this, two challenges remain: determining the level of detail to present to developers based on their needs, and effectively communicating the collected data so that informed design changes can be made. In this paper, we first propose an aggregated visualization technique that makes use of clustering, territory tessellation, and trajectory aggregation to simultaneously display mixed playtesting data. Secondly, to assess the usefulness of our technique, we evaluate it through interviews with professional game developers and compare it to a non-aggregated visualization. The results of this study also provide an important contribution towards identifying areas of improvement in the portrayal of gameplay data.
Look-From Camera Control for 3D Terrain Maps
We introduce three lightweight interactive camera control techniques for 3D terrain maps on touch devices based on a look-from metaphor (Discrete Look-From-At, Continuous Look-From-Forwards, and Continuous Look-From-Towards). These techniques complement traditional touch screen pan, zoom, rotate, and pitch controls allowing viewers to quickly transition between top-down, oblique, and ground-level views. We present the results of a study in which we asked participants to perform elevation comparison and line-of-sight determination tasks using each technique. Our results highlight how look-from techniques can be integrated on top of current direct manipulation navigation approaches by combining several direct manipulation operations into a single look-from operation. Additionally, they show how look-from techniques help viewers complete a variety of common and challenging map-based tasks.
ZeRONE: Safety Drone with Blade-Free Propulsion
We present ZeRONE, a new indoor drone that does not use rotating blades for propulsion. The proposed device is a helium-blimp-type drone that uses the wind generated by the ultrasonic vibration of piezo elements for propulsion. Compared to normal drones with rotating propellers, the drone is much safer because its only moving parts are the piezo elements, whose surfaces vibrate on the order of micrometers. The drone can float for a few weeks, and the ultrasonic propulsion system is quiet. We implement a prototype of the drone and evaluate its performance and unique characteristics in experiments. Moreover, application scenarios in which ZeRONE coexists with people are also discussed.
Beyond the Patient Portal: Supporting Needs of Hospitalized Patients
Although patient portals (technologies that give patients access to their health information) are recognized as key to increasing patient engagement, we have a limited understanding of how these technologies should be designed to meet the needs of hospitalized patients and caregivers. Through semi-structured interviews with 30 patients and caregivers, we examine how future patient portals can best align with their needs and support engagement in their care. Our findings reveal six needs that existing patient portals do not support: (1) transitioning from home to hospital, (2) adjusting schedules and receiving status updates, (3) understanding and remembering care, (4) asking questions and flagging problems, (5) collaborating with providers and caregivers, and (6) preparing for discharge and at-home care. Based on these findings, we discuss three design implications: highlight patient-centric goals and preferences, provide dynamic information about care events, and design for situationally-impaired users. Our contributions guide future patient portals in engaging hospitalized patients and caregivers as primary stakeholders in their health care.
Can Privacy Be Satisfying?: On Improving Viewer Satisfaction for Privacy-Enhanced Photos Using Aesthetic Transforms
Pervasive photo sharing in online social media platforms can cause unintended privacy violations when elements of an image reveal sensitive information. Prior studies have identified image obfuscation methods (e.g., blurring) to enhance privacy, but many of these methods adversely affect viewers’ satisfaction with the photo, which may cause people to avoid using them. In this paper, we study the novel hypothesis that it may be possible to restore viewers’ satisfaction by ‘boosting’ or enhancing the aesthetics of an obscured image, thereby compensating for the negative effects of a privacy transform. Using a between-subjects online experiment, we studied the effects of three artistic transformations on images that had objects obscured using three popular obfuscation methods validated by prior research. Our findings suggest that using artistic transformations can mitigate some negative effects of obfuscation methods, but more exploration is needed to retain viewer satisfaction.
An Analytic Model for Time Efficient Personal Hierarchies
Hierarchy structures such as file systems are widespread interfaces for item retrieval and selection tasks. Some hierarchies can be modified by end-users, such as application launchers on smartphones or pictures in a file folder. These modifiable hierarchies cannot benefit from an optimization made beforehand, as their content, unknown during the design process, is constantly evolving. We hence propose an analytic model that designers can integrate into their systems to recommend a range of local structure modifications (e.g., creating new folders) to end-users. Proposing a range of modifications gives end-users flexibility regarding their own meaningful grouping and labeling choices when following a recommendation. A first experiment confirms that the recommendations built on our model can lead to modified hierarchies with faster theoretical selection times. A second experiment confirms that the theoretical selection times fit empirical selection times in different hierarchy visual layouts: linear, radial, and grid.
Geppetto: Enabling Semantic Design of Expressive Robot Behaviors
Expressive robots are useful in many contexts, from industrial to entertainment applications. However, designing expressive robot behaviors requires editing a large number of unintuitive control parameters. We present an interactive, data-driven system that allows editing of these complex parameters in a semantic space. Our system combines a physics-based simulation that captures the robot’s motion capabilities with a crowd-powered framework that extracts relationships between the robot’s motion parameters and the desired semantic behavior. These relationships enable mixed-initiative exploration of possible robot motions. We specifically demonstrate our system in the context of designing emotionally expressive behaviors. A user study finds the system to be useful for developing desirable robot behaviors more quickly than manual parameter editing.
Personal Health Oracle: Explorations of Personalized Predictions in Diabetes Self-Management
The increasing availability of health data and knowledge about computationally modeling human physiology opens new opportunities for personalized predictions in health. Yet little is known about how individuals interact and reason with personalized predictions. To explore these questions, we developed a smartphone app, GlucOracle, that uses self-tracking data of individuals with type 2 diabetes to generate personalized forecasts for post-meal blood glucose levels. We pilot-tested GlucOracle with two populations: members of an online diabetes community, knowledgeable about diabetes and technologically savvy; and individuals from a low socio-economic status community, characterized by high prevalence of diabetes, low literacy and limited experience with mobile apps. Individuals in both communities engaged with personal glucose forecasts and found them useful for adjusting immediate meal options, and planning future meals. However, the study raised new questions as to appropriate time, form, and focus of forecasts and suggested new research directions for personalized predictions in health.
Exploring Factors that Influence Connected Drivers to (Not) Use or Follow Recommended Optimal Routes
Navigation applications are becoming ubiquitous in our daily travel. Intended to circumnavigate congested roads, their route guidance follows the basic assumption that drivers always want the fastest route. However, it is unclear how their recommendations are followed and what factors affect their adoption. We present the results of a semi-structured qualitative study with 17 drivers, mostly from the Philippines and Japan. We recorded their daily commutes and occasional trips, and inquired into their navigation practices, route choices and on-the-fly decision-making. We found that while drivers choose a recommended route in urgent situations, many still preferred to follow familiar routes. Drivers deviated because of a recommendation’s use of unfamiliar roads, its lack of local context, perceived unsuitability for driving, and inconsistencies with their actual navigation experiences. Our findings and implications emphasize drivers’ personalization needs, and how the right amount of algorithmic sophistication can encourage behavioral adaptation.
"I Bought This for Me to Look More Ordinary": A Study of Blind People Doing Online Shopping
Online shopping, by reducing the need to travel, has become an essential part of life for people with visual impairments. However, HCI research on online shopping for this population has been limited to the analysis of accessibility and usability issues. To develop a broader and better understanding of how visually impaired people shop online, and to design accordingly, we conducted a qualitative study with twenty blind people. Our study highlighted that blind people’s desire to be treated as ordinary significantly shaped their online shopping practices: they were very attentive to the visual appearance of goods even though they themselves could not see it, and took great pains to find and learn which commodities were visually appropriate for them. This paper reports how this effort to appear ordinary is manifested in online shopping and suggests design implications to support these practices.
Touchscreen Haptic Augmentation Effects on Tapping, Drag and Drop, and Path Following
We study the effects of haptic augmentation on tapping, path following, and drag & drop tasks on a recent flagship smartphone with refined touch sensing and haptic actuator technologies. Results show that actuated haptic confirmation on tapping targets was subjectively appreciated by some users but did not improve tapping speed or accuracy. For drag & drop, a clear performance improvement was measured when haptic feedback was applied to target boundary crossing, particularly when the targets were small. For path following tasks, virtual haptic feedback improved accuracy at a reduced speed in a sitting condition. Stronger results were achieved in a physical haptic mock-up. Overall, we found actuated touchscreen haptic feedback particularly effective when the touched object was visually occluded by the finger. Participants’ subjective experience of haptic feedback in all tasks tended to be more positive than their time or accuracy performance suggests. We compare and discuss these findings with previous results on early generations of devices. The work provides an empirical foundation for product design and future research on touch input and haptic systems.
Opportunities for Automating Email Processing: A Need-Finding Study
Email management consumes significant effort from senders and recipients. Some of this work might be automatable. We performed a mixed-methods need-finding study to learn: (i) what sort of automatic email handling users want, and (ii) what kinds of information and computation are needed to support that automation. Our investigation included a design workshop to identify categories of needs, a survey to better understand those categories, and a classification of existing email automation software to determine which needs have been addressed. Our results highlight the need for: a richer data model for rules, more ways to manage attention, leveraging internal and external email context, complex processing such as response aggregation, and affordances for senders. To further investigate our findings, we developed a platform for authoring small scripts over a user’s inbox. Of the automations found in our studies, half are impossible in popular email clients, motivating new design directions.
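The authoring platform itself is not included here; as an illustration of the kind of small inbox script the study envisions, the sketch below uses Python's standard imaplib to flag unread mail from a particular sender. The host, credentials, and rule are placeholders, and the example is not the authors' platform.

```python
# Sketch: a small inbox-automation rule using only the standard library.
# Host, credentials, sender, and the action taken are placeholders.
import email
import imaplib

HOST, USER, PASSWORD = "imap.example.com", "user@example.com", "app-password"

with imaplib.IMAP4_SSL(HOST) as box:
    box.login(USER, PASSWORD)
    box.select("INBOX")
    # Rule: find unread messages from a given sender.
    _, data = box.search(None, '(UNSEEN FROM "alerts@example.com")')
    for num in data[0].split():
        _, msg_data = box.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        print("Would act on:", msg.get("Subject"))
        box.store(num, "+FLAGS", "\\Flagged")   # e.g., flag for attention
```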
Examining Augmented Virtuality Impairment Simulation for Mobile App Accessibility Design
With mobile apps rapidly permeating all aspects of daily living and being used by all segments of the population, it is crucial to support the evaluation of app usability for specific impaired users in order to improve app accessibility. In this work, we examine the effects of using our augmented virtuality impairment simulation system, Empath-D, to support experienced designer-developers in redesigning a mockup of a commonly used mobile application for cataract-impaired users, comparing this with existing tools that aid designing for accessibility. We show that the use of augmented virtuality for assessing usability supports enhanced usability challenge identification, finding more defects and doing so more accurately than with existing methods. Through our user interviews, we also show that augmented virtuality impairment simulation supports realistic interaction and evaluation, providing a concrete understanding of the usability challenges that impaired users face, and complements existing guidelines-based approaches meant for general accessibility.
Accessible Gesture Typing for Non-Visual Text Entry on Smartphones
Gesture typing, entering a word by gliding the finger sequentially from letter to letter, has been widely supported on smartphones for sighted users. However, this input paradigm is currently inaccessible to blind users: it is difficult to draw shape gestures on a virtual keyboard without access to key visuals. This paper describes the design of accessible gesture typing, bringing this input paradigm to blind users. To help blind users locate keys, the design incorporates the familiar screen-reader-supported touch exploration that narrates the keys as the user drags a finger across the keyboard. The design allows users to seamlessly switch between exploration and gesture-typing modes by simply lifting the finger. During word-shape construction, continuous audio feedback similar to that of touch exploration helps the user glide in the right direction toward the key locations constituting the word. Exploration mode resumes once the word shape is completed. Distinct earcons help distinguish gesture-typing mode from touch-exploration mode, thereby avoiding unintended mix-ups. A user study with 14 blind people shows a 35% increase in typing speed, indicative of the promise and potential of gesture typing for non-visual text entry.
Gabber: Supporting Voice in Participatory Qualitative Practices
We describe the iterative design, development and learning process we undertook to produce Gabber, a digital platform that aims to support distributed capture of spoken interviews and discussions, and their qualitative analysis. Our aim is to reduce both expertise and cost barriers associated with existing technologies, making the process more inclusive. Gabber structures distributed audio data capture, facilitates participatory sensemaking, and supports collaborative reuse of audio. We describe our design and development journey across three distinct field trials over a two-year period. Reflecting on the iterative design process, we offer insights into the challenges faced by non-experts throughout their qualitative practices, and provide guidance for researchers designing systems to support engagement in these practices.
Voice User Interfaces in Schools: Co-designing for Inclusion with Visually-Impaired and Sighted Pupils
Voice user interfaces (VUIs) are increasingly popular, particularly in homes. However, little research has investigated their potential in other settings, such as schools. We investigated how VUIs could support inclusive education, particularly for pupils with visual impairments (VIs). We organised focused discussions with educators at a school, with support staff from local authorities and, through bodystorming, with a class of 27 pupils. We then ran a series of co-design workshops with participants with mixed-visual abilities to design an educational VUI application. This provided insights into challenges faced by pupils with VIs in mainstream schools, and opened a space for educators, sighted and visually impaired pupils to reflect on and design for their shared learning experiences through VUIs. We present scenarios, a design space and an example application that show novel ways of using VUIs for inclusive education. We also reflect on co-designing with mixed-visual-ability groups in this space.
Programmable Donations: Exploring Escrow-Based Conditional Giving
This paper reports on a co-speculative interview study with charitable donors to explore the future of programmable, conditional and data-driven donations. Responding to the rapid emergence of blockchain-based and AI-supported financial technologies, we specifically examine the potential of automated, third-party ‘escrows’, where donations are held before they are released or returned based on specified rules and conditions. To explore this, we conducted pilot workshops with 9 participants and an interview study in which 14 further participants were asked about their experiences of donating money, and invited to co-speculate on a service for programmable giving. The study elicited how data-driven conditionality and automation could be leveraged to create novel donor experiences; however, it also illustrated the inherent tensions and challenges involved in giving programmatically. Reflecting on these findings, our paper contributes implications both for the design of programmable aid platforms, and for the design of escrow-based financial services in general.
Like A Second Skin: Understanding How Epidermal Devices Affect Human Tactile Perception
The emerging class of epidermal devices opens up new opportunities for skin-based sensing, computing, and interaction. Future design of these devices requires an understanding of how skin-worn devices affect the natural tactile perception. In this study, we approach this research challenge by proposing a novel classification system for epidermal devices based on flexural rigidity and by testing advanced adhesive materials, including tattoo paper and thin films of poly (dimethylsiloxane) (PDMS). We report on the results of three psychophysical experiments that investigated the effect of epidermal devices of different rigidity on passive and active tactile perception. We analyzed human tactile sensitivity thresholds, two-point discrimination thresholds, and roughness discrimination abilities on three different body locations (fingertip, hand, forearm). Generally, a correlation was found between device rigidity and tactile sensitivity thresholds as well as roughness discrimination ability. Surprisingly, thin epidermal devices based on PDMS with a hundred times the rigidity of commonly used tattoo paper resulted in comparable levels of tactile acuity. The material offers the benefit of increased robustness against wear and the option to re-use the device. Based on our findings, we derive design recommendations for epidermal devices that combine tactile perception with device robustness.
Exploring Crowdsourced Work in Low-Resource Settings
While researchers have studied the benefits and hazards of crowdsourcing for diverse classes of workers, most work has focused on those having high familiarity with both computers and English. We explore whether paid crowdsourcing can be inclusive of individuals in rural India, who are relatively new to digital devices and literate mainly in local languages. We built an Android application to measure the accuracy with which participants can digitize handwritten Marathi/Hindi words. The tasks were based on the real-world need for digitizing handwritten Devanagari script documents. Results from a two-week, mixed-methods study show that participants achieved 96.7% accuracy in digitizing handwritten words on low-end smartphones. A crowdsourcing platform that employs these users performs comparably to a professional transcription firm. Participants showed overwhelming enthusiasm for completing tasks, so much so that we recommend imposing limits to prevent overuse of the application. We discuss the implications of these results for crowdsourcing in low-resource areas.
Using Both Hands: Tangibles for Stroke Rehabilitation in the Home
Stroke is one of the most common causes of long-term disability in the world, significantly reducing quality of life through impaired motor functions and cognitive abilities. Whilst rehabilitation exercises can help in the recovery of motor function, stroke survivors rarely exercise enough, leading to far from optimal recovery. In this paper, we investigate how upper limb stroke rehabilitation can be supported using interactive tangible bimanual devices in the home. We customise the rehabilitation activities based on the individual rehabilitation requirements and motivation of stroke survivors. Through evaluation with five stroke survivors, we uncovered insights into how tangible stroke rehabilitation systems for the home should be designed. The evaluation revealed the particular importance of a tailorable form factor and of supporting self-awareness and grip exercises in order to increase stroke survivors’ independence in carrying out activities of daily living.
Barriers to End-User Designers of Augmented Fabrication
Augmented fabrication is the practice of designing and fabricating an artifact to work with existing objects. Although common both in the wild and as an area for research tools, little is known about how novices approach the task of designing under the constraints of interfacing with real-world objects. In this paper, we report the results of a study of fifteen novice end users in an augmented fabrication design task. We discuss obstacles encountered in four contexts: capturing information about physical objects, transferring information to 3D modeling software, digitally modeling a new object, and evaluating whether the new object will work when fabricated. Based on our findings, we suggest how future tools can better support augmented fabrication in each of these contexts.
Bookly: An Interactive Everyday Artifact Showing the Time of Physically Accumulated Reading Activity
We introduce Bookly, an interactive artifact that physically represents the accumulated time of users’ reading activity through abstract volumetric changes. Bookly accumulates the time of actions (e.g., picking up and putting down books) that users perform for reading and provides a designated space for the book currently being read. The results of our 2-week in-field study with six participants showed that continuous exposure to volumetric changes representing the accumulated time of reading activities helped the users to understand their unsettled reading patterns. Bookly also motivated the users to improve their reading behavior by gradually making reading part of their schedules. Additionally, clearly distinguishing the book currently being read improved its visual affordance and made it easier for users to start reading. Based on the findings, we confirmed the potential of making intangible data physical for self-reflection, in order to support behavior changes that are otherwise difficult to make due to weak motivation.
Empowering Expression for Users with Aphasia through Constrained Creativity
Creative activities allow people to express themselves in rich, nuanced ways. However, being creative does not always come easily. For example, people with speech and language impairments, such as aphasia, face challenges in creative activities that involve language. In this paper, we explore the concept of constrained creativity as a way of addressing this challenge and enabling creative writing. We report on an app, MakeWrite, that supports the constrained creation of digital texts through automated redaction. The app was co-designed with and for people with aphasia and was subsequently explored in a workshop with a group of people with aphasia. Participants were not only successful in crafting novel language, but, importantly, self-reported that the app was crucial in enabling them to do so. We reflect on the potential of technology-supported constrained creativity as a means of empowering expression amongst users with diverse needs.
The Race Towards Digital Wellbeing: Issues and Opportunities
As smartphone use increases dramatically, so do studies about technology overuse. Many different mobile apps for breaking “smartphone addiction” and achieving “digital wellbeing” are available. However, it is still not clear whether and how such solutions work. Which functionality do they have? Are they effective and appreciated? Do they have a relevant impact on users’ behavior? To answer these questions, (i) we reviewed the features of 42 digital wellbeing apps, (ii) we performed a thematic analysis on 1,128 user reviews of such apps, and (iii) we conducted a 3-week-long in-the-wild study of Socialize, an app that includes the most common digital wellbeing features, with 38 participants. We discovered that digital wellbeing apps are appreciated and useful for some specific situations. However, they do not promote the formation of new habits and they are perceived as not restrictive enough, thus not effectively helping users to change their behavior with smartphones.
Autonomous Distributed Energy Systems: Problematising the Invisible through Design, Drama and Deliberation
Technologies such as blockchains, smart contracts and programmable batteries facilitate emerging models of energy distribution, trade and consumption, and generate a considerable number of opportunities for energy markets. However, these developments complicate relationships between stakeholders, disrupting traditional notions of value, control and ownership. Discussing these issues with the public is particularly challenging as energy consumption habits often obscure the competing values and interests that shape stakeholders’ relationships. To make such difficult discussions more approachable and examine the missing relational aspect of autonomous energy systems, we combined the design of speculative hairdryers with performance and deliberation. This integrated method of inquiry makes visible the competing values and interests, eliciting people’s wishes to negotiate these terms. We argue that the complexity of mediated energy distribution and its convoluted stakeholder relationships requires more sophisticated methods of inquiry to engage people in debates concerning distributed energy systems.
Empowering End Users in Debugging Trigger-Action Rules
End users can program trigger-action rules to personalize the joint behavior of their smart devices and online services. Trigger-action programming is, however, a complex task for non-programmers, and errors made during the composition of rules may lead to unpredictable behaviors and security issues, e.g., a lamp that is continuously flashing or a door that is unexpectedly unlocked. In this paper, we introduce EUDebug, a system that enables end users to debug trigger-action rules. With EUDebug, users compose rules in a web-based application like IFTTT. EUDebug highlights possible problems that the set of all defined rules may generate and allows their step-by-step simulation. Under the hood, a hybrid Semantic Colored Petri Net (SCPN) models, checks, and simulates trigger-action rules and their interactions. An exploratory study with 15 end users shows that EUDebug helps users identify and understand problems in trigger-action rules that are not easily discoverable in existing platforms.
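To make the rule format concrete, the following minimal Python sketch (ours, not EUDebug’s implementation; all rule and event names are hypothetical) shows a trigger-action rule as a simple data structure and one kind of rule interaction, a trigger loop, that step-by-step simulation can surface:

    from dataclasses import dataclass

    @dataclass
    class Rule:
        trigger: str      # event that fires the rule, e.g. "lamp_turned_on"
        condition: str    # guard, e.g. "always" or "time_after_23:00"
        action: str       # event produced by the rule, e.g. "lamp_turned_off"

    def find_loops(rules):
        """Flag rule pairs whose action re-fires another rule's trigger,
        a common source of unintended behaviour (e.g. a flashing lamp)."""
        problems = []
        for a in rules:
            for b in rules:
                if a is not b and a.action == b.trigger:
                    problems.append((a, b))
        return problems

    rules = [
        Rule("lamp_turned_on", "always", "lamp_turned_off"),
        Rule("lamp_turned_off", "always", "lamp_turned_on"),
    ]
    print(find_loops(rules))  # the two rules chain into an endless on/off loop

Running the sketch flags the two lamp rules, which chain into the continuously flashing lamp mentioned in the abstract.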
Mapping the Landscape of Creativity Support Tools in HCI
Creativity Support Tools (CSTs) play a fundamental role in the study of creativity in Human-Computer Interaction (HCI). Even so, there is no consensus definition of the term ‘CST’ in HCI, and in most studies, CSTs have been construed as one-off exploratory prototypes, typically built by the researchers themselves. This makes it difficult not only to clearly demarcate CST research, but also to compare findings across studies, which impedes advancement in digital creativity as a growing field of research. Based on a literature review of 143 papers from the ACM Digital Library (1999-2018), we contribute a first overview of the key characteristics of CSTs developed by the HCI community. Moreover, we propose a tentative definition of a CST to help strengthen knowledge sharing across CST studies. We end by discussing our study’s implications for future HCI research on CSTs and digital creativity.
Follow the Money: Managing Personal Finance Digitally
The move towards digital payments and mobile money, and away from physical cash and banking services, offers users opportunities to change the ways that they can spend, save and manage their money through a variety of personal financial management services. However, set against ordinary, everyday patterns of spending, saving and other forms of financial transaction, it is not clear how users might interact with, understand, or value financial management services that utilise rich data and connected digital content for their personal use. In order to explore how people might engage with such systems, we conducted a study of financial activity, following people’s transactional activity over time, and interviewing them about their practices, understandings, needs, concerns and expectations of current and future financial technologies. Drawing from the everyday activities and practices observed, we identify implications for the design of digitally enabled, personal financial systems.
Crowdworker Economics in the Gig Economy
The nature of work is changing. As labor increasingly trends toward casual work in the emerging gig economy, understanding the broader economic context is crucial to effective engagement with a contingent workforce. Crowdsourcing represents an early manifestation of this fluid, laissez-faire, on-demand workforce. This work analyzes the results of four large-scale surveys of US-based Amazon Mechanical Turk workers recorded over a six-year period, providing measures comparable to national statistics. Our results show that despite unemployment far higher than national levels, crowdworkers are seeing positive shifts in employment status and household income. Our most recent surveys indicate a trend away from full-time-equivalent crowdwork, coupled with a reduction in estimated poverty levels to below national figures. These trends are indicative of an increasingly flexible workforce, able to maximize their opportunities in a rapidly changing national labor market, which may have material impacts on existing models of crowdworker behavior.
Why Do You Need This?: Selective Disclosure of Data Among Citizen Scientists
Recent scandals involving data from participatory research have contributed to broader public concern about online privacy. Such concerns might make people more reluctant to participate in research that asks them to volunteer personal data, compromising many researchers’ data collection. We tested several motivational messages that encouraged participation in a citizen science project. We measured people’s willingness to disclose personal information. While participants were less likely to share sensitive data than neutral data, disclosure behaviour was not affected by attitudes to privacy. Importantly, we found that citizen scientists who were exposed to a motivational message that emphasised ‘learning’ were more likely to share sensitive information than those presented with other types of motivational cues. Our results suggest that priming individuals with motivational messages can increase their willingness to contribute personal data to a project, even if the request pertains to sensitive information.
Frame Analysis of Voice Interaction Gameplay
Voice control is an increasingly common feature of digital games, but the experience of playing with voice control is often hampered by feelings of embarrassment and dissonance. Past research has recognised these tensions, but has not offered a general model of how they arise and how players respond to them. In this study, we use Erving Goffman’s frame analysis, as adapted to the study of games by Conway and Trevillian, to understand the social experience of playing games by voice. Based on 24 interviews with participants who played voice-controlled games in a social setting, we put forward a frame analytic model of gameplay as a social event, along with seven themes that describe how voice interaction enhances or disrupts the player experience. Our results demonstrate the utility of frame analysis for understanding social dissonance in voice interaction gameplay, and point to practical considerations for designers to improve engagement with voice-controlled games.
A Tale of Two Perspectives: A Conceptual Framework of User Expectations and Experiences of Instructional Fitness Apps
We present a conceptual framework grounded in both users’ reviews and HCI theories, residing between practices and theories as a form of intermediate-level knowledge in interaction design. Previous research has examined different forms of intermediary knowledge such as conceptual structures, strong concepts, and bridging concepts. Within HCI, these forms are generic and arise either from theories or from particular instances. In this work, we created and evaluated a conceptual framework for a specific domain (instructional fitness apps). We first extracted the particular instances using users’ online reviews and conceptualised them as an expectations and experiences framework. Second, within the framework, we evaluated the artefact-related constructs using Norman’s design principles. Third, we evaluated beyond the artefact-related constructs using distributed cognition theory. We present an analysis of such intermediate-level knowledge with the aim of informing future designs.
PicMe: Interactive Visual Guidance for Taking Requested Photo Composition
PicMe is a mobile application that provides interactive on-screen guidance to help the user take pictures with a composition that another person requests. Once the requester captures a picture of the desired composition and delivers it to the user (photographer), a 2.5D guidance system, called the virtual frame, guides the user in real time by showing a three-dimensional composition of the target image (i.e., size and shape). In addition, based on the matching accuracy rate, we provide a small target image in an inset window as feedback, along with edge visualization for finer alignment of the detailed elements. We implemented PicMe to work fully in mobile environments. We then conducted a preliminary user study to evaluate the effectiveness of PicMe compared to traditional 2D guidance methods. The results show that PicMe helps users reach their target images more accurately and quickly by giving participants more confidence in their tasks.
Power Struggles and Disciplined Designers – A Nexus Analytic Inquiry on Cross-Disciplinary Research and Design
Design is at the heart of Human-Computer Interaction research and practice. In the research community, there is increasing interest in understanding and conceptualizing our research practice, particularly research that entails design. However, reflective discussion around the associated challenges and practicalities is still limited. Moreover, there is so far limited discussion of the cross-disciplinary nature of our research and design practices: although cross-disciplinarity has been brought up as an ideal and a necessity, its practicalities and complexities remain poorly explored. This study examines a cross-disciplinary research project in which researcher-designers representing different disciplines acted as ‘designers’ while holding divergent understandings of design and of who has the authority to do it. The study relies on nexus analysis as a sensitizing device and shows how various discourses, epistemologies and histories shape cross-disciplinary research and design. Critical reflection around our research practice entailing design is called for.
Evaluating Sustainable Interaction Design of Digital Services: The Case of YouTube
Recent research has advocated for a broader conception of evaluation for Sustainable HCI (SHCI), using interdisciplinary insights and methods. In this paper, we put this into practice to conduct an evaluation of Sustainable Interaction Design (SID) of digital services. We explore how SID can contribute to corporate greenhouse gas (GHG) reduction strategies. We show how a Digital Service Provider (DSP) might incorporate SID into their design process and quantitatively evaluate a specific SID intervention by combining user analytics data with environmental life cycle assessment. We illustrate this by considering YouTube. Replacing user analytics data with aggregate estimates from publicly available sources, we estimate emissions associated with the deployment of YouTube to be approximately 10 MtCO2e p.a. We estimate emissions reductions enabled through the use of an SID intervention from prior literature to be approximately 300 ktCO2e p.a., and demonstrate that this is significant when considered alongside other emissions reduction interventions used by DSPs.
Affinity Lens: Data-Assisted Affinity Diagramming with Augmented Reality
Despite the availability of software to support Affinity Diagramming (AD), practitioners still largely favor physical sticky notes. Physical notes are easy to set up, can be moved around in space and offer flexibility when clustering unstructured data. However, when working with mixed data sources such as surveys, designers often trade off the physicality of notes for analytical power. We propose Affinity Lens, a mobile augmented reality (AR) application for Data-Assisted Affinity Diagramming (DAAD). Our application provides just-in-time quantitative insights overlaid on physical notes. Affinity Lens uses several different types of AR overlays (called lenses) to help users find specific notes, cluster information, and summarize insights from clusters. Through a formative study of AD users, we developed design principles for data-assisted AD and an initial collection of lenses. Based on our prototype, we find that Affinity Lens supports easy switching between qualitative and quantitative ‘views’ of data, without surrendering the lightweight benefits of existing AD practice.
Managing Multimorbidity: Identifying Design Requirements for a Digital Self-Management Tool to Support Older Adults with Multiple Chronic Conditions
Older adults with multiple chronic conditions (multimorbidity) face complex self-management routines, including symptom monitoring, managing multiple medications, coordinating healthcare visits, communicating with multiple healthcare providers, and processing and managing potentially conflicting advice on conditions. While much research exists on single disease management, little, if any, research has explored the topic of technology to support those with multimorbidity, particularly older adults, to self-manage with support from a care network. This paper describes a large qualitative study with 125 participants, including older adults with multimorbidity and those who care for them, across two European countries. Key findings relate to the impact of multimorbidity, the complexities involved in self-management, motivators and barriers to self-management, sources of support, and poor communication as a barrier to care coordination. We present important concepts and design features for a digital health system that aim to address requirements derived from this study.
The Performative Mirror Space
Interactive mirrors, typically combining semi-transparent mirrors, digital screens and interaction mechanisms, have been developed for a variety of application areas. Drawing on existing techniques to create interactive mirror spaces, we investigated their performative qualities through artistic discovery and collaborative prototyping. We document a linked set of design explorations and two public, site-specific experiences that brought together artists, communities, and HCI researchers. We illustrate the abstracted interactive mirror space that practitioners in the performance art, theatre and museum sectors can work with. In turn, we also discuss six performative design strategies concerning the use of physical context, movement and narrative that HCI researchers who wish to deploy interactive mirrors in more mainstream settings need to consider.
The Inflatable Cat: Idiosyncratic Ideation of Smart Objects for the Home
Research on product experience has a history of investigating the sensory and emotional qualities of interacting with objects. However, this notion has not been fully expanded to the design space of co-designing smart objects. In this paper, we report on findings from a series of co-design workshops where we used the toolkit Loaded Dice in conjunction with a card set that aimed to support participants in reflecting on the sensory qualities of domestic smart objects. We synthesize and interpret findings from our study to help illustrate how the workshops supported co-designers in creatively ideating concepts for emotionally valuable smart objects that better connect personal inputs with the output of smart objects. Our work contributes a case example of how a co-design approach that emphasizes situated sensory exploration can be effective in enabling co-designers to ideate concepts of idiosyncratic smart objects that closely relate to the characteristics of their domestic living situations.
Grasping Microgestures: Eliciting Single-hand Microgestures for Handheld Objects
Single-hand microgestures have been recognized for their potential to support direct and subtle interactions. While pioneering work has investigated sensing techniques and presented first sets of intuitive gestures, we still lack a systematic understanding of the complex relationship between microgestures and various types of grasps. This paper presents results from a user elicitation study of microgestures that are performed while the user is holding an object. We present an analysis of over 2,400 microgestures performed by 20 participants, using six different types of grasp and a total of 12 representative handheld objects of varied geometries and sizes. We expand the existing elicitation method by proposing statistical clustering of the elicited gestures. We contribute detailed results on how grasps and object geometries affect single-hand microgestures, preferred locations, and fingers used. We also present consolidated gesture sets for different grasps and object sizes. From our findings, we derive recommendations for the design of microgestures compatible with a large variety of handheld objects.
AutoFritz: Autocomplete for Prototyping Virtual Breadboard Circuits
We propose autocomplete for the design and development of virtual breadboard circuits using software prototyping tools. With our system, a user inserts a component into the virtual breadboard, and it automatically provides the user with a list of suggested components. These suggestions complete or extend the electronic functionality of the inserted component to save the user’s time and reduce circuit error. To demonstrate the effectiveness of autocomplete, we implemented our system on Fritzing, a popular open source breadboard circuit prototyping software used by novice makers. Our autocomplete suggestions were implemented based upon schematics from datasheets for standard components, as well as how components are used together in over 4000 circuit projects from the Fritzing community. We report the results of a controlled study with 16 participants, evaluating the effectiveness of autocomplete in the creation of virtual breadboard circuits, and conclude by sharing insights and directions for future research.
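To illustrate the co-occurrence idea behind such suggestions, the small Python sketch below (our own illustration with made-up project data; AutoFritz’s actual model and corpus are described in the paper) ranks components by how often they appear together with the one just placed:

    from collections import Counter

    # Hypothetical corpus: each past project is a set of component names.
    projects = [
        {"ATmega328", "16MHz crystal", "10k resistor", "LED"},
        {"ATmega328", "16MHz crystal", "100nF capacitor"},
        {"LED", "220R resistor", "push button"},
    ]

    def suggest(placed, projects, k=3):
        """Suggest the k components most frequently co-occurring with `placed`."""
        counts = Counter()
        for parts in projects:
            if placed in parts:
                counts.update(parts - {placed})
        return [part for part, _ in counts.most_common(k)]

    print(suggest("ATmega328", projects))  # '16MHz crystal' ranks first here

A real system would additionally draw on datasheet schematics, as the abstract notes, rather than on co-occurrence counts alone.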
Printer Pals: Experience-Centered Design to Support Agency for People with Dementia
Whereas there have been significant improvements in the quality of care provided for people with dementia, limited attention to enabling people with dementia to make positive social contributions within care home contexts can restrict their sense of agency. In this paper we describe the design and deployment of ‘Printer Pals’, a receipt-based print media device which encourages social contribution and agency within a care home environment. The design followed a two-year ethnography, from which the need for highlighting participation and supporting agency for residents within the care home became clear. The residents’ use of Printer Pals mediated participation in a number of different ways, such as engaging with the technology itself, offering shared experiences and participating in co-constructive and meaningful ways, each of which is discussed. We conclude with a series of design considerations to support agentic and caring interactions through inclusive design practices.
Using Time and Space Efficiently in Driverless Cars: Findings of a Co-Design Study
The alternative use of travel time is a widely discussed benefit of driverless cars. We therefore conducted 14 co-design sessions to examine how people manage their time, to determine how they perceive the value of time in driverless cars, and to derive design implications. Our findings suggest that driverless mobility will affect people’s use of travel time and their time management in general. The participants repeatedly stated the desire to complete tasks while traveling in order to save time for activities that are normally neglected in everyday life. Using travel time efficiently requires using car space efficiently. We found that the design concept of tiny houses could serve as a common design pattern to deal with the limited space within cars and support diverse needs.
Expression of Curiosity in Social Robots: Design, Perception, and Effects on Behaviour
Curiosity, the intrinsic desire for new information, can enhance learning, memory, and exploration. Therefore, understanding how to elicit curiosity can inform the design of educational technologies. In this work, we investigate how a social peer robot’s verbal expression of curiosity is perceived, whether it can affect the emotional feeling and behavioural expression of curiosity in students, and how it impacts learning. In a between-subjects experiment, 30 participants played LinkIt!, a game we designed for teaching rock classification, with a robot verbally expressing: curiosity, curiosity plus rationale, or no curiosity. Results indicate that participants could recognize the robot’s curiosity and that curious robots produced both emotional and behavioural curiosity contagion effects in participants.
Understanding and Mitigating Worker Biases in the Crowdsourced Collection of Subjective Judgments
Crowdsourced data acquired from tasks that comprise a subjective component (e.g. opinion detection, sentiment analysis) is potentially affected by the inherent bias of crowd workers who contribute to the tasks. This can lead to biased and noisy ground-truth data, propagating the undesirable bias and noise when used in turn to train machine learning models or evaluate systems. In this work, we aim to understand the influence of workers’ own opinions on their performance in the subjective task of bias detection. We analyze the influence of workers’ opinions on their annotations corresponding to different topics. Our findings reveal that workers with strong opinions tend to produce biased annotations. We show that such bias can be mitigated to improve the overall quality of the data collected. Experienced crowd workers also fail to distance themselves from their own opinions to provide unbiased annotations.
Magnetips: Combining Fingertip Tracking and Haptic Feedback for Around-Device Interaction
Around-device interaction methods expand the available interaction space for mobile devices; however, there is currently no way to simultaneously track a user’s input and provide haptic feedback at the tracked point away from the device. We present Magnetips, a simple, mobile solution for around-device tracking and mid-air haptic feedback. Magnetips combines magnetic tracking and electromagnetic feedback that works regardless of visual occlusion, through most common materials, and at a size that allows for integration with mobile devices. We demonstrate: (1) high-frequency around-device tracking and haptic feedback; (2) the accuracy and range of our tracking solution, which corrects for the effects of geomagnetism, necessary for enabling mobile use; and (3) guidelines for maximising the strength of haptic feedback, given a desired tracking frequency. We present technical and usability evaluations of our prototype, and demonstrate four example applications of its use.
Measuring the Influences of Musical Parameters on Cognitive and Behavioral Responses to Audio Notifications Using EEG and Large-scale Online Studies
Prior studies have evaluated various designs for audio notifications. However, calls for more in-depth research on how such notifications work, especially at the level of users’ cognitive states, have gone unanswered, and studies evaluating audio notifications with large numbers of participants in multiple environments have been rare. We conducted an electroencephalography study (N=20) and an online study (N=967) to enhance understanding of how three musical parameters – melody (simple, complex), pitch (high, low), and tempo (fast, slow) – influence users’ cognition and behaviors. Eight different notifications combined these parameters. The online study analyzed the effects of user-specific and environmental information on users’ behaviors while they listened to these notifications. The results revealed that tempo and pitch had the main effects on the speed and strength (accuracy) of users’ cognition and behaviors. The users’ characteristics and environments influenced the effects of these musical parameters.
Design Considerations for Interactive Office Lighting: Interface Characteristics, Shared and Hybrid Control
The inclusion of IoT in office lighting allows people to have personal lighting control at their workplace. To design lighting control interfaces that fit people’s everyday lives, we need a better understanding of how people experience lighting interaction in the real world. Still, lighting control is often explored in controlled settings. This work presents a qualitative field study concerning the user experience of two control interfaces for a state-of-the-art lighting system of 400+ luminaires in a real-life office. Over ten weeks, 43 people interacted with the system 3,937 times. The findings illustrate the effects of using a smartphone for lighting control, how people experience lighting control in shared situations, and issues with automatic system behavior. We define design considerations for interface characteristics, shared control, and hybrid control. The work contributes to making the potential benefits of interactive office lighting a reality.
Will You Accept an Imperfect AI?: Exploring Designs for Adjusting End-user Expectations of AI Systems
AI technologies have been incorporated into many end-user applications. However, expectations of the capabilities of such systems vary among people. Furthermore, inflated expectations have been identified as negatively affecting the perception and acceptance of such systems. Although the intelligibility of ML algorithms has been well studied, there has been little work on methods for setting appropriate expectations before the initial use of an AI-based system. In this work, we use a Scheduling Assistant – an AI system for automated meeting request detection in free-text email – to study the impact of several methods of expectation setting. We explore two versions of this system with the same 50% accuracy of the AI component, but each designed with a different focus on the types of errors to avoid (False Positives vs. False Negatives). We show that this difference in focus can lead to vastly different subjective perceptions of accuracy and acceptance. Further, we design expectation adjustment techniques that prepare users for AI imperfections and result in a significant increase in acceptance.
Voice-Based Quizzes for Measuring Knowledge Retention in Under-Connected Populations
Information dissemination using automated phone calls allows reaching low-literate and tech-naive populations. Open challenges include rapid verification of expected knowledge gaps in the community, dissemination of specific information to address these gaps, and follow-up measurement of knowledge retention. We report Sawaal, a voice-based telephone service that uses audio-quizzes to address these challenges. Sawaal allows its open community of users to post and attempt multiple-choice questions and to vote and comment on them. Sawaal spreads virally as users challenge friends to quiz competitions. Administrator-posted questions allow confirming specific knowledge gaps, spreading correct information and measuring knowledge retention via rephrased, repeated questions. In 14 weeks and with no advertisement, Sawaal reached 3,433 users (120,119 calls) in Pakistan, who contributed 13,276 questions that were attempted 455,158 times by 2,027 users. Knowledge retention remained significant for up to two weeks. Surveys revealed that 71% of the mostly low-literate, young, male users were blind.
ORC Layout: Adaptive GUI Layout with OR-Constraints
We propose a novel approach for constraint-based graphical user interface (GUI) layout based on OR-constraints (ORC) in standard soft/hard linear constraint systems. ORC layout unifies grid layout and flow layout, supporting both their features as well as cases where grid and flow layouts individually fail. We describe ORC design patterns that enable designers to safely create flexible layouts that work across different screen sizes and orientations. We also present the ORC Editor, a GUI editor that enables designers to apply ORC in a safe and effective manner, mixing grid, flow and new ORC layout features as appropriate. We demonstrate that our prototype can adapt layouts to screens with different aspect ratios with only a single layout specification, easing the burden of GUI maintenance. Finally, we show that ORC specifications can be modified interactively and solved efficiently at runtime.
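For readers unfamiliar with OR-constraints, one standard, textbook way to express a disjunction of linear constraints in a solver that otherwise only accepts conjunctions is the big-M encoding sketched below in LaTeX (a generic formulation we supply for illustration; the ORC paper’s own soft/hard constraint formulation may differ):

    % Enforce (a^T x <= b) OR (c^T x <= d) with a binary switch z and a large constant M:
    \begin{align}
      a^{\top} x &\le b + M z, \\
      c^{\top} x &\le d + M (1 - z), \qquad z \in \{0, 1\}.
    \end{align}
    % z = 0 activates the first constraint, z = 1 the second;
    % M must be chosen larger than any feasible violation.

Under such an encoding, a layout element can, for instance, be required either to sit in a grid cell or to flow to the next row, which is the kind of grid/flow unification the abstract describes.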
Exploring Interaction Fidelity in Virtual Reality: Object Manipulation and Whole-Body Movements
High degrees of interaction fidelity (IF) in virtual reality (VR) are said to improve user experience and immersion, but there is also evidence of low IF providing comparable experiences. VR games are now increasingly prevalent, yet we still do not fully understand the trade-off between realism and abstraction in this context. We conducted a lab study comparing high and low IF for object manipulation tasks in a VR game. In a second study, we investigated players’ experiences of IF for whole-body movements in a VR game that allowed players to crawl underneath virtual boulders and “dangle” along monkey bars. Our findings show that high IF is preferred for object manipulation, but for whole-body movements, moderate IF can suffice, as there is a trade-off with usability and social factors. We provide guidelines for the development of VR games based on our results.
Can Children Understand Machine Learning Concepts?: The Effect of Uncovering Black Boxes
Machine Learning services are integrated into various aspects of everyday life. Their underlying processes are typically black-boxed to increase ease of use. Consequently, children lack the opportunity to explore such processes and develop essential mental models. We present a gesture recognition research platform, designed to support learning from experience by uncovering two Machine Learning building blocks: Data Labeling and Evaluation. Children used the platform to perform physical gestures, iterating between sampling and evaluation. Their understanding was tested in a pre/post experimental design, in three conditions: a learning activity uncovering Data Labeling only, Evaluation only, or both. Our findings show that both building blocks are imperative to enhance children’s understanding of basic Machine Learning concepts. Children were able to apply their new knowledge to everyday life contexts, including personally meaningful applications. We conclude that children’s interaction with uncovered black boxes of Machine Learning contributes to a better understanding of the world around them.
Evaluation of Appearance-Based Methods and Implications for Gaze-Based Applications
Appearance-based gaze estimation methods that only require an off-the-shelf camera have improved significantly, but they are still not widely used in the human-computer interaction (HCI) community. This is partly because it remains unclear how they perform compared to model-based approaches as well as to dominant, special-purpose eye tracking equipment. To address this limitation, we evaluate the performance of state-of-the-art appearance-based gaze estimation for interaction scenarios with and without personal calibration, indoors and outdoors, for different sensing distances, as well as for users with and without glasses. We discuss the obtained findings and their implications for the most important gaze-based applications, namely explicit eye input, attentive user interfaces, gaze-based user modelling, and passive eye monitoring. To democratise the use of appearance-based gaze estimation and interaction in HCI, we finally present OpenGaze (www.opengaze.org), the first software toolkit for appearance-based gaze estimation and interaction.
Explicating "Implicit Interaction": An Examination of the Concept and Challenges for Research
The term implicit interaction is often used to denote interactions that differ from traditional purposeful and attention-demanding ways of interacting with computers. However, there is a lack of agreement about the term’s precise meaning. This paper develops implicit interaction further as an analytic concept and identifies the methodological challenges related to HCI’s particular design orientation. We first review meanings of implicit as unintentional, attentional background, unawareness, unconsciousness and implicature, and compare them with regard to the entity they qualify, the design motivation they emphasize, and their constructive validity for what makes good interaction. We then demonstrate how the methodological challenges can be addressed with greater precision by using an updated, intentionality-based definition that specifies an input-effect relationship as the entity of the implicit. We conclude by identifying a number of new considerations for design and evaluation, and by reflecting on the concepts of user and system agency in HCI.
Charting Subtle Interaction in the HCI Literature
Human-computer interaction is replete with ways of talking about qualities of interaction or interfaces, including whether they are expressive, rich, fluid, or playful. An example of such a quality is subtle. While this word is frequently used in the literature, we lack a coherent account of what it means to be subtle, how to achieve subtleness in an interface, and what theoretical backing subtleness has. To create such an account, we analyze a sample of 55 publications that use the word subtle. We describe the variants of subtle interaction in the literature, including claimed benefits, empirical approaches, and ethical considerations. Not only does this create a basis for thinking about subtleness as a quality of interaction, it also works to show how to solidify varieties of quality in HCI. We conclude by outlining some open empirical and conceptual questions about subtleness.
Monotasking or Multitasking: Designing for Crowdworkers’ Preferences
Crowdworkers receive no formal training for managing their tasks, time or working environment. To develop tools that support such workers, an understanding of their preferences and the constraints they are under is essential. We asked 317 experienced Amazon Mechanical Turk workers about factors that influence their task and time management. We found that a large number of the crowdworkers score highly on a measure of polychronicity; this means that they prefer to frequently switch tasks and happily accommodate regular work and non-work interruptions. While a preference for polychronicity might equip people well to deal with the structural demands of crowdworking platforms, we also know that multitasking negatively affects workers’ productivity. This puts crowdworkers’ working preferences into conflict with the desire of requesters to maximize workers’ productivity. Combining the findings of prior research with the new knowledge obtained from our participants, we enumerate practical design options that could enable workers, requesters and platform developers to make adjustments that would improve crowdworkers’ experiences.
On the Latency of USB-Connected Input Devices
We propose a method for accurately and precisely measuring the intrinsic latency of input devices and document measurements for 36 keyboards, mice and gamepads connected via USB. Our research shows that devices differ not only in average latency, but also in the distribution of their latencies, and that forced polling at 1000 Hz decreases latency for some but not all devices. Existing practices – measuring end-to-end latency as a proxy of input latency and reporting only mean values and standard deviations – hide these characteristic latency distributions caused by device intrinsics and polling rates. A probabilistic model of input device latency demonstrates these issues and matches our measurements. Thus, our work offers guidance for researchers, engineers, and hobbyists who want to measure the latency of input devices or select devices with low latency.
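To illustrate why mean and standard deviation alone can hide the characteristic latency distributions the abstract refers to, here is a small Python sketch (our own synthetic example, not the paper’s measurement method or data) comparing two hypothetical devices with equal mean latency but very different distributions:

    import random
    import statistics

    random.seed(0)
    # Device A: latencies tightly clustered around 8 ms.
    device_a = [random.gauss(8.0, 0.5) for _ in range(10_000)]
    # Device B: bimodal latencies (4 ms or 12 ms), same mean as A.
    device_b = [random.choice([4.0, 12.0]) + random.gauss(0, 0.5)
                for _ in range(10_000)]

    for name, sample in [("A", device_a), ("B", device_b)]:
        qs = statistics.quantiles(sample, n=100)   # percentile cut points
        print(name,
              f"mean={statistics.mean(sample):.1f} ms",
              f"sd={statistics.pstdev(sample):.1f} ms",
              f"p1={qs[0]:.1f} ms",
              f"p99={qs[98]:.1f} ms")

Both devices report a mean near 8 ms, yet their 1st and 99th percentiles differ sharply, which is the kind of distributional detail that reporting only means and standard deviations obscures.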
Analyzing the Use of Camera Glasses in the Wild
Camera glasses enable people to capture point-of-view videos using a common accessory, hands-free. In this paper, we investigate how, when, and why people used one such product: Spectacles. We conducted 39 semi-structured interviews and surveys with 191 owners of Spectacles. We found that the form factor elicits sustained usage behaviors, and opens opportunities for new use-cases and types of content captured. We provide a usage typology, and highlight societal and individual factors that influence the classification of behaviors.
HawkEye – Deploying a Design Fiction Probe
This paper explores how a design fiction can be designed to be used as a pragmatic user-centred design method to generate insights on future technology use. We built HawkEye, a design fiction probe that embodies a future fiction of dementia care. To learn how participants respond to the probe, we deployed it with eight participants for three weeks in their own homes, as well as evaluating it with six HCI experts in 1.5-hour sessions. In addition to presenting the probe in detail, we share insights into the process of building it and discuss the utility of design fiction as a tool to elicit empathetic and rich discussions about potential outcomes of future technologies.
Exploring Media Capture of Meaningful Experiences to Support Families Living with Dementia
Although designing interactive media experiences for people with dementia has become a growing interest in HCI, a strong focus on family members has rarely been recognised as worthy of design intervention. This paper presents a research through design (RTD) approach working closely with families living with dementia in order to create personalised media experiences. Three families took part in day trips, which they co-planned, with data collection during these days providing insights into their shared social experiences. Workshops were also held in order to personalise the experience of the media created during these days out. Our qualitative analysis outlines themes focusing on individuality, relationships, and accepting changed realities. Furthermore, we outline directions for future research focusing on designing for contested realities, the personhood of carers, and the ageing body and immersion.
Understanding the Impact of TVIs on Technology Use and Selection by Children with Visual Impairments
The use of technology in educational settings is extremely common. For many visually impaired children, educational settings are the first place they are exposed to the assistive technology that they will need to access mainstream computing devices. Current laws provide support for students to receive training from Teachers of the Visually Impaired (TVIs) on these assistive devices. Therefore, TVIs play an important role in the selection and training of technology. Through our interviews with TVIs, we discovered the factors that impact which technologies they select, how they attempt to mitigate the stigma associated with certain technologies, and the challenges that students face in learning assistive technologies. Through this research, we identified three needs that future research on assistive technology should address: (1) increasing focus on built-in accessibility features, (2) providing support for independent learning and exploration, and (3) creating technologies that can support users with progressive vision loss.
Student Perspectives on Digital Phenotyping: The Acceptability of Using Smartphone Data to Assess Mental Health
There is a mental health crisis facing universities internationally. A growing body of interdisciplinary research has successfully demonstrated that using sensor and interaction data from students’ smartphones can give insight into stress, depression, mood, suicide risk and more. The approach, which is sometimes termed Digital Phenotyping, has potential to transform how mental health and wellbeing can be monitored and understood. The approach could also transform how interventions are designed, delivered and evaluated. To date, little work has addressed the human and ethical side of digital phenotyping, including how students feel about being monitored. In this paper we report findings from in-depth focus groups, prototyping and interviews with students. We find they are positive about mental health technology, but also that there are multi-layered issues to address if digital phenotyping is to become acceptable. Using an acceptability framework, we set out the key design challenges that need to be addressed.
A-line: 4D Printing Morphing Linear Composite Structures
This paper presents A-line, a 4D printing system for designing and fabricating morphing three-dimensional shapes out of simple linear elements. In addition to the commonly known benefits of 4D printing in saving printing time, printing materials, and packaging space, A-line also takes advantage of the unique properties of thin lines, including their suitability for compliant mechanisms and their ability to travel through narrow spaces and self-deploy or self-lock on site. A-line integrates a method of bending angle control in up to eight directions for one printed line segment, using a single type of thermoplastic material. A software platform for design, simulation and tool path generation is developed to support the design and manufacture of various A-line structures. Finally, the design space of A-line is explored through four application areas, including line sculpting, compliant mechanisms, self-deploying structures, and self-locking structures.
Detecting Visuo-Haptic Mismatches in Virtual Reality using the Prediction Error Negativity of Event-Related Brain Potentials
Designing immersion is the key challenge in virtual reality; this challenge has driven advancements in displays, rendering and recently, haptics. To increase our sense of physical immersion, for instance, vibrotactile gloves render the sense of touching, while electrical muscle stimulation (EMS) renders forces. Unfortunately, the established metric to assess the effectiveness of haptic devices relies on the user’s subjective interpretation of unspecific, yet standardized, questions.
Here, we explore a new approach to detect a conflict in visuo-haptic integration (e.g., inadequate haptic feedback based on poorly configured collision detection) using electroencephalography (EEG). We propose analyzing event-related potentials (ERPs) during interaction with virtual objects. In our study, participants touched virtual objects in three conditions and received either no haptic feedback, vibration, or vibration and EMS feedback. To provoke a brain response in unrealistic VR interaction, we also presented the feedback prematurely in 25% of the trials.
We found that the early negativity component of the ERP (the so-called prediction error) was more pronounced in the mismatch trials, indicating that we successfully detected haptic conflicts using our technique. Our results are a first step towards using ERPs to automatically detect visuo-haptic mismatches in VR, such as those that can cause a loss of the user’s immersion.
A Field Study of Teachers Using a Curriculum-integrated Digital Game
We present a new framework describing how teachers use ST Math, a curriculum-integrated, year-long educational game, in 3rd-4th grade classrooms. We combined authentic classroom observations with teacher interviews to identify teacher needs and practices. Our findings extended and contrasted with prior work on teachers’ behaviors around classroom games, identifying differences likely arising from a digital platform and year-long curricular integration. We suggest practical ways that curriculum-integrated games can be designed to help teachers support effective classroom culture and practice.
Feeling Fireworks: An Inclusive Tactile Firework Display
This paper presents a novel design for a large-scale interactive tactile display. Fast dynamic tactile effects are created at high spatial resolution on a flexible screen, using directable nozzles that spray water jets onto the rear of the screen. The screen further has back-projected visual content and touch interaction. The technology is demonstrated in Feeling Fireworks, a tactile firework show. The goal is to make fireworks more inclusive for the Blind and Low-Vision (BLV) community. A BLV focus group provided input during the development process, and a user study with BLV users showed that Feeling Fireworks is an enjoyable and meaningful experience. A user study with sighted users showed that users could accurately label the correspondence between the designed tactile firework effects and corresponding visual fireworks. Beyond the Feeling Fireworks application, this is a novel approach for scalable tactile displays with potential for broader use.
Social Play in an Exergame: How the Need to Belong Predicts Adherence
The general trend in exercise interventions, including those based on exergames, is to see high initial enthusiasm but significantly declining adherence. Social play is considered a core tenet of the design of exercise interventions to help foster motivation to play. To determine whether social play aids adherence to exergames, we analyzed data from a study involving five waves of six-week exergame trials with a single-player and a multiplayer group. In this paper, we examine the multiplayer group to determine who might benefit from social play and why. We found that people who primarily engage in group play have superior adherence to people who primarily play alone. People who play alone in a multiplayer exergame have worse adherence than people playing a single-player version, which can undo any potential benefit of social play. The primary construct distinguishing group versus alone players is their sense of program belonging. Program belonging is, thus, crucial to multiplayer exergame design.
Around the (Virtual) World: Infinite Walking in Virtual Reality Using Electrical Muscle Stimulation
Virtual worlds are infinite environments in which the user can move around freely. When shifting from controller-based movement to regular walking as an input, the limitations of the real world also limit the virtual world. Tackling this challenge, we propose the use of electrical muscle stimulation to reduce the real-world space needed to create an unlimited walking experience. We actuate the users' legs so that they deviate from their straight route and thus walk in circles in the real world while still walking straight in the virtual world. We report on a study comparing this approach to vision shift (the state-of-the-art approach) as well as to a combination of both approaches. The results show that the combination of both approaches in particular yields high potential to create an infinite walking experience.
How Do Distance Learners Connect?
Distance learners often experience social isolation and impoverished social interaction with their remote peers. To better understand the connections that distance learners are able to build with peers, we interviewed them about whether and how they perceive or cultivate connections with one another. Our analysis reveals how connections in an online learning environment are formed and experienced across different social contexts and technology affordances, and what strategies and practices enable and inhibit these connections. We discuss the implications of our findings for concepts of shared identity and evolving peer relationships among online learners and for design directions that might address their social needs.
Security Managers Are Not The Enemy Either
Security managers are leading employees whose decisions shape security measures and thus influence the everyday work of all users in their organizations. To understand how security managers handle user requirements and behavior, we conducted semi-structured interviews with seven security managers from large-scale German companies. Our results indicate that, due to the absence of organizational structures that include users in security development processes, security managers unintentionally develop a negative view of users. Their distrust towards users leads to the creation of technical security measures that users cannot influence in any way. However, as previous research has repeatedly shown, rigid security measures lead to frustration and discouragement among users, as well as to creative (but usually insecure) methods of security circumvention. We conclude that, in order to break this vicious cycle, security managers need organizational structures, methods, and tools that facilitate systematic feedback from users.
Older Voices: Supporting Community Radio Production for Civic Participation in Later Life
Community radio can support the process of having a voice in one’s community as a part of civic action, and promote community dialogue. However, older adults are underrepresented as producers of community radio shows in the UK, and face different challenges to their younger colleagues. By working within the radio production group of an existing organisation of older adults, we identify the motivations and challenges in supporting this type of civic participation in media in later life. Key challenges were identified, including audience engagement, content persistence and process sustainability. In response, we 1) supported the group’s audience engagement using Facebook Live and a phone-in option, and 2) developed a digital production tool. Reporting on the continued use of the tool by the organisation, we discuss how tailored and non-intrusive processes mediated by digital technology can support older adults in delivering richer media experiences whilst serving their civic participatory interests.
Reveal: Investigating Proactive Location-Based Reminiscing with Personal Digital Photo Repositories
Recording experiences and memories is an important role of digital photography, with smartphone cameras leading individuals to take increasing numbers of pictures of everyday experiences. Increasingly, these are automatically stored in personal, cloud-backed photo repositories. However, such experiences can be forgotten quickly, with images ‘lost’ within the user’s library, losing their role in supporting reminiscing. We investigate how users might be provoked to view these images, and the benefits this brings, through the development and evaluation of a proactive, location-based reminiscing tool called Reveal. We outline how a location-based approach allowed participants to reflect more widely on their photo practice, and the potential of such reminiscing tools to support effective management and curation of individuals’ increasingly large personal photo collections.
"What is Fair Shipping, Anyway?": Using Design Fiction to Raise Ethical Awareness in an Industrial Context
The HCI community cares about the human and social aspects of technologies. Ethical discussion of the social implications of new technologies often happens among researchers, but it is important to raise this discussion also in the industry that designs and implements new systems. In this paper, we introduce a case in which design fiction was used as an ethical discussion tool among company partners. We report the process of creating and prototyping a fictional world embedded with conflicting values, which aimed to shift the focus from industrial merits towards societal values and raise discussion among participants. Moreover, we examine the challenges of, and offer suggestions for, crafting critique and friction in an industrial context. Our findings suggest why and how one should use design fiction as a means to raise ethical awareness in a technology- and profit-focused context, to support further activities on developing more humane technological futures.
Our Story: Addressing Challenges in Development Contexts for Sustainable Participatory Video
Participatory Video (PV) is emerging as a rich and valuable method for the monitoring and evaluation (M&E) of projects in the International Development sector. Although shown to be useful for engaging communities within short-term monitoring exercises or promotion, PV in these contexts presents significant complexity and logistical challenges for sustained uptake by Development organizations. In this paper, we present Our Story, a digitally mediated workflow iteratively designed and deployed on initiatives in Indonesia and Namibia. Developed in collaboration with the International Federation of Red Cross and Red Crescent (IFRC), it supports end-to-end PV production in the field and was specifically developed to make PV a more sustainable tool for monitoring. We discuss and evaluate Our Story, reporting on how, by lowering skill barriers for facilitators and leveraging consumer technology, PV can be delivered at scale.
Integrating Multimedia Tools to Enrich Interactions in Live Streaming for Language Learning
Online language lessons have adopted live-broadcast videos to provide more real-time interactive experiences between language teachers and learners. However, learner interactions are primarily limited to the built-in text chat of the live stream. Using text alone, learners cannot get feedback on important aspects of a language, such as speaking skills, that only richer types of interaction can afford. We present results from a 2-week in-the-wild study in which we investigate the use of text, audio, video, images, and stickers as interaction tools for language teachers and learners in live streaming. Our language teacher explored three different teaching strategies over four live-streamed English lessons, while nine students watched and interacted using multimodal tools. The findings reveal that multimodal communication yields instant feedback and increased engagement, but its use depends on factors such as group size, surroundings, time, and online identity.
Spaces and Traces: Implications of Smart Technology in Public Housing
Smart home technologies are becoming more widespread and common, even as their deployment and implementation remain complex and spread across competing commercial ecosystems. Looking beyond the middle-class, single-family home often at the center of the smart home narrative, we report on a series of participatory design workshops held with residents and building managers to better understand the role of smart home technologies in the context of public housing in the U.S. The design workshops enabled us to gather insight into the specific challenges and opportunities of deploying smart home technologies in a setting where issues of privacy, data collection and ownership, and autonomy collide with diverse living arrangements, and where income, age, and the consequences of monitoring and data aggregation set up an expanding collection of design implications for smart home technology ecosystems.
Mazi: Tangible Technologies as a Channel for Collaborative Play
This paper investigates how haptic and auditory stimulation can be playfully implemented as an accessible and stimulating form of interaction for children. We present the design of Mazi, a sonic Tangible User Interface (TUI) designed to encourage spontaneous and collaborative play between children with high-support-needs autism. We report on a five-week study of Mazi with five children aged between 6 and 9 years at a Special Education Needs (SEN) school in London, UK. We found that collaborative play emerged from interaction with the system, especially with regard to socialization and engagement. Our study contributes to exploring the potential of user-centered TUI development as a channel to facilitate social interaction while providing sensory regulation for children with SENs.
Making Diabetes Education Interactive: Tangible Educational Toys for Children with Type-1 Diabetes
Younger children (under 9 years) with type-1 diabetes are often very passive in the management of their condition and can face difficulties in accessing and understanding basic diabetes-related information. This can make transitioning to self-management in later years very challenging. Previous research has mostly focused on educational interventions for older children.
To create an educational tool which can support the diabetes educational process of younger children, we conducted a multiphase and multi-stakeholder user-centred design process. The result is an interactive tool that illustrates diabetes concepts in an age-appropriate way with the use of tangible toys. The tool was evaluated inside a paediatric diabetes clinic with clinicians, children and parents and was found to be engaging, acceptable and effective. In addition to providing implications for the design and adoption of educational tools for children in a clinical setting, we discuss the challenges for conducting user-centred design in such a setting.
Ten-Minute Silence: A New Notification UX of Mobile Instant Messenger
People receive a tremendous number of messages through mobile instant messaging (MIM), which generates crowded notifications. This study highlights our attempt to create a new notification rule to reduce this crowdedness, one that can be recognized by both senders and recipients. We developed an MIM app that provides only one notification per conversation session, where a session is a group of consecutive messages delimited by a ten-minute silence period. In a two-week field study, 20,957 message logs and interview data from 17 participants revealed that MIM notifications affect not only the recipients’ experiences before opening the app but also the entire conversation experience, including that of the senders. The new notification rule created new social norms for the participants’ use of MIM. We report themes about the changes in the MIM experience, which will expand the role of notifications for future MIM apps.
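The session rule described above is easy to state precisely. The minimal sketch below (an illustration, not the authors' implementation) groups message timestamps into sessions separated by at least ten minutes of silence; under the rule, only the first message of each session would trigger a notification. The timestamps are made up.

```python
from datetime import datetime, timedelta

SILENCE = timedelta(minutes=10)

def group_sessions(timestamps):
    """Group message timestamps into sessions separated by >= 10 minutes of silence."""
    sessions = []
    for t in sorted(timestamps):
        if sessions and t - sessions[-1][-1] < SILENCE:
            sessions[-1].append(t)        # same conversation session
        else:
            sessions.append([t])          # new session -> one notification
    return sessions

msgs = [datetime(2019, 5, 6, 9, 0), datetime(2019, 5, 6, 9, 4),
        datetime(2019, 5, 6, 9, 20), datetime(2019, 5, 6, 9, 21)]
print(len(group_sessions(msgs)))          # 2 sessions -> 2 notifications instead of 4
```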
Quantitative Measurement of Tool Embodiment for Virtual Reality Input Alternatives
Virtual reality (VR) strives to replicate the sensation of the physical environment by mimicking people’s perceptions and experience of being elsewhere. These experiences are often mediated by the objects and tools we interact with in the virtual world (e.g., a controller). Evidence from psychology posits that when using the tool proficiently, it becomes embodied (i.e., an extension of one’s body). There is little work, however, on how to measure this phenomenon in VR, and on how different types of tools and controllers can affect the experience of interaction. In this work, we leverage cognitive psychology and philosophy literature to construct the Locus-of-Attention Index (LAI), a measure of tool embodiment. We designed and conducted a study that measures readiness-to-hand and unreadiness-to-hand for three VR interaction techniques: hands, a physical tool, and a VR controller. The study shows that LAI can measure differences in embodiment with working and broken tools and that using the hand directly results in more embodiment than using controllers.
DMove: Directional Motion-based Interaction for Augmented Reality Head-Mounted Displays
We present DMove, directional motion-based interaction for Augmented Reality (AR) Head-Mounted Displays (HMDs) that is both hands- and device-free. It uses directional walking as a way to interact with virtual objects. To use DMove, a user needs to perform directional motions such as moving one foot forward or backward. In this research, we first investigate the recognition accuracy of the motion directions of our method and the social acceptance of this type of interaction, together with users’ comfort ratings for each direction. We then optimize its design and conduct a second study comparing DMove in task performance and user preferences (workload, motion sickness, user experience) with two approaches, Hand interaction (Meta 2-like) and Head+Hand interaction (HoloLens-like), for menu selection tasks. Based on the results of these two studies, we provide a set of guidelines for DMove and further demonstrate two applications that utilize directional motions.
Trigger-Action Programming for Personalising Humanoid Robot Behaviour
In the coming years humanoid robots will be increasingly used in a variety of contexts, thereby presenting many opportunities to exploit their capabilities in terms of what they can sense and do. One main challenge is to design technologies that enable those who are not programming experts to personalize robot behaviour. We propose an end user development solution based on trigger-action personalization rules. We describe how it supports editing such rules and its underlying software architecture, and report on a user test that involved end user developers. The test results show that users were able to perform the robot personalization tasks with limited effort, and found the trigger-action environment usable and suitable for the proposed tasks. Overall, we show the potential for using trigger-action programming to make robot behaviour personalization possible even to people who are not professional software developers.
Augmented Reality Views for Occluded Interaction
We rely on our sight when manipulating objects. When objects are occluded, manipulation becomes difficult. Such occluded objects can be shown via augmented reality to re-enable visual guidance. However, it is unclear how to do so to best support object manipulation. We compare four views of occluded objects and their effect on performance and satisfaction across a set of everyday manipulation tasks of varying complexity. The best performing views were a see-through view and a displaced 3D view. The former enabled participants to observe the manipulated object through the occluder, while the latter showed the 3D view of the manipulated object offset from the object’s real location. The worst performing view showed remote imagery from a simulated hand-mounted camera. Our results suggest that alignment of virtual objects with their real-world location is less important than an appropriate point-of-view and view stability.
A is for Artificial Intelligence: The Impact of Artificial Intelligence Activities on Young Children’s Perceptions of Robots
We developed a novel early childhood artificial intelligence (AI) platform, PopBots, where preschool children train and interact with social robots to learn three AI concepts: knowledge-based systems, supervised machine learning, and generative AI. We evaluated how much children learned by using AI assessments we developed for each activity. The median score on the cumulative assessment was 70% and children understood knowledge-based systems the best. Then, we analyzed the impact of the activities on children’s perceptions of robots. Younger children came to see robots as toys that were smarter than them, but their older counterparts saw them more as people that were not as smart as them. Children who performed worse on the AI assessments believed that robots were like toys that were not as smart as them, however children who did better on the assessments saw robots as people who were smarter than them. We believe early AI education can empower children to understand the AI devices that are increasingly in their lives.
i’sFree: Eyes-Free Gesture Typing via a Touch-Enabled Remote Control
Entering text without having to pay attention to the keyboard is compelling but challenging due to the lack of visual guidance. We propose i’sFree to enable eyes-free gesture typing on a distant display from a touch-enabled remote control. i’sFree does not display the keyboard or gesture trace but decodes gestures drawn on the remote control into text according to an invisible and shifting Qwerty layout. i’sFree decodes gestures similar to a general gesture typing decoder, but learns from the instantaneous and historical input gestures to dynamically adjust the keyboard location. We designed it based on the understanding of how users perform eyes-free gesture typing. Our evaluation shows eyes-free gesture typing is feasible: reducing visual guidance on the distant display hardly affects the typing speed. Results also show that the i’sFree gesture decoding algorithm is effective, enabling an input speed of 23 WPM, 46% faster than the baseline eyes-free condition built on a general gesture decoder. Finally, i’sFree is easy to learn: participants reached 22 WPM in the first ten minutes, even though 40% of them were first-time gesture typing users.
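The abstract above says the decoder learns from instantaneous and historical gestures to adjust the invisible keyboard's location. The sketch below is one speculative way such drift correction could work (it is not the paper's algorithm): shift the layout origin a small fraction toward the offset between where a gesture starts and where the decoded word's first key nominally sits. The layout geometry, key pitch, and smoothing factor are all assumptions.

```python
# A minimal sketch of a drifting invisible Qwerty layout (hypothetical, for illustration).
QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY = 0.1  # assumed normalised key pitch

def key_center(ch, origin):
    """Nominal center of a key, given the current layout origin and a simple row stagger."""
    for row, letters in enumerate(QWERTY_ROWS):
        if ch in letters:
            col = letters.index(ch)
            return (origin[0] + (col + 0.5 * row) * KEY, origin[1] + row * KEY)
    raise ValueError(ch)

def update_origin(origin, gesture_start, decoded_word, alpha=0.3):
    """Move the layout origin a fraction alpha toward the observed start-point offset."""
    kx, ky = key_center(decoded_word[0], origin)
    dx, dy = gesture_start[0] - kx, gesture_start[1] - ky
    return (origin[0] + alpha * dx, origin[1] + alpha * dy)

origin = (0.0, 0.0)
origin = update_origin(origin, gesture_start=(0.62, 0.05), decoded_word="hello")
print(origin)   # layout nudged toward where the user actually gestures
```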
Beyond Tutoring: Opportunities for Intergenerational Mentorship at a Community Level
Community intergenerational mentorship offers an opportunity to address older adults’ social isolation while providing valuable one-on-one or small group learning experiences for elementary school students. Current organizations that support this kind of engagement focus on in-person visits that place the burden of logistics and transportation on the older adult. However, as older adults become less independent while aging, coming to schools in person becomes more challenging. We present a qualitative analysis of current intergenerational mentorship practices to understand opportunities for technology to expand access to this experience. We highlight elements critical for building successful mentorship: the importance of relationship building between older adults and children during mentoring activities, the skills mentors acquired to carry out mentoring activities, and support needed from teachers and schools. We contribute a rich description of current intergenerational mentorship practices and provide insights for opportunities for novel HCI technologies in this context.
Beyond Dyadic Interactions: Considering Chatbots as Community Members
Chatbots have grown as a space for research and development in recent years due both to the realization of their commercial potential and to advancements in language processing that have facilitated more natural conversations. However, nearly all chatbots to date have been designed for dyadic, one-on-one communication with users. In this paper we present a comprehensive review of research on chatbots supplemented by a review of commercial and independent chatbots. We argue that chatbots’ social roles and conversational capabilities beyond dyadic interactions have been underexplored, and that expansion into this design space could support richer social interactions in online communities and help address the longstanding challenges of maintaining, moderating, and growing these communities. In order to identify opportunities beyond dyadic interactions, we used research-through-design methods to generate more than 400 concepts for new social chatbots, and we present seven categories that emerged from analysis of these ideas.
Face and Ecological Validity in Simulations: Lessons from Search-and-Rescue HRI
In fields where in situ performance cannot be measured, ecological validity is difficult to estimate. Drawing on theory from social psychology and virtual reality, we argue that face validity can be a useful proxy for ecological validity. We provide illustrative examples of this relationship from work in search-and-rescue HRI, and conclude with some practical guidelines for the construction of immersive simulations in general.
PaCaPa: A Handheld VR Device for Rendering Size, Shape, and Stiffness of Virtual Objects in Tool-based Interactions
We present PaCaPa, a handheld device that renders haptics on a user’s palm when the user interacts with virtual objects using virtual tools such as a stick. PaCaPa is a cuboid device with two wings that open and close. As the user’s stick makes contact with a virtual object, the wings open by a specific degree to dynamically change the pressure on the palm and fingers. The open angle of the wings is calculated from the angle between the virtual stick and hand direction. As the stick bites into the target object, a large force is generated. Our device enables three kinds of renderings: size, shape, and stiffness. We conducted user studies to evaluate the performance of our device. We also evaluated our device in two application scenarios. User feedback and qualitative ratings indicated that our device can make indirect interaction with handheld tools more realistic.
Search as News Curator: The Role of Google in Shaping Attention to News Information
This paper presents an algorithm audit of the Google Top Stories box, a prominent component of search engine results and powerful driver of traffic to news publishers. As such, it is important in shaping user attention towards news outlets and topics. By analyzing the number of appearances of news article links we contribute a series of novel analyses that provide an in-depth characterization of news source diversity and its implications for attention via Google search. We present results indicating a considerable degree of source concentration (with variation among search terms), a slight exaggeration in the ideological skew of news in comparison to a baseline, and a quantification of how the presentation of items translates into traffic and attention for publishers. We contribute insights that underscore the power that Google wields in exposing users to diverse news information, and raise important questions and opportunities for future work on algorithmic news curation.
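One simple way to quantify the source concentration mentioned above (an illustration, not the paper's exact metric) is the share of Top Stories appearances captured by the top-k outlets. The appearance log below is fabricated for the example.

```python
from collections import Counter

def top_k_share(appearances, k=3):
    """Fraction of all appearances captured by the k most frequent sources."""
    counts = Counter(appearances)
    total = sum(counts.values())
    return sum(c for _, c in counts.most_common(k)) / total

# Hypothetical appearance log: each entry is the outlet behind one Top Stories link.
log = ["cnn"] * 40 + ["nyt"] * 30 + ["fox"] * 15 + ["local-a"] * 10 + ["local-b"] * 5
print(top_k_share(log, k=3))   # 0.85 -> 85% of appearances go to the top three outlets
```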
./trilaterate: A Fabrication Pipeline to Design and 3D Print Hover-, Touch-, and Force-Sensitive Objects
Hover, touch, and force are promising input modalities that get increasingly integrated into screens and everyday objects. However, these interactions are often limited to flat surfaces and the integration of suitable sensors is time-consuming and costly. To alleviate these limitations, we contribute Trilaterate: A fabrication pipeline to 3D print custom objects that detect the 3D position of a finger hovering, touching, or forcing them by combining multiple capacitance measurements via capacitive trilateration. Trilaterate places and routes actively-shielded sensors inside the object and operates on consumer-level 3D printers. We present technical evaluations and example applications that validate and demonstrate the wide applicability of Trilaterate.
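The trilateration step named above has a standard geometric core: given estimated distances from a finger to several sensors at known positions, the 3D point can be recovered by linear least squares after subtracting one sphere equation from the others. The sketch below shows only that geometric step with made-up sensor positions; converting capacitance readings into distances is the part specific to the paper and is not modeled here.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Estimate a 3D point from distances to known sensor positions (linear least squares)."""
    anchors, dists = np.asarray(anchors, float), np.asarray(dists, float)
    A = 2 * (anchors[1:] - anchors[0])                      # subtracting sphere 0 removes x^2
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Hypothetical electrode positions inside a printed object and a finger at (1, 2, 3).
sensors = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10), (10, 10, 10)]
finger = np.array([1.0, 2.0, 3.0])
d = [np.linalg.norm(finger - np.array(s)) for s in sensors]
print(trilaterate(sensors, d))   # ~[1, 2, 3]
```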
Designing for Reproducibility: A Qualitative Study of Challenges and Opportunities in High Energy Physics
Reproducibility should be a cornerstone of scientific research and is a growing concern among the scientific community and the public. Understanding how to design services and tools that support documentation, preservation and sharing is required to maximize the positive impact of scientific research. We conducted a study of user attitudes towards systems that support data preservation in High Energy Physics, one of science’s most data-intensive branches. We report on our interview study with 12 experimental physicists, studying requirements and opportunities in designing for research preservation and reproducibility. Our findings suggest that we need to design for motivation and benefits in order to stimulate contributions and to address the observed scalability challenge. Therefore, researchers’ attitudes towards communication, uncertainty, collaboration and automation need to be reflected in design. Based on our findings, we present a systematic view of user needs and constraints that define the design space of systems supporting reproducible practices.
Color Builder: A Direct Manipulation Interface for Versatile Color Theme Authoring
Color themes or palettes are popular for sharing color combinations across many visual domains. We present a novel interface for creating color themes through direct manipulation of color swatches. Users can create and rearrange swatches, and combine them into smooth and step-based gradients and three-color blends — all using a seamless touch or mouse input. Analysis of existing solutions reveals a fragmented color design workflow, where separate software is used for swatches, smooth and discrete gradients and for in-context color visualization. Our design unifies these tasks, while encouraging playful creative exploration. Adjusting a color using standard color pickers can break this interaction flow with mechanical slider manipulation. To keep interaction seamless, we additionally design an in situ color tweaking interface for freeform exploration of an entire color neighborhood. We evaluate our interface with a group of professional designers and students majoring in this field.
A Review & Analysis of Mindfulness Research in HCI: Framing Current Lines of Research and Future Opportunities
Mindfulness is a term seen with increasing frequency in HCI literature, and yet the term itself is used almost as variously as the number of papers in which it appears. This diversity makes comparing or evaluating HCI approaches around mindfulness or understanding the design space itself a challenging task. We conducted a structured ACM literature search based on the term mindfulness. Our selection process yielded 38 relevant papers, which we analyzed for their definition, motivation, practice, evaluation and technology use around mindfulness. We identify similarities, divergences and areas of interest for each aspect, resulting in a framework composed of four perspectives and seven lines of research. We highlight challenges and opportunities for future HCI research and design.
Making Healthcare Infrastructure Work: Unpacking the Infrastructuring Work of Individuals
The U.S. healthcare infrastructure is fragmented and prone to breakdowns. Patients and caregivers are often left on their own to overcome barriers and fix breakdowns in order to obtain necessary services; that is, they perform infrastructuring work to make the healthcare infrastructure work for them. So far, little attention has been paid to such infrastructuring work in healthcare. We present an interview study of 32 U.S. parents of young children that examines the infrastructuring work our participants carry out to deal with breakdowns within the healthcare infrastructure. We report how they repaired unexpected failures at the individual level, aligned components at the organizational and cross-organizational levels, and circumvented infrastructural constraints (e.g., policy and financial ones) that were perceived as ambiguous and demanding. We discuss infrastructuring work in light of the literature on patients’ and caregivers’ work, reflect upon the notion of patient engagement, and explore nuances along several dimensions of infrastructuring work.
"Like Popcorn": Crossmodal Correspondences Between Scents, 3D Shapes and Emotions in Children
There is increasing interest in multisensory experiences in HCI. However, little research considers how sensory modalities interact with each other and how this may impact interactive experiences. We investigate how children associate emotions with scents and 3D shapes. 14 participants (10-17yrs) completed crossmodal association tasks to attribute emotional characteristics to variants of the “Bouba/Kiki” stimuli, presented as 3D tangible models, in conjunction with lemon and vanilla scents. Our findings support pre-existing mappings between shapes and scents, and confirm the associations between the combination of angular shapes (“Kiki”) and lemon scent with arousing emotion, and of round shapes (“Bouba”) and vanilla scent with calming emotion. This extends prior work on crossmodal correspondences in terms of stimuli (3D as opposed to 2D shapes), sample (children), and conveyed content (emotions). We outline how these findings can contribute to designing more inclusive interactive multisensory technologies.
Gamification in Science: A Study of Requirements in the Context of Reproducible Research
The need for data preservation and reproducible research is widely recognized in the scientific community. Yet, researchers often struggle to find the motivation to contribute to data repositories and to use tools that foster reproducibility. In this paper, we explore possible uses of gamification to support reproducible practices in High Energy Physics. To understand how gamification can be effective in research tools, we participated in a workshop and performed interviews with data analysts. We then designed two interactive prototypes of a research preservation service that use contrasting gamification strategies. The evaluation of the prototypes showed that gamification needs to address core scientific challenges, in particular the fair reflection of quality and individual contribution. Through thematic analysis, we identified four themes which describe perceptions and requirements of gamification in research: Contribution, Metrics, Applications and Scientific practice. Based on these, we discuss design implications for gamification in science.
Understanding the Shared Experience of Runners and Spectators in Long-Distance Running Events
Increasingly popular, long-distance running events (LDRE) attract not just runners but a rapidly growing number of spectators. Due to the long duration and broad geographic spread of such events, interactions between runners (R) and spectators (S) are limited to brief moments when runners pass by their supporting spectators. Current technology offers limited support for these interactions, mainly measuring and displaying basic running information to spectators who passively consume it. In this paper, we conducted qualitative studies to gain an in-depth understanding of the R&S shared experience during LDRE and how technology can enrich this experience. We propose a two-layer DyPECS framework, highlighting the rich dynamics of the R&S multi-faceted running journey and of their micro-encounters. DyPECS is enriched by the findings from our in-depth qualitative studies. We finally present design implications for the multi-faceted co-experience of R&S during LDRE.
Local Standards for Anonymization Practices in Health, Wellness, Accessibility, and Aging Research at CHI
When studying technologies pertaining to health, wellness, accessibility, and aging, researchers are often required to perform a balancing act between controlling and sharing sensitive data of the people in their studies and protecting the privacy of these participants. If the data can be anonymized and shared, it can boost the impact of the research by facilitating replication and extension. Despite anonymization, data reporting and sharing may lead to re-identification of participants, which can be particularly problematic when the research deals with sensitive topics, such as health. We analyzed 509 CHI papers in the domains of health, wellness, accessibility, and aging to examine data reporting and sharing practices. Our analysis revealed notable patterns and trends regarding the reporting of age, gender, participant types, sample sizes, methodology, ethical considerations, anonymization techniques, and data sharing. Based on our findings, we propose several suggestions for community standards and practices that could facilitate data reporting and sharing while limiting the privacy risks for study participants.
Dynamic Difficulty Adjustment Impact on Players’ Confidence
Difficulty is one of the major motivational pulls of video games, and thus many games use Dynamic Difficulty Adjustment (DDA) systems to improve the game experience. This paper describes our research investigating the influence of DDA systems on players’ confidence, evaluated using an in-game bet system. Our hypothesis is that DDA systems may lead players to overconfidence, revealed by an overestimation of their chances of success when betting. This boost in confidence may be part of the positive impact of DDA systems on the quality of the game experience. We explain our method for evaluating players’ confidence and implement it in three games involving logical, motor, and sensory difficulties. We describe two experimental conditions in which difficulty is either randomly chosen or adapted using a DDA algorithm. Results show how DDA systems can lead players to high levels of overconfidence.
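For readers unfamiliar with DDA, the sketch below shows a generic adjustment loop (not the paper's specific algorithm): difficulty is nudged toward a target success rate estimated over recent trials. The target rate, window, and step size are arbitrary example values.

```python
from collections import deque

class SimpleDDA:
    """Generic dynamic difficulty adjustment toward a target success rate (illustrative)."""
    def __init__(self, target=0.7, window=10, step=0.05):
        self.target, self.step = target, step
        self.recent = deque(maxlen=window)   # 1 = success, 0 = failure
        self.difficulty = 0.5                # normalised 0..1

    def record(self, success: bool):
        self.recent.append(1 if success else 0)
        rate = sum(self.recent) / len(self.recent)
        if rate > self.target:
            self.difficulty = min(1.0, self.difficulty + self.step)
        elif rate < self.target:
            self.difficulty = max(0.0, self.difficulty - self.step)
        return self.difficulty

dda = SimpleDDA()
for outcome in [True, True, True, False, True, True]:
    level = dda.record(outcome)
print(round(level, 2))   # difficulty rises while the player keeps succeeding
```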
Continuous Alertness Assessments: Using EOG Glasses to Unobtrusively Monitor Fatigue Levels In-The-Wild
As the day progresses, cognitive functions are subject to fluctuations. While the circadian process results in diurnal peaks and drops, the homeostatic process manifests itself in a steady decline of alertness across the day. Awareness of these changes allows the design of proactive recommender and warning systems, which encourage demanding tasks during periods of high alertness and flag accident-prone activities in low alertness states. In contrast to conventional alertness assessments, which are often limited to lab conditions, bulky hardware, or interruptive self-assessments, we base our approach on eye blink frequency data known to directly relate to fatigue levels. Using electrooculography sensors integrated into regular glasses’ frames, we recorded the eye movements of 16 participants over the course of two weeks in-the-wild and built a robust model of diurnal alertness changes. Our proposed method allows for unobtrusive and continuous monitoring of alertness levels throughout the day.
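Since the abstract above bases its alertness estimate on eye blink frequency, the sketch below shows one simple way (illustrative thresholds, not the paper's method) to turn a vertical EOG trace into a blinks-per-minute figure: count peaks above a robust z-score threshold with a short refractory period so each blink is counted once. The synthetic signal is fabricated for the example.

```python
import numpy as np

def blink_rate(veog, fs, threshold=3.0, refractory=0.3):
    """Count blink-like peaks in a vertical EOG trace and return blinks per minute."""
    z = (veog - np.median(veog)) / (np.std(veog) + 1e-9)
    above = np.flatnonzero(z > threshold)
    blinks, last = 0, -np.inf
    for i in above:
        if i - last > refractory * fs:    # ignore samples belonging to the same blink
            blinks += 1
            last = i
    minutes = len(veog) / fs / 60.0
    return blinks / minutes

fs = 100
veog = 0.1 * np.random.randn(60 * fs)               # one minute of noise baseline
for onset in range(5, 60, 4):                       # synthetic blink every 4 seconds
    veog[onset * fs:onset * fs + int(0.2 * fs)] += 1.0
print(round(blink_rate(veog, fs)))                  # roughly 14 blinks per minute
```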
"Everyone Brings Their Grain of Salt": Designing for Low-Literate Parental Engagement with a Mobile Literacy Technology in Côte d’Ivoire
Significant research has demonstrated the crucial role that parents play in supporting the development of children’s literacy, but in contexts where adults may lack sufficient literacy in the target language, it is not clear how to most effectively scaffold parental support for children’s literacy. Prior work has designed technologies to teach children literacy directly, but this work has not focused on designing for low-literate parents, particularly for multilingual and developing contexts. In this paper, we describe findings from a qualitative study conducted in several regions of rural Côte d’Ivoire to understand Ivorian parents’ beliefs, desires, and preferences for French literacy. We discuss themes that emerged from these interviews, surrounding ideas of trust, collaboration, and culturally-responsive design, and we highlight implications for the design of technology to scaffold low-literate parental support for children’s literacy.
Streaming, Multi-Screens and YouTube: The New (Unsustainable) Ways of Watching in the Home
Internet use and online services underpin everyday life, and the resultant energy demand is almost entirely hidden, yet significant and growing: it is anticipated to reach 21% of global electricity demand by 2030 and to eclipse half the greenhouse gas emissions of transportation by 2040. Driving this growth, real-time video streaming (‘watching’) is estimated at around 50% of all peak data traffic. Using a mixed-methods analysis of the use of 66 devices (e.g. smart TVs, tablets) across 20 participants in 9 households, we reveal the online activity of domestic watching and provide a detailed exploration of video-on-demand activities. We identify new ways in which watching is transitioning in more rather than less data demanding directions; and explore the role HCI may play in reducing this growing data demand. We further highlight implications for key HCI and societal stakeholders (policy makers, service providers, network engineers) to tackle this important issue.
Overcoming Distractions during Transitions from Break to Work using a Conversational Website-Blocking System
Work breaks, both physical and digital, play an important role in productivity and workplace wellbeing. Yet the growing availability of digital distractions from online content can turn breaks into prolonged “cyberloafing”. In this paper, we present UpTime, a system that aims to support workers’ transitions from breaks back to work, moments susceptible to digital distractions. UpTime combines a browser extension and a chatbot; users interact with it through proactive and reactive chat prompts. By sensing transitions from inactivity, UpTime helps workers avoid distractions by automatically blocking distracting websites temporarily, while still giving them control to take necessary digital breaks. We report findings from a 3-week comparative field study with 15 workers. Our results show that automatic, temporary blocking at transition points can significantly reduce digital distractions and stress without sacrificing workers’ sense of control. Our findings, however, also emphasize that overloading users’ existing communication channels for chatbot interaction should be done thoughtfully.
Techies Against Facebook: Understanding Negative Sentiment Toward Facebook via User Generated Content
Researchers have recognized the need to pay attention to negative aspects and non-use of social media services to uncover usage barriers and surface shortcomings of these systems. We contribute to these efforts by analyzing comments on posts related to Facebook on two blogs with a technically savvy readership: Slashdot and Schneier on Security. Our analysis indicates that technically savvy individuals exhibit notably large negative sentiment toward Facebook with nearly 45% of the 3,000 reader comments we coded expressing such views. Qualitative coding revealed Privacy and Security, User Experience, and Personal Disposition as key factors underlying the negative views. Our findings suggest that negative sentiment is an explicit higher level factor driving non-use practices. Further, we confirm several non-use practices reported in the literature and identify additional aspects connected to recent technological and societal developments. Our results demonstrate that analysis of user generated content can be useful for surfacing usage practices on a large scale.
Making Well-being: Exploring the Role of Makerspaces in Long Term Care Facilities
Fourth-age residents in long-term care facilities (LTCF) are known to suffer declines in well-being due to their advanced age and resulting loss of independence. Using an action research approach, we set up a makerspace in a New Jersey LTCF for eight weeks to see whether it could improve residents’ well-being. Based on over 280 hours of engaged observation and semi-structured interviews with participants, we find that people aged 80-99 years will spend (sometimes significant) time in a makerspace for the purposes of making and companionship; that makerspaces can contribute to both autonomy and well-being for older residents; and that participants produced not only decorative art but also novel artifacts that solved real challenges in their daily lives. We situate these findings in the literature on art and activity therapy for fourth-age people, and make recommendations for makerspaces in long-term care facilities.
Supporting Communication About Values Between People with Multiple Chronic Conditions and their Providers
People with multiple chronic conditions (MCC) often disagree with healthcare providers on priorities for care, leading to worse health outcomes. To align priorities, there is a need to support patient-provider communication about what patients consider important for their well-being and health (i.e., their personal values). To address barriers to communication about values, we conducted a two-part study with key stakeholders in MCC care: patients, informal caregivers, and providers. In Part I, co-design activities generated seven dimensions that characterize stakeholders’ diverse ideas for supporting communication about values: explicitness, effort, disclosure, guidance, intimacy, scale, and synchrony. In Part II, we used the dimensions to generate three design concepts and presented them in focus groups to further scrutinize findings from Part I. Based on these findings we outline directions for research and design to improve patient-provider communication about patients’ personal values.
SmartEye: Assisting Instant Photo Taking via Integrating User Preference with Deep View Proposal Network
Instant photo taking and sharing has become one of the most popular forms of social networking. However, taking high-quality photos is difficult as it requires knowledge and skill in photography that most non-expert users lack. In this paper we present SmartEye, a novel mobile system to help users take photos with good compositions in-situ. The back-end of SmartEye integrates the View Proposal Network (VPN), a deep learning based model that outputs composition suggestions in real time, and a novel, interactively updated module (P-Module) that adjusts the VPN outputs to account for personalized composition preferences. We also design a novel interface with functions at the front-end to enable real-time and informative interactions for photo taking. We conduct two user studies to investigate SmartEye qualitatively and quantitatively. Results show that SmartEye effectively models and predicts personalized composition preferences, provides instant high-quality compositions in-situ, and outperforms the non-personalized systems significantly.
People Who Can Take It: How Women Wikipedians Negotiate and Navigate Safety
Wikipedia is one of the most successful online communities in history, yet it struggles to attract and retain women editors, a phenomenon known as the gender gap. We investigate this gap by focusing on the voices of experienced women Wikipedians. In this interview-based study (N=25), we identify a core theme among these voices: safety. We reveal how our participants perceive safety within their community, how they manage their safety both conceptually and physically, and how they act on this understanding to create safe spaces on and off Wikipedia. Our analysis shows that Wikipedia functions as both a multidimensional and porous space encompassing a spectrum of safety. Navigating this space requires these women to employ sophisticated tactics related to identity management, boundary management, and emotion work. We conclude with a set of provocations to spur the design of future online environments that encourage equity, inclusivity, and safety for historically marginalized users.
SuperVision: Playing with Gaze Aversion and Peripheral Vision
In this work, we challenge the gaze interaction paradigm “What you see is what you get” to introduce “playing with peripheral vision”. We developed a conceptual framework for this novel gaze-aware game dynamic and illustrate the concept with SuperVision, a collection of three games that play with peripheral vision. We propose perceptual and interaction challenges that require players not to look directly at targets and to rely on their periphery instead. To validate the game dynamic and experience, we conducted a user study with twenty-four participants. Results show that the game concept created an engaging and playful experience of playing with peripheral vision. Participants showed proficiency in overcoming the game challenges, developing clear strategies to succeed. Moreover, we found evidence that playing the game can affect players’ visual skills, fostering greater peripheral awareness.
Psychologically Inclusive Design: Cues Impact Women’s Participation in STEM Education
Visual and verbal cues can reinforce barriers to access for women in science, technology, engineering, and math (STEM) disciplines. Psychologically inclusive design is an evidence-based approach to reducing psychological barriers by strategically placing content and design cues in the environment. Two large field experiments provide estimates of the behavioral impact of psychologically inclusive cues on women’s and men’s enrollment behaviors in an online learning environment. First, a gender-inclusive photo and statement in an online advertisement for a STEM course increased the click-through rate by 26% among women but not men (N=209,000). Second, adding an inclusivity statement with a gender-inclusive course image to the enrollment page raised the proportion of women enrolling in a STEM course by up to 18% (N=63,000). These findings contribute evidence of the behavioral impact of psychologically inclusive design to the literature and yield practical implications for the presentation of STEM opportunities.
What Makes a Good Conversation?: Challenges in Designing Truly Conversational Agents
Conversational agents promise conversational interaction but fail to deliver. Efforts often emulate functional rules from human speech, without considering key characteristics that conversation must encapsulate. Given its potential in supporting long-term human-agent relationships, it is paramount that HCI focuses efforts on delivering this promise. We aim to understand what people value in conversation and how this should manifest in agents. Findings from a series of semi-structured interviews show people make a clear dichotomy between social and functional roles of conversation, emphasising the long-term dynamics of bond and trust along with the importance of context and relationship stage in the types of conversations they have. People fundamentally questioned the need for bond and common ground in agent communication, shifting to more utilitarian definitions of conversational qualities. Drawing on these findings we discuss key challenges for conversational agent design, most notably the need to redefine the design parameters for conversational agent interaction.
Encumbered Interaction: a Study of Musicians Preparing to Perform
Guitars are physical instruments that require skillful two-handed use. Their use is also supported by diverse digital and physical resources, such as videos and chord charts. To understand the challenges of interacting with supporting resources while playing, we conducted an ethnographic study of the preparation activities of working musicians. We observe successive stages of individual and collaborative preparation, in which working musicians engage with a diverse range of digital and physical resources to support their preparation. Interaction with this complex ecology of digital and physical resources is finely interwoven into their embodied musical practices, which are usually encumbered by having their instrument in hand, and often by playing. We identify challenges for augmenting guitars within the rehearsal process by supporting interaction that is encumbered, contextual, and connected, and suggest a range of possible responses.
Mind the Tap: Assessing Foot-Taps for Interacting with Head-Mounted Displays
From voice commands and air taps to touch gestures on frames: Various techniques for interacting with head-mounted displays (HMDs) have been proposed. While these techniques have both benefits and drawbacks dependent on the current situation of the user, research on interacting with HMDs has not concluded yet. In this paper, we add to the body of research on interacting with HMDs by exploring foot-tapping as an input modality. Through two controlled experiments with a total of 36 participants, we first explore direct interaction with interfaces that are displayed on the floor and require the user to look down to interact. Secondly, we investigate indirect interaction with interfaces that, although operated by the user’s feet, are always visible as they are floating in front of the user. Based on the results of the two experiments, we provide design recommendations for direct and indirect foot-based user interfaces.
Only one item left?: Heuristic Information Trumps Calorie Count When Supporting Healthy Snacking Under Low Self-Control
Pursuing the goal of a healthy diet may be challenging, especially when self-control resources are low. Yet many persuasive user interfaces fostering healthy choices are designed for situations with ample self-control, e.g. showing nutritional information to support reflective decision making. In this paper we propose that under low self-control, persuasive user interfaces need to rely on simple heuristic decision making to be successful. We report an experiment that tested this assumption in a 2 (low vs. high self-control) x 2 (calorie vs. heuristic information) design. The results reveal a significant interaction effect. Participants with low self-control resources chose the healthy snack more often when snacks were labelled with heuristic information than when they were labelled with calorie information. Both strategies were about equally successful for participants with high self-control. Exploiting situations of low self-control with heuristic information is a new and promising approach to designing persuasive technology for healthy eating.
Some Prior(s) Experience Necessary: Templates for Getting Started With Bayesian Analysis
Bayesian statistical analysis has gained attention in recent years, including in HCI. The Bayesian approach has several advantages over traditional statistics, including producing results with more intuitive interpretations. Despite growing interest, few papers in CHI use Bayesian analysis. Existing tools to learn Bayesian statistics require significant time investment, making it difficult to casually explore Bayesian methods. Here, we present a tool that lowers the barrier to exploration: a set of R code templates that guide Bayesian novices through their first analysis. The templates are tailored to CHI, supporting analyses found to be most common in recent CHI papers. In a user study, we found that the templates were easy to understand and use. However, we found that participants without a statistical background were not confident in their use. Together our contributions provide a concise analysis tool and empirical results for understanding and addressing barriers to using Bayesian analysis in HCI.
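The templates described above are R scripts; as a language-agnostic illustration of the kind of analysis they scaffold, the sketch below estimates a task success rate in Python with a conjugate Beta-Binomial model and reports a posterior mean and credible interval. The prior and data are made up for the example and do not come from the paper.

```python
from scipy import stats

successes, trials = 42, 50          # hypothetical study data
prior_a, prior_b = 1, 1             # uniform Beta(1, 1) prior

# Conjugacy: Beta prior + Binomial likelihood -> Beta posterior.
posterior = stats.beta(prior_a + successes, prior_b + trials - successes)

print("posterior mean:", round(posterior.mean(), 3))
print("95% credible interval:", [round(q, 3) for q in posterior.interval(0.95)])
```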
Parting the Red Sea: Sociotechnical Systems and Lived Experiences of Menopause
Menopause is a major life change affecting roughly half of the population, resulting in physiological, emotional, and social changes. To understand experiences with menopause holistically, we conducted a study of a subreddit forum. The project was informed by feminist social science methodologies, which center knowledge production on women’s lived experiences. Our central finding is that the lived experience of menopause is social: menopause is less about bodily experiences by themselves and more about how experiences with the body become meaningful over time in the social context. We find that gendered marginalization shapes diverse social relationships, leading to widespread feelings of alienation and negative transformation – often expressed in semantically dense figurative language. Research and design can accordingly address menopause not only as a women’s health concern, but also as a matter of facilitating social support and a social justice issue.
Turn to the Self in Human-Computer Interaction: Care of the Self in Negotiating the Human-Technology Relationship
Everyday life is increasingly mediated by technology. Technology is rapidly growing in capacity and complexity, as is especially evident in developments in artificial intelligence and big data analytics. As human-computer interaction (HCI) endeavors to examine and theorize how people act and interact with this ever-evolving technology, an important emerging concern is how the self (the totality of internal qualities such as consciousness and agency) plays out in relation to the technology-mediated external world. To analyze this question, we draw from Michel Foucault’s ethics of “care of the self,” which examines how the self is constituted through conscious and reflective work on self-transformation. We present three case studies to illustrate how individuals carry out practices of the self to reflect upon and negotiate their relationship with technology. We discuss the importance of examining the self and foreground the notion of care of the self in HCI research and design.
Engaging High School Students in Cameroon with Exam Practice Quizzes via SMS and WhatsApp
We created a quiz-based intervention to help secondary school students in Cameroon with exam practice. We sent regularly-spaced, multiple-choice questions to students’ own mobile devices and examined factors which influenced quiz participation. These quizzes were delivered via either SMS or WhatsApp per each student’s preference. We conducted a 3-week deployment with 546 students at 3 schools during their month of independent study prior to their graduating exam. We found that participation rates were heavily impacted by trust in the intervening organization and perceptions of personal security in the socio-technical environment. Parents also played a key gate-keeping role on students’ digital activities. We describe how this role – along with different perceptions of smartphones versus basic phones – may manifest in lower participation rates among WhatsApp-based users as compared to SMS. Finally, we discuss design implications for future educational interventions that target students’ personal cellphones outside of the classroom.
VARI-SOUND: A Varifocal Lens for Sound
Centuries of development in optics have given us passive devices (e.g., lenses, mirrors, and filters) to enrich audience immersion with light effects, but there is nothing similar for sound. Beam-forming in concert halls and outdoor gigs still requires a large number of speakers, while headphones remain the state of the art for personalized audio immersion in VR. In this work, we show how 3D printed acoustic meta-surfaces, assembled into the equivalent of optical systems, may offer a different solution. We demonstrate how to build them and how to use simple design tools, like the thin-lens equation, for sound as well. We present some key acoustic devices, such as a “collimator” that transforms a standard computer speaker into an acoustic “spotlight”, and a “magnifying glass” that creates sound sources appearing to come from locations distinct from the speaker itself. Finally, we demonstrate an acoustic varifocal lens, discussing applications equivalent to auto-focus cameras and VR headsets, as well as the limitations of the technology.
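As a small numerical companion to the thin-lens equation the abstract refers to, 1/f = 1/d_o + 1/d_i, the sketch below solves for a lens focal length given a source distance and a desired image distance. The distances are illustrative and not taken from the paper.

```python
def focal_length(d_object, d_image):
    """Thin-lens equation solved for f: 1/f = 1/d_o + 1/d_i."""
    return 1.0 / (1.0 / d_object + 1.0 / d_image)

def image_distance(f, d_object):
    """Thin-lens equation solved for the image distance d_i."""
    return 1.0 / (1.0 / f - 1.0 / d_object)

f = focal_length(d_object=0.5, d_image=2.0)   # metres (example values)
print(round(f, 3))                            # 0.4 m focal length
print(round(image_distance(f, 0.5), 2))       # recovers the 2.0 m image distance
```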
Co-Performing Agent: Design for Building User-Agent Partnership in Learning and Adaptive Services
Intelligent agents have become prevalent in everyday IT products and services. To improve an agent’s knowledge of a user and the quality of personalized service experiences, it is important for the agent to cooperate with the user (e.g., asking users to provide their information and feedback). However, few works inform how to support such user-agent co-performance from a human-centered perspective. To fill this gap, we devised Co-Performing Agent, a Wizard-of-Oz-based research probe of an agent that cooperates with a user to learn, helping users develop a partnership mindset. Incorporating the probe, we conducted a two-month exploratory study aiming to understand how users experience co-performing with their agent over time. Based on the findings, this paper presents the factors that affected users’ co-performing behaviors and discusses design implications for supporting constructive co-performance and building a resilient user-agent partnership over time.
Leveraging Distal Vibrotactile Feedback for Target Acquisition
Many touch-based interactions provide limited opportunities for direct tactile feedback; examples include multi-user touch displays, augmented reality-based projections on passive surfaces, and mid-air input. In this paper, we consider distal feedback, through vibrotactile stimulation on a smartwatch placed on the user’s non-dominant wrist, as an alternative to interaction-location vibrotactile feedback delivered under the user’s finger. We compare the effectiveness of interaction-location feedback vs. distal feedback through a Fitts’s Law task completed on a smartphone. Results show that distal and interaction-location feedback both reduce errors in target acquisition and exhibit statistically comparable performance, suggesting that distal vibrotactile feedback is a suitable alternative when interaction-location feedback is not readily available.
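For context on the Fitts’s Law tasks used in this and several other abstracts in these proceedings, the standard Shannon formulation is given below; this is general background, not the specific regression model fitted in the paper.

```latex
% Shannon formulation of Fitts's Law: MT is the movement time to acquire
% a target of width W at distance D; a and b are empirically fitted constants.
\[
  MT = a + b \, \log_2\!\left(\frac{D}{W} + 1\right)
\]
```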
Designing for the Infrastructure of the Supply Chain of Malay Handwoven Songket in Terengganu
The growing HCI interest in developing contexts and cultural craft practices is ripe to extend to their under-explored, homegrown sociotechnical infrastructures. This paper explores the creative infrastructural actions embedded within the practices of songket’s supply chain in Terengganu, Malaysia. We report on contextual interviews with 92 participants, including preparation workers, weavers, designers, merchants, and customers. Findings indicate that increased creative infrastructural actions are reflected in these actors’ resourcefulness in mobilizing information, materials, and equipment, and in making creative artifacts through new technologies woven into traditional practices. We propose two novel approaches to design in this craft-based infrastructure. First, we explore designing for the social layer of infrastructure and its mutually advantageous exploitative relationships rooted in culture and traditions. Second, we suggest designing for roaming value-creation artifacts, which blend physical and digital materializations of songket textile design. We argue that these artifacts, developed through a collaborative and asynchronous process, represent less-explored vehicles for value co-creation, and that sociotechnical infrastructures, as emerging sites of innovation, could benefit from HCI research.
Let Me Explain: Impact of Personal and Impersonal Explanations on Trust in Recommender Systems
Trust in a Recommender System (RS) is crucial for its overall success. However, it remains underexplored whether users trust personal recommendation sources (i.e., other humans) more than impersonal sources (i.e., a conventional RS) and, if they do, whether the perceived quality of the explanations provided accounts for the difference. We conducted an empirical study in which we compared these two sources of recommendations and explanations. Human advisors were asked to explain, in short texts, the movies they recommended, while the RS created explanations based on item similarity. Our experiment comprised two rounds of recommending. In both rounds, the quality of the explanations provided by human advisors was rated higher than the quality of the system’s explanations. Moreover, explanation quality significantly influenced perceived recommendation quality as well as trust in the recommendation source. Consequently, we suggest that RSs should provide richer explanations in order to increase their perceived recommendation quality and trustworthiness.
Springlets: Expressive, Flexible and Silent On-Skin Tactile Interfaces
We introduce Springlets, expressive, non-vibrating mechanotactile interfaces on the skin. Embedded with shape memory alloy springs, we implement Springlets as thin and flexible stickers to be worn on various body locations, thanks to their silent operation even on the neck and head. We present a technically simple and rapid technique for fabricating a wide range of Springlet interfaces and computer-generated tactile patterns. We developed Springlets for six tactile primitives: pinching, directional stretching, pressing, pulling, dragging, and expanding. A study placing Springlets on the arm and near the head demonstrates Springlets’ effectiveness and wearability in both stationary and mobile situations. We explore new interactive experiences in tactile social communication, physical guidance, health interfaces, navigation, and virtual reality gaming, enabled by Springlets’ unique and scalable form factor.
"Can you believe [1:21]?!": Content and Time-Based Reference Patterns in Video Comments
As videos become increasingly ubiquitous, so does video-based commenting. To contextualize comments, people often reference specific audio or visual content within a video. However, the literature falls short of explaining the types of video content people refer to, how they establish references and identify referents, how video characteristics (e.g., genre) impact referencing behaviors, and how references impact social engagement. We present a taxonomy for classifying video references by referent type and temporal specificity. Using our taxonomy, we analyzed 2.5K references with quotations and timestamps collected from public YouTube comments. We found that: 1) people reference intervals of video more frequently than time-points, 2) visual entities are referenced more often than sounds, and 3) comments with quotes are more likely to receive replies but not more “likes”. We discuss the need for in-situ dereferencing user interfaces, illustrate design concepts for typed referencing features, and provide a dataset for future studies.
Do We Care About Diversity in Human Computer Interaction: A Comprehensive Content Analysis on Diversity Dimensions in Research
In Human-Computer Interaction (HCI) research, awareness of the relevance of user diversity is increasing. In this work, we analyze whether the articulated need for more diversity-sensitive research has indeed led to greater consideration of diversity in HCI research. Based on a comprehensive collection of diversity dimensions, we present results of a quantitative content analysis of articles accepted in the Proceedings of the Conference on Human Factors in Computing Systems in 2006, 2011, and 2016. Results demonstrate how many diversity dimensions were considered and how intensively, and highlight those dimensions that have so far received less attention. Uncovering continuous and discontinuous trends across time, as well as differences between subfields of research, we identify research gaps and aim to contribute to a comprehensive understanding of diversity that supports diversity-sensitive research in HCI.
Gaze-Guided Narratives: Adapting Audio Guide Content to Gaze in Virtual and Real Environments
Exploring a city panorama from a vantage point is a popular tourist activity. Typical audio guides that support this activity are limited by their lack of responsiveness to user behavior and by the difficulty of matching audio descriptions to the panorama. These limitations can inhibit the acquisition of information and negatively affect user experience. This paper proposes Gaze-Guided Narratives as a novel interaction concept that helps tourists find specific features in the panorama (gaze guidance) while adapting the audio content to what has been previously looked at (content adaptation). Results from a controlled study in a virtual environment (n=60) revealed that a system featuring both gaze guidance and content adaptation obtained better user experience, lower cognitive load, and led to better performance in a mapping task compared to a classic audio guide. A second study with tourists situated at a vantage point (n=16) further demonstrated the feasibility of this approach in the real world.
StoryBlocks: A Tangible Programming Game To Create Accessible Audio Stories
Block-based programming languages can support novice programmers through features such as simplified code syntax and user-friendly libraries. However, most block-based programming languages are highly visual, which makes them inaccessible to blind and visually impaired students. To address the inaccessibility of block-based languages, we introduce StoryBlocks, a tangible block-based game that enables blind programmers to learn basic programming concepts by creating audio stories. In this paper, we document the design of StoryBlocks and report on a series of design activities with groups of teachers, Braille experts, and students. Participants in our design sessions worked together to create accessible stories, and their feedback offers insights for the future development of accessible, tangible programming tools.
Managerial Visions: Stories of Upgrading and Maintaining the Public Restroom with IoT
This paper examines the entangled development of governance strategies and networked technologies in the pervasive but under-examined domain of public restrooms. Drawing on a mix of archival materials, participant observation, and interviews within and beyond the city of Seattle, Washington, we look at the motivations of public restroom facilities managers as they introduce (or consider introducing) networked technology in the spaces they administer. Over the course of the research, we found that internet of things technologies (connected devices imbued with computational capacity) became increasingly tied up with cost-reducing efficiencies and exploitative regulatory techniques. Drawing from this case study, we develop the concept of managerial visions: ways of seeing that structure labor, enforce compliance, and define access to resources. We argue that these ways of seeing prove increasingly critical to HCI research as it attends to computer-mediated collaboration beyond white-collar settings.
User Attitudes towards Algorithmic Opacity and Transparency in Online Reviewing Platforms
Algorithms exert great power in curating online information, yet they are often opaque in their operation, and even in their existence. Since opaque algorithms sometimes make biased or deceptive decisions, many have called for increased transparency. However, little is known about how users perceive and interact with potentially biased and deceptive opaque algorithms. What factors are associated with these perceptions, and how does adding transparency to algorithmic systems change user attitudes? To address these questions, we conducted two studies: 1) an analysis of 242 users’ online discussions about the Yelp review filtering algorithm and 2) an interview study with 15 Yelp users in which a tool disclosed the algorithm’s existence to them. We found that users question or defend this algorithm and its opacity depending on their engagement with, and personal gain from, the algorithm. We also found that adding transparency to the algorithm changed users’ attitudes towards it: users reported their intention to either write for the algorithm in future reviews or leave the platform.
Examining the "Global" Language of Emojis: Designing for Cultural Representation
Emojis are becoming an increasingly popular mode of communication between individuals worldwide, with researchers claiming them to be a type of “ubiquitous language” that can span different languages due to their pictorial nature. Our study uses a combination of methods to examine how emojis are adopted and perceived by individuals from diverse cultural backgrounds in 45 countries. Our survey and interview findings point to the existence of a cultural gap between user perceptions and the current emoji standard. Using participatory design, we sought to address this gap by designing 40 emojis and conducting another survey to evaluate their acceptability compared to existing Japanese emojis. We also draw on participant observation from a Unicode Consortium meeting on emoji addition. Our analysis leads us to discuss how emojis might be made more inclusive, diverse, and representative of the populations that use them.
Time to Scale: Generalizable Affect Detection for Tens of Thousands of Students across An Entire School Year
We developed generalizable affect detectors using 133,966 instances of 18 affective states collected from 69,174 students who interacted with an online math learning platform called Algebra Nation over the entire school year. To enable scalability and generalizability, we used generic interaction features (e.g., viewing a video, taking a quiz), which do not require specialized sensors and are domain- and (to a certain extent) system-independent. We experimented with standard classifiers, recurrent neural networks, and genetically evolved neural networks for affect modeling. Prediction accuracies, quantified with Spearman’s rho, were modest and ranged from .08 (for surprise) to .34 (for happiness) with a mean of .25. Our model trained on Algebra students generalized to a different set of Geometry students (n = 28,458) on the same platform. We discuss implications for scaling up affect detection for affect-sensitive online learning environments which aim to improve engagement and learning by detecting and responding to student affect.
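As a pointer for readers less familiar with the reported metric, Spearman’s rho is the rank correlation defined below (standard definition for the no-ties case); the authors’ exact evaluation pipeline is not described in the abstract.

```latex
% Spearman's rank correlation for n paired observations without ties,
% where d_i is the difference between the ranks of the i-th pair of
% predicted and observed affect ratings.
\[
  \rho = 1 - \frac{6 \sum_{i=1}^{n} d_i^{2}}{n\,(n^{2}-1)}
\]
```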
Eye-Write: Gaze Sharing for Collaborative Writing
Online collaborative writing is an increasingly common practice. Despite its positive effect on productivity and quality of work, it poses challenges to co-authors in remote settings because of limitations in conversational grounding and activity awareness. This paper presents Eye-Write, a novel system which allows two co-authors to see at will the location of their partner’s gaze within a text editor. To investigate the effect of shared gaze on collaboration, we conducted a study on synchronous remote collaborative writing in academic settings with 20 dyads. Gaze sharing improved five aspects of perceived collaboration quality: mutual understanding, level of joint attention, flow of communication, level of negotiation, and awareness of the co-author’s activity. Furthermore, dyads whose participants deactivated the gaze visualization showed a smaller degree of collaboration. Our findings offer insights for future text editors by outlining the benefits of at-will gaze sharing in collaborative writing.
Heimdall: A Remotely Controlled Inspection Workbench For Debugging Microcontroller Projects
Students and hobbyists build embedded systems that combine sensing, actuation and microcontrollers on solderless breadboards. To help students debug such circuits, experienced teachers apply visual inspection, targeted measurements, and circuit modifications to diagnose and localize the problem(s). However, experienced helpers may not always be available to review student projects in person. To enable remote debugging of circuit problems, we introduce Heimdall, a remote electronics workbench that allows experts to visually inspect a student’s circuit; perform measurements; and to re-wire and inject test signals. These interactions are enabled by an actuated inspection camera; an augmented breadboard that enables flexible configuration of row connectivity and measurement/injection lines; and a web-based UI that teachers can use to perform measurements through interaction with the captured images. We demonstrate that common issues arising in embedded electronics classes can be successfully diagnosed remotely and report on preliminary user feedback from teaching assistants who frequently debug circuits.
HOPE for Computing Education: Towards the Infrastructuring of Support for University-School Partnerships
The state of computing education in the UK is described as “patchy and fragile,” with universities tasked with providing further support to schools. However, little guidance exists on the provision of this support. To explore the development of university-school partnerships, we present findings of an extended educational engagement coordinated by Newcastle University as part of the national “Create, Learn and Inspire with the micro:bit and the BBC” initiative. Following an action research approach, we explore the experiences of undergraduate students, schoolteachers, and an educational broker through the process, including recruitment, content development, and the delivery of over 30 computing lessons by nine undergraduates. We identify a number of design considerations towards the development of High Opportunity Progression Ecosystems for the improvement of computing education, such as student identity, workload model, and process visibility. We then discuss the potential role of technology in infrastructuring support for university-school partnerships.
Unintended Consonances: Methods to Understand Robot Motor Sound Perception
Recent research suggests that a robot’s motors make sounds that can influence users’ perception of the robot’s characteristics. To more deeply understand users’ associations with specific sonic characteristics, we adapted methods from sensory science including Check All That Apply (CATA) questions and Polarized Sensory Positioning (PSP) to tease out small differences in motor sounds in an online survey. These methods are straightforward for untrained people to do in an online setting, mathematically rigorous, and can explore a variety of subtle auditory and perceptual stimuli. We describe how to use these methods, interpret the results with several intuitive visual representations, and show that the results align with a previous study of the same dataset. We close by discussing benefits and limitations of applying these methods to study subtle phenomena in the HCI community.
PinchList: Leveraging Pinch Gestures for Hierarchical List Navigation on Smartphones
Intensive exploration and navigation of hierarchical lists on smartphones can be tedious and time-consuming, as they often require users to frequently switch between multiple views. To overcome this limitation, we present PinchList, a novel interaction design that leverages pinch gestures to support seamless exploration of multi-level list items in hierarchical views. With PinchList, sub-lists are accessed with a pinch-out gesture, whereas a pinch-in gesture navigates back to the previous level. Additionally, pinch and flick gestures are used to navigate lists consisting of more than two levels. We conduct a user study to refine the design parameters of PinchList, such as a suitable item size, and quantitatively evaluate target acquisition performance using pinch-in/out gestures in both scrolling and non-scrolling conditions. In a second study, we compare the performance of PinchList in a hierarchical navigation task with two commonly used touch interfaces for list browsing: pagination and expand-and-collapse interfaces. The results reveal that PinchList is significantly faster than the other two interfaces in accessing items located in hierarchical list views. Finally, we demonstrate that PinchList enables a host of novel applications in list-based interaction.
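As an illustration of the kind of gesture classification such an interface depends on, here is a minimal sketch that labels a two-finger trace as pinch-out or pinch-in from the change in finger spread. This is an illustrative reconstruction, not the authors’ implementation, and the threshold value is an assumption.

```python
import math

PINCH_THRESHOLD = 40.0  # assumed minimum change in finger spread, in pixels


def distance(p1, p2):
    """Euclidean distance between two (x, y) touch points."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])


def classify_pinch(start_touches, end_touches):
    """Classify a two-finger gesture as pinch-out (descend into a sub-list),
    pinch-in (go back up a level), or neither.

    start_touches / end_touches: [(x, y), (x, y)] at gesture start and end.
    """
    delta = distance(*end_touches) - distance(*start_touches)
    if delta > PINCH_THRESHOLD:
        return "pinch-out"   # fingers moved apart: open sub-list
    if delta < -PINCH_THRESHOLD:
        return "pinch-in"    # fingers moved together: back to parent level
    return None


# Example: fingers spread from 80 px to 220 px apart -> "pinch-out"
print(classify_pinch([(100, 300), (180, 300)], [(40, 300), (260, 300)]))
```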
Together in Bed?: Couples’ Mobile Technology Use in Bed
In this paper, we investigate the use of mobile technology in an underexplored context: the bed that couples share. Despite large amounts of research on the impact of pre-bedtime technology use on our sleep and mental state, scant research in the HCI field focuses on the physical bed as a negotiated site of technology use by couples. This paper explores (a) the meaning of the bed accessed by mobile technology and (b) the strategies of both individual and shared technology use in bed, in the context of couples’ relationships. We investigate the effects of mobile technology on couples’ bed-sharing practices through in-depth interviews (n = 12) and an online survey (n = 117). We report on creative and negotiated bodily practices of mobile technology use by couples in bed, and the perceived effects on couples’ verbal and physical interaction and on the intimacy of the bed.
23 Ways to Nudge: A Review of Technology-Mediated Nudging in Human-Computer Interaction
Ten years ago, Thaler and Sunstein introduced the notion of nudging to talk about how subtle changes in the ‘choice architecture’ can alter people’s behaviors in predictable ways. This idea was eagerly adopted in HCI and applied in multiple contexts, including health, sustainability and privacy. Despite this, we still lack an understanding of how to design effective technology-mediated nudges. In this paper we present a systematic review of the use of nudging in HCI research with the goal of laying out the design space of technology-mediated nudging – the why (i.e., which cognitive biases do nudges combat) and the how (i.e., what exact mechanisms do nudges employ to incur behavior change). All in all, we found 23 distinct mechanisms of nudging, grouped in 6 categories, and leveraging 15 different cognitive biases. We present these as a framework for technology-mediated nudging, and discuss the factors shaping nudges’ effectiveness and their ethical implications.
ALAP: Accessible LaTeX Based Mathematical Document Authoring and Presentation
Assistive technologies such as screen readers and text editors have been used in the past to improve the accessibility and authoring of scientific and mathematical documents. However, most screen readers fail to narrate complex mathematical notations and expressions, as they skip symbols and information necessary for accurate narration of mathematical content. This study evaluates a new Accessible LaTeX Based Mathematical Document Authoring and Presentation (ALAP) tool, which assists people with visual impairments in reading and writing mathematical documents. ALAP includes features such as assistive debugging, a Math Mode for reading and writing mathematical notations, and automatic generation of an accessible PDF document. These features aim to improve the LaTeX debugging experience and make it simple for blind users to author mathematical content by narrating it in natural language through an integrated text-to-speech (TTS) engine. We evaluated ALAP by conducting a study with 18 visually impaired LaTeX users. The results showed that users preferred ALAP over another comparable LaTeX-based authoring tool and were relatively more comfortable completing the tasks while using ALAP.
Embodied Imagination: An Approach to Stroke Recovery Combining Participatory Performance and Interactive Technology
Participatory performance provides methods for exploring social identities and situations in ways that can help people to imagine new ways of being. Digital technologies provide tools that can help people envision these possibilities. We explore this combination through a performance workshop process designed to help stroke survivors imagine new physical and social possibilities by enacting fantasies of “things they always wanted to do”. This process uses performance methods combined with specially designed real-time movement visualisations to progressively build fantasy narratives that are enacted with and for other workshop participants. Qualitative evaluations suggest this process successfully stimulates participants’ embodied imagination and generates a diverse range of fantasies. The interactive and communal aspects of the workshop process appear to be especially important in achieving these effects. This work highlights how the combination of performance methods and interactive tools can bring a rich, prospective, and political understanding of people’s lived experience to design.
Behind the Curtain of the "Ultimate Empathy Machine": On the Composition of Virtual Reality Nonfiction Experiences
Virtual Reality nonfiction (VRNF) is an emerging form of immersive media experience created for consumption using panoramic “Virtual Reality” headsets. VRNF promises nonfiction content producers the potential to create new ways for audiences to experience “the real”; allowing viewers to transition from passive spectators to active participants. Our current project is exploring VRNF through a series of ethnographic and experimental studies. In order to document the content available, we embarked on an analysis of VR documentaries produced to date. In this paper, we present an analysis of a representative sample of 150 VRNF titles released between 2012-2018. We identify and quantify 64 characteristics of the medium over this period, discuss how producers are exploiting the affordances of VR, and shed light on new audience roles. Our findings provide insight into the current state of the art in VRNF and provide a digital resource for other researchers in this area.
Online, VR, AR, Lab, and In-Situ: Comparison of Research Methods to Evaluate Smart Artifacts
Empirical studies are a cornerstone of HCI research. Technical progress constantly enables new study methods. Online surveys, for example, make it possible to collect feedback from remote users. Progress in augmented and virtual reality makes it possible to collect feedback on early designs. In-situ studies enable researchers to gather feedback in natural environments. While these methods have unique advantages and disadvantages, it is unclear if and how using a specific method affects the results. Therefore, we conducted a study with 60 participants comparing five different methods (online, virtual reality, augmented reality, lab setup, and in-situ) to evaluate early prototypes of smart artifacts. We asked participants to assess four different smart artifacts using standardized questionnaires. We show that the method significantly affects the study results and discuss implications for HCI research. Finally, we highlight further directions for overcoming the effects of the chosen method.
Guideline-Based Evaluation of Web Readability
Effortless reading remains an issue for many Web users, despite a large number of readability guidelines available to designers. This paper presents a study of manual and automatic use of 39 readability guidelines in webpage evaluation. The study collected the ground-truth readability for a set of 50 webpages using eye-tracking with average and dyslexic readers (n = 79). It then matched the ground truth against human-based (n = 35) and automatic evaluations. The results validated 22 guidelines as being connected to readability. The comparison between human-based and automatic results also revealed a complex picture: algorithms were as good as or better than human experts at evaluating webpages on specific guidelines – particularly those about low-level features of webpage legibility and text formatting. However, multiple guidelines still required human judgment related to understanding and interpreting webpage content. These results contribute a guideline categorization that lays the groundwork for future design evaluation methods.
Trolled by the Trolley Problem: On What Matters for Ethical Decision Making in Automated Vehicles
Automated vehicles have to make decisions, such as driving maneuvers or rerouting, based on environment data and decision algorithms. There is an open question as to whether ethical aspects should be considered in these algorithms. When all available decisions within a situation have fatal consequences, this leads to a dilemma. Contemporary discourse surrounding this issue is dominated by the trolley problem, a specific version of such a dilemma. Based on an outline of its origins, we discuss the trolley problem and its viability for helping to solve the questions regarding ethical decision making in automated vehicles. We show that the trolley problem serves several important functions but is an ill-suited benchmark for the success or failure of an automated algorithm. We argue that research and design should focus on avoiding trolley-like problems altogether rather than trying to solve an unsolvable dilemma, and we discuss alternative approaches for feasibly addressing ethical issues in automated agents.
Exploring and Designing for Memory Impairments in Depression
Depression is an affective disorder with distinctive autobiographical memory impairments, including negative bias, overgeneralization and reduced positivity. Several clinical therapies address these impairments, and there is an opportunity to develop new supports for treatment by considering depression-associated memory impairments within design. We report on interviews with ten experts in treating depression, with expertise in both neuropsychology and cognitive behavioral therapies. The interviews explore approaches for addressing each of these memory impairments. We found consistent use of positive memories for treating all memory impairments, the challenge of direct retrieval, and the need to support the experience of positive memories. We aim to sensitize HCI researchers to the limitations of memory technologies, broaden their awareness of memory impairments beyond episodic memory recall, and inspire them to engage with this less explored design space. Our findings open up new design opportunities for memory technologies for depression, including positive memory banks for active encoding and selective retrieval, novel cues for supporting generative retrieval, and novel interfaces to strengthen the reliving of positive memories.
Understanding Kinaesthetic Creativity in Dance
Kinaesthetic creativity refers to the body’s ability to generate alternate futures in activities such as role-playing in participatory design workshops. This has relevance not only to the design of methods for inspiring creativity but also to the design of systems that promote engaging experiences via bodily interaction. This paper probes this creative process by studying how dancers interact with technology to generate ideas. We developed a series of parameterized interactive visuals and asked dance practitioners to use them in generating movement materials. From our study, we define a taxonomy that comprises different relationships and movement responses dancers form with the visuals. Against this taxonomy, we describe six types of interaction patterns and demonstrate how dance creativity is driven by the ability to shift between these patterns. We then propose a set of interaction design qualities to support kinaesthetic creativity.
Evaluating Preference Collection Methods for Interactive Ranking Analytics
Rankings distill a large number of factors into simple comparative models to facilitate complex decision making. Yet key questions remain in the design of mixed-initiative systems for ranking, in particular how best to collect users’ preferences to produce high-quality rankings that users trust and employ in the real world. To address this challenge we evaluate the relative merits of three preference collection methods for ranking in a crowdsourced study. We find that with a categorical binning technique, users interact with a large amount of data quickly, organizing information using broad strokes. Alternative interaction modes using pairwise comparisons or sub-lists result in smaller, targeted input from users. We consider how well each interaction mode addresses design goals for interactive ranking systems. Our study indicates that the categorical approach provides the best value-added benefit to users, requiring minimal effort to create sufficient training data for the underlying ranking algorithm.
To Repeat or Not to Repeat?: Redesigning Repeating Auditory Alarms Based on EEG Analysis
Auditory alarms that repeatedly interrupt users until they react are common. However, when an alarm repeats, our brains habituate to it and perceive it less and less, with reductions in both perception and attention-shifting: a phenomenon known as the repetition-suppression effect (RS). To retain users’ perception and attention, this paper proposes and tests the use of pitch- and intensity-modulated alarms. Its experimental findings suggest that the proposed modulated alarms can reduce RS, albeit in different patterns, depending on whether pitch or intensity is the focus of the modulation. Specifically, pitch-modulated alarms were found to reduce RS more when the number of repetitions was small, while intensity-modulated alarms reduced it more as the number of repetitions increased. Based on these results, we make several recommendations for the design of improved repeating alarms, specifying which modulation approach should be adopted in various situations.
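To make the two modulation strategies concrete, below is a small, illustrative synthesis sketch in Python; the sample rate, base pitch, beep duration, and modulation step sizes are assumptions for illustration, not the parameters used in the study.

```python
import numpy as np

SR = 44100            # sample rate in Hz (assumed)
BASE_FREQ = 1000.0    # base alarm pitch in Hz (assumed)
BEEP_DUR = 0.2        # duration of one beep in seconds (assumed)


def beep(freq, amplitude, duration=BEEP_DUR, sr=SR):
    """Generate one sine-wave beep as a float array."""
    t = np.arange(int(sr * duration)) / sr
    return amplitude * np.sin(2 * np.pi * freq * t)


def pitch_modulated_alarm(n_repeats, step_hz=50.0):
    """Each repetition shifts the pitch upward while loudness stays constant."""
    return np.concatenate(
        [beep(BASE_FREQ + i * step_hz, 0.5) for i in range(n_repeats)]
    )


def intensity_modulated_alarm(n_repeats, step_gain=0.1):
    """Each repetition increases loudness while the pitch stays constant."""
    return np.concatenate(
        [beep(BASE_FREQ, min(1.0, 0.3 + i * step_gain)) for i in range(n_repeats)]
    )


# Example: five repetitions of each variant as raw audio buffers.
pitch_alarm = pitch_modulated_alarm(5)
loudness_alarm = intensity_modulated_alarm(5)
```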
"It Broadens My Mind": Empowering People with Cognitive Disabilities through Computing Education
Computer science education is widely viewed as a path to empowerment for young people, potentially leading to higher education, careers, and development of computational thinking skills. However, few resources exist for people with cognitive disabilities to learn computer science. In this paper, we document our observations of a successful program in which young adults with cognitive disabilities are trained in computing concepts. Through field observations and interviews, we identify instructional strategies used by this group, accessibility challenges encountered by this group, and how instructors and students leverage peer learning to support technical education. Our findings lead to guidelines for developing tools and curricula to support young adults with cognitive disabilities in learning computer science.
Should I Agree?: Delegating Consent Decisions Beyond the Individual
Obtaining meaningful user consent is increasingly problematic in a world of numerous, heterogeneous digital services. Current approaches (e.g. agreeing to Terms and Conditions) are rooted in the idea of individual control despite growing evidence that users do not (or cannot) exercise such control in informed ways. We consider an alternative approach whereby users can opt to delegate consent decisions to an ecosystem of third-parties including friends, experts, groups and AI entities. We present the results of a study that used a technology probe at a large festival to explore initial public responses to this reframing — focusing on when and to whom users would delegate such decisions. The results reveal substantial public interest in delegating consent and identify differing preferences depending on the privacy context, highlighting the need for alternative decision mechanisms beyond the current focus on individual choice.
Design and Evaluation of a Social Media Writing Support Tool for People with Dyslexia
People with dyslexia face challenges expressing themselves in writing on social networking sites (SNSs). Such challenges come not only from the technicality of writing, but also from the self-representation aspect of sharing and communicating publicly on social networking sites such as Facebook. To empower people with dyslexia-style writing to express themselves more confidently on SNSs, we designed and implemented Additional Writing Help (AWH) – a writing assistance tool to proofread text produced by users with dyslexia before they post on Facebook. AWH was powered by a neural machine translation (NMT) model that translates dyslexia-style to non-dyslexia-style writing. We evaluated the performance and the design of AWH through a week-long field study with 19 people with dyslexia and received highly positive feedback. Our field study demonstrated the value of providing better and more extensive writing support on SNSs, and the potential of AI for building a more inclusive Internet.
VIPBoard: Improving Screen-Reader Keyboard for Visually Impaired People with Character-Level Auto Correction
Modern touchscreen keyboards are all powered by the word-level auto-correction ability to handle input errors. Unfortunately, visually impaired users are deprived of such benefit because a screen-reader keyboard offers only character-level input and provides no correction ability. In this paper, we present VIPBoard, a smart keyboard for visually impaired people, which aims at improving the underlying keyboard algorithm without altering the current input interaction. Upon each tap, VIPBoard predicts the probability of each key considering both touch location and language model, and reads the most likely key, which saves the calibration time when the touchdown point misses the target key. Meanwhile, the keyboard layout automatically scales according to users’ touch point location, which enables them to select other keys easily. A user study shows that compared with the current keyboard technique, VIPBoard can reduce touch error rate by 63.0% and increase text entry speed by 12.6%.
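A minimal sketch of the kind of probabilistic key selection the abstract describes, combining a touch-location likelihood with a language-model prior, is shown below. It is an illustrative reconstruction under simplifying assumptions (an isotropic Gaussian touch model and a made-up prior), not VIPBoard’s actual algorithm.

```python
import math

TOUCH_SIGMA = 30.0  # assumed standard deviation of touch offsets, in pixels


def touch_likelihood(touch, key_center, sigma=TOUCH_SIGMA):
    """Isotropic Gaussian likelihood of a touch point given a key's center."""
    dx = touch[0] - key_center[0]
    dy = touch[1] - key_center[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))


def most_likely_key(touch, key_centers, lm_prior):
    """Pick the key maximizing P(key | touch) ~ P(touch | key) * P(key | context).

    key_centers: {key: (x, y)} layout geometry.
    lm_prior:    {key: probability of the key given the preceding text}.
    """
    scores = {
        key: touch_likelihood(touch, center) * lm_prior.get(key, 1e-6)
        for key, center in key_centers.items()
    }
    return max(scores, key=scores.get)


# Example: a touch lands closest to 'q', but the (made-up) language model
# strongly prefers 'w', so 'w' is selected and would be read out.
layout = {"q": (20, 40), "w": (60, 40), "e": (100, 40)}
prior = {"q": 0.01, "w": 0.7, "e": 0.29}
print(most_likely_key((38, 42), layout, prior))  # prints "w"
```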
Put Your Warning Where Your Link Is: Improving and Evaluating Email Phishing Warnings
Phishing emails often disguise a link’s actual URL. Thus, common anti-phishing advice is to check a link’s URL before clicking, but email clients do not support this well. Automated phishing detection enables email clients to warn users that an email is suspicious, but current warnings are often not specific. We evaluated the effects on phishing susceptibility of (1) moving phishing warnings close to the suspicious link in the email, (2) displaying the warning on hover interactions with the link, and (3) forcing attention to the warning by deactivating the original link, forcing users to click the URL in the warning. We assessed the effectiveness of such link-focused phishing warning designs in a between-subjects online experiment (n=701). We found that link-focused phishing warnings reduced phishing click-through rates compared to email banner warnings; forced attention warnings were most effective. We discuss the implications of our findings for phishing warning design.
Encoding Materials and Data for Iterative Personalization
Data is changing how we design consumer products. Shoe production is a prime example: foot size, footstep pressure, and personal preferences can be used to design personalized shoes. Research on metamaterials, programming materials, and computational composites illustrates the possibilities of creating complex data and material relationships. These new relationships allow us to look at future products almost like software apps, becoming a kind of product-service system whose focus is on iterative improvement of personalization over time. Can we create systems of such data-driven objects that in turn allow us to design new objects informed by the data trail? In this paper we report on four RtD project iterations that explore this challenge and provide a set of insights on how to close this new iterative loop.
Automation Accuracy Is Good, but High Controllability May Be Better
When automating tasks using some form of artificial intelligence, some inaccuracy in the result is virtually unavoidable. In many cases, the user must decide whether to try the automated method again, or fix it themselves using the available user interface. We argue this decision is influenced by both perceived automation accuracy and degree of task “controllability” (how easily and to what extent an automated result can be manually modified). This relationship between accuracy and controllability is investigated in a 750-participant crowdsourced experiment using a controlled, gamified task. With high controllability, self-reported satisfaction remained constant even under very low accuracy conditions, and overall, a strong preference was observed for using manual control rather than automation, despite much slower performance and regardless of very poor controllability.
Gehna: Exploring the Design Space of Jewelry as an Input Modality
Jewelry weaves into our everyday lives as no other wearable does. It comes in many wearable forms, is fashionable, and can adorn any part of the body. In this paper, through an exploratory, Research through Design (RtD) process, we tap into this vast potential space of input interaction that jewelry can enable. We do so by first identifying a small set of fundamental structural elements — called Jewelements — that any jewelry is composed of, and then defining their properties that enable the interaction. We leverage this synthesis along with observational data and literature to formulate a design space of jewelry-enabled input techniques. This work encapsulates both the extensions of common existing input methods (e.g., touch) as well as new ones inspired by jewelry. Furthermore, we discuss our prototypical sensor-based implementations. Through this work, we invite the community to engage in the conversation on how jewelry as a material can help shape wearable-based input.
I’m a Giant: Walking in Large Virtual Environments at High Speed Gains
Advances in tracking technology and wireless headsets enable walking as a means of locomotion in Virtual Reality. When exploring virtual environments larger than room-scale, it is often desirable to increase users’ perceived walking speed, for which we investigate three methods. (1) Ground-Level Scaling increases users’ avatar size, allowing them to walk farther. (2) Eye-Level Scaling enables users to walk through a World in Miniature, while maintaining a street-level view. (3) Seven-League Boots amplifies users’ movements along their walking path. We conduct a study comparing these methods and find that users feel most embodied using Ground-Level Scaling and consequently increase their stride length. Using Seven-League Boots, unlike the other two methods, diminishes positional accuracy at high gains, and users modify their walking behavior to compensate for the lack of control. We conclude with a discussion on each technique’s strength and weaknesses and the types of situation they might be appropriate for.
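A minimal sketch of a Seven-League-Boots-style translation gain is shown below; it scales all horizontal head movement rather than only the component along the walking direction, which is a simplification, and the gain value is an assumption rather than the one used in the study.

```python
import numpy as np

GAIN = 7.0  # assumed amplification factor for horizontal movement


def amplified_position(prev_real, curr_real, prev_virtual, gain=GAIN):
    """Map real head positions to virtual positions by scaling the horizontal
    displacement per tracking frame; vertical (y) movement is passed through
    unscaled so head bobbing is not exaggerated."""
    delta = np.asarray(curr_real, dtype=float) - np.asarray(prev_real, dtype=float)
    delta[0] *= gain  # x component
    delta[2] *= gain  # z component
    return np.asarray(prev_virtual, dtype=float) + delta


# Example: one tracking frame where the user steps 0.1 m forward (z axis);
# the virtual viewpoint advances 0.7 m instead.
real_prev, real_curr = (0.0, 1.7, 0.0), (0.0, 1.7, 0.1)
virtual_prev = (0.0, 1.7, 0.0)
print(amplified_position(real_prev, real_curr, virtual_prev))
```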
Understanding the Impact of Information Representation on Willingness to Share Information
Since the release of the first activity tracker, there has been a steady increase in the number of sensors embedded in wearable devices and with it in the amount and diversity of information that can be derived from these sensors. This development leads to novel privacy threats for users. In a web survey with 248 participants, we explored whether users’ willingness to share private data is dependent on how the data is requested by an application. Specifically, requests can be formulated as access to sensor data or as access to information derived from the sensor data (e.g., accelerometer vs. sleep quality). We show that non-expert users lack an understanding of how the two representation levels relate to each other. The results suggest that the willingness to share sensor data over derived information is governed by whether the derived information has positive or negative connotations (e.g., training intensity vs. life expectancy). Using the results of the survey, we derive implications for supporting users in protecting their private data collected via wearable sensors.
The Impact of Web Browser Reader Views on Reading Speed and User Experience
As reading increasingly shifts from paper to online media, many web browsers now provide a “Reader View,” which modifies web page layout and design for better readability. However, research has yet to establish whether Reader Views are effective in improving readability and how they might change the user experience. We characterize how Mozilla Firefox’s Reader View significantly reduces the visual complexity of websites by excluding menus, images, and content. We then conducted an online study with 391 participants (including 42 who self-reported having been diagnosed with dyslexia), showing that compared to standard websites the Reader View increased reading speed by 5% for readers on average, and significantly improved perceived readability and visual appeal. We suggest guidelines for the design of websites and browsers that better support people with varying reading skills.
Does Being Verified Make You More Credible?: Account Verification’s Effect on Tweet Credibility
Many popular social networking and microblogging sites support verified accounts—user accounts that are deemed of public interest and whose owners have been authenticated by the site. Importantly, the content of messages contributed by verified account owners is not verified. Such messages may be factually correct, or not. This paper investigates whether users confuse authenticity with credibility by posing the question: Are users more likely to believe content from verified accounts than from non-verified accounts? We conduct two online studies, a year apart, with 748 and 2041 participants respectively, to assess how the presence or absence of verified account indicators influences users’ perceptions of tweets. Surprisingly, across both studies, we find that—in the context of unfamiliar accounts—most users can effectively distinguish between authenticity and credibility. The presence or absence of an authenticity indicator has no significant effect on willingness to share a tweet or take action based on its contents.
Does Who Matter?: Studying the Impact of Relationship Characteristics on Receptivity to Mobile IM Messages
This study examines the characteristics of mobile instant-messaging users’ relationships with their social contacts and the effects of both relationship and interruption context on four measures of receptivity: Attentiveness, Responsiveness, Interruptibility, and Opportuneness. Overall, interruption context overshadows relationship characteristics as predictors of all four of these facets of receptivity; this overshadowing was most acute for Interruptibility and Opportuneness, but existed for all factors. In addition, while Mobile Maintenance Expectation and Activity Engagement were negatively correlated with all receptivity measures, each such measure had its own set of predictors, highlighting the conceptual differences among the measures. Finally, delving more deeply into potential relationship effects, we found that a single, simple closeness question was as effective at predicting receptivity as the 12-item Unidimensional Relationship Closeness Scale.
#HandsOffMyADA: A Twitter Response to the ADA Education and Reform Act
Twitter continues to be used increasingly for communication related to advocacy, activism, and social change. This is also the case for the disability community. In light of the recently proposed ADA Education and Reform Act in the United States, we investigate the factors that influence the effectiveness of sharing or retweeting messages about topics affecting the rights of people with disabilities. We perform a multifaceted study of the #HandsOffMyADA campaign against the proposed H.R.620 bill to: (1) explore how communication via Twitter compares to previous disability rights movements; (2) characterize the campaign in terms of hashtags, user groups, and content, such as accessible multimedia, that contribute to the dissemination of campaign messages; (3) identify major themes in tweets and responses, and their variation among user groups; and (4) understand how the disability community mobilized for this campaign compared to previous Twitter initiatives.
Poirot: A Web Inspector for Designers
To better understand the issues designers face as they interact with developers and use developer tools to create websites, we conducted a formative investigation consisting of interviews, a survey, and an analysis of professional design documents. Based on insights gained from these efforts, we developed Poirot, a web inspection tool for designers that enables them to make style edits to websites using a familiar graphical interface. We compared Poirot to Chrome DevTools in a lab study with 16 design professionals. We observed common problems designers experience when using Chrome DevTools and found that when using Poirot, designers were more successful in accomplishing typical design tasks (97% vs. 63%). In addition, we found that Poirot had a significantly lower perceived cognitive load and was overwhelmingly preferred by the designers in our study.
Understanding and Designing for Deaf or Hard of Hearing Drivers on Uber
We used content analysis of in-app driver survey responses, customer support tickets, and tweets, together with face-to-face interviews of deaf or hard of hearing (DHH) Uber drivers, to better understand the DHH driver experience. Here we describe the challenges DHH drivers experience and how they address those difficulties via Uber’s accessibility features and their own workarounds. We also identify and discuss design and product opportunities to improve the DHH driver experience on Uber.
Street-Level Algorithms: A Theory at the Gaps Between Policy and Decisions
Errors and biases are earning algorithms increasingly malignant reputations in society. A central challenge is that algorithms must bridge the gap between high-level policy and on-the-ground decisions, making inferences in novel situations where the policy or training data do not readily apply. In this paper, we draw on the theory of street-level bureaucracies, which describes how human bureaucrats such as police officers and judges interpret policy to make on-the-ground decisions. We present, by analogy, a theory of street-level algorithms: the algorithms that bridge the gaps between policy and decisions about people in a socio-technical system. We argue that unlike street-level bureaucrats, who reflexively refine their decision criteria as they reason through a novel situation, street-level algorithms at best refine their criteria only after the decision is made. This loop-and-a-half delay results in illogical decisions when handling new or extenuating circumstances. This theory suggests designs for street-level algorithms that draw on historical design patterns for street-level bureaucracies, including mechanisms for self-policing and recourse in the case of error.
Cicero: Multi-Turn, Contextual Argumentation for Accurate Crowdsourcing
Traditional approaches for ensuring high quality crowdwork have failed to achieve high-accuracy on difficult problems. Aggregating redundant answers often fails on the hardest problems when the majority is confused. Argumentation has been shown to be effective in mitigating these drawbacks. However, existing argumentation systems only support limited interactions and show workers general justifications, not context-specific arguments targeted to their reasoning. This paper presents Cicero, a new workflow that improves crowd accuracy on difficult tasks by engaging workers in multi-turn, contextual discussions through real-time, synchronous argumentation. Our experiments show that compared to previous argumentation systems which only improve the average individual worker accuracy by 6.8 percentage points on the Relation Extraction domain, our workflow achieves 16.7 percentage point improvement. Furthermore, previous argumentation approaches don’t apply to tasks with many possible answers; in contrast, Cicero works well in these cases, raising accuracy from 66.7% to 98.8% on the Codenames domain.
Personalising the TV Experience using Augmented Reality: An Exploratory Study on Delivering Synchronised Sign Language Interpretation
Augmented Reality (AR) technology has the potential to extend the screen area beyond the rigid frames of televisions. The additional display area can be used to augment televisions (TVs) with extra information tailored to individuals, for instance, the provision of access services like sign language interpretation. We invited 23 users of signed content (11 in the UK, 12 in Germany) to evaluate three methods of watching a sign language interpreted programme – one traditional in-vision method with signed programme content on TV, and two AR-enabled methods in which an AR sign language interpreter (a ‘half-body’ version and a ‘full-body’ version) is projected just outside the frame of the TV presenting the programme. In the UK, participants were split three ways in their preferences, while in Germany half the participants preferred the traditional method, followed closely by the ‘half-body’ version. We discuss our participants’ reasoning behind their preferences and the implications for future research.
FTVR in VR: Evaluation of 3D Perception With a Simulated Volumetric Fish-Tank Virtual Reality Display
Spherical fish tank virtual reality (FTVR) displays attempt to create a virtual “crystal ball” experience using head-tracked rendering. Almost all of these systems have omitted stereo cues, making them easy to build, but it is not clear how much this omission degrades the 3D experience. In this study, we evaluate performance and subjective effects of stereo on 3D perception and interaction tasks with a spherical FTVR display. To control for calibration error and tracking latency, we perform the evaluation on a simulated spherical display in VR. The results of our study provide a clear recommendation for the design and use of spherical FTVR displays: while omitting stereo may not be readily apparent for users, their performance will be significantly degraded (20% – 91% increase in median task time). Therefore, including stereo viewing in spherical displays is critical for use in FTVR.
Exploring How Privacy and Security Factor into IoT Device Purchase Behavior
Despite growing concerns about the security and privacy of Internet of Things (IoT) devices, consumers generally do not have access to security and privacy information when purchasing these devices. We interviewed 24 participants about IoT devices they had purchased. While most had not considered privacy and security prior to purchase, they reported becoming concerned later due to media reports, opinions shared by friends, or observing unexpected device behavior. Those who sought privacy and security information before purchase reported that it was difficult or impossible to find. We asked interviewees to rank the factors they would consider when purchasing IoT devices; after features and price, privacy and security were ranked among the most important. Finally, we showed interviewees our prototype privacy and security label. Almost all found it to be accessible and useful, encouraging them to incorporate privacy and security in their IoT purchase decisions.
An Explanation of Fitts’ Law-like Performance in Gaze-Based Selection Tasks Using a Psychophysics Approach
Eye gaze as an input method has been studied since the 1990s, with varied results: some studies found gaze to be more efficient than traditional input methods like a mouse, others far behind. Comparisons are often backed up by Fitts’ Law without explicitly acknowledging the ballistic nature of saccadic eye movements. Using a vision-science-inspired model, we show here that a Fitts’-like distribution of movement times can arise due to the execution of secondary saccades, especially when targets are small. Study participants selected circular targets using gaze. Seven different target sizes and two saccade distances were used. We then determined performance across target sizes for different sampling windows (“dwell times”) and predicted an optimal dwell time range. Best performance was achieved for large targets reachable by a single saccade. Our findings highlight that Fitts’ Law, while a suitable approximation in some cases, is an incomplete description of gaze interaction dynamics.
MultiTrack: Multi-User Tracking and Activity Recognition Using Commodity WiFi
This paper presents MultiTrack, a commodity WiFi based human sensing system that can track multiple users and recognize activities of multiple users performing them simultaneously. Such a system can enable easy and large-scale deployment for multi-user tracking and sensing without the need for additional sensors through the use of existing WiFi devices (e.g., desktops, laptops and smart appliances). The basic idea is to identify and extract the signal reflection corresponding to each individual user with the help of multiple WiFi links and all the available WiFi channels at 5GHz. Given the extracted signal reflection of each user, MultiTrack examines the path of the reflected signals at multiple links to simultaneously track multiple users. It further reconstructs the signal profile of each user as if only a single user has performed activity in the environment to facilitate multi-user activity recognition. We evaluate MultiTrack in different multipath environments with up to 4 users for multi-user tracking and up to 3 users for activity recognition. Experimental results show that our system can achieve decimeter localization accuracy and over 92% activity recognition accuracy under multi-user scenarios.
What is Mixed Reality?
What is Mixed Reality (MR)? To revisit this question given the many recent developments, we conducted interviews with ten AR/VR experts from academia and industry, as well as a literature survey of 68 papers. We find that, while there are prominent examples, there is no universally agreed on, one-size-fits-all definition of MR. Rather, we identified six partially competing notions from the literature and experts’ responses. We then started to isolate the different aspects of reality relevant for MR experiences, going beyond the primarily visual notions and extending to audio, motion, haptics, taste, and smell. We distill our findings into a conceptual framework with seven dimensions to characterize MR applications in terms of the number of environments, number of users, level of immersion, level of virtuality, degree of interaction, input, and output. Our goal with this paper is to support classification and discussion of MR applications’ design and provide a better means to researchers to contextualize their work within the increasingly fragmented MR landscape.
Machine Heuristic: When We Trust Computers More than Humans with Our Personal Information
In this day and age of identity theft, are we likely to trust machines more than humans for handling our personal information? We answer this question by invoking the concept of “machine heuristic,” which is a rule of thumb that machines are more secure and trustworthy than humans. In an experiment (N = 160) that involved making airline reservations, users were more likely to reveal their credit card information to a machine agent than a human agent. We demonstrate that cues on the interface trigger the machine heuristic by showing that those with higher cognitive accessibility of the heuristic (i.e., stronger prior belief in the rule of thumb) were more likely than those with lower accessibility to disclose to a machine, but they did not differ in their disclosure to a human. These findings have implications for design of interface cues conveying machine vs. human sources of our online interactions.
Critter: Augmenting Creative Work with Dynamic Checklists, Automated Quality Assurance, and Contextual Reviewer Feedback
Checklists and guidelines have played an increasingly important role in complex tasks ranging from the cockpit to the operating theater. Their role in creative tasks like design is less explored. In a needfinding study with expert web designers, we identified designers’ challenges in adhering to a checklist of design guidelines. We built Critter, which addressed these challenges with three components: Dynamic Checklists that progressively disclose guideline complexity with a self-pruning hierarchical view, AutoQA to automate common quality assurance checks, and guideline-specific feedback provided by a reviewer to highlight mistakes as they appear. In an observational study, we found that the more engaged a designer was with Critter, the fewer mistakes they made in following design guidelines. Designers rated the AutoQA and contextual feedback experience highly, and provided feedback on the tradeoffs of the hierarchical Dynamic Checklists. We additionally found that a majority of designers rated the AutoQA experience as excellent and felt that it increased the quality of their work. Finally, we discuss broader implications for supporting complex creative tasks.
Do People Consume the News they Trust?
It is reasonable to expect trusted news organizations to have more engaged users. However, given the low levels of trust in media and the several intermediaries involved in digital news consumption, recent studies posit that trust and usage may not be related. We argue that while trust may not relate to overall news usage, much of which is incidental, it could still explain intentional usage. We correlated passively metered usage from digital trace data on 35 national news outlets in the US with their trustworthiness from a nationally representative survey, for three discrete months. We find no association between trust and overall user engagement, but a positive relationship between trustworthiness and direct visits, the latter a measure of intentional usage. These relationships held for outlets regardless of their partisan leanings, multi-platform presence and mainstream nature.
Saliency Deficit and Motion Outlier Detection in Animated Scatterplots
We report the results of a crowdsourced experiment that measured the accuracy of motion outlier detection in multivariate, animated scatterplots. The targets were outliers either in speed or direction of motion, and were presented with varying levels of saliency in dimensions that are irrelevant to the task of motion outlier detection (e.g., color, size, position). We found that participants had trouble finding the outlier when it lacked irrelevant salient features and that visual channels contribute unevenly to the odds of an outlier being correctly detected. Direction of motion contributes the most to accurate detection of speed outliers, and position contributes the most to accurate detection of direction outliers. We introduce the concept of saliency deficit in which item importance in the data space is not reflected in the visualization due to a lack of saliency. We conclude that motion outlier detection is not well supported in multivariate animated scatterplots.
Understanding Affective Experiences with Conversational Agents
While previous studies of Conversational Agents (e.g. Siri, Google Assistant, Alexa and Cortana) have focused on evaluating usability and exploring capabilities of these systems, little work has examined users’ affective experiences. In this paper we present a survey study with 171 participants to examine CA users’ affective experiences. Specifically, we present four major usage scenarios, users’ affective responses in these scenarios, and the factors which influenced the affective responses. We found that users’ overall experience was positive with interest being the most salient positive emotion. Affective responses differed depending on the scenarios. Both pragmatic and hedonic qualities influenced affect. The factors underlying pragmatic quality are: helpfulness, proactivity, fluidity, seamlessness and responsiveness. The factors underlying hedonic quality are: comfort in human-machine conversation, pride of using cutting-edge technology, fun during use, perception of having a human-like assistant, concern about privacy and fear of causing distraction.
Rehumanized Crowdsourcing: A Labeling Framework Addressing Bias and Ethics in Machine Learning
The increased use of machine learning in recent years has led to large volumes of data being manually labeled via crowdsourcing microtasks completed by humans. This has brought about dehumanization effects, in which task requesters overlook the humans behind the tasks, leading to ethical issues (e.g., unfair payment) and the amplification of human biases, which are transferred into training data and affect machine learning in the real world. We propose a framework that allocates microtasks considering human factors of workers such as demographics and compensation. We deployed our framework to a popular crowdsourcing platform and conducted experiments with 1,919 workers collecting 160,345 human judgments. By routing microtasks to workers based on demographics and appropriate pay, our framework mitigates biases in the contributor sample and increases the hourly pay given to contributors. We discuss potential extensions of the framework and how it can promote transparency in crowdsourcing.
What Can We Learn from Augmented Reality (AR)?
Emerging technologies such as Augmented Reality (AR) have the potential to radically transform education by making challenging concepts visible and accessible to novices. In this project, we designed a Hololens-based system in which collaborators engage in an unstructured learning activity about the invisible physics involved in audio speakers. They learned topics ranging from spatial knowledge, such as the shape of magnetic fields, to abstract conceptual knowledge, such as the relationships between electricity and magnetism. We compared participants’ learning, attitudes and collaboration with a tangible interface across multiple experimental conditions containing varying layers of AR information. We found that educational AR representations were beneficial for learning specific knowledge and increasing participants’ self-efficacy (i.e., their confidence in their ability to learn concepts in physics). However, we also found that participants in conditions without AR educational content learned some concepts better than other groups and became more curious about physics. We discuss learning and collaboration differences, as well as the benefits and drawbacks of implementing augmented reality for unstructured learning activities.
Serpentine: A Self-Powered Reversibly Deformable Cord Sensor for Human Input
We introduce Serpentine, a self-powered sensor that is a reversibly deformable cord capable of sensing a variety of human input. The material properties and structural design of Serpentine allow it to be flexible, twistable, stretchable and squeezable, enabling a broad variety of expressive input modalities. The sensor operates using the principle of Triboelectric Nanogenerators (TENG), which allows it to sense mechanical deformation without an external power source. The affordances of the cord include six interactions—Pluck, Twirl, Stretch, Pinch, Wiggle and Twist. Serpentine demonstrates the ability to simultaneously recognize these inputs through a single physical interface. A 12-participant user study illustrates 95.7% accuracy for a user-dependent recognition model using a realtime system and 92.17% for user-independent offline detection. We conclude by demonstrating how Serpentine can be employed in everyday ubiquitous computing applications.
VRsneaky: Increasing Presence in VR Through Gait-Aware Auditory Feedback
While Virtual Reality continues to increase in fidelity, it remains an open question how to effectively reflect the user’s movements and provide congruent feedback in virtual environments. We present VRsneaky, a system for producing auditory movement feedback, which helps participants orient themselves in a virtual environment by providing footstep sounds. The system reacts to the user’s specific gait features and adjusts the audio accordingly. In a user study with 28 participants, we found that VRsneaky increases users’ sense of presence as well as awareness of their own posture and gait. Additionally, we find that increasing auditory realism significantly influences certain characteristics of participants’ gait. Our work shows that gait-aware audio feedback is a means to increase presence in virtual environments. We discuss opportunities and design requirements for future scenarios where users walk through immersive virtual worlds.
Comparing the Effects of Paper and Digital Checklists on Team Performance in Time-Critical Work
This mixed-methods study examines the effects of a tablet-based checklist system on team performance during the dynamic and safety-critical process of trauma resuscitation. We compared team performance from 47 resuscitations that used a paper checklist to that from 47 cases with a digital checklist to determine whether digitizing the checklist improved task completion rates and how quickly tasks were initiated for the 18 most critical assessment and treatment tasks. We also examined whether checklist compliance increased with the digital design. We found that using the digital checklist led to more frequent completions of the initial airway assessment task but fewer completions of the ear and lower extremities exams. We did not observe any significant differences in time to task performance, but we found increased compliance with the checklist. Although improvements in team performance with the digital checklist were minor, our findings are important because they showed no adverse effects from the introduction of the digital checklist. We conclude by discussing the takeaways and implications of these results for effective digitization of medical work.
Enabling Identification and Behavioral Sensing in Homes using Radio Reflections
Understanding users’ behavior at home is central to behavioral research. For example, social researchers are interested in studying domestic abuse, and healthcare professionals are interested in caregiver-patient interaction. Today, such studies rely on diaries and questionnaires, which are subjective, error-prone, and hard to sustain in longitudinal studies. We introduce Marko, a system that automatically collects behavior-related data without asking people to write diaries or wear sensors. Marko transmits a low-power wireless signal and analyzes its reflections from the environment. It maps those reflections to how users interact with the environment (e.g., access to a medication cabinet) and with each other (e.g., watching TV together). It provides novel algorithms for identifying who-does-what and for bootstrapping the system in new homes without asking users for new annotations. We evaluate Marko with a one-month deployment in six homes, and demonstrate its value for studying couple relationships and caregiver-patient interaction.
I (Don’t) See What You Typed There! Shoulder-surfing Resistant Password Entry on Gamepads
Using gamepad-driven devices like games consoles is an activity frequently shared with others. Thus, shoulder-surfing is a serious threat. To address this threat, we present the first investigation of shoulder-surfing resistant text password entry on gamepads by (1) identifying the requirements of this context; (2) assessing whether shoulder-surfing resistant authentication schemes proposed in non-gamepad contexts can be viably adapted to meet these requirements; (3) proposing “Colorwheels”, a novel shoulder-surfing resistant authentication scheme specifically geared towards this context; (4) using two different methodologies proposed in the literature for evaluating shoulder-surfing resistance to compare “Colorwheels”, on-screen keyboards (the de facto standard in this context), and an existing shoulder-surfing resistant scheme which we identified during our assessment and adapted for the gamepad context; (5) evaluating all three schemes regarding their usability. Having applied different methodologies to measure shoulder-surfing resistance, we discuss their strengths and pitfalls and derive recommendations for future research.
NVGaze: An Anatomically-Informed Dataset for Low-Latency, Near-Eye Gaze Estimation
Quality, diversity, and size of training data are critical factors for learning-based gaze estimators. We create two datasets satisfying these criteria for near-eye gaze estimation under infrared illumination: a synthetic dataset using anatomically-informed eye and face models with variations in face shape, gaze direction, pupil and iris, skin tone, and external conditions (2M images at 1280×960), and a real-world dataset collected with 35 subjects (2.5M images at 640×480). Using these datasets we train neural networks performing with sub-millisecond latency. Our gaze estimation network achieves 2.06(±0.44)° of accuracy across a wide 30°×40° field of view on real subjects excluded from training and 0.5° best-case accuracy (across the same FOV) when explicitly trained for one real subject. We also train a pupil localization network which achieves higher robustness than previous methods.
Transforming Game Difficulty Curves using Function Composition
Player engagement within a game is often influenced by its difficulty curve: the pace at which in-game challenges become harder. Thus, finding an optimal difficulty curve is important. In this paper, we present a flexible and formal approach to transforming game difficulty curves by leveraging function composition. This allows us to describe changes to difficulty curves, such as making them “smoother”, in a more precise way. In an experiment with 400 players, we used function composition to modify the existing difficulty curve of the puzzle game Paradox to generate new curves. We found that transforming difficulty curves in this way impacted player engagement, including the number of levels completed and the estimated skill needed to complete those levels, as well as perceived competence. Further, we found some transformed curves dominated others with respect to engagement, indicating that different design goals can be traded-off by considering a subset of curves.
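The abstract does not give the paper’s exact formalism; purely as an illustrative sketch of the general idea, the snippet below composes a hypothetical linear difficulty curve with a hypothetical “ease-in” transform to produce a smoother curve. The curves and transform shown are assumptions for illustration, not the curves used in Paradox.

    # Illustrative sketch (not the paper's implementation): a difficulty curve
    # maps a normalized level index in [0, 1] to a difficulty in [0, 1];
    # composing it with a transform g yields a new curve g(f(x)).

    def compose(g, f):
        """Return the composed curve g(f(x))."""
        return lambda x: g(f(x))

    def base_curve(x):   # hypothetical linear difficulty ramp
        return x

    def smooth(y):       # hypothetical smoothing transform (ease-in)
        return y * y

    smoother_curve = compose(smooth, base_curve)

    if __name__ == "__main__":
        for i in range(11):
            x = i / 10
            print(f"level {x:.1f}: base={base_curve(x):.2f} smoothed={smoother_curve(x):.2f}")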
How Users Interpret Bugs in Trigger-Action Programming
Trigger-action programming (TAP) is a programming model enabling users to connect services and devices by writing if-then rules. As such systems are deployed in increasingly complex scenarios, users must be able to identify programming bugs and reason about how to fix them. We first systematize the temporal paradigms through which TAP systems could express rules. We then identify ten classes of TAP programming bugs related to control flow, timing, and inaccurate user expectations. We report on a 153-participant online study where participants were assigned to a temporal paradigm and shown a series of pre-written TAP rules. Half of the rules exhibited bugs from our ten bug classes. For most of the bug classes, we found that the presence of a bug made it harder for participants to correctly predict the behavior of the rule. Our findings suggest directions for better supporting end-user programmers.
Evaluating Expert Curation in a Baby Milestone Tracking App
Early childhood developmental screening is critical for timely detection and intervention. babyTRACKS (formerly Baby CROINC: CROwd INtelligence Curation) is a free, live, interactive developmental tracking mobile app with over 3,000 children’s diaries. Parents write or select short milestone texts, like “began taking first steps,” to record their babies’ developmental achievements, and receive crowd-based percentiles to evaluate development and catch potential delays.
Currently, an expert-based Curated Crowd Intelligence (CCI) process manually groups incoming novel parent-authored milestone texts according to their similarity to existing milestones in the database (for example, starting to walk), or determines that a milestone represents a new developmental concept not seen before in another child’s diary. CCI cannot scale well, however, and babyTRACKS is mature enough, with a rich enough database of existing milestone texts, to now consider machine learning tools to replace or assist the human curators. Three new studies explore (1) the usefulness of automation, by analyzing the human cost of CCI and how the work is currently broken down; (2) the validity of automation, by testing the inter-rater reliability of curators; and (3) the value of automation, by appraising the “real world” clinical value of milestones when assessing child development.
We conclude that automation can indeed be appropriate and helpful for a large percentage, though not all, of CCI work. We further establish realistic upper bounds for algorithm performance; confirm that the babyTRACKS milestones dataset is valid for training and testing purposes; and verify that it represents clinically meaningful developmental information.
Pose-Guided Level Design
Players’ physical experience is a critical factor to consider in designing motion-based games played through motion-sensing gaming consoles or virtual reality devices. However, adjusting the physical challenge involved in a motion-based game is difficult and tedious, as it is typically done manually by level designers on a trial-and-error basis. In this paper, we propose a novel approach for automatically synthesizing levels for motion-based games that achieve desired physical movement goals. By formulating the level design problem as a trans-dimensional optimization problem solved with a reversible-jump Markov chain Monte Carlo technique, we show that our approach can automatically synthesize a variety of game levels, each carrying the desired physical movement properties. To demonstrate the generality of our approach, we synthesize game levels for two different types of motion-based games and conduct a user study to validate the effectiveness of our approach.
Improving Early Navigation in Time-Lapse Video with Spread-Frame Loading
Time-lapse videos are often navigated by scrubbing with a slider. When networks are slow or images are large, however, even thumbnail versions load so slowly that scrubbing is limited to the start of the video. We developed a frame-loading technique called spread-loading that enables scrubbing regardless of delivery rate. Spread-loading orders frame delivery to maximize coverage of the entire sequence; this provides a temporal overview of the entire video that can be fully navigated at any time during delivery. The overview initially has a coarse temporal resolution, becoming finer-grained with each new frame. We compared spread-loading with traditional linear loading in a study where participants were asked to find specific episodes in a long time-lapse sequence, using three views with increasing levels of detail. Results show that participants found target episodes significantly and substantially faster with spread-loading, regardless of whether they could click to change the load point. Users rated spread-loading as requiring less effort, and strongly preferred the new technique.
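The abstract does not specify the exact frame ordering; one simple way to maximize temporal coverage, shown here only as an illustrative sketch under that assumption, is a breadth-first bisection order that delivers the endpoints first and then repeatedly fills in midpoints, so the whole sequence is covered coarsely and refined as more frames arrive.

    # Illustrative sketch only: breadth-first bisection order over frame
    # indices. The paper's actual spread-loading scheme may differ.
    from collections import deque

    def spread_order(n_frames):
        """Yield frame indices in a coverage-maximizing bisection order."""
        if n_frames == 0:
            return
        yield 0
        if n_frames == 1:
            return
        yield n_frames - 1
        intervals = deque([(0, n_frames - 1)])
        seen = {0, n_frames - 1}
        while intervals:
            lo, hi = intervals.popleft()
            mid = (lo + hi) // 2
            if mid not in seen:
                seen.add(mid)
                yield mid
            if mid - lo > 1:
                intervals.append((lo, mid))
            if hi - mid > 1:
                intervals.append((mid, hi))

    if __name__ == "__main__":
        print(list(spread_order(9)))  # [0, 8, 4, 2, 6, 1, 3, 5, 7]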
Evaluating Pan and Zoom Timelines and Sliders
Pan and zoom timelines and sliders help us navigate large time series data. However, designing efficient interactions can be difficult. We study pan and zoom methods via crowd-sourced experiments on mobile and computer devices, asking which designs and interactions provide faster target acquisition. We find that visual context should be limited for low-distance navigation, but added for far-distance navigation; that timelines should be oriented along the longer axis, especially on mobile; and that, as compared to default techniques, double click, hold, and rub zoom appear to scale worse with task difficulty, whereas brush and especially ortho zoom seem to scale better. Software and data used in this research are available as open source.
Investigating Implicit Gender Bias and Embodiment of White Males in Virtual Reality with Full Body Visuomotor Synchrony
Previous research has shown that when White people embody a Black avatar in virtual reality (VR) with full body visuomotor synchrony, their implicit racial bias can be reduced. In this paper, we put men in female and male avatars in VR with full visuomotor synchrony using wearable trackers and investigated implicit gender bias and embodiment. We found that participants embodied in female avatars displayed significantly higher levels of implicit gender bias than those embodied in male avatars. The implicit gender bias actually increased after exposure to female embodiment, in contrast to male embodiment. Results also showed that participants felt embodied in their avatars regardless of gender matching, demonstrating that wearable trackers can be used for a realistic sense of avatar embodiment in VR. We discuss the future implications of these findings for both VR scenarios and embodiment technologies.
Haptipedia: Accelerating Haptic Device Discovery to Support Interaction & Engineering Design
Creating haptic experiences often entails inventing, modifying, or selecting specialized hardware. However, interaction designers are rarely engineers, and 30 years of haptic inventions are buried in a fragmented literature that describes devices mechanically rather than by potential purpose. We conceived of Haptipedia to unlock this trove of examples: Haptipedia presents a device corpus for exploration through metadata that matter to both device and interaction designers. It is a taxonomy of device attributes that go beyond physical description to capture potential utility, applied to a growing database of 105 grounded force-feedback devices, and accessed through a public visualization that links utility to morphology. Haptipedia’s design was driven by both systematic review of the haptic device literature and rich input from diverse haptic designers. We describe Haptipedia’s reception (including hopes it will redefine device reporting standards) and our plans for its sustainability through community participation.
Explaining Decision-Making Algorithms through UI: Strategies to Help Non-Expert Stakeholders
Increasingly, algorithms are used to make important decisions across society. However, these algorithms are usually poorly understood, which can reduce transparency and evoke negative emotions. In this research, we seek to learn design principles for explanation interfaces that communicate how decision-making algorithms work, in order to help organizations explain their decisions to stakeholders, or to support users’ “right to explanation”. We conducted an online experiment where 199 participants used different explanation interfaces to understand an algorithm for making university admissions decisions. We measured users’ objective and self-reported understanding of the algorithm. Our results show that both interactive explanations and “white-box” explanations (i.e. that show the inner workings of an algorithm) can improve users’ comprehension. Although the interactive approach is more effective at improving comprehension, it comes with a trade-off of taking more time. Surprisingly, we also find that users’ trust in algorithmic decisions is not affected by the explanation interface or their level of comprehension of the algorithm.
Geometrically Compensating Effect of End-to-End Latency in Moving-Target Selection Games
Effects of unintended latency on gamer performance have been widely reported. End-to-end latency can be corrected by post-input manipulation of activation times, but this gives the player an unnatural gameplay experience. For moving-target selection games such as Flappy Bird, this paper presents a predictive model of latency's effect on error rate and a novel method that compensates for latency by adjusting the game's geometry design, e.g., by modifying the size of the selection region. Without manipulation of the game clock, this can keep the user's error rate constant even if the end-to-end latency of the system changes. The approach extends the current model of moving-target selection with two additional assumptions about the effects of latency: (1) latency reduces players' cue-viewing time and (2) latency pushes the mean of the input distribution backward. The proposed model and method have been validated through precise experiments.
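As a rough illustration of geometric compensation (not the paper's model), the sketch below assumes taps follow a normal distribution whose mean is shifted by latency and widens the selection window until the original hit rate is restored; the shift and spread values are hypothetical.

    # Illustrative sketch only: widen the selection window so the miss
    # probability stays constant under a latency-induced mean shift.
    from math import erf, sqrt

    def hit_probability(window_half_width, mean_shift, sigma):
        """P(tap lands inside a window centered on the target)."""
        def cdf(x):
            return 0.5 * (1.0 + erf(x / (sigma * sqrt(2.0))))
        return cdf(window_half_width - mean_shift) - cdf(-window_half_width - mean_shift)

    def compensate(target_hit_rate, mean_shift, sigma):
        """Smallest window half-width that restores the target hit rate."""
        w = 0.0
        while hit_probability(w, mean_shift, sigma) < target_hit_rate:
            w += 0.01
        return w

    if __name__ == "__main__":
        base = compensate(0.95, mean_shift=0.0, sigma=1.0)
        lagged = compensate(0.95, mean_shift=0.5, sigma=1.0)  # hypothetical latency shift
        print(f"window half-width: {base:.2f} -> {lagged:.2f}")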
Who Gets to Future?: Race, Representation, and Design Methods in Africatown
This paper draws on a collaborative project called the Africatown Activation to examine the role design practices play in contributing to (or conspiring against) the flourishing of the Black community in Seattle, Washington. Specifically, we describe the efforts of a community group called Africatown to design and build an installation that counters decades of disinvestment and ongoing displacement in the historically Black Central Area neighborhood. Our analysis suggests that despite efforts to include community, conventional design practices may perpetuate forms of institutional racism: enabling activities of community engagement that may further legitimate racialized forms of displacement. We discuss how focusing on amplifying the legacies of imagination already at work may help us move beyond a simple reading of design as the solution to systemic forms of oppression.
Cross-Device Taxonomy: Survey, Opportunities and Challenges of Interactions Spanning Across Multiple Devices
Designing interfaces or applications that move beyond the bounds of a single device screen enables new ways to engage with digital content. Research addressing the opportunities and challenges of interactions with multiple devices in concert is of continued focus in HCI research. To inform the future research agenda of this field, we contribute an analysis and taxonomy of a corpus of 510 papers in the cross-device computing domain. For both new and experienced researchers in the field we provide: an overview, historic trends and unified terminology of cross-device research; discussion of major and under-explored application areas; mapping of enabling technologies; synthesis of key interaction techniques spanning across multiple devices; and review of common evaluation strategies. We close with a discussion of open issues. Our taxonomy aims to create a unified terminology and common understanding for researchers in order to facilitate and stimulate future cross-device research.
The Gendered Geography of Contributions to OpenStreetMap: Complexities in Self-Focus Bias
Millions of people worldwide contribute content to peer production repositories that serve human information needs and provide vital world knowledge to prominent artificial intelligence systems. Yet, extreme gender participation disparities exist in which men significantly outnumber women. A central concern has been that, due to self-focus bias, these participation disparities can lead to corresponding gender content disparities, in which content of interest to men is better represented than content of interest to women. This paper investigates the relationship between participation and content disparities in OpenStreetMap. We replicate findings that women are dramatically under-represented as OSM contributors, and observe that men and women contribute different types of content and do so about different places. However, the character of these differences confounds simple narratives about self-focus bias: we find that, on a proportional basis, men produced a higher proportion of contributions in feminized spaces than women did, while women produced a higher proportion of contributions in masculinized spaces than men did.
Shaping Pro-Social Interaction in VR: An Emerging Design Framework
Commercial social VR applications represent a diverse and evolving ecology with competing models of what it means to be social in VR. Drawing from expert interviews, this paper examines how the creators of different social VR applications think about how their platforms frame, support, shape, or constrain social interaction. The study covers a range of applications including Rec Room, High Fidelity, VRChat, Mozilla Hubs, Altspace VR, AnyLand, and Facebook Spaces. We contextualize the design choices underlying these applications, with particular attention paid to the ways that industry experts perceive, and seek to shape, the relationship between user experiences and design choices. We underscore considerations related to: (1) aesthetics of place, (2) embodied affordances, (3) social mechanics, and (4) tactics for shaping social norms and mitigating harassment. Drawing on this analysis, we discuss the stakes of these choices, suggest future research directions, and propose an emerging design framework for shaping pro-social behavior in VR.
Behind the Voices: The Practice and Challenges of Esports Casters
Casters commentate on live, streamed video games for large online audiences. Drawing from 20 semi-structured interviews with amateur casters of the video games Dota 2 and Rocket League and over 20 hours of participant observation, we describe the distinctive practices of two types of casters: play-by-play and color commentary. Play-by-play casters are adept at improvising a rich narrative of hype on top of live games, whereas color commentators methodically prepare to fill in the gaps of live play with informative analysis. Casters often start out alone, relying upon reflective practice to hone their craft. Through examining the challenges faced by amateur casters, we identified three design opportunities for game designers to support casters and would-be casters as first-class users. Such designs would address the main challenges amateur casters face: lack of social support for casting, camerawork, and data availability.
Kyub: A 3D Editor for Modeling Sturdy Laser-Cut Objects
We present an interactive editing system for laser cutting called kyub. Kyub allows users to create models efficiently in 3D, which it then unfolds into the 2D plates laser cutters expect. Unlike earlier systems, such as FlatFitFab, kyub affords construction based on closed box structures, which allows users to turn very thin material, such as 4mm plywood, into objects capable of withstanding large forces, such as chairs users can actually sit on. To afford such sturdy construction, every kyub project begins with a simple finger-joint “boxel”, a structure we found to be capable of withstanding over 500kg of load. Users then extend their model by attaching additional boxels. Boxels merge automatically, resulting in larger, yet equally strong structures. While the concept of stacking boxels allows kyub to offer the strong affordance and ease of use of a voxel-based editor, boxels are not confined to a grid and readily combine with kyub’s various geometry deformation tools. In our technical evaluation, objects built with kyub withstood hundreds of kilograms of load. In our user study, non-engineers rated the learnability of kyub 6.1/7.
FiberWire: Embedding Electronic Function into 3D Printed Mechanically Strong, Lightweight Carbon Fiber Composite Objects
3D printing offers significant potential for creating highly customized interactive and functional objects. At present, however, the ability to manufacture functional objects is limited by the available materials (e.g., various polymers) and their process properties. For instance, many functional objects need stronger materials, a need that metal printers may satisfy. However, to create wholly interactive devices, we need both conductors and insulators to create wiring, and electronic components to complete circuits. Unfortunately, the single-material nature of metal printing, and its inherently high temperatures, preclude this. Thus, in 3D printed devices, we have had a choice of strong materials or embedded interactivity, but not both. In this paper, we introduce a set of techniques we call FiberWire, which leverages a new commercially available capability to 3D print carbon fiber composite objects. These objects are lightweight and mechanically strong, and our techniques demonstrate a means to embed circuitry for interactive devices within them. With FiberWire, we describe a fabrication pipeline that takes advantage of laser etching and fiber printing between layers of carbon-fiber composite to form low-resistance conductors, thereby enabling the fabrication of electronics directly embedded into mechanically strong objects. Using this fabrication pipeline, we show a range of sensor designs, characterize their performance on these new materials, and present three fully printed example objects that are both interactive and mechanically strong: a bicycle handlebar with interactive controls, a swing- and impact-sensing golf club, and an interactive game controller (Figure 1).
Tool Extension in Human-Computer Interaction
Tool use extends people’s representations of the immediately actionable space around them. Physical tools thereby become integrated in people’s body schemas. We introduce a measure for tool extension in HCI by using a visual-tactile interference paradigm. In this paradigm, an index of tool extension is given by response time differences between crossmodally congruent and incongruent stimuli; tactile on the hand and visual on the tool. We use this measure to examine if and how findings on tool extension apply to interaction with computer-based tools. Our first experiment shows that touchpad and mouse both provide tool extension over a baseline condition without a tool. A second experiment shows a higher degree of tool extension for a realistic avatar hand compared to an abstract pointer for interaction in virtual reality. In sum, our measure can detect tool extension with computer-based tools and differentiate interfaces by their degree of extension.
Participatory Design of VR Scenarios for Exposure Therapy
Virtual reality (VR) applications for exposure therapy predominantly use computer-generated imagery to create controlled environments in which users can be exposed to their fears. Creating 3D animations, however, is demanding and time-consuming. This paper presents a participatory approach for prototyping VR scenarios that are enabled by 360° video and grounded in lived experiences. We organized a participatory workshop with adolescents to prototype such scenarios, consisting of iterative phases of ideation, storyboarding, live-action plays recorded by a 360° camera, and group evaluation. Through an analysis of the participants’ interactions, we outline how they worked to design prototypes that depict situations relevant to those with a fear of public speaking. Our analysis also explores how participants used their experiences and reflections as resources for design. Six clinical psychologists evaluated the prototypes from the workshop and concluded they were viable therapeutic tools, emphasizing the immersive, realistic experience they presented. We argue that our approach makes the design of VR scenarios more accessible.
Steering Performance with Error-accepting Delays
In steering law tasks, deviating from the path is immediately considered an error. However, when navigating a hierarchical menu, a representative application of the law, a deviation within a short duration is sometimes permitted. We tested the validity of the steering law model with various durations of such error-accepting delays and found that it fit well for each delay condition (R² > 0.96) but poorly when the delay values were not separated (R² = 0.58). Because the average movement speed increased linearly as the delay increased, we refined the model by taking the delay into account, and the fit improved significantly (R² = 0.97). Our model will help GUI designers estimate the average operation time on the basis of the menu item length, width, and error-accepting delay.
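For context, the classical steering law that the abstract’s model refines is sketched below; the delay-dependent refinement reported in the paper is not reproduced here, and a and b are empirically fitted constants, not values from this paper.

    % Steering law for a straight tunnel of length A and width W:
    MT = a + b \, \frac{A}{W}
    % General form for an arbitrary path C, parameterized by arc length s:
    MT = a + b \int_{C} \frac{ds}{W(s)}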
Card Mapper: Enabling Data-Driven Reflections on Ideation Cards
We explore how usage data captured from ideation cards can enable reflection on design. We deployed a deck of ideation cards on a Masters level module over two years, developing the means to capture the students’ designs into a digital repository. We created two visualisations to reveal the relative co-occurrences of the cards as concept space and the relative proximity of designs (through cards used in common) as design space. We used these to elicit reflections from the perspectives of students, teachers and card designers. Our findings inspire ideas for extending the data-driven use of ideation cards throughout the design process; informing the redesign of cards, the rules for using them and their live connection to supporting materials and enabling stakeholders to reflect and recognise challenges and opportunities. We also identified the need, and potential ways, to capture a richer design rationale, including annotations, discarded cards and varying card interpretations.
REsCUE: A framework for REal-time feedback on behavioral CUEs using multimodal anomaly detection
Executive coaching has been drawing increasing attention as a way of developing corporate managers. While conversing with managers, coach practitioners are also required to understand the internal states of coachees through objective observation. In this paper, we present REsCUE, an automated system that aids coach practitioners in detecting unconscious behaviors of their clients. Using an unsupervised anomaly detection algorithm applied to multimodal behavior data such as the subject’s posture and gaze, REsCUE surfaces behavioral cues to coaches via intuitive and interpretable feedback in real time. Our evaluation with actual coaching sessions confirms that REsCUE provides informative cues for understanding the internal states of coachees. Since REsCUE is based on an unsupervised method and does not assume any prior knowledge, further applications beyond executive coaching are conceivable using our framework.
Visualizing Uncertainty and Alternatives in Event Sequence Predictions
Data analysts apply machine learning and statistical methods to timestamped event sequences to tackle various problems but face unique challenges when interpreting the results. Especially in event sequence prediction, it is difficult to convey uncertainty and possible alternative paths or outcomes. In this work, informed by interviews with five machine learning practitioners, we iteratively designed a novel visualization for exploring event sequence predictions of multiple records where users are able to review the most probable predictions and possible alternatives alongside uncertainty information. Through a controlled study with 18 participants, we found that users are more confident in making decisions when alternative predictions are displayed and they consider the alternatives more when deciding between two options with similar top predictions.
Towards Understanding the Design of Positive Pre-sleep Through a Neurofeedback Artistic Experience
Poor sleep has been acknowledged as an increasingly prevalent global health concern; however, how to design technology that promotes sleep is relatively underexplored. We propose that neurofeedback technology may facilitate restfulness and sleep onset, and we explore this through the creation and study of “Inter-Dream”, a novel multisensory interactive artistic experience driven by neurofeedback. Twelve participants individually rested while augmented by Inter-Dream. Results demonstrated statistically significant decreases in pre-sleep cognitive arousal (p = .01), negative emotion (p = .008), and negative affect (p = .004). EEG readings were also indicative of restorative restfulness and cognitive stillness, while interview responses described experiences of mindfulness and playful self-exploration. Taken together, our work highlights neurofeedback as a potential pathway for future research on the promotion of sleep, while also suggesting strategies for designing towards this within the context of pre-sleep.
MessageOnTap: A Suggestive Interface to Facilitate Messaging-related Tasks
Text messages are sometimes prompts that lead to information-related tasks, e.g. checking one’s schedule, creating reminders, or sharing content. We introduce MessageOnTap, a suggestive interface for smartphones that uses the text in a conversation to suggest task shortcuts that can streamline likely next actions. When activated, MessageOnTap uses word embeddings to rank relevant external apps, and parameterizes associated task shortcuts using key phrases mentioned in the conversation, such as times, persons, or events. MessageOnTap also tailors the auto-complete dictionary based on text in the conversation, to streamline any text input. We first conducted a month-long study of messaging behaviors (N=22) that informed our design. We then conducted a lab study to evaluate the effectiveness of MessageOnTap’s suggestive interface, and found that participants can complete tasks 3.1x faster with MessageOnTap than with their typical task flow.
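The abstract does not describe MessageOnTap’s ranking step in detail; the snippet below is only a minimal sketch of the general idea of ranking candidate apps by cosine similarity between embeddings, with toy vectors standing in for real word embeddings and hypothetical app names.

    # Illustrative sketch only (not MessageOnTap's implementation): rank
    # candidate apps by cosine similarity between an embedding of the
    # message text and embeddings of app descriptions.
    import math

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    def rank_apps(message_vec, app_vecs):
        """Return app names sorted by similarity to the message embedding."""
        return sorted(app_vecs,
                      key=lambda name: cosine(message_vec, app_vecs[name]),
                      reverse=True)

    if __name__ == "__main__":
        message = [0.9, 0.1, 0.2]                 # e.g. "dinner at 7pm?"
        apps = {"Calendar": [0.8, 0.2, 0.1],      # hypothetical embeddings
                "Photos":   [0.1, 0.9, 0.3],
                "Maps":     [0.3, 0.2, 0.8]}
        print(rank_apps(message, apps))           # ['Calendar', 'Maps', 'Photos']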
HeatCraft: Designing Playful Experiences with Ingestible Sensors via Localized Thermal Stimuli
Ingestible sensors are pill-like sensors that people swallow mainly for medical purposes. We propose that ingestible sensors also offer unique opportunities to facilitate intriguing bodily experiences in a playful manner. To explore this, we present “HeatCraft”, a two-player system that translates the user’s body temperature measured by an ingestible sensor to localized thermal stimuli delivered through a waist belt equipped with heating pads. We conducted a study with 16 participants. The study revealed three design themes (Integration of body and technology, Integration of internal body and outside world, and Integration of play and life) along with some open challenges. In summary, this work contributes knowledge to the future design of playful experiences with ingestible sensors.
TalkTraces: Real-Time Capture and Visualization of Verbal Content in Meetings
Group Support Systems provide ways to review and edit shared content during meetings, but typically require participants to explicitly generate the content. Recent advances in speech-to-text conversion and language processing now make it possible to automatically record and review spoken information. We present the iterative design and evaluation of TalkTraces, a real-time visualization that helps teams identify themes in their discussions and obtain a sense of agenda items covered. We use topic modeling to identify themes within the discussions and word embeddings to compute the discussion “relatedness” to items in the meeting agenda. We evaluate TalkTraces iteratively: we first conduct a comparative between-groups study between two teams using TalkTraces and two teams using traditional notes, over four sessions. We translate the findings into changes in the interface, further evaluated by one team over four sessions. Based on our findings, we discuss design implications for real-time displays of discussion content.
DreamGigs: Designing a Tool to Empower Low-Resource Job Seekers
Technology allows us to scale the number of jobs we search for and apply to, train for work, and earn money online. However, these technologies do not benefit all job seekers equally and must be designed to better support the needs of underserved job seekers. Research suggests that underserved job seekers prefer employment technologies that can support them in articulating their skills and experiences and in identifying pathways to achieve their career goals. Therefore, we present the design, implementation, and evaluation of DreamGigs, a tool that identifies the skills job seekers need to reach their dream jobs and presents volunteer and employment opportunities for them to acquire those skills. Our evaluation results show that DreamGigs aids in the process of personal empowerment. We contribute design implications for mitigating aspects of powerlessness that low-resource job seekers experience and discuss ways to promote action-taking in these job seekers.
Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models
Without good models and the right tools to interpret them, data scientists risk making decisions based on hidden biases, spurious correlations, and false generalizations. This has led to a rallying cry for model interpretability. Yet the concept of interpretability remains nebulous, such that researchers and tool designers lack actionable guidelines for how to incorporate interpretability into models and accompanying tools. Through an iterative design process with expert machine learning researchers and practitioners, we designed a visual analytics system, Gamut, to explore how interactive interfaces could better support model interpretation. Using Gamut as a probe, we investigated why and how professional data scientists interpret models, and how interface affordances can support data scientists in answering questions about model interpretability. Our investigation showed that interpretability is not a monolithic concept: data scientists have different reasons to interpret models and tailor explanations for specific audiences, often balancing competing concerns of simplicity and completeness. Participants also asked to use Gamut in their work, highlighting its potential to help data scientists understand their own data.
Guerilla Warfare and the Use of New (and Some Old) Technology: Lessons from FARC’s Armed Struggle in Colombia
Studying armed political struggles from a CSCW perspective can throw the complex interactions between culture, technology, materiality and political conflict into sharp relief. Such studies highlight interrelations that otherwise remain under-remarked upon, despite their severe consequences. The present paper provides an account of the armed struggle of one of the Colombian guerrilla groups, FARC-EP, with the Colombian army. We document how radio-based communication became a crucial but ambiguous infrastructure of war. The sudden introduction of localization technologies by the Colombian army presented a lethal threat to the guerrilla group. Our interviewees report a severe learning process to diminish this new risk, relying on a combination of informed beliefs and significant technical understanding. We end with a discussion of the role of HCI in considerations of ICT use in armed conflicts and introduce the concept of counter-appropriation as a process of adapting one’s practices to others’ appropriation of technology in conflict.
Abstract Machines: Overlaying Virtual Worlds on Physical Rides
Overlaying virtual worlds onto existing physical rides and altering the sensations of motion can deliver new experiences of thrill, but designing how motion is mapped between physical ride and virtual world is challenging. In this paper, we present the notion of an abstract machine, a new form of intermediate design knowledge that communicates motion mappings at the level of metaphor, mechanism and implementation. Following a performance-led, in-the-wild approach we report lessons from creating and touring VR Playground, a ride that overlays four distinct abstract machines and virtual worlds on a playground swing. We compare the artist’s rationale with riders’ reported experiences and analysis of their physical behaviours to reveal the distinct thrills of each abstract machine. Finally, we discuss how to make and use abstract machines in terms of heuristics for designing motion mappings, principles for virtual world design and communicating experiences to riders.
What’s Missing: The Role of Instructional Design in Children’s Games-Based Learning
Learning games that address targeted curriculum areas are widely used in schools. Within games, productive learning episodes can result from breakdowns when followed by a breakthrough, yet their role in children’s learning has not been investigated. This paper examines the role of game and instructional design during and after breakdowns. We observed 26 young children playing several popular learning games and conducted a moment-by-moment analysis of breakdown episodes. Our findings show children achieve productive breakthroughs independently less than half of the time. In particular, breakdowns caused by game actions are difficult for children to overcome independently and prevent engagement with the domain skills. Importantly, we identify specific instructional game components and their role in fostering strategies that result in successful breakthroughs. We conclude with intrinsic and extrinsic instructional design implications for both game designers and primary teachers to better enable children’s games-based learning.
Impacts of Telemanipulation in Robotic Assisted Surgery
Robotic-assisted Minimally Invasive Surgery (MIS) is increasingly adopted as it overcomes the shortcomings of classic MIS for surgeons while keeping the benefits of small incisions for patients. However, introducing new technology often affects the work of skilled practitioners. Our goals are to investigate the impacts of telemanipulated surgical robots on the work practices of surgical teams and to understand their causes. We conducted a field study observing 21 surgeries, conducting 12 interviews and performing 3 data validation sessions with surgeons. Using Thematic Analysis, we find that physically separating surgeons from their teams makes them more autonomous, shifts their use of perceptual senses, and turns the surgeon’s assistant into the robot’s assistant. We open design opportunities for the HCI field by questioning the telemanipulated approach and discussing alternatives that keep surgeons on the surgical field.
SmartManikin: Virtual Humans with Agency for Design Tools
When designing for comfort and usability in products, designers need to evaluate aspects ranging from anthropometrics to use scenarios. Therefore, virtual, poseable mannequins are employed as a reference in early-stage tools and for evaluation in the later stages. However, tools for intuitively interacting with virtual humans are lacking. In this paper, we introduce SmartManikin, a mannequin with agency that responds to high-level commands and to real-time design changes. We first captured human poses with respect to desk configurations, identified key features of the pose, and trained regression functions to estimate the optimal features for a given desk setup. The SmartManikin’s pose is generated from the predicted features as well as by using forward and inverse kinematics. We present our design, implementation, and an evaluation with expert designers. The results revealed that SmartManikin enhances the design experience by providing feedback concerning comfort and health in real time.
TrackCap: Enabling Smartphones for 3D Interaction on Mobile Head-Mounted Displays
The latest generation of consumer-market head-mounted displays (HMDs) includes self-contained inside-out tracking of head motions, which makes them suitable for mobile applications. However, 3D tracking of input devices is either not included at all or requires keeping the device in sight so that it can be observed from a sensor mounted on the HMD. Both approaches make natural interactions cumbersome in mobile applications. TrackCap, a novel approach for 3D tracking of input devices, turns a conventional smartphone into a precise 6DOF input device for an HMD user. The device can be conveniently operated both inside and outside the HMD’s field of view, while it provides additional 2D input and output capabilities.
Continuous Evaluation of Video Lectures from Real-Time Difficulty Self-Report
With the increased reach and impact of video lectures, it is crucial to understand how they are experienced. Previous studies typically present questionnaires at the end of the lecture and thus fail to capture students’ experience with enough granularity. In this paper we propose recording lecture difficulty in real time with a physical slider, enabling continuous and fine-grained analysis of the learning experience. We evaluated our approach in a study with 100 participants viewing two variants of two short lectures. We demonstrate that our approach helps paint a more complete picture of the learning experience. Our analysis has design implications for instructors, providing them with a method that helps them compare their expectations with students’ beliefs about the lectures and better understand the specific effects of different instructional design decisions.
Group Interactions in Location-Based Gaming: A Case Study of Raiding in Pokémon GO
Raiding is a format in digital gaming that requires groups of people to collaborate and/or compete for a common goal. In 2017, the raiding format was introduced in the location-based mobile game Pokémon GO, which offers a mixed reality experience to friends and strangers coordinating for in-person raids. To understand this technology-mediated social phenomenon, we conducted over a year of participant observations, surveys with 510 players, and interviews with 25 players who raid in Pokémon GO. Using the analytical lens of Arrow, McGrath, and Berdahl’s theory of small groups as complex systems, we identify global, local, and contextual dynamics in location-based raiding that support and challenge ad-hoc group formation in real life. Based on this empirical and theoretical understanding, we discuss implications to design for transparency, social affordances, and bridging gaps between global and contextual dynamics for increased positive and inclusive community interactions.
Interstices: Sustained Spatial Relationships between Hands and Surfaces Reveal Anticipated Action
Our observations of landscape architecture students revealed a new phenomenon: interstices. Their bimanual interactions with a pen and touch surface involved various sustained hand gestures interleaved between their regular commands. Positioning of the non-preferred hand indicates anticipated actions, including sustained hovering near the surface, pulling back while still floating above the surface, and resting in the lap. We ran a second study with 14 landscape architecture students which confirmed our observations and uncovered a new interstice, i.e., stabilizing the preferred hand while handwriting. We conclude with directions for future research and challenges for designers and researchers.
Vulnerability & Blame: Making Sense of Unauthorized Access to Smartphones
Unauthorized physical access to personal devices by people known to the owner of the device is a common concern, and a common occurrence. But how do people experience incidents of unauthorized access? Using an online survey, we collected 102 accounts of unauthorized access. Participants wrote stories about past situations in which either they accessed the smartphone of someone they know, or someone they know accessed theirs. We describe the context leading up to these incidents, the course of events, and the consequences. We then identify two orthogonal themes in how participants conceptualized these incidents. First, participants understood trust as performative vulnerability: trust was necessary to sustain relationships, but building trust required displaying vulnerability to breaches. Second, participants were self-serving in their sensemaking: they blamed the circumstances, or the other person’s shortcomings, but rarely themselves. We discuss the implications of our findings for security design and practice.
Understanding Online News Behaviors
The news landscape has been changing dramatically over the past few years. Whereas news once came from a small set of highly edited sources, now people can find news from thousands of news sites online, through a variety of channels such as web search, social media, email newsletters, or direct browsing. We set out to understand how Americans read news online using web browser logs collected from 174 diverse participants. We found that 20% of all news sessions started with a web search, 16% started from social media, 61% involved only a single news domain, and 47% of our participants read news from both sides of the political spectrum. We conclude with key implications for online news, social media, and search sites to encourage more balanced news browsing.
VelociWatch: Designing and Evaluating a Virtual Keyboard for the Input of Challenging Text
Virtual keyboard typing is typically aided by an auto-correct method that decodes a user’s noisy taps into their intended text. This decoding process can reduce error rates and possibly increase entry rates by allowing users to type faster but less precisely. However, virtual keyboard decoders sometimes make mistakes that change a user’s desired word into another. This is particularly problematic for challenging text such as proper names. We investigate whether users can guess words that are likely to cause auto-correct problems and whether users can adjust their behavior to assist the decoder. We conduct computational experiments to decide what predictions to offer in a virtual keyboard and design a smartwatch keyboard named VelociWatch. Novice users were able to use the features of VelociWatch to enter challenging text at 17 words-per-minute with a corrected error rate of 3%. Interestingly, they wrote slightly faster and just as accurately on a simpler keyboard with limited correction options. Our findings suggest that users may be able to type difficult words on a smartwatch simply by tapping precisely, without the use of auto-correct.
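VelociWatch’s own decoder is not reproduced here; as a rough sketch of the general idea behind tap decoding (a generic noisy-channel formulation with a hypothetical layout, a toy vocabulary, and made-up weights, not the paper’s algorithm), a candidate word can be scored by how well its key centers explain the observed tap locations, combined with a word-frequency prior:

# Hypothetical key centers on a normalized QWERTY layout.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEYS = {c: (x / 10.0, y / 3.0) for y, row in enumerate(ROWS) for x, c in enumerate(row)}

# Toy vocabulary with made-up log-frequency priors (higher = more common).
VOCAB = {"hello": -2.0, "hells": -6.0, "jello": -7.0}

def tap_log_likelihood(word, taps, sigma=0.05):
    """Gaussian log-likelihood of the observed taps given the word's key centers."""
    if len(word) != len(taps):
        return float("-inf")
    ll = 0.0
    for ch, (tx, ty) in zip(word, taps):
        kx, ky = KEYS[ch]
        ll -= ((tx - kx) ** 2 + (ty - ky) ** 2) / (2 * sigma ** 2)
    return ll

def decode(taps, lm_weight=1.0):
    """Return the vocabulary word maximizing tap likelihood plus a language-model prior."""
    return max(VOCAB, key=lambda w: tap_log_likelihood(w, taps) + lm_weight * VOCAB[w])

# Noisy taps roughly over h-e-l-l-o decode to "hello"; a rarer word needs much closer taps to win.
print(decode([(0.52, 0.36), (0.22, 0.03), (0.78, 0.30), (0.83, 0.35), (0.81, 0.02)]))

The difficulty with proper names described in the abstract corresponds to words that are missing from, or heavily down-weighted in, such a prior.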
Co-Designing Food Trackers with Dietitians: Identifying Design Opportunities for Food Tracker Customization
We report co-design workshops with registered dietitians conducted to identify opportunities for designing customizable food trackers. Dietitians typically see patients who have different dietary problems, thus having different information needs. However, existing food trackers such as paper-based diaries and mobile apps are rarely customizable, making it difficult to capture necessary data for both patients and dietitians. During the co-design sessions, dietitians created representative patient personas and designed food trackers for each persona. We found a wide range of potential tracking items such as food, reflection, symptom, activity, and physical state. Depending on patients’ dietary problems and dietitians’ practice, the necessity and importance of these tracking items vary. We identify opportunities for patients and healthcare providers to collaborate around data tracking and sharing through customization. We also discuss how to structure co-design workshops to solicit the design considerations of self-tracking tools for patients with specific health problems.
Engaging Low-Income African American Older Adults in Health Discussions through Community-based Design Workshops
Community-based approaches to participatory design, such as the design workshop, promise to engage underserved populations in collaborative dialog and provide a platform for promoting the views of communities who are not typically given a space to engage in design. Yet, we know little about how design workshops as a research site can engage underserved individuals (i.e., due to class, race, or age status) or address personal concerns (e.g., health). As a way of exploring these issues, we conducted a series of five design workshops with low-income African-American older adults to understand their health experiences. Our findings reveal three insights associated with the design workshop and the topic of health: comfort with community versus personal health; the sociocultural configuration of interaction; and empowerment in the context of systematic inequality of opportunity. We discuss the importance of understanding the situated nature of design workshops, particularly when engaging underserved groups in the topic of health, and the potential of the design workshop as a mechanism for activism.
It’s My Data! Tensions Among Stakeholders of a Learning Analytics Dashboard
Early warning dashboards in higher education analyze student data to enable early identification of underperforming students, allowing timely interventions by faculty and staff. To understand perceptions regarding the ethics and impact of such learning analytics applications, we conducted a multi-stakeholder analysis of an early-warning dashboard deployed at the University of Michigan through semi-structured interviews with the system’s developers, academic advisors (the primary users), and students. We identify multiple tensions among and within the stakeholder groups, especially with regard to awareness, understanding, access, and use of the system. Furthermore, ambiguity in data provenance and data quality results in differing levels of reliance on and concerns about the system among academic advisors and students. While students see the system’s benefits, they argue for more involvement, control, and informed consent regarding the use of student data. We discuss our findings’ implications for the ethical design and deployment of learning analytics applications in higher education.
Automating the Administration and Analysis of Psychiatric Tests: The Case of Attachment in School Age Children
This article presents the School Attachment Monitor (SAM), a novel interactive system that can reliably administer the Manchester Child Attachment Story Task (a standard psychiatric test for the assessment of attachment in children) without the supervision of trained professionals. Attachment problems in children cause significant mental health issues and costs to society, which technology has the potential to reduce. SAM collects, through instrumented doll-play games, enough information to allow a human assessor to manually identify the attachment status of children. Experiments show that the system successfully does this in 87.5% of cases. In addition, the experiments show that an automatic approach based on deep neural networks can map the collected information onto the attachment condition of the children. The result is that SAM matches the judgment of expert human assessors in 82.8% of cases. This is the first time an automated tool has succeeded in measuring attachment. This work has significant implications for psychiatry, as it allows professionals to assess many more children cost-effectively and to direct healthcare resources more accurately and efficiently to improve mental health.
360proto: Making Interactive Virtual Reality & Augmented Reality Prototypes from Paper
We explore 360 paper prototyping to rapidly create AR/VR prototypes from paper and bring them to life on AR/VR devices. Our approach is based on a set of emerging paper prototyping templates specifically for AR/VR. These templates resemble the key components of many AR/VR interfaces, including 2D representations of immersive environments, AR marker overlays and face masks, VR controller models and menus, and 2D screens and HUDs. To make prototyping with these templates effective, we developed 360proto, a suite of three novel physical–digital prototyping tools: (1) the 360proto Camera for capturing paper mockups of all components simply by taking a photo with a smartphone and seeing 360-degree panoramic previews on the phone or stereoscopic previews in Google Cardboard; (2) the 360proto Studio for organizing and editing captures, for composing AR/VR interfaces by layering the captures, and for making them interactive with Wizard of Oz via live video streaming; (3) the 360proto App for running and testing the interactive prototypes on AR/VR capable mobile devices and headsets. Through five student design jams with a total of 86 participants and our own design space explorations, we demonstrate that our approach with 360proto is useful to create relatively complex AR/VR applications.
A Walk on the Child Side: Investigating Parents’ and Children’s Experience and Perspective on Mobile Technology for Outdoor Child Independent Mobility
Technology offers parents ever more opportunities to monitor their children, reshaping the way control and autonomy are negotiated within families. This paper investigates the views of parents and primary school children on mobile technology designed to support children’s independent mobility in the context of local walking school buses. Based on a school-year-long field study, we report findings on children’s and parents’ experience with proximity detection devices. The results provide insights into how parents and children accepted and socially appropriated the technology into the walking school bus activity, shedding light on the way they understand and conceptualize a technology that collects data on children’s proximity to the volunteers’ smartphones. We discuss parents’ needs and concerns regarding monitoring technologies and the related challenges in terms of the trust-control balance. These insights are elaborated to inform the future design of technology for child independent mobility.
Voice Presentation Attack Detection through Text-Converted Voice Command Analysis
Voice assistants are quickly being upgraded to support advanced, security-critical commands such as unlocking devices, checking emails, and making payments. In this paper, we explore the feasibility of using users’ text-converted voice command utterances as classification features to help identify users’ genuine commands and detect suspicious commands. To maintain high detection accuracy, our approach starts with a globally trained attack detection model (immediately available for new users), and gradually switches to a user-specific model tailored to the utterance patterns of a target user. To evaluate accuracy, we used a real-world voice assistant dataset consisting of about 34.6 million voice commands collected from 2.6 million users. Our evaluation results show that this approach achieves about 3.4% equal error rate (EER), detecting 95.7% of attacks when an optimal threshold value is used. Even for users who frequently issue security-critical (attack-like) commands, the EER remains below 5%.
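The equal error rate (EER) reported above is the operating point where the false-acceptance and false-rejection rates coincide. As a generic illustration of how EER is computed from detection scores (synthetic scores below; this is not the paper’s detector or data):

import numpy as np

def equal_error_rate(genuine_scores, attack_scores):
    """Sweep the decision threshold and return the rate where false acceptance
    (attacks scored as genuine) equals false rejection (genuine commands flagged)."""
    best_gap, eer = float("inf"), None
    for t in np.sort(np.concatenate([genuine_scores, attack_scores])):
        far = np.mean(attack_scores >= t)
        frr = np.mean(genuine_scores < t)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Toy example with made-up score distributions (higher score = more likely genuine).
rng = np.random.default_rng(0)
print(equal_error_rate(rng.normal(1.0, 0.5, 1000), rng.normal(-1.0, 0.5, 1000)))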
Emotion and Experience in Negotiating HIV-Related Digital Resources: “It’s not just a runny nose!”
While digital technologies are increasingly being used to provide support and diagnoses remotely, it is unclear whether they offer adequate emotional support and appropriate messages in navigating complex, stigmatised and sensitive conditions that can have a momentous impact on people’s lives. In this paper, we investigate how and why people access existing HIV resources, and their experiences of using these resources through a survey with 197 respondents and an interview and think-aloud study with 28 participants. Our findings indicate that many HIV-related resources do not address the anxiety-provoking reasons for access, reinforce stigma and neglect to provide important information and emotional support. We finally discuss potential ways of addressing these issues in the current environment where more sexual health services are being delivered online.
Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need?
The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention. A surge of recent work has focused on the development of algorithmic tools to assess and mitigate such unfairness. If these tools are to have a positive impact on industry practice, however, it is crucial that their design be informed by an understanding of real-world needs. Through 35 semi-structured interviews and an anonymous survey of 267 ML practitioners, we conduct the first systematic investigation of commercial product teams’ challenges and needs for support in developing fairer ML systems. We identify areas of alignment and disconnect between the challenges faced by teams in practice and the solutions proposed in the fair ML research literature. Based on these findings, we highlight directions for future ML and HCI research that will better address practitioners’ needs.
Designing Theory-Driven User-Centric Explainable AI
From healthcare to criminal justice, artificial intelligence (AI) is increasingly supporting high-consequence human decisions. This has spurred the field of explainable AI (XAI). This paper seeks to strengthen empirical application-specific investigations of XAI by exploring the theoretical underpinnings of human decision making, drawing from the fields of philosophy and psychology. In this paper, we propose a conceptual framework for building human-centered, decision-theory-driven XAI based on an extensive review across these fields. Drawing on this framework, we identify pathways along which human cognitive patterns drive needs for building XAI and how XAI can mitigate common cognitive biases. We then put this framework into practice by designing and implementing an explainable clinical diagnostic tool for intensive care phenotyping and conducting a co-design exercise with clinicians. Thereafter, we draw insights into how this framework bridges algorithm-generated explanations and human decision-making theories. Finally, we discuss implications for XAI design and development.
Emotion Work in Experience-Centered Design
Experience-Centered Design (ECD) implores us to develop empathic relationships with and understanding of participants, and to actively work with our senses and emotions within the design process. However, theories of experience-centered design do little to account for the emotion work undertaken by design researchers when doing this. As a consequence, how a design researcher’s emotions are experienced, navigated, and used as part of an ECD process is rarely published. So, while emotion is clearly a tool that we use, we don’t share with one another how, why, and when it gets used. This has a limiting effect on how we understand design processes, and on opportunities for training. Here, we share some of our experiences of working with ECD. We analyse these using Hochschild’s framework of emotion work to show how and where this work occurs. We use our analysis to question current ECD practices and provoke debate.
Voice as a Design Material: Sociophonetic Inspired Design Strategies in Human-Computer Interaction
While there is a renewed interest in voice user interfaces (VUI) in HCI, little attention has been paid to the design of VUI voice output beyond intelligibility and naturalness. We draw on the field of sociophonetics – the study of the social factors that influence the production and perception of speech – to highlight how current VUIs are based on a limited and homogenised set of voice outputs. We argue that current systems do not adequately consider the diversity of peoples’ speech, how that diversity represents sociocultural identities, and how voices have the potential to shape user perceptions and experiences. Ultimately, as other technological developments have influenced the ideologies of language, the voice outputs of VUIs will influence the ideologies of speech. Based on our argument, we pose three design strategies for VUI voice output design – individualisation, context awareness, and diversification – to motivate new ways of conceptualising and designing these technologies.
A Promise Is A Promise: The Effect of Commitment Devices on Computer Security Intentions
Commitment devices are a technique from behavioral economics that have been shown to mitigate the effects of present bias—the tendency to discount future risks and gains in favor of immediate gratifications. In this paper, we explore the feasibility of using commitment devices to nudge users towards complying with varying online security mitigations. Using two online experiments, with over 1,000 participants total, we offered participants the option to be reminded or to schedule security tasks in the future. We find that both reminders and commitment nudges can increase users’ intentions to install security updates and enable two-factor authentication, but not to configure automatic backups. Using qualitative data, we gain insights into the reasons for postponement and how to improve future nudges. We posit that current nudges may not live up to their full potential, as the timing options offered to users may be too rigid.
Security – Visible, Yet Unseen?
An unsolved debate in the field of usable security concerns whether security mechanisms should be visible, or blackboxed away from the user for the sake of usability. However, tying this question to pragmatic usability factors only might be simplistic. This study aims at researching the impact of displaying security mechanisms on User Experience (UX) in the context of e-voting. Two versions of an e-voting application were designed and tested using a between-group experimental protocol (N=38). Version D displayed security mechanisms, while version ND did not reveal any security-related information. We collected data on UX using standardised evaluation scales and semi-structured interviews. Version D performed better overall in terms of UX and need fulfilment. Qualitative analysis of the interviews gives further insights into factors impacting perceived security. Our study adds to existing research suggesting a conceptual shift from usability to UX and discusses implications for designing and evaluating secure systems.
Designing User Interface Elements to Improve the Quality and Civility of Discourse in Online Commenting Behaviors
Ensuring high-quality, civil social interactions remains a vexing challenge in many online spaces. In the present work, we introduce a novel approach to address this problem: using psychologically “embedded” CAPTCHAs containing stimuli intended to prime positive emotions and mindsets. An exploratory randomized experiment (N = 454 Mechanical Turk workers) tested the impact of eight new CAPTCHA designs implemented on a simulated, politically charged comment thread. Results revealed that the two interventions that were the most successful at activating positive affect also significantly increased the positivity of tone and analytical complexity of argumentation in participants’ responses. A focused follow-up experiment (N = 120 Mechanical Turk workers) revealed that exposure to CAPTCHAs featuring image sets previously validated to evoke low-arousal positive emotions significantly increased the positivity of sentiment and the levels of complexity and social connectedness in participants’ posts. We offer several explanations for these results and discuss the practical and ethical implications of designing interfaces to influence discourse in online forums.
The “Comadre” Project: An Asset-Based Design Approach to Connecting Low-Income Latinx Families to Out-of-School Learning Opportunities
Participation in out-of-school learning programs has been shown to generate significant academic, social/emotional, and institutional benefits for young learners, and today’s wealthy families are disproportionately reaping these benefits. This paper presents the results of an asset-based, human-centered design research process and pilot aimed at connecting low-income families in a Southern California city with local low-cost out-of-school learning opportunities. Based on background research including qualitative interviewing, home visits, technology inventories, and use walkthroughs with 40 low-income, majority Latinx families, we created and piloted a free subscription SMS service that automatically pushes bilingual SMS messages with curated information on local low-cost enrichment learning opportunities to low-income families. We framed our human-centered design process through an intersectional “asset-based approach,” which recognizes that marginalized communities have already developed robust, culturally specific social practices that enable them to navigate the world, seeks to amplify those practices, and refrains from imposing a top-down or preconceived “idea” of intervention.
Warping Deixis: Distorting Gestures to Enhance Collaboration
When engaged in communication, people often rely on pointing gestures to refer to out-of-reach content. However, observers frequently misinterpret the target of a pointing gesture. Previous research suggests that to perform a pointing gesture, people place the index finger on or close to a line connecting the eye to the referent, while observers interpret pointing gestures by extrapolating the referent using a vector defined by the arm and index finger. In this paper we present Warping Deixis, a novel approach to improving the perception of pointing gestures and facilitating communication in collaborative Extended Reality environments. By warping the virtual representation of the pointing individual, we are able to match the pointing expression to the observer’s perception. We evaluated our approach in a co-located, side-by-side virtual reality scenario. Results suggest that our approach is effective in improving the interpretation of pointing gestures in shared virtual environments.
The Mental Image Revealed by Gaze Tracking
Humans involuntarily move their eyes when retrieving an image from memory. This motion is often similar to actually observing the image. We suggest exploiting this behavior as a new modality in human-computer interaction, using the motion of the eyes as a descriptor of the image. Interaction requires the user’s eyes to be tracked but no voluntary physical activity. We perform a controlled experiment and develop matching techniques using machine learning to investigate whether images can be discriminated based on the gaze patterns recorded while users merely think about an image. Our results indicate that image retrieval is possible with an accuracy significantly above chance. We also show that this result generalizes to images not used during training of the classifier and extends to uncontrolled settings in a realistic scenario.
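The matching techniques themselves are described in the paper; as a loose, hypothetical illustration of the overall setup (synthetic gaze traces, crude features, and an off-the-shelf classifier, not the authors’ method), one can summarize each recalled scanpath as a feature vector and test whether the target image is predictable above chance:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def scanpath_features(xs, ys):
    """Crude fixed-length summary of a gaze trace: position and movement statistics."""
    return [xs.mean(), ys.mean(), xs.std(), ys.std(),
            np.abs(np.diff(xs)).mean(), np.abs(np.diff(ys)).mean()]

# Synthetic stand-in data: two "images" whose recalled scanpaths cluster in different regions.
rng = np.random.default_rng(1)
X, y = [], []
for label, (cx, cy) in enumerate([(0.3, 0.3), (0.7, 0.6)]):
    for _ in range(40):
        xs, ys = rng.normal(cx, 0.1, 50), rng.normal(cy, 0.1, 50)
        X.append(scanpath_features(xs, ys))
        y.append(label)

# Chance level is 50%; consistently higher cross-validated accuracy means the traces are discriminable.
print(cross_val_score(SVC(), np.array(X), np.array(y), cv=5).mean())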
Care and Design: An Ethnography of Mutual Recognition in the Context of Advanced Dementia
While there have been considerable developments in designing for dementia within HCI, there is still a lack of empirical understanding of the experience of people with advanced dementia and the ways in which design can support and enrich their lives. In this paper, we present our findings from a long-term ethnographic study, which aimed to gain an understanding of their lived experience and inform design practices for and with people with advanced dementia in residential care. We present our findings using the social theory of recognition as an analytic lens to account for recognition in practice and its challenges in care and research. We discuss how we, as the HCI community, can pragmatically engage with people with advanced dementia and propose a set of considerations for those who wish to design for and with the values of recognition theory to promote collaboration, agency and social identity in advanced dementia care.
Communication Cost of Single-user Gesturing Tool in Laparoscopic Surgical Training
Multi-user input over a shared display has been shown to support group process and improve performance. However, current gesturing systems for instructional collaborative tasks limit the input to experts and overlook the needs of novices in making references on a shared display. In this paper, we investigate the effects of a single-user gesturing tool on the communication between trainer and trainees in laparoscopic surgical training. By comparing the communication structure and content between training sessions with and without the gesturing tool, we show that the communication becomes more imbalanced and the trainees become less active when using the single-user gesturing tool. Our findings highlight the need to grant all parties the same level of access to a shared display and suggest further directions in designing shared displays for instructional collaborative tasks.
Resolving Target Ambiguity in 3D Gaze Interaction through VOR Depth Estimation
Target disambiguation is a common problem in gaze interfaces, as eye tracking has accuracy and precision limitations. In 3D environments this is compounded by objects overlapping in the field of view as a result of their positioning at different depths with partial occlusion. We introduce VOR depth estimation, a method based on the vestibulo-ocular reflex by which the eyes compensate for head movement, and explore its application to resolving target ambiguity. The method estimates gaze depth by comparing the rotations of the eye and the head when users look at a target and deliberately rotate their head. We show that VOR eye movement presents an alternative to vergence for gaze depth estimation that is also feasible with monocular tracking. In an evaluation of its use for target disambiguation, our method outperforms vergence for targets presented at greater depth.
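The estimator itself is specified in the paper; as a simplified geometric intuition for why comparing eye and head rotation yields depth (an approximation, not necessarily the authors’ exact formulation), note that the eye sits at some offset from the head’s rotation axis, so fixating a nearer target requires a larger compensatory eye rotation:

\[
g \;=\; \frac{\theta_{\mathrm{eye}}}{\theta_{\mathrm{head}}} \;\approx\; 1 + \frac{r}{D}
\quad\Longrightarrow\quad
D \;\approx\; \frac{r}{g - 1},
\]

where \( \theta_{\mathrm{eye}} \) and \( \theta_{\mathrm{head}} \) are the compensatory eye rotation and the deliberate head rotation, \( r \) is the eye’s offset from the rotation axis, and \( D \) is the distance to the fixated target. Under this view, the gain \( g \) measured during a small head rotation provides a monocular depth cue that can be compared against the depths of overlapping candidate targets.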
Slow Robots for Unobtrusive Posture Correction
Prolonged static and unbalanced sitting postures during computer usage contribute to musculoskeletal discomfort. In this paper, we investigated the use of a very slowly moving monitor for unobtrusive posture correction. In a first study, we identified display velocities below the perception threshold and observed how users (without being aware) responded by gradually following the monitor’s motion. From these results, we designed a robotic monitor that moves imperceptibly to counterbalance unbalanced sitting postures and induce posture correction. In an evaluation study (n=12), we had participants work for four hours without and with our prototype (eight hours in total). Results showed that actuation increased the frequency of non-disruptive swift posture corrections and significantly reduced the duration of unbalanced sitting. Most users appreciated the monitor correcting their posture and reported less physical fatigue. With slow robots, we take a first step toward using actuated objects for unobtrusive behavioral change.
Understanding Digitally-Mediated Empathy: An Exploration of Visual, Narrative, and Biosensory Informational Cues
Digitally sharing our experiences engages a process of empathy shaped by available informational cues. Biosensory data is one informative cue, but the relationship to empathy is underexplored. In this study, we investigate this process by showing a video of a “target” person’s visual perspective watching a virtual reality film to sixty “observers”. We vary information available to observers via three experimental conditions: a baseline unmodified video, video with narrative text, or with a graph of electrodermal activity (EDA) of the target. Compared to baseline, narrative text increased empathic accuracy (EA) while EDA had an opposite, negative effect. Qualitatively, observers describe their empathic processes as using their own feelings supplemented with the information presented depending on the interpretability of that information. Both narration and EDA prompted observers to reconsider assumptions about another’s experience. Our findings lead to a discussion of digitally-mediated empathy with implications for associated research and product development.
Understanding Personal Productivity: How Knowledge Workers Define, Evaluate, and Reflect on Their Productivity
Productivity tracking tools often determine productivity based on the time spent interacting with work-related applications. To deconstruct productivity’s diverse and nebulous nature, we investigate how knowledge workers conceptualize personal productivity and delimit productive tasks in both work and non-work contexts. We report a 2-week diary study followed by a semi-structured interview with 24 knowledge workers. Participants captured productive activities and provided the rationale for why the activities were assessed to be productive. They reported a wide range of productive activities beyond typical desk-bound work, ranging from having a personal conversation with dad to getting a haircut. We found six themes that characterize the productivity assessment (work product, time management, worker’s state, attitude toward work, impact & benefit, and compound task) and identified how participants interleaved multiple facets when assessing their productivity. We discuss how these findings could inform the design of a comprehensive productivity tracking system that covers a wide range of productive activities.
Vistribute: Distributing Interactive Visualizations in Dynamic Multi-Device Setups
We present Vistribute, a framework for the automatic distribution of visualizations and UI components across multiple heterogeneous devices. Our framework consists of three parts: (i) a design space considering properties and relationships of interactive visualizations, devices, and user preferences in multi-display environments; (ii) specific heuristics incorporating these dimensions to guide the distribution for a given interface and device ensemble; and (iii) a web-based implementation instantiating these heuristics to automatically generate a distribution, as well as providing interaction mechanisms for user-defined adaptations. In contrast to existing UI distribution systems, we are able to infer all required information by analyzing the visualizations and devices, without relying on additional input provided by users or programmers. In a qualitative study, we let experts create their own distributions and rate both other manual distributions and our automatic ones. We found that all distributions were of comparable quality, validating our framework.
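Vistribute’s actual heuristics span visualization properties, device properties, and user preferences; the sketch below is only a much-simplified, hypothetical illustration of heuristic scoring and greedy assignment (the view/device attributes, weights, and penalty values are invented):

# Hypothetical descriptions of views and devices.
views = {
    "overview_map": {"min_size_in": 10.0, "needs_touch": False, "importance": 0.9},
    "detail_table": {"min_size_in": 5.0,  "needs_touch": True,  "importance": 0.6},
    "filter_panel": {"min_size_in": 3.0,  "needs_touch": True,  "importance": 0.4},
}
devices = {
    "wall_display": {"size_in": 60.0, "touch": False},
    "tablet":       {"size_in": 10.0, "touch": True},
    "phone":        {"size_in": 5.0,  "touch": True},
}

def fit_score(view, device):
    """Heuristic fit of a view to a device: reward size headroom, penalize interaction mismatch."""
    score = min(device["size_in"] / view["min_size_in"], 2.0)
    if view["needs_touch"] and not device["touch"]:
        score -= 1.5
    return score * view["importance"]

# Greedy assignment: each view (most important first) goes to its best-scoring device;
# several views may share a device in this toy version.
assignment = {v: max(devices, key=lambda d: fit_score(views[v], devices[d]))
              for v in sorted(views, key=lambda v: -views[v]["importance"])}
print(assignment)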
Dancing With Drones: Crafting Novel Artistic Expressions Through Intercorporeality
Movement-based interactions are gaining traction, requiring a better understanding of how such expressions are shaped by designers. Through an analysis of an artistic process aimed at delivering a commissioned opera in which custom-built drones perform on stage alongside human performers, we observed the importance of achieving an intercorporeal understanding to shape body-based emotional expressivity. Our analysis reveals how the choreographer moves herself to: (1) imitate and feel the affordances and expressivity of the drones’ ‘otherness’ through her own bodily experience; (2) communicate to the engineer of the team how she wants to alter the drones’ behaviors to be more expressive; (3) enact and interactively alter her choreography. Through months of intense development and creative work, such an intercorporeal understanding was achieved by carefully crafting the drones’ behaviors, but also by the choreographer adjusting her own somatics and expressions. The choreography arose as a result of the expressivity they enabled together.
Crossing-Based Selection with Virtual Reality Head-Mounted Displays
This paper presents the first investigation into using the goal-crossing paradigm for object selection with virtual reality (VR) head-mounted displays. Two experiments were carried out to evaluate ray-casting crossing tasks with target discs in 3D space and goal lines on a 2D plane, respectively, in comparison to ray-casting pointing tasks. Five factors were considered in both experiments: task difficulty, the direction of movement constraint (collinear vs. orthogonal), the nature of the task (discrete vs. continuous), the field of view of VR devices, and target depth. Our findings are: (1) crossing was generally at least as fast as, and at least as accurate as, pointing, indicating that crossing can complement or substitute for pointing; (2) crossing tasks can be well modelled with Fitts’ Law; (3) crossing performance depended on target depth; (4) crossing target discs in 3D space differed from crossing goal lines on a 2D plane in many aspects, such as time and error performance, the effects of target depth, and the parameters of the Fitts’ law models. Based on these findings, we formulate a number of design recommendations for crossing-based interaction in VR.
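For reference, finding (2) refers to the Shannon formulation of Fitts’ law commonly used in HCI (the fitted coefficients reported in the paper are not reproduced here):

\[
MT = a + b \cdot ID, \qquad ID = \log_2\!\left(\frac{A}{W} + 1\right),
\]

where \( A \) is the movement amplitude to the target disc or goal line, \( W \) is its effective width, \( ID \) is the index of difficulty in bits, and \( a, b \) are empirically fitted constants; throughput is then typically reported as \( ID / MT \) in bits per second.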
ARPen: Mid-Air Object Manipulation Techniques for a Bimanual AR System with Pen & Smartphone
Modeling in Augmented Reality (AR) lets users create and manipulate virtual objects in mid-air that are aligned to their real environment. We present ARPen, a bimanual input technique for AR modeling that combines a standard smartphone with a 3D-printed pen. Users sketch with the pen in mid-air, while holding their smartphone in the other hand to see the virtual pen traces in the live camera image. ARPen combines the pen’s higher 3D input precision with the rich interactive capabilities of the smartphone touchscreen. We studied subjective preferences for this bimanual input technique, such as how people hold the smartphone while drawing, and analyzed the performance of different bimanual techniques for selecting and moving virtual objects. Users preferred a bimanual technique casting a ray through the pen tip for both selection and translation. We provide initial design guidelines for this new class of bimanual AR modeling systems.
NaviBike: Comparing Unimodal Navigation Cues for Child Cyclists
Navigation systems for cyclists are commonly screen-based devices mounted on the handlebar that show map information. Typically, adult cyclists have to explicitly look down for directions. This can be distracting and challenging for children, given their developmental differences in motor and perceptual-motor abilities compared with adults. To address this issue, we designed different unimodal cues and explored their suitability for child cyclists through two experiments. In the first experiment, we developed an indoor bicycle simulator and compared auditory, light, and vibrotactile navigation cues. In the second experiment, we investigated these navigation cues in situ on an outdoor practice test track using a mid-size tricycle. To simulate road distractions, children were given an additional auditory task in both experiments. We found that auditory navigation cues were the most understandable and the least prone to navigation errors. However, light and vibrotactile cues might be useful for educating younger child cyclists.
Audible Panorama: Automatic Spatial Audio Generation for Panorama Imagery
As 360-degree cameras and virtual reality headsets become more popular, panorama images have become increasingly ubiquitous. While sound is essential to delivering immersive and interactive user experiences, most panorama images do not come with native audio. In this paper, we propose an automatic algorithm to augment static panorama images through realistic audio assignment. We accomplish this goal through object detection, scene classification, object depth estimation, and audio source placement. We built an audio file database composed of over 500 audio files to facilitate this process. We designed and conducted a user study to verify the efficacy of various components in our pipeline. We ran our method on a large variety of panorama images of indoor and outdoor scenes. By analyzing the statistics, we learned the relative importance of these components, which can be used to prioritize processing for power-sensitive, time-critical tasks such as mobile augmented reality (AR) applications.
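The pipeline chains object detection, scene classification, depth estimation, and audio source placement; the sketch below only illustrates how such stages might be composed (the stage stubs, clip lookup, and attenuation rule are placeholders, not the authors’ implementation):

from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str          # e.g. "car", "dog"
    azimuth_deg: float  # direction within the panorama
    depth_m: float      # estimated distance

def detect_objects(panorama):
    # Stub standing in for an object detector plus depth estimator.
    return [DetectedObject("car", 40.0, 8.0), DetectedObject("dog", -75.0, 2.5)]

def classify_scene(panorama):
    return "street"  # stub standing in for a scene classifier

def pick_audio_clip(label, scene):
    """Hypothetical lookup in an audio clip database keyed by scene and object label."""
    return f"{scene}/{label}.wav"

def augment_with_audio(panorama):
    """Assign each detected object a spatialized audio source (direction plus distance-based gain)."""
    scene = classify_scene(panorama)
    return [{"clip": pick_audio_clip(o.label, scene),
             "azimuth_deg": o.azimuth_deg,
             "gain": 1.0 / max(o.depth_m, 1.0) ** 2}   # rough inverse-square attenuation
            for o in detect_objects(panorama)]

print(augment_with_audio("demo_panorama.jpg"))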
Interactive Body-Driven Graphics for Augmented Video Performance
We present a system that augments live presentation videos with interactive graphics to create a powerful and expressive storytelling environment. Using our system, the presenter interacts with the graphical elements in real-time with gestures and postures, thus leveraging our innate, everyday skills to enhance our communication capabilities with the audience. However, crafting such an interactive and expressive performance typically requires programming, or highly-specialized tools tailored for experts. Our core contribution is a flexible, direct manipulation UI which enables amateurs and experts to craft such presentations beforehand by mapping a variety of body movements to a wide range of graphical manipulations. By simplifying the mapping between gestures, postures, and their corresponding output effects, our UI enables users to craft customized, rich interactions with the graphical elements. Our user study demonstrates the potential usage and unique affordance of this mixed-reality medium for storytelling and presentation across a range of application domains.
Augmenting Couples’ Communication with Lifelines: Shared Timelines of Mixed Contextual Information
Couples exhibit special communication practices, but apps rarely offer couple-specific functionality. Research shows that sharing streams of contextual information (e.g. location, motion) helps couples coordinate and feel more connected. Most studies explored a single, ephemeral stream; we study how couples’ communication changes when sharing multiple, persistent streams. We designed Lifelines, a mobile-app technology probe that visualizes up to six streams on a shared timeline: closeness to home, battery level, steps, media playing, texts and calls. A month-long study with nine couples showed that partners interpreted information mostly from individual streams, but also combined them for more nuanced interpretations. Persistent streams allowed missing data to become meaningful and provided new ways of understanding each other. Unexpected patterns from any stream can trigger calls and texts, whereas seeing expected data can replace direct communication, which may improve or disrupt established communication practices. We conclude with design implications for mediating awareness within couples.
Friend, Collaborator, Student, Manager: How Design of an AI-Driven Game Level Editor Affects Creators
Machine learning advances have afforded an increase in algorithms capable of creating art, music, stories, games, and more. However, it is not yet well understood how machine learning algorithms might best collaborate with people to support creative expression. To investigate how practicing designers perceive the role of AI in the creative process, we developed a game level design tool for Super Mario Bros.-style games with a built-in AI level designer. In this paper we discuss our design of the Morai Maker intelligent tool through two mixed-methods studies with a total of over one hundred participants. Our findings are as follows: (1) level designers vary in their desired interactions with, and role of, the AI; (2) the AI prompted the level designers to alter their design practices; and (3) the level designers perceived the AI as having potential value in their design practice, varying based on their desired role for the AI.
A Design Space for Gaze Interaction on Head-mounted Displays
Augmented and virtual reality (AR/VR) has entered the mass market and, with it, eye tracking will soon become a core technology for next-generation head-mounted displays (HMDs). In contrast to existing gaze interfaces, the 3D nature of AR and VR requires estimating a user’s gaze in 3D. While first applications, such as foveated rendering, hint at the compelling potential of combining HMDs and gaze, a systematic analysis is missing. To fill this gap, we present the first design space for gaze interaction on HMDs. Our design space covers human depth perception and technical requirements in two dimensions, aiming to identify challenges and opportunities for interaction design. As such, our design space provides a comprehensive overview and serves as an important guideline for researchers and practitioners working on gaze interaction on HMDs. We further demonstrate how our design space is used in practice by presenting two interactive applications: EyeHealth and XRay-Vision.
The Heat is On: Exploring User Behaviour in a Multisensory Virtual Environment for Fire Evacuation
Understanding validity of user behaviour in Virtual Environments (VEs) is critical as they are increasingly being used for serious Health and Safety applications such as predicting human behaviour and training in hazardous situations. This paper presents a comparative study exploring user behaviour in VE-based fire evacuation and investigates whether this is affected by the addition of thermal and olfactory simulation. Participants (N=43) were exposed to a virtual fire in an office building. Quantitative and qualitative analyses of participant attitudes and behaviours found deviations from those we would expect in real life (e.g. pre-evacuation actions), but also valid behaviours like fire avoidance. Potentially important differences were found between multisensory and audiovisual-only conditions (e.g. perceived urgency). We conclude VEs have significant potential in safety-related applications, and that multimodality may afford additional uses in this context, but the identified limitations of behavioural validity must be carefully considered to avoid misapplication of the technology.
Transformation through Provocation?
Can a chatbot enable us to change our conceptions, to be critically reflective? To what extent can interaction with a technologically ‘minimal’ medium such as a chatbot evoke emotional engagement in ways that can challenge us to act on the world? In this paper, we discuss the design of a provocative bot, a ‘bot of conviction’, aimed at triggering conversations on complex topics (e.g. death, wealth distribution, gender equality, privacy) and, ultimately, soliciting specific actions from the user it converses with. We instantiate our design with a use case in the cultural sector, specifically a Neolithic archaeological site that acts as a stage of conversation on such hard themes. Our larger contributions include an interaction framework for bots of conviction, insights gained from an iterative process of participatory design and evaluation, and a vision for bot interaction mechanisms that can apply to the HCI community more widely.
FoldTronics: Creating 3D Objects with Integrated Electronics Using Foldable Honeycomb Structures
We present FoldTronics, a 2D-cutting based fabrication technique to integrate electronics into 3D folded objects. The key idea is to cut and perforate a 2D sheet with a cutting plotter to make it foldable into a honeycomb structure; before folding the sheet into a 3D structure, users place the electronic components and circuitry onto the sheet. The fabrication process takes only a few minutes, allowing users to rapidly prototype functional interactive devices. The resulting objects are lightweight and rigid, thus allowing for weight-sensitive and force-sensitive applications. Finally, due to the nature of the honeycomb structure, the objects can be folded flat along one axis and thus efficiently transported in this compact form factor. We describe the structure of the foldable sheet and present a design tool that enables users to quickly prototype the desired objects. We showcase a range of examples made with our design tool, including objects with integrated sensors and display elements.
An Exploration of Bitcoin Mining Practices: Miners’ Trust Challenges and Motivations
Bitcoin blockchain technology is a distributed ledger of nodes authorizing transactions between anonymous parties. Its key actors are miners, who use computational power to solve mathematical problems that validate transactions. Because mining shares the blockchain’s characteristics, it is a decentralized, transparent, and unregulated practice that has been little explored in HCI, so we know little about miners’ motivations and experiences, and how these may bear on different dimensions of trust. This paper reports on interviews with 20 bitcoin miners about their practices and trust challenges. The findings contribute to HCI theories by extending the exploration of blockchain characteristics relevant to trust with the competitiveness dimension underpinning the social organization of mining. We discuss the risks of collaborative mining due to centralization and dishonest administrators, and conclude with design implications highlighting the need for tools monitoring the distribution of rewards in collaborative mining, tools tracking data centers’ authorization and reputation, and tools supporting the development of decentralized pools.
JourneyCam: Exploring Experiences of Accessibility and Mobility among Powered Wheelchair Users through Video and Data
Recent HCI research has investigated how digital technologies might enable citizens to identify and express matters of civic concern. We extend this work by describing JourneyCam, a smartphone-based system that enables powered wheelchair users to capture video and sensor data about their experiences of mobility. Thirteen participants used JourneyCam to document journeys, after which the data they collected was used to support discussions around their experiences. Our findings highlight how the system facilitated the articulation of complex embodied experiences, and how the collected data might have particular value in surfacing these experiences to help inform urban design and policymaking. Participants valued the ways in which JourneyCam’s moving image and sensor data made hard-to-express sensations apparent, as well as how it enabled them to surface previously unrecognised issues. We conclude by highlighting future opportunities for how such tools might enable citizens to inform and influence civic governance.
Let’s Play Together: Adaptation Guidelines of Board Games for Players with Visual Impairment
Board games present accessibility barriers for players with visual impairment since they often employ visuals alone to communicate gameplay information. Our research focuses on board game accessibility for those with visual impairment. This paper describes a three-phase study conducted to develop board game accessibility adaptation guidelines. These guidelines were developed through a user-centered design approach that included in-depth interviews and a series of user studies using two adapted board games. Our findings indicate that participants with and without visual impairment were able to play the adapted games, exhibiting a balanced experience whereby participants had complete autonomy and were provided with equal chances of victory. Our paper also contributes to the game and accessibility communities through the development of adaptation guidelines that allow board games to become inclusive irrespective of a player’s visual impairment.
ElectroDermis: Fully Untethered, Stretchable, and Highly-Customizable Electronic Bandages
Wearables have emerged as an increasingly promising interactive platform, imbuing the human body with always-available computational capabilities. This unlocks a wide range of applications, including discreet information access, health monitoring, fitness, and fashion. However, unlike previous platforms, wearable electronics require structural conformity, must be comfortable for the wearer, and should be soft, elastic, and aesthetically appealing. We envision a future where electronics can be temporarily attached to the body (like bandages or party masks), but in functional and aesthetically pleasing ways. Towards this vision, we introduce ElectroDermis, a fabrication approach that simplifies the creation of highly functional and stretchable wearable electronics that are conformal and fully untethered, by discretizing rigid circuit boards into individual components. These individual components are wired together using stretchable electrical wiring and assembled on a spandex-blend fabric, providing high functionality in a robust, reusable form factor. We describe our system in detail, including our fabrication parameters and its operational limits, which we hope researchers and practitioners can leverage. We describe a series of example applications that illustrate the feasibility and utility of our system. Overall, we believe ElectroDermis offers a complementary approach to wearable electronics, one that places value on the notion of impermanence (i.e., unlike tattoos and implants), better conforming to the dynamic nature of the human body.
May AI?: Design Ideation with Cooperative Contextual Bandits
Design ideation is a prime creative activity in design. However, it is challenging to support computationally due to its quickly evolving and exploratory nature. This paper presents cooperative contextual bandits (CCB) as a machine-learning method for interactive ideation support. A CCB can learn to propose domain-relevant contributions and adapt its exploration/exploitation strategy. We developed a CCB for an interactive design ideation tool that 1) suggests inspirational and situationally relevant materials (“may AI?”); 2) explores and exploits inspirational materials with the designer; and 3) explains its suggestions to aid reflection. The application case of digital mood board design is presented, wherein visual inspirational materials are collected and curated in collages. In a controlled study, 14 of 16 professional designers preferred the CCB-augmented tool. The CCB approach holds promise for ideation activities wherein adaptive and steerable support is welcome but designers must retain full control of the outcome.
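The cooperative contextual bandit itself is specified in the paper; the sketch below only illustrates the basic contextual-bandit loop that such a system builds on (a generic epsilon-greedy linear bandit with simulated rewards and hypothetical feature vectors, not the paper’s CCB):

import numpy as np

class EpsilonGreedyLinearBandit:
    """Toy contextual bandit: one linear value model per arm, epsilon-greedy exploration."""
    def __init__(self, n_arms, n_features, epsilon=0.1, lr=0.1):
        self.weights = np.zeros((n_arms, n_features))
        self.epsilon, self.lr = epsilon, lr

    def choose(self, context, rng):
        if rng.random() < self.epsilon:                  # explore: random suggestion
            return int(rng.integers(len(self.weights)))
        return int(np.argmax(self.weights @ context))    # exploit: best predicted reward

    def update(self, arm, context, reward):
        error = reward - self.weights[arm] @ context
        self.weights[arm] += self.lr * error * context   # gradient step on squared error

# Simulated loop: arms stand in for suggestion strategies, reward for designer acceptance.
rng = np.random.default_rng(2)
true_w = rng.normal(size=(5, 8))
bandit = EpsilonGreedyLinearBandit(n_arms=5, n_features=8)
for _ in range(2000):
    ctx = rng.normal(size=8)                             # hypothetical mood-board context features
    arm = bandit.choose(ctx, rng)
    bandit.update(arm, ctx, true_w[arm] @ ctx + rng.normal(scale=0.1))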
PeerLens: Peer-inspired Interactive Learning Path Planning in Online Question Pool
Online question pools like LeetCode provide hands-on exercises of skills and knowledge. However, due to the large volume of questions and the intent of hiding the tested knowledge behind them, many users find it hard to decide where to start or how to proceed based on their goals and performance. To overcome these limitations, we present PeerLens, an interactive visual analysis system that enables peer-inspired learning path planning. PeerLens can recommend a customized, adaptable sequence of practice questions to individual learners, based on the exercise history of other users in a similar learning scenario. We propose a new way to model the learning path by submission types and a novel visual design to facilitate the understanding and planning of the learning path. We conducted a within-subject experiment to assess the efficacy and usefulness of PeerLens in comparison with two baseline systems. Experiment results show that users are more confident in arranging their learning path via PeerLens and find it more informative and intuitive.
Electronic Health Records Are More Than a Work Tool: Conflicting Needs of Direct and Indirect Stakeholders
The involvement of stakeholders is crucial when designing IT in highly complex application domains, such as healthcare. Stakeholder relationships are complex and can include strongly conflicting needs and value tensions. In this case study, we investigate the different perspectives of patients and physicians related to Patient Accessible Electronic Health Records (PAEHR) in Sweden. Generally, the introduction of this service has been heavily criticised by healthcare professionals, but welcomed by patients. The paper presents an innovative study design where themes from interviews with physicians are used as a lens to analyse survey data from patients. The findings highlight the necessity to understand stakeholders’ perspectives about other stakeholder groups by contrasting assumptions and expectations of physicians (indirect stakeholders) with experience of use by patients (direct stakeholders), and discusses practical challenges when designing large-scale health information systems.
Text Entry Throughput: Towards Unifying Speed and Accuracy in a Single Performance Metric
Human-computer input performance inherently involves speed-accuracy tradeoffs: the faster users act, the more inaccurate those actions are. Therefore, comparing speeds and accuracies separately can result in ambiguous outcomes: does a fast but inaccurate technique perform better or worse overall than a slow but accurate one? For pointing, speed and accuracy have been unified for over 60 years as throughput (bits/s) (Crossman 1957, Welford 1968), but to date, no similar metric has been established for text entry. In this paper, we introduce a text-entry-method-independent throughput metric based on Shannon information theory (1948). To explore the practical usability of the metric, we conducted an experiment in which 16 participants typed with a laptop keyboard using different cognitive sets, i.e., speed-accuracy biases. Our results show that as a performance metric, text entry throughput remains relatively stable under different speed-accuracy conditions. We also evaluated a smartphone keyboard with 12 participants, finding that throughput varied least compared to other text entry metrics. This work allows researchers to characterize text entry performance with a single unified measure of input efficiency.
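The metric’s full definition is given in the paper; conceptually, an information-theoretic throughput treats typing as a noisy channel and measures information transmitted per unit time, so that speed and accuracy collapse into one number (the notation below is illustrative shorthand, not the paper’s exact formulation):

\[
\mathrm{TP} \;=\; \frac{I(X;Y)}{t}, \qquad I(X;Y) \;=\; H(X) - H(X \mid Y),
\]

where \( X \) is the presented (intended) text, \( Y \) is the transcribed text, \( H \) denotes Shannon entropy, and \( t \) is the entry time in seconds. Typing faster but less accurately increases the rate of symbols per second while raising \( H(X \mid Y) \), so the two effects trade off within a single bits-per-second figure.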
Is Now A Good Time?: An Empirical Study of Vehicle-Driver Communication Timing
Advances in automotive sensing systems and speech interfaces provide new opportunities for smarter driving assistants or infotainment systems. For both safety and consumer satisfaction reasons, any new system that interacts with drivers must do so at appropriate times. We asked 63 drivers, "Is now a good time?" to receive non-driving information during a 50-minute drive. We analyzed 2,734 responses together with synchronized automotive and video data, and show that while the chances of choosing a good time can be determined with better success using easily accessible automotive data, certain nuances in the problem require a richer understanding of the driver and environment states in order to achieve higher performance. We illustrate several of these nuances with quantitative and qualitative analyses to contribute to the understanding of how to design a system that might simultaneously minimize the risk of interacting at a bad time while maximizing the window of allowable interruption.
A 2nd Person Social Perspective on Bodily Play
Recent HCI work on digital games has highlighted the advantage for designers of taking a 1st person perspective on the human body (referring to the phenomenological “lived” body) and a 3rd person perspective (the material “fleshy” body, similar to looking in the mirror). This is useful when designing bodily play; however, we note that there is not much game design discussion of the 2nd person social perspective, which highlights the unique interplay between human bodies. To guide designers interested in supporting players to experience their bodies as play, we describe how game designers can engage with the 2nd person social perspective through a set of design tactics based on four of our own play systems. With our work, we hope to aid designers in embracing this 2nd person perspective so that more people can benefit from engaging their bodies through games and play.
Tracking the Consumption of Home Essentials
Predictions of people’s behaviour increasingly drive interactions with a new generation of IoT services designed to support everyday life in the home, from shopping to heating. Based on the premise that such automation is difficult due to the contingent nature of people’s practices, in this work we explore the nature of these contingencies in depth. We have designed and conducted a technology probe that made use of simple linear predictions as a provocation, and invited people to track the life of their household essentials over a two-month period. Through a mixed-method approach we demonstrate the challenges of simple predictions, and in turn identify eight categories of contingencies that influenced prediction accuracy. We discuss strategies for how designers of future predictive IoT systems may take the contingencies into account by removing, hiding, revealing, managing, or exploiting the system uncertainty at the core of the issue.
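The probe’s simple linear predictions are not given in code in the abstract; a minimal sketch of the kind of prediction involved (hypothetical observations and units) fits a line to logged remaining-quantity readings and extrapolates the depletion date:

import numpy as np

def predict_runout_day(days, remaining):
    """Fit remaining ~ a + b*day and return the day it reaches zero (None if not depleting)."""
    b, a = np.polyfit(days, remaining, 1)
    return None if b >= 0 else -a / b

# Toy log: fraction of washing-up liquid left, recorded over two weeks (made-up numbers).
days = np.array([0, 3, 7, 10, 14])
remaining = np.array([1.0, 0.85, 0.62, 0.51, 0.33])
print(f"Predicted to run out around day {predict_runout_day(days, remaining):.0f}")

The contingencies the paper identifies are precisely the situations in which such a straight-line extrapolation breaks down.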
Peripheral Notifications in Large Displays: Effects of Feature Combination and Task Interference
Visual notifications are integral to interactive computing systems. With large displays, however, much of the content is in the user’s visual periphery, where human capacity to notice visual effects is diminished. One design strategy for enhancing noticeability is to combine visual features, such as motion and colour. Yet little is known about how feature combinations affect noticeability across the visual field, or about how peripheral noticeability changes when a user’s primary task involves the same visual features as the notification. We addressed these questions by conducting two studies. Results of the first study showed that the noticeability of feature combinations was approximately equal to the better of the individual features. Results of the second study suggest that there can be interference between the features of primary tasks and the visual features in the notifications. Our findings contribute to a better understanding of how visual features operate when used as peripheral notifications.
Pulp Friction: Exploring the Finger Pad Periphery for Subtle Haptic Feedback
Current haptic feedback techniques on handheld devices are applied to the finger pad or the palm of the user. These state-of-the-art approaches are coarse-grained and tend to be intrusive rather than subtle. In contrast, we present a new feedback technique that applies stimuli around the periphery of the finger pulp, demonstrating how this can provide rich, nuanced haptic information. We use a reconfigurable haptic device employing a ferromagnetic marble for back-of-the-device handheld use, which, for the first time, probes the periphery of the distal phalanx with localised stimulation without instrumenting the user. We present the design space afforded by this new technique and evaluate the human factors of finger-peripheral touch interaction in a controlled user study. We report results with marbles of different diameters, speeds, and a combination of poking, lateral vibration, and patterns; present the resulting design guidelines for finger-periphery haptic feedback; and illustrate its potential with use-case scenarios.
Development of a Checklist for the Prevention of Intradialytic Hypotension in Hemodialysis Care: Design Considerations Based on Activity Theory
Hemodialysis is a life-saving therapy for end-stage renal disease; yet 20% of hemodialysis sessions are complicated by intradialytic hypotension (“IDH”). There is a need for approaches to preventing IDH that account for their implementation contexts. Using Activity Theory, we outline the design of a digital diagnostic checklist to identify patients at risk of IDH. Checklists were chosen a priori as an outcome due to prior evidence of effectiveness. Drawing on individual interviews with 20 clinicians and three focus groups with 17 patients, we describe four activity systems within hemodialysis care. We then outline a novel design process that includes co-design activities with clinicians, and four rapid-cycle iterations that progressively incorporated activity system elements into checklist design. We contribute a new type of checklist design to HCI: one that supports diagnostic thinking rather than consistent task completion. We further broaden checklist design by including a formal role for patients in checklist completion.
Preemptive Action: Accelerating Human Reaction using Electrical Muscle Stimulation Without Compromising Agency
We enable preemptive force-feedback systems to speed up human reaction time without fully compromising the user’s sense of agency. Typically, these interfaces actuate by means of electrical muscle stimulation (EMS) or mechanical actuators; they preemptively move the user to perform a task, such as to improve movement performance (e.g., EMS-assisted drumming). Unfortunately, when using preemptive force-feedback, users do not feel in control and lose their sense of agency. We address this by actuating the user’s body, using EMS, within a particular time window (160 ms after visual stimulus), which we found to speed up reaction time by 80 ms in our first study. With this preemptive timing, when the user and system move congruently, the user feels that they initiated the motion, yet their reaction time is faster than usual. As our second study demonstrated, this particular timing significantly increased agency when compared to the current practice in EMS-based devices. We conclude by illustrating, using examples from the HCI literature, how to leverage our findings to provide more agency to automated haptic interfaces.
An Exploratory Study on Visual Exploration of Model Simulations by Multiple Types of Experts
Experts in different domains rely increasingly on simulation models of complex processes to reach insights, make decisions, and plan future projects. These models are often used to study possible trade-offs, as experts try to optimise multiple conflicting objectives in a single investigation. Understanding all the model intricacies, however, is challenging for a single domain expert. We propose a simple approach to support multiple experts when exploring complex model results. First, we reduce the model exploration space, then present the results on a shared interactive surface, in the form of a scatterplot matrix and linked views. To explore how multiple experts analyse trade-offs using this setup, we carried out an observational study focusing on the link between expertise and insight generation during the analysis process. Our results reveal the different exploration strategies and multi-storyline approaches that domain experts adopt during trade-off analysis, and inform our recommendations for collaborative model exploration systems.
Protection, Productivity and Pleasure in the Smart Home: Emerging Expectations and Gendered Insights from Australian Early Adopters
Interest in and uptake of smart home technologies have been lower than anticipated, particularly among women. Reporting on an academic-industry partnership, we present findings from an ethnographic study with 31 Australian smart home early adopters. The paper analyses these households’ experiences in relation to three concepts central to Intel’s ambient computing vision for the home: protection, productivity and pleasure, or ‘the 3Ps’. We find that protection is a form of caregiving; productivity provides ‘small conveniences’, energy savings and multi-tasking possibilities; and pleasure is derived from ambient and aesthetic features, and the joy of ‘playing around’ with tech. Our analysis identifies three design challenges and opportunities for the smart home: internal threats to household protection; feminine desires for the smart home; and increased ‘digital housekeeping’. We conclude by suggesting how HCI designers can and should respond to these gendered challenges.
Keeping Rumors in Proportion: Managing Uncertainty in Rumor Systems
The study of rumors has garnered wider attention as regulators and researchers turn towards problems of misinformation on social media. One goal has been to discover and implement mechanisms that promote healthy information ecosystems. Classically defined as regarding ambiguous situations, rumors pose the unique difficulty of intrinsic uncertainty around their veracity. Further complicating matters, rumors can serve the public when they do spread valuable true information. To address these challenges, we develop an approach that reifies “rumor proportions” as central to the theory of systems for managing rumors. We use this lens to advocate for systems that, rather than aiming to stifle rumors entirely or aiming to stop only false rumors, aim to prevent rumors from growing out of proportion relative to normative benchmark representations of intrinsic uncertainty.
Understanding Metamaterial Mechanisms
In this paper, we establish the underlying foundations of mechanisms that are composed of cell structures, known as metamaterial mechanisms. Such metamaterial mechanisms were previously shown to implement complete mechanisms in the cell structure of a 3D printed material, without the need for assembly. However, their design is highly challenging. A mechanism consists of many cells that are interconnected and impose constraints on each other. This leads to non-obvious and non-linear behavior of the mechanism, which impedes design by users. In this work, we investigate the underlying topological constraints of such cell structures and their influence on the resulting mechanism. Based on these findings, we contribute a computational design tool that automatically creates a metamaterial mechanism from user-defined motion paths. This tool is only feasible because our novel abstract representation of the global constraints greatly reduces the search space of possible cell arrangements.
Understanding Life Transitions: A Case Study of Support Needs of Low-Income Mothers
Life transitions are an integral part of the human experience. However, research shows that lack of support during life transitions can result in adverse health outcomes. To better understand the support needs and structures of low-income women during transition to motherhood, we interviewed 10 women and their 14 supporters during the transition. Our findings suggest that support needs and structures of mothers evolve during transition, and that they also vary by socio-economic contexts. In this paper, we detail our study design and findings. Informed by our findings, we posit that not all life transitions are the same and that, therefore, the optimal support intervention point varies for different life transitions. Currently, there are no tools available to identify optimal support intervention points during life transitions. To this end, we also introduce a preliminary framework – the Strength-Stress-Analysis (SSA) framework – to identify optimal support intervention points during life transitions.
Online Grocery Delivery Services: An Opportunity to Address Food Disparities in Transportation-scarce Areas
Online grocery delivery services present new opportunities to address food disparities, especially in underserved areas. However, such services have not been systematically evaluated. This study evaluates such services’ potential to provide healthy-food access and influence healthy-food purchases among individuals living in transportation-scarce and low-resource areas. We conducted a pilot experiment with 20 participants consisting of a randomly assigned group’s one-month use of an online grocery delivery service, a control group’s one-month collection of grocery receipts, and a set of semi-structured interviews. We found that online grocery delivery services (a) serve as a feasible model for healthy-food access if they are affordable and amenable to multiple payment forms and (b) could lead to healthier selections. We contribute policy recommendations to bolster the affordability of healthy-food access and design opportunities to promote healthy foods to support the adoption and use of these services among low-resource and transportation-scarce groups.
Co-Created Personas: Engaging and Empowering Users with Diverse Needs Within the Design Process
Personas are powerful tools for designing technology and envisioning its usage. They are widely used to imagine archetypal users around whom to orient design work. We have been exploring co-created personas as a technique to use in co-design with users who have diverse needs. Our vision was that this would broaden the demographic and liberate co-designers from their personal relationship with a health condition. This paper reports three studies where we investigated using co-created personas with people who had Parkinson’s disease, dementia or aphasia. Observational data of co-design sessions were collected and analysed. Findings revealed that the co-created personas encouraged users with diverse needs to engage with co-designing. Importantly, they also afforded additional benefits, including empowering users within a more accessible design process. Reflecting on the outcomes from the different user groups, we conclude with a discussion of the potential for co-created personas to be applied more broadly.
“Notjustgirls”: Exploring Male-related Eating Disordered Content across Social Media Platforms
Eating disorders (EDs) are a worldwide public health concern, impacting approximately 10% of the U.S. population. Our previous research characterized these behaviors across online spaces. These characterizations have used clinical terminology, and their lexical variants, to identify ED content online. However, previous HCI research on EDs (including our own) suffers from a lack of gender and cultural diversity. In this paper, we designed a follow-up study of online ED characterizations, extending our previous methodologies to focus specifically on male/masculine-related content. We highlight the similarities and differences found in the terminology utilized and media archetypes associated with the social media content. Finally, we discuss other considerations, highlighted through our analysis of the male-related content, that are missing from previous research.
Technologies for Social Justice: Lessons from Sex Workers on the Front Lines
This paper provides analysis and insight from a collaborative process with a Canadian sex worker rights organization called Stella, l’amie de Maimie, where we reflect on the use of and potential for digital technologies in service delivery. We analyze the Bad Client and Aggressor List – a reporting tool co-produced by sex workers in the community and Stella staff to reduce violence against sex workers. We analyze its current and potential future formats as an artefact for communication, in a context of sex work criminalization and the exclusion of sex workers from traditional routes for reporting violence and accessing governmental systems for justice. This paper addresses a novel aspect of HCI research that relates to digital technologies and social justice. Reflecting on the Bad Client and Aggressor List, we discuss how technologies can interact with justice-oriented service delivery and develop three implications for design.
Career Mentoring in Online Communities: Seeking and Receiving Advice from an Online Community
Although people frequently seek mentoring or advice for their career, most mentoring is performed in person. Little research has examined the nature and quality of career mentoring online. To address this gap, we study how people use online Q&A forums for career advice. We develop a taxonomy of career advice requests based on a qualitative analysis of posts in a career-related online forum, identifying three key types: best practices, career threats, and time-sensitive requests. Our quantitative analysis of responses shows that both requesters and external viewers value general information, encouragement, and guidance, but not role modeling. We found no relation between the type of requests and features of responses, nor differences in responses valued by requesters versus external viewers. We present design recommendations for supporting online career advice exchange.
Cognitive Aids in Acute Care: Investigating How Cognitive Aids Affect and Support In-hospital Emergency Teams
Cognitive aids – artefacts that support a user in completing a task at the time it is performed – have attracted great interest as a means of supporting healthcare staff during medical emergencies. However, the mechanisms of how cognitive aids support or affect staff remain understudied. We describe the iterative development of a tablet-based cognitive aid application to support in-hospital resuscitation team leaders. We report a summative evaluation of two different versions of the application. Finally, we outline the limitations of current explanations of how cognitive aids work and suggest an approach based on embodied cognition. We discuss how cognitive aids alter the task of the team leader (distributed cognition), the importance of the present team situation (socially situated), and the result of the interaction between mind and environment (sensorimotor coupling). Understanding and considering the implications of introducing cognitive aids may help to increase the acceptance and effectiveness of cognitive aids and eventually improve patient safety.
Facilitating Self-reflection about Values and Self-care Among Individuals with Chronic Conditions
Individuals with multiple chronic conditions (MCC) experience the overwhelming burden of treating MCC and frequently disagree with their providers on priorities for care. Aligning self-care with patients’ values may improve healthcare for these patients. However, patients’ values are not routinely discussed in clinical conversations and patients may not actively share this information with providers. In a qualitative field study, we interviewed 15 patients in their homes to investigate techniques that encourage patients to articulate values, self-care, and how they relate. Study activities facilitated self-reflection on values and self-care and produced varying responses, including: raising consciousness, evolving perspectives, identifying misalignments, and considering changes. We discuss how our findings extend prior work on supporting reflection in HCI and inform the design of tools for improving care for people with MCC.
Pyrus: Designing A Collaborative Programming Game to Promote Problem Solving Behaviors
While problem solving is a crucial aspect of programming, few learning opportunities in computer science focus on teaching problem-solving skills like planning. In this paper, we present Pyrus, a collaborative game designed to encourage novices to plan in advance while programming. Through Pyrus, we explore a new approach to designing educational games we call behavior-centered game design, in which designers first identify behaviors that learners should practice to reach desired learning goals and then select game mechanics that incentivize those behaviors. Pyrus leverages game mechanics like a failure condition, distributed resources, and enforced turn-taking to encourage players to plan and collaborate. In a within-subjects user study, we found that pairs of novices spent more time planning and collaborated more equally when solving problems in Pyrus than in pair programming. These findings show that game mechanics can be used to promote desirable learning behaviors like planning in advance, and suggest that our behavior-centered approach to educational game design warrants further study.
Inalienability: Understanding Digital Gifts
This paper takes on one of the rarely articulated yet important questions pertaining to digital media objects: how do HCI and design researchers understand ‘gifting’ when the object can just as easily be ‘shared’? This question has often been implied and occasionally answered, though only partially. We propose the concept of ‘inalienability’, taken from the gifting literature, as a useful theory for clarifying what design researchers mean by gifting in a digital context. We apply ‘inalienability’ to three papers from the ACM Digital Library and one ongoing project, spanning nearly two decades of HCI and design research, that combine ‘gifting’ and ‘sharing’ in their frameworks. In this way we show how applying the concept of ‘inalienability’ can clarify behaviours that mark gifting as a unique activity, frame research questions around gifting and sharing, outline specific next steps for gifting research, and suggest design strategies in this area.
As Light as You Aspire to Be: Changing Body Perception with Sound to Support Physical Activity
Supporting exercise adherence through technology remains an important HCI challenge. Recent work showed that altering walking sounds leads people to perceive themselves as thinner/lighter and happier, and to walk more dynamically. While this novel approach shows potential for physical activity, it raises critical questions impacting technology design. We ran two studies in the context of exertion (gym-step, stairs-climbing) to investigate how individual factors impact the effect of sound and the duration of the after-effects. The results confirm that the effects of sound on body perception occur even in physically demanding situations and through ubiquitous wearable devices. We also show that the effect of sound interacted with participants’ body weight and masculinity/femininity aspirations, but not with gender. Additionally, changes in body perception did not hold once the feedback stopped; however, body feelings or behavioural changes appeared to persist for longer. We discuss the results in terms of the malleability of body perception and highlight opportunities for supporting exercise adherence.
Who Would You Like to Work With?
People and organizations are increasingly using online platforms to assemble teams. In response, HCI researchers have theorized frameworks and created systems to support team assembly. However, little is known about how users search for and choose teammates on these platforms. We conducted a field study where 530 participants used a team formation system to assemble project teams. We describe how users’ traits and social networks influence their teammate searches, teammate choices, and team composition. Our results show that (a) what users initially search for differs from what they finally choose: initially they search for experts and sociable users, but they are ultimately more likely to choose their prior social connections as their teammates; (b) users’ decisions lead to non-diverse and segregated teams, where most of the expertise and social capital are concentrated in a few teams. We discuss the implications of these results for designing team formation systems that promote users’ agency.
ModiFiber: Two-Way Morphing Soft Thread Actuators for Tangible Interaction
Although thin-line actuators are becoming widely adopted in different Human-Computer Interaction (HCI) contexts, including integration into fabrics, paper art, hinges, soft robotics, and human hair, accessible line-based actuators remain very limited beyond shape memory alloy (SMA) wire and motor-driven passive tendons. In this paper, we introduce a novel, yet simple and accessible, line-based actuator. ModiFiber is a twisted-then-coiled nylon thread actuator with a silicone coating. This composite thread actuator exhibits unique two-way reversible shrinking or twisting behaviors triggered by heat or electrical current (i.e., Joule heating). ModiFiber is soft, flexible, safe to operate, and easily woven or sewn; hence, it has great potential as an embedded line-based actuator for HCI purposes. In this paper, we explain the material mechanisms and manufacturing approaches, followed by performance tests and application demonstrations.
Breakdowns in Home-School Collaboration for Behavioral Intervention
For some children, behavioral health services are critical in supporting their development and preventing adverse outcomes such as school dropout, substance use, or encounters with juvenile justice. Schools play an important role in identifying problem behavior and providing appropriate intervention, and these efforts are most effective when executed in collaboration with parents at home. However, home-school collaboration is difficult to achieve. In this work, we investigated lack of information sharing as a barrier to collaboration, through a qualitative study including observation, contextual inquiry, and interviews. We found that policies, processes, and tools for documenting behaviors in schools are implemented without significant consideration toward exchanging information with parents. Consequently, a lack of effective two-way information sharing tended to hinder collaboration and erode trust. Combining our empirical findings with evidence-based strategies for parent involvement, we discuss design opportunities for promoting collaboration toward positive behavioral outcomes for children.
VizNet: Towards A Large-Scale Visualization Learning and Benchmarking Repository
Researchers currently rely on ad hoc datasets to train automated visualization tools and evaluate the effectiveness of visualization designs. These exemplars often lack the characteristics of real-world datasets, and their one-off nature makes it difficult to compare different techniques. In this paper, we present VizNet: a large-scale corpus of over 31 million datasets compiled from open data repositories and online visualization galleries. On average, these datasets comprise 17 records over 3 dimensions. Across the corpus, we find that 51% of the dimensions record categorical data, 44% quantitative data, and only 5% temporal data. VizNet provides the necessary common baseline for comparing visualization design techniques, and developing benchmark models and algorithms for automating visual analysis. To demonstrate VizNet’s utility as a platform for conducting online crowdsourced experiments at scale, we replicate a prior study assessing the influence of user task and data distribution on visual encoding effectiveness, and extend it by considering an additional task: outlier detection. To contend with running such studies at scale, we demonstrate how a metric of perceptual effectiveness can be learned from experimental results, and show its predictive power across test datasets.
Shape Structuralizer: Design, Fabrication, and User-driven Iterative Refinement of 3D Mesh Models
Current Computer-Aided Design (CAD) tools lack proper support for guiding novice users towards designs ready for fabrication. We propose Shape Structuralizer (SS), an interactive design support system that repurposes surface models into structural constructions using rods and custom 3D-printed joints. Shape Structuralizer embeds a recommendation system that computationally supports the user during design ideation by providing design suggestions on local refinements of the design. This strategy enables novice users to choose designs that both satisfy stress constraints as well as their personal design intent. The interactive guidance enables users to repurpose existing surface mesh models, analyze them in-situ for stress and displacement constraints, add movable joints to increase functionality, and attach a customized appearance. This also empowers novices to fabricate even complex constructs while ensuring structural soundness. We validate the Shape Structuralizer tool with a qualitative user study where we observed that even novice users were able to generate a large number of structurally safe designs for fabrication.
Designing Participatory Sensing with Remote Communities to Conserve Endangered Species
The increasing loss of species globally calls for effective monitoring tools and strategies to inform conservation action. The dominant approach to citizen engagement has been smartphone- and platform-centric, tasking crowds to collect and analyze data. However, many critically endangered species inhabit remote areas, characterized by sparsely populated communities with poor internet connectivity. Approaches need to garner high engagement relative to population size, with data collection and knowledge synthesis suited to the local context. We conducted a field study in remote communities to understand how to enhance conservation of Bhutan’s critically endangered White-bellied heron by exploring existing monitoring practices and trialing acoustic sensing technologies. We found that knowledge about the species is partial, heterogeneous, situated within and across communities and rooted in cultural beliefs. Sensors, acoustic interfaces, and playful probes provided new ways for the community to ‘see’ and discuss their local environment, encouraging them to share and grow their knowledge together. We contribute a synthesis of key considerations for designing effective participatory sensing to conserve species in remote communities.
"Tricky to get your head around": Information Work of People Managing Chronic Kidney Disease in the UK
People diagnosed with a chronic health condition have many information needs which healthcare providers, patient groups, and resource designers seek to support. However, as a disease progresses, knowing when, how, and for what purposes patients want to interact with and construct personal meaning from health-related information is still unclear. This paper presents findings regarding the information work of chronic kidney disease patients. We conducted semi-structured interviews with 13 patients and 6 clinicians, and observations at 9 patient group events. We used the stages of the information journey – recognizing need, seeking, interpreting, and using information – to frame our data analysis. We identified two distinct but often overlapping information work phases, ‘Learning’ and ‘Living With’ a chronic condition to show how patient information work activities shift over time. We also describe social and individual factors influencing information work, and discuss technology design opportunities including customized education and collaboration tools.
Long-Term Value of Social Robots through the Eyes of Expert Users
Socially-enabled digital technologies have attracted academic interest for decades, with recent commercial examples such as Siri and Alexa capturing public attention. However, despite ubiquitous visions of a robotic future, very few fully-fledged social robots are currently available to consumers. To improve their designs, studies of their long-term use are particularly valuable, but are currently unavailable. To address this gap, we report on interviews with four long-term users of Pepper – a social robot introduced in 2014. Our thematic analysis elicited insights across three kinds of value Pepper brought to its users: utilitarian functionality; the community that formed around Pepper; and a personal value of affection. We focus on two contributions those values bring to social robot design: social robots as social proxies, alleviating disabilities or acting akin to social media profiles; and robot nurturing as a design construct, going beyond purely utilitarian or hedonistic perspectives on robots.
Measuring and Understanding Photo Sharing Experiences in Social Virtual Reality
Millions of photos are shared online daily, but the richness of interaction compared with face-to-face (F2F) sharing is still missing. While this may change with social Virtual Reality (socialVR), we still lack tools to measure such immersive and interactive experiences. In this paper, we investigate photo sharing experiences in immersive environments, focusing on socialVR. Running context mapping (N=10), an expert creative session (N=6), and an online experience clustering questionnaire (N=20), we develop and statistically evaluate a questionnaire to measure photo sharing experiences. We then ran a controlled, within-subject study (N=26 pairs) to compare photo sharing under F2F, Skype, and Facebook Spaces. Using interviews, audio analysis, and our questionnaire, we found that socialVR can closely approximate F2F sharing. We contribute empirical findings on the immersiveness differences between digital communication media, and propose a socialVR questionnaire that can in the future generalize beyond photo sharing.
Email Makes You Sweat: Examining Email Interruptions and Stress Using Thermal Imaging
Workplace environments are characterized by frequent interruptions that can lead to stress. However, measures of stress due to interruptions are typically obtained through self-reports, which can be affected by memory and emotional biases. In this paper, we use a thermal imaging system to obtain objective measures of stress and investigate personality differences in contexts of high and low interruptions. Since a major source of workplace interruptions is email, we studied 63 participants as they multitasked in a controlled office environment under two different email contexts: managing email in batch mode or with frequent interruptions. We discovered that people who score high in Neuroticism are significantly more stressed in batching environments than those low in Neuroticism. People who are more stressed finish emails faster. Lastly, using Linguistic Inquiry and Word Count (LIWC) on the email text, we find that more highly stressed people in multitasking environments express more anger in their emails. These findings help to disambiguate prior conflicting results on email batching and stress.
Measuring the Separability of Shape, Size, and Color in Scatterplots
Scatterplots commonly use multiple visual channels to encode multivariate datasets. Such visualizations often use size, shape, and color as these dimensions are considered separable: dimensions represented by one channel do not significantly interfere with viewers’ abilities to perceive data in another. However, recent work shows the size of marks significantly impacts color difference perceptions, leading to broader questions about the separability of these channels. In this paper, we present a series of crowdsourced experiments measuring how mark shape, size, and color influence data interpretation in multiclass scatterplots. Our results indicate that mark shape significantly influences color and size perception, and that separability among these channels functions asymmetrically: shape more strongly influences size and color perceptions in scatterplots than size and color influence shape. Models constructed from the resulting data can help designers anticipate viewer perceptions to build more effective visualizations.
Beyond Behavior: The Coach’s Perspective on Technology in Health Coaching
Rapid innovations in electronic healthcare and behavior tracking systems are challenging health coaches (dietitians, personal trainers, etc.) to rethink their traditional roles and healthcare practices. At the same time, many current e-coaching systems have been developed without explicitly incorporating the healthcare professionals’ perspective into the design process. In the current paper, we present three consecutive qualitative studies, starting from the health coach’s perspective on successful coaching, progressively zooming in on the potential role and impact of technology as part of the coaching process. Our main finding is that coaches are concerned that introducing technology in the coaching process puts too much emphasis on behavioral information, reducing attention to the client’s lived experience, even though understanding those experiences is key to successful coaching. We summarize our insights in a multi-channel communication model and draw implications for the design of supporting technology in health coaching.
Privacy, Anonymity, and Perceived Risk in Open Collaboration: A Study of Service Providers
Anonymity can enable both healthy online interactions like support-seeking and toxic behaviors like hate speech. How do online service providers balance these threats and opportunities? This two-part qualitative study examines the challenges perceived by open collaboration service providers in allowing anonymous contributions to their projects. We interviewed eleven people familiar with organizational decisions related to privacy and security at five open collaboration projects and followed up with an analysis of public discussions about anonymous contribution to Wikipedia. We contrast our findings with prior work on threats perceived by project volunteers and explore misalignment between policies aiming to serve contributors and the privacy practices of contributors themselves.
Aarnio: Passive Kinesthetic Force Output for Foreground Interactions on an Interactive Chair
We propose a new type of haptic output for foreground interactions on an interactive chair, where input is carried out explicitly in the foreground of the user’s consciousness. This type of force output restricts a user’s motion by modulating the resistive force when rotating a seat, tilting the backrest, or rolling the chair. These interactions are useful for many applications in a ubiquitous computing environment, ranging from immersive VR games to rapid and private query of information for people who are occupied with other tasks (e.g. in a meeting). We carefully designed and implemented our proposed haptic force output on a standard office chair and determined the recognizability of five force profiles for rotating, tilting, and rolling the chair. We present the result of our studies, as well as a set of novel interaction techniques enabled by this new force output for chairs.
Does It Feel Real?: Using Tangibles with Different Fidelities to Build and Explore Scenes in Virtual Reality
Professionals in domains like film, theater, or architecture often rely on physical models to visualize spaces. With virtual reality (VR) new tools are available providing immersive experiences with correct perceptions of depth and scale. However, these lack the tangibility of physical models. Using tangible objects in VR can close this gap but creates the challenges of producing suitable objects and interacting with them with only the virtual objects visible. This work addresses these challenges by evaluating tangibles with three haptic fidelities: equal disc-shaped tangibles for all virtual objects, Lego-built tangibles, and 3D-printed tangibles resembling the virtual shapes. We present results from a comparative study on immersion, performance, and intuitive interaction and interviews with domain experts. The results show that 3D-printed objects perform best, but Lego offers a good trade-off between fast creation of tangibles and sufficient fidelity. The experts rate our approach as useful and would use all three versions.
How Do Humans Assess the Credibility on Web Blogs: Qualifying and Verifying Human Factors with Machine Learning
The purpose of this paper is to understand the factors involved when a human judges the credibility of information and to develop a classification model for weblogs, a primary source of information for many people. Considering both computational and human-centered approaches, we conducted a user study designed to consider two cognitive procedures, (1) visceral and behavioral assessments and (2) reflective assessments, in the evaluation of information credibility. The results of the 80-participant study highlight that human cognitive processing varies according to an individual’s purpose and that humans consider the structures and styles of content in their reflective assessments. We experimentally validated these findings through the development and analysis of classification models using 16,304 real blog posts written by 2,944 bloggers. Our models yield greater accuracy and efficiency than models built with the well-known best features identified in prior research.
Adding Proprioceptive Feedback to Virtual Reality Experiences Using Galvanic Vestibular Stimulation
We present a small and lightweight wearable device that enhances virtual reality experiences and reduces cybersickness by means of galvanic vestibular stimulation (GVS). GVS is a specific way to elicit vestibular reflexes that has been used for over a century to study the function of the vestibular system. In addition to GVS, we support physiological sensing by connecting heart rate, electrodermal activity and other sensors to our wearable device using a plug and play mechanism. An accompanying Android app communicates with the device over Bluetooth (BLE) for transmitting the GVS stimulus to the user through electrodes attached behind the ears. Our system supports multiple categories of virtual reality applications with different types of virtual motion such as driving, navigating by flying, teleporting, or riding. We present a user study in which participants (N = 20) experienced significantly lower cybersickness when using our device and rated experiences with GVS-induced haptic feedback as significantly more immersive than a no-GVS baseline.
VibEye: Vibration-Mediated Object Recognition for Tangible Interactive Applications
We present VibEye: a vibration-mediated object recognition system for tangible interaction. A user holds an object between two fingers wearing VibEye. VibEye triggers a vibration from one finger, and the vibration that has propagated through the object is sensed at the other finger. This vibration includes information about the object’s identity, and we represent it using a spectrogram. Collecting the spectrograms of many objects, we formulate object recognition as a classical classification problem over the resulting images. This simple method, when tested with 20 users, shows 92.5% accuracy for 16 objects of the same shape with various materials. This material-based classifier is also extended to the recognition of everyday objects. Lastly, we demonstrate several tangible applications where VibEye provides the needed functionality while enhancing user experiences. VibEye is particularly effective for recognizing objects made of different materials, which are difficult to distinguish by other means such as light and sound.
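The pipeline the abstract describes, vibration to spectrogram to image classification, can be sketched as follows. The window size, hop length, and choice of classifier are assumptions for illustration, not VibEye's actual parameters.

```python
# Hedged sketch: turn a received vibration into a spectrogram "image" and
# hand it to a stock classifier.
import numpy as np

def spectrogram(signal, window=256, hop=128):
    """Magnitude spectrogram (frames x frequency bins); signal is a 1-D NumPy array."""
    frames = [signal[i:i + window] * np.hanning(window)
              for i in range(0, len(signal) - window + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

# Each labelled recording becomes one flattened feature vector, e.g. (hypothetically):
#   from sklearn.svm import SVC
#   X = [spectrogram(sig).ravel() for sig in recordings]
#   clf = SVC().fit(X, object_labels)
```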
Sound Forest: Evaluation of an Accessible Multisensory Music Installation
Sound Forest is a music installation consisting of a room with light-emitting interactive strings, vibrating platforms and speakers, situated at the Swedish Museum of Performing Arts. In this paper we present an exploratory study focusing on an evaluation of Sound Forest based on picture cards and interviews. Since Sound Forest should be accessible to everyone, regardless of age or abilities, we invited children, teens, and adults with physical and intellectual disabilities to take part in the evaluation. The main contribution of this work lies in its findings suggesting that multisensory platforms such as Sound Forest, providing whole-body vibrations, can be used to provide visitors of different ages and abilities with similar associations to musical experiences. Interviews also revealed positive responses to haptic feedback in this context. Participants of different ages used different strategies and bodily modes of interaction in Sound Forest, with activities ranging from running to synchronized music-making and collaborative play.
Making Sense of Human-Food Interaction
Activity in Human-Food Interaction (HFI) research is skyrocketing across a broad range of disciplinary interests and concerns. The dynamic and heterogeneous nature of this emerging field presents a challenge to scholars wishing to critically engage with prior work, identify gaps and ensure impact. It also challenges the formation of community. We present a Systematic Mapping Study of HFI research and an online data visualisation tool developed to respond to these issues. The tool allows researchers to engage in new ways with the HFI literature, propose modifications and additions to the review, and thereby actively engage in community-making. Our contribution is threefold: (1) we characterize the state of HFI, reporting trends, challenges and opportunities; (2) we provide a taxonomy and tool for diffractive reading of the literature; and (3) we offer our approach for adaptation by research fields facing similar challenges, positing the value of the tool and approach beyond HFI.
Designing for Digital Playing Out
We report on a design-led study in the UK that aimed to understand barriers to children (aged 5 to 14 years) ‘playing out’ in their neighbourhood and explore the potential of the Internet of Things (IoT) for supporting children’s free play that extends outdoors. The study forms a design ethnography, combining observational fieldwork with design prototyping and co-creative activities across four linked workshops, where we used BBC micro:bit devices to co-create new IoT designs with the participating children. Our collective account contributes new insights about the physical and interactive features of micro:bits that shaped play, gameplay, and social interaction in the workshops, illuminating an emerging design space for supporting ‘digital playing out’ that is grounded in empirical instances. We highlight opportunities for designing for digital playing out in ways that promote social negotiation, support varying participation, allow for integrating cultural influences, and account for the weaving together of placemaking and play.
Life-Affirming Biosensing in Public: Sounding Heartbeats on a Red Bench
“Smart city” narratives promise IoT data-driven innovations leveraging biosensing technologies. We argue this overlooks a potential benefit of city living: affirmation. We designed the Heart Sounds Bench, which amplifies the heart sounds of those sitting on it, as well as recording and playing back the heart sounds of previous sitters. We outline our design intent to invite rest, reflection, and recognition of others’ lives in public space. We share results from a study with 19 participants. Participants expressed feeling connected to a shared life energy including others and the environment, and described heart sounds as feeling intimate yet anonymous. Finally, we elaborate the concept of life-affirmation in terms of recognition of others’ lives, feeling connection, and respecting untranslatable differences with opacity, as a way of helping “smart city” designs embrace a multiplicity of desires.
ATMSeer: Increasing Transparency and Controllability in Automated Machine Learning
To relieve the pain of manually selecting machine learning algorithms and tuning hyperparameters, automated machine learning (AutoML) methods have been developed to automatically search for good models. Due to the huge model search space, it is impossible to try all models. Users tend to distrust automatic results and increase the search budget as much as they can, thereby undermining the efficiency of AutoML. To address these issues, we design and implement ATMSeer, an interactive visualization tool that supports users in refining the search space of AutoML and in analyzing the results. To guide the design of ATMSeer, we derive a workflow of using AutoML based on interviews with machine learning experts. A multi-granularity visualization is proposed to enable users to monitor the AutoML process, analyze the searched models, and refine the search space in real time. We demonstrate the utility and usability of ATMSeer through two case studies, expert interviews, and a user study with 13 end users.
A Bayesian Cognition Approach to Improve Data Visualization
People naturally bring their prior beliefs to bear on how they interpret new information, yet few formal models account for the influence of users’ prior beliefs in interactions with data presentations like visualizations. We demonstrate a Bayesian cognitive model for understanding how people interpret visualizations in light of prior beliefs and show how this model provides a guide for improving visualization evaluation. In a first study, we show how applying a Bayesian cognition model to a simple visualization scenario indicates that people’s judgments are consistent with a hypothesis that they are doing approximate Bayesian inference. In a second study, we evaluate how sensitive our observations of Bayesian behavior are to different techniques for eliciting people’s subjective distributions, and to different datasets. We find that people do not behave consistently with Bayesian predictions for datasets with large sample sizes, and this difference cannot be explained by elicitation technique. In a final study, we show how normative Bayesian inference can be used as an evaluation framework for visualizations, including visualizations of uncertainty.
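For intuition, a normative Bayesian benchmark of the kind such evaluations compare against can be as simple as a Beta-Binomial update of a viewer's prior belief about a proportion. The prior and the data shown to the viewer below are hypothetical, not taken from the paper.

```python
# Hedged sketch: normative posterior over a proportion, assuming a Beta(a, b)
# prior and a visualization showing k successes out of n trials.
def beta_binomial_update(a, b, k, n):
    """Return posterior Beta parameters and the posterior mean."""
    a_post, b_post = a + k, b + (n - k)
    return a_post, b_post, a_post / (a_post + b_post)

# A viewer whose prior belief centers on 30% (Beta(3, 7)) sees a chart of
# 60 successes in 100 trials; the normative posterior mean is about 0.57.
print(beta_binomial_update(3, 7, 60, 100))
```

A viewer's elicited post-visualization belief can then be compared against this normative posterior to quantify how Bayesian their updating appears.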
PersonalTouch: Improving Touchscreen Usability by Personalizing Accessibility Settings based on Individual User’s Touchscreen Interaction
Modern touchscreen devices have recently introduced customizable touchscreen settings to improve accessibility for users with motor impairments. For example, iOS 10 introduced the following four Touch Accommodation settings: 1) Hold Duration, 2) Ignore Repeat, 3) Tap Assistance, and 4) Tap Assistance Gesture Delay. These four independent settings lead to a total of more than 1 million possible configurations, making it impractical to manually determine the optimal settings. We present PersonalTouch, which collects and analyzes touchscreen gestures performed by individual users, and recommends personalized, optimal touchscreen accessibility settings. Results from our user study show that PersonalTouch significantly improves touch input success rate for users with motor impairments (20.2%, N=12, p=.00054) and for users without motor impairments (1.28%, N=12, p=.032).
The Parenting Actor-Network of Latino Immigrants in the United States
The field of Human-Computer Interaction (HCI) has shown a growing interest in how technology might support parenting. An area that remains underexplored is the design of technology to support parents from nondominant groups in positively impacting their children’s education. Drawing on Actor-Network Theory (ANT), our paper takes a sociotechnical view of low-income Latino Spanish-speaking immigrants in the U.S.—a large nondominant group—attempting to form alliances with other actors such as teachers, the broader community, and technology to exchange information that might enrich their children’s education. The use of ANT allowed us to advance work on parenting in HCI by providing a deeper understanding of the reasons—including attributes embedded in technology—impacting the quality of information channels in the parental engagement network of a nondominant group. Further, our ANT analysis illuminates a discussion of challenges and opportunities for technology to intervene in the network in ways that align with all actors’ needs and harness their potentialities.
Geollery: A Mixed Reality Social Media Platform
We present Geollery, an interactive mixed reality social media platform for creating, sharing, and exploring geotagged information. Geollery introduces a real-time pipeline to progressively render an interactive mirrored world with three-dimensional (3D) buildings, internal user-generated content, and external geotagged social media. This mirrored world allows users to see, chat, and collaborate with remote participants with the same spatial context in an immersive virtual environment. We describe the system architecture of Geollery, its key interactive capabilities, and our design decisions. Finally, we conduct a user study with 20 participants to qualitatively compare Geollery with another social media system, Social Street View. Based on the participants’ responses, we discuss the benefits and drawbacks of each system and derive key insights for designing an interactive mirrored world with geotagged social media. User feedback from our study reveals several use cases for Geollery including travel planning, virtual meetings, and family gathering.
Passquerade: Improving Error Correction of Text Passwords on Mobile Devices by using Graphic Filters for Password Masking
Entering text passwords on mobile devices is a significant challenge. Current systems either display passwords in plain text, making them visible to bystanders, or replace characters with asterisks shortly after they are typed, making them harder to edit. This work presents a novel approach to masking text passwords by distorting them using graphical filters. Distorted passwords are difficult for attackers to observe because they cannot mentally reverse the distortions. Yet passwords remain readable to their owners because humans can recognize visually distorted versions of content they have seen before. We present results of an online questionnaire and a user study where we compared Color-halftone, Crystallize, Blurring, and Mosaic filters to Plain text and Asterisks when 1) entering, 2) editing, and 3) shoulder surfing one-word passwords, random character passwords, and passphrases. Rigorous analysis shows that Color-halftone and Crystallize filters significantly improve editing speed, editing accuracy and observation resistance compared to current approaches.
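To illustrate the general idea of graphical password masking, a Mosaic-style (pixelation) effect can be approximated with Pillow by rendering the text, downsampling it, and upsampling with nearest-neighbour interpolation. The block size and image dimensions are arbitrary assumptions, not the paper's rendering parameters.

```python
# Hedged sketch: pixelate a rendered password so it is hard to shoulder-surf
# while remaining recognizable to someone who already knows the text.
from PIL import Image, ImageDraw

def mosaic_mask(text, block=6, size=(240, 48)):
    """Render `text` in black on white, then pixelate it into coarse blocks."""
    img = Image.new("L", size, 255)                 # white canvas
    ImageDraw.Draw(img).text((8, 8), text, fill=0)  # draw the password
    small = img.resize((size[0] // block, size[1] // block), Image.BILINEAR)
    return small.resize(size, Image.NEAREST)        # blocky, distorted rendering

mosaic_mask("correct horse battery staple").save("masked.png")
```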
HoloDoc: Enabling Mixed Reality Workspaces that Harness Physical and Digital Content
Prior research identified that physical paper documents have many positive attributes, for example natural tangibility and inherent physical flexibility. When documents are presented on digital devices, however, they can provide unique functionality to users, such as the ability to search, view dynamic multimedia content, and make use of indexing. This work explores the fusion of physical and digital paper documents. It first presents the results of a study that probed how users perform document-intensive analytical tasks when both physical and digital versions of documents were available. The study findings then informed the design of HoloDoc, a mixed reality system that augments physical artifacts with rich interaction and dynamic virtual content. Finally, we present the interaction techniques that HoloDoc affords, and the results of a second study that assessed HoloDoc’s utility when working with digital and physical copies of academic articles.
SwarmHaptics: Haptic Display with Swarm Robots
This paper seeks to better understand the use of haptic feedback in abstract, ubiquitous robotic interfaces. We introduce and provide preliminary evaluations of SwarmHaptics, a new type of haptic display using a swarm of small, wheeled robots. These robots move on a flat surface and apply haptic patterns to the user’s hand, arm, or any other accessible body part. We explore the design space of SwarmHaptics, including individual and collective robot parameters, and demonstrate example scenarios, including remote social touch using the Zooids platform. To gain insights into human perception, we applied haptic patterns with varying numbers of robots, force types, frequencies, and amplitudes, and obtained users’ perceptions in terms of emotion, urgency, and Human-Robot Interaction metrics. In a separate elicitation study, users generated a set of haptic patterns for social touch. The results from the two studies help inform how users perceive and generate haptic patterns with SwarmHaptics.
Estimating Touch Force with Barometric Pressure Sensors
Finger pressure offers a new dimension for touch interaction, where input is defined by its spatial position and orthogonal force. However, the limited availability and complexity of integrated force-sensing hardware in mobile devices is a barrier to exploring this design space. This paper presents a synthesis of two features in recent mobile devices – a barometric sensor (pressure altimeter) and ingress protection – to sense a user’s touch force. When a user applies force to a device’s display, it flexes inward and causes an increase in atmospheric pressure within the sealed chassis. This increase in pressure can be sensed by the device’s internal barometer. However, this change is uncontrolled and requires a calibration model to map atmospheric pressure to touch force. This paper derives such a model and demonstrates its viability on four commercially-available devices (including two with dedicated force sensors). The results show this method is sensitive to forces of less than 1 N, and is comparable to dedicated force sensors.
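The calibration step can be pictured as a simple per-device regression from internal pressure rise to applied force. The linear form and the sample values below are assumptions for illustration; the paper's actual model and data are not reproduced here.

```python
# Hedged sketch: least-squares calibration mapping barometric pressure rise (Pa)
# to touch force (N), fitted against a reference force sensor.
def fit_linear(pressure_delta, force):
    """Ordinary least squares fit: force ≈ gain * delta_p + offset."""
    n = len(force)
    mean_p = sum(pressure_delta) / n
    mean_f = sum(force) / n
    cov = sum((p - mean_p) * (f - mean_f) for p, f in zip(pressure_delta, force))
    var = sum((p - mean_p) ** 2 for p in pressure_delta)
    gain = cov / var
    return gain, mean_f - gain * mean_p

# Hypothetical calibration samples: (pressure rise in Pa, applied force in N).
deltas, forces = [2.0, 5.1, 9.8, 14.7], [0.5, 1.2, 2.4, 3.6]
gain, offset = fit_linear(deltas, forces)
print(round(gain * 7.0 + offset, 2))  # estimated force for a 7 Pa rise
```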
A Badge, Not a Barrier: Designing for-and Throughout-Digital Badge Implementation
We synthesize insights from a multi-year project involving the design and implementation of a digital badge system with youth co-designers at a science center. Using stakeholder interviews and surveys, participatory design session data, and user analytics, we identify the sociotechnical, sociocultural, and technical challenges of long-term badge implementation and propose several recommendations for the design and implementation of future badge systems. By identifying these challenges and providing recommendations that foreground stakeholder values and participation, we show how to support implementation throughout the entire design-to-implementation cycle.
Virtual Objects in the Physical World: Relatedness and Psychological Ownership in Augmented Reality
As technology advances, people increasingly interact with virtual objects in settings such as augmented reality (AR), where the virtual layer is superimposed on top of the physical world. Similarly to interactions with physical objects, users may assign value to virtual objects, experience a sense of relatedness, and develop psychological ownership over these objects. The objective of this study is to understand how AR’s unique characteristics influence the emergence of meaning and ownership perceptions amongst users. We conducted a study of users’ interactions with a virtual dog over a three-week period, comparing AR and fully virtual settings. Our findings show that engagement with the application is a key determinant of the relation users develop with virtual objects. However, the effect of the background layer, whether physical or virtual, dominates the development of relatedness and ownership feelings, highlighting the importance of the “real” physical layer in shaping users’ perceptions.
Signal Appropriation of Explicit HIV Status Disclosure Fields in Sex-Social Apps used by Gay and Bisexual Men
HIV status disclosure fields in online sex-social applications (“apps”) are designed to help increase awareness, reduce stigma, and promote sexual health. Public disclosure could also help those diagnosed relate to others with similar statuses to feel less isolated. However, in our interview study (n=28) with HIV positive and negative men who have sex with men (MSM), we found some users preferred to keep their status private, especially when disclosure could stigmatise and disadvantage them, or risk revealing their status to someone they knew offline in a different context. How do users manage these tensions between health, stigma, and privacy? We analysed our interview data using signalling theory as a conceptual framework and identify participants developing ‘signal appropriation’ strategies, helping them manage the disclosure of their HIV status. Additionally, we propose a set of design considerations that explore the use of signals in the design of sensitive disclosure fields.
HapTwist: Creating Interactive Haptic Proxies in Virtual Reality Using Low-cost Twistable Artefacts
In this paper, we present a series of studies on using Rubik’s Twist, a type of low-cost twistable artefact, to create haptic proxies for various hand-graspable VR objects. Our pilot studies validated the feasibility and effectiveness of Rubik’s-Twist-based haptic proxies. The pilot results also revealed user challenges in physical shape creation, motivating the development of the HapTwist toolkit. The toolkit consists of the shape-generation algorithm, the software interface for shape-construction guidance and interaction authoring, and the hardware modules for constructing interactive haptic proxies. The user studies showed that HapTwist was easy to learn and use, and that it significantly improved user performance in creating interactive haptic proxies with Rubik’s Twist. Furthermore, HapTwist-generated haptic proxies achieved a VR experience similar to that of the real objects.
Falcon: Balancing Interactive Latency and Resolution Sensitivity for Scalable Linked Visualizations
We contribute user-centered prefetching and indexing methods that provide low-latency interactions across linked visualizations, enabling cold-start exploration of billion-record datasets. We implement our methods in Falcon, a web-based system that makes principled trade-offs between latency and resolution to optimize brushing and view switching times. To optimize latency-sensitive brushing actions, Falcon reindexes data upon changes to the active view a user is brushing in. To limit view switching times, Falcon initially loads reduced interactive resolutions, then progressively improves them. Benchmarks show that Falcon sustains real-time interactivity of 50fps for pixel-level brushing and linking across multiple visualizations with no costly precomputation. We show constant brushing performance regardless of data size on datasets ranging from millions of records in the browser to billions when connected to a backing database system.
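Falcon’s actual index structures are not detailed in the abstract; the sketch below, with assumed names and bin counts, only illustrates the general idea behind pre-aggregated brushing indexes of this kind: after an active-view switch, a joint histogram plus prefix sums along the active axis lets every brush update become a small lookup rather than a scan over the records.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical dataset: two numeric columns of a large table.
active = rng.normal(size=1_000_000)   # the dimension the user is brushing
linked = rng.normal(size=1_000_000)   # the dimension shown in a linked histogram

ACTIVE_BINS, LINKED_BINS = 500, 25    # pixel-level brush bins vs. display bins

# "Reindexing" step, done once per active-view switch: joint histogram,
# then prefix sums along the active axis.
joint, _, _ = np.histogram2d(active, linked, bins=[ACTIVE_BINS, LINKED_BINS])
prefix = np.vstack([np.zeros((1, LINKED_BINS)), np.cumsum(joint, axis=0)])

def brush(lo_bin: int, hi_bin: int) -> np.ndarray:
    """Linked-histogram counts for a brush on the active dimension, O(bins) not O(rows)."""
    return prefix[hi_bin] - prefix[lo_bin]

counts = brush(100, 300)  # updates at interactive rates regardless of data size
```

The point of the sketch is only the cost structure: the expensive pass over the data happens when the active view changes, while brushing itself touches nothing larger than the bin counts.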
“When the Elephant Trumps”: A Comparative Study on Spatial Audio for Orientation in 360° Videos
Orientation is an emerging issue in cinematic Virtual Reality (VR), as viewers may fail to locate points of interest. Recent strategies to tackle this research problem have investigated the role of cues, specifically diegetic sound effects. In this paper, we examine the use of sound spatialization for orientation purposes, namely by studying different spatialization conditions (“none”, “partial”, and “full” spatial manipulation) of multitrack soundtracks. We performed a between-subject mixed-methods study with 36 participants, aided by Cue Control, a tool we developed for dynamic spatial sound editing and data collection/analysis. Based on existing literature on orientation cues in 360° video and theories of human listening, we discuss situations in which the spatialization was more effective (namely, “full” spatial manipulation both when using only music and when combining music and diegetic effects), and how this can be used by creators of 360° videos.
Egocentric Smaller-person Experience through a Change in Visual Perspective
This paper explores how human perceptions, actions, and interactions can be changed through an embodied and active experience of being a smaller person in a real-world environment, which we call an egocentric smaller-person experience. We developed a wearable visual translator that provides the perspective of a smaller person by shifting the wearer’s eyesight level down to their waist using a head-mounted display and a stereo camera module, while allowing for field-of-view control through head movements. In this study, we investigated how the developed device can modify the wearer’s body representation and experiences based on a field study conducted at a nursing school and museums, and through lab studies. We observed that participants changed their perceptions, actions, and interactions, presumably because they perceived themselves as being smaller. Using this device, designers and teachers can understand the perspectives of other people in an existing environment.
LocknType: Lockout Task Intervention for Discouraging Smartphone App Use
Instant access and gratification make it difficult for us to self-limit the use of smartphone apps. We hypothesize that a slight increase in the interaction cost of accessing an app could successfully discourage app use. We propose a proactive intervention that requests users to perform a simple lockout task (e.g., typing a fixed-length number) whenever a target app is launched. We investigate how a lockout task with varying workloads (i.e., pause only without number input, 10-digit input, and 30-digit input) influences a user’s decision making through a 3-week in-situ experiment with 40 participants. Our findings show that even the pause-only task, which requires a user to press a button to proceed, discouraged an average of 13.1% of app use, and the 30-digit-input task discouraged 47.5%. We derived determinants of app use and non-use decision making for a given lockout task. We further provide implications for persuasive technology design for discouraging undesired behaviors.
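The lockout mechanic itself is simple to picture; the snippet below is a hypothetical sketch of the three workload conditions (pause only, 10-digit input, 30-digit input), not the authors’ actual Android implementation.

```python
import random

def lockout_task(n_digits: int) -> bool:
    """Gate an app launch behind a lockout task.

    n_digits = 0 approximates the pause-only condition (a single confirmation);
    10 or 30 approximate the number-typing conditions from the study.
    """
    if n_digits == 0:
        return input("Press Enter to continue to the app (or type q to give up): ") != "q"
    code = "".join(random.choice("0123456789") for _ in range(n_digits))
    typed = input(f"Type {code} to open the app (or press Enter to give up): ")
    return typed == code

if lockout_task(n_digits=10):
    print("Launching app...")
else:
    print("Launch abandoned.")
```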
Effect of Orientation on Unistroke Touch Gestures
As touchscreens are the most successful input method of current mobile devices, touch gestures became a widely used input technique. While gestures provide users with advantages to express themselves, they also introduce challenges regarding accuracy and memorability. In this paper, we investigate the effect of a gesture’s orientation on how well the gesture can be performed. We conducted a study in which participants performed systematically rotated unistroke gestures. For straight lines as well as for compound lines, we found that users tend to align gestures with the primary axes. We show that the error can be described by a Clausen function with R² = .93. Based on our findings, we suggest design implications and highlight the potential for recognizing flick gestures, visualizing gestures and improving recognition of compound gestures.
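The abstract does not specify which Clausen function was fitted; for reference, the standard second-order Clausen function (Clausen’s integral), the usual default, is

```latex
\operatorname{Cl}_2(\theta)
  = \sum_{k=1}^{\infty} \frac{\sin(k\theta)}{k^{2}}
  = -\int_{0}^{\theta} \ln\!\left| 2\sin\tfrac{t}{2} \right| \, dt ,
```

so the reported fit presumably models gesture-production error as this curve applied to the rotation angle, up to fitted scale and offset parameters.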
LASEC: Instant Fabrication of Stretchable Circuits Using a Laser Cutter
This paper introduces LASEC, the first technique for instant do-it-yourself fabrication of circuits with custom stretchability on a conventional laser cutter and in a single pass. The approach is based on integrated cutting and ablation of a two-layer material using parametric design patterns. These patterns enable the designer to customize the desired stretchability of the circuit, to combine stretchable with non-stretchable areas, or to integrate areas of different stretchability. For adding circuits on such stretchable cut patterns, we contribute routing strategies and a real-time routing algorithm. An interactive design tool assists designers by automatically generating patterns and circuits from a high-level specification of the desired interface. The approach is compatible with off-the-shelf materials and can realize transparent interfaces. Several application examples demonstrate the versatility of the novel technique for applications in wearable computing, interactive textiles, and stretchable input devices.
Sensory Alignment in Immersive Entertainment
When we use digital systems to stimulate the senses, we typically stimulate only a subset of users’ senses, leaving other senses stimulated by the physical world. This creates potential for misalignment between senses, where digital and physical stimulation give conflicting signals to users. We synthesize knowledge from HCI, traditional entertainments, and underlying sensory science research relating to how senses work when given conflicting signals. Using this knowledge we present a design dimension of sensory alignment, and show how this dimension presents opportunities for a range of creative strategies ranging from full alignment of sensory stimulation, up to extreme conflict between senses.
How to Design Voice Based Navigation for How-To Videos
When watching how-to videos related to physical tasks, users’ hands are often occupied by the task, making voice input a natural fit. To better understand the design space of voice interactions for how-to video navigation, we conducted three think-aloud studies using: 1) a traditional video interface, 2) a research probe providing a voice controlled video interface, and 3) a wizard-of-oz interface. From the studies, we distill seven navigation objectives and their underlying intents: pace control pause, content alignment pause, video control pause, reference jump, replay jump, skip jump, and peek jump. Our analysis found that users’ navigation objectives and intents affect the choice of referent type and referencing approach in command utterances. Based on our findings, we recommend 1) supporting conversational strategies like sequence expansions and command queues, 2) allowing users to identify and refine their navigation objectives explicitly, and 3) supporting the seven interaction intents.
Caring for Vincent: A Chatbot for Self-Compassion
The digitization of mental health care holds promises of affordable and ubiquitously available treatment, e.g., with conversational agents (chatbots). While technology can guide people to care for themselves, we examined how people can care for another being as a way to care for themselves. We created a self-compassion chatbot (Vincent) and compared caregiving and care-receiving conditions. The caregiving Vincent asked participants to partake in self-compassion exercises. The care-receiving Vincent shared its foibles, e.g., embarrassingly arriving late at an IP address, and sought out advice. While self-compassion increased in both conditions, only participants with the care-receiving Vincent improved significantly. In tandem, we offer qualitative data on how participants interacted with Vincent. Our exploratory research shows that when a person cares for a chatbot, the person’s self-compassion can be enhanced. We further reflect on design implications for strengthening mental health with chatbots.
A Wee Bit More Interaction: Designing and Evaluating an Overactive Bladder App
Overactive Bladder (OAB) is a widespread condition, affecting 20% of the population. Even though it is a treatable condition, people often do not seek treatment. In this paper, we describe how we co-designed and evaluated with 30 stakeholders (9 medical professionals and 21 end-users) an OAB mobile health application that aims to increase adherence to self-managed treatment. Our results support previous research showing that visualizing progress, setting goals, and receiving reminders and feedback increase use. We discovered that games could be used successfully as a distraction technique for urge suppression. Contrary to the current research direction, automatically calculated features could be a detriment to app interaction. Regarding evaluation, we found that designers may not want to rely only on questionnaires when assessing the success of a game and its emotional impact on users.
HandSee: Enabling Full Hand Interaction on Smartphone with Front Camera-based Stereo Vision
We present HandSee, a novel sensing technique that can capture the state and movement of the user’s hands touching or gripping a smartphone. We place a right angle prism mirror on the front camera to achieve a stereo vision of the scene above the touchscreen surface. We develop a pipeline to extract the depth image of hands from a monocular RGB image, which consists of three components: a stereo matching algorithm to estimate the pixel-wise depth of the scene, a CNN-based online calibration algorithm to detect hand skin, and a merging algorithm that outputs the depth image of the hands. Building on the output, a substantial set of valuable interaction information, such as fingers’ 3D location, gripping posture, and finger identity can be recognized concurrently. Due to this unique sensing ability, HandSee enables a variety of novel interaction techniques and expands the design space for full hand interaction on smartphones.
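HandSee’s full pipeline cannot be reproduced from the abstract; purely as a sketch of its first stage, and under the assumption of a prism-doubled front-camera frame (file name and parameters below are hypothetical), off-the-shelf semi-global block matching could provide a coarse disparity map like so.

```python
import cv2

# Hypothetical input: one front-camera frame in which a right-angle prism
# mirror produces two side-by-side views of the space above the touchscreen.
frame = cv2.imread("prism_frame.png", cv2.IMREAD_GRAYSCALE)
assert frame is not None, "replace prism_frame.png with a real capture"
h, w = frame.shape
left, right = frame[:, : w // 2], frame[:, w // 2 :]

# Semi-global block matching as a stand-in for the paper's stereo stage.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = matcher.compute(left, right).astype("float32") / 16.0  # sub-pixel units

# Larger disparity means closer to the camera; a skin-detection step (a CNN in
# the paper) would mask this map down to the hand regions before recognition.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```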
Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems
The full extended abstract proceedings are available in the ACM Digital Library.
SESSION: alt.chi
How we Guide, Write, and Cite at CHI
There are many opinions on how to write an influential CHI paper, ranging from writing in an active voice to including colons in the title. However, little is known about how we actually write, and how writing influences impact. To investigate, we conducted quantitative analyses of the full text of all 6578 CHI papers published since 1982. We looked at readability, titles, novelty, and name-dropping, and related these measures to the papers’ citation counts, both overall and for different subcommittees. We found that CHI papers are more readable than papers from other fields. Furthermore, readability, title length, and novelty markers all influence citation counts.
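The abstract does not name the readability measure used; one common choice, shown below purely as an assumed illustration with a crude syllable heuristic, is the Flesch Reading Ease score, which rewards short sentences and short words.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    # Approximate syllables as groups of consecutive vowels per word.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    n_words = max(1, len(words))
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

print(flesch_reading_ease("We found that CHI papers are more readable than papers from other fields."))
```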
Digital Silence and Liberating Stories: During a Student-Driven Movement
In August 2018, a student protest in Bangladesh sought justice after two school students were run over by a public bus. Student protesters demonstrated on the street for days until they were physically attacked. Concurrent with the physical attacks, the country experienced a disconnected Internet, restrictions on social media usage, and several high-profile arrests of people speaking about the incidents. These suppressive encounters created what we call a “digital silence.” In response, we collected stories from people, which depict their effort to seek out information about the events unfolding and share their perspective of what happened. Through these in-the-moment stories, we see a glimpse of how the information suppression impacted people with varying proximity to the events, including protesters, bystanders, and family members. We also reflect on the benefit of the subtle defiance of storytelling for storytellers in the midst of this social justice effort.
Imaginary Studies: A Science Fiction Autoethnography Concerning the Design, Implementation and Evaluation of a Fictional Quantitative Study to Evaluate the Umamimi Robotic Horse Ears
In this paper, I use ‘science fiction autoethnography’ to reflect on conducting an imaginary, quantitative study. My fictional study is intended to evaluate a real-life artefact: the ‘umamimi’ robotic horse ears. This physical device provides a backdrop, against which my experiences and self-reflections are used to critique quantitative ‘hard science’. My own cognitive bias, rigid attachment to a viewpoint and presumptions (concerning anticipated results) all provide the real story. When I conduct an imaginary study, what do the process and its speculative results say about my autobiographical story and both the object and subject’s broader societal and cultural meanings?
Models of Minds: Reading the Mind Beyond the Brain
Drawing on philosophies of embodied, distributed, and extended cognition, this paper argues that the mind is readable from sensors worn on the body and embedded in the environment. It contends that past work in HCI has already begun such work, introducing the term models of minds to describe it. To those who wish to develop the capacity to build models of minds, we argue that notions of the mind are entangled with the technologies that seek to sense it. Drawing on the racial and gendered history of surveillance, we advocate for future work on how models of minds may reinforce existing vulnerabilities, and create new ones.
Patching Gender: Non-binary Utopias in HCI
Non-binary people are rarely considered by technologies or technologists, and often subsumed under binary trans experiences on the rare occasions when we are discussed. In this paper we share our own experiences and explore potential alternatives – utopias, impossible places, as our lived experience of technologies’ obsessive gender binarism seems near-insurmountable. Our suggestions on how to patch these gender bugs appear trivial while at the same time revealing seemingly insurmountable barriers. We illustrate the casual violence technologies present to non-binary people, as well as the on-going marginalisations we experience as HCI researchers. We write this paper primarily as an expression of self-empowerment that can function as a first step towards raising awareness towards the complexities at stake.
A Mulching Proposal: Analysing and Improving an Algorithmic System for Turning the Elderly into High-Nutrient Slurry
The ethical implications of algorithmic systems have been much discussed in both HCI and the broader community of those interested in technology design, development and policy. In this paper, we explore the application of one prominent ethical framework (Fairness, Accountability, and Transparency) to a proposed algorithm that resolves various societal issues around food security and population ageing. Using various standardised forms of algorithmic audit and evaluation, we drastically increase the algorithm’s adherence to the FAT framework, resulting in a more ethical and beneficent system. We discuss how this might serve as a guide to other researchers or practitioners looking to ensure better ethical outcomes from algorithmic systems in their line of work.
We Did It Right, But It Was Still Wrong: Toward Assets-Based Design
HCI interventions often fall short of delivering lasting impact in resource-constrained contexts. We reflect on a project where we followed the “right” steps of needs-based, human-centered design, yet failed to deliver impact to the community. We introduce a framework that evaluates an intervention’s potential for sustainable impact by maximizing use of assets in the community and minimizing novelty. We propose assets-based design as an approach that starts with what a community has, leveraging those assets in a design, as opposed to a needs-based approach that focuses on adding what a community lacks.
Siri, Echo and Performance: You have to Suffer Darling
Don’t ignore this because it’s about speech technology. VUIs (voice user interfaces) won a best paper in CHI 2018. Did that get your attention? Good. Siri, Ivona, Google Home, and most speech synthesis systems have voices which are based on imitating a neutral citation style of speech and making it sound natural. But, in the real world, darling, people have to act, to perform! In this paper we will talk about speech synthesis as performance, why the uncanny valley is a bankrupt concept, and how academics can escape from studying corporate speech technology as if it’s been bestowed by God.
Countermeasures: Learning to Lie to Objects
Ubiquitous computing is leading to ubiquitous sensing. Sensor components such as motion, proximity, and biometric sensors are increasingly common features in everyday objects. However, the presence and full capabilities of these components are often not clear to users. Sensor-enhanced objects have the ability to perceive without being perceived. This reduces the ability of users to control how and when they are being sensed. To address this imbalance, this project identifies the need to be able to deceive ‘smart’ objects, and proposes a number of practical interventions to increase user awareness of sensors, and encourage agency over digital sensing through acts of dishonesty to objects.
Designing the Past
This paper challenges the position that design is a future-oriented discipline, and rather turns an eye to the past as potential material for re-design. We claim that what we call ‘the past’ is far from static, monolithic, immutable, and is rather subjective, fluid, and constantly renegotiated. People constantly engage in re-designing the past by re-elaborating, reckoning, and plainly forgetting. The rewriting of the past, such as in historical revisionism, is often seen as an attempt to wipe out, and hence re-inscribe and perpetuate, injustice, oppression, and even genocide. With this paper we call for more courage to take ownership of the past as something malleable, to take responsibility for it, and in so doing to open up design opportunities to a plurality of voices.
Dissent by Design: A Manifesto for CHI Manifestos
The past decade has seen a welcome rise in critical reflection in HCI [29,13,3,19,20,21]. But the use of manifestos – not to promote but to provoke – is still rare in comparison to more established disciplines. Digital activism has given new life to the manifesto, and the manifesto may in turn give new life to CHI – prompting new ideas by temporarily liberating scholars from the confines of careful speech and rational argument. We present a manifesto for manifestos; a chance for the CHI community to question its status quo and dream of its possible futures using our purpose-built authoring tools.
Bringing Shades of Feminism To Human-Centered Computing
This consolidation of 18 stories from students and researchers of human-centered computing (HCC) represents some of the diverse shades of feminism that are present in our field. These stories, our stories, reflect how we see the world and why, also articulating the change we wish to bring.
Cyborg Perspectives on Computing Research Reform
Recent exposures of extant and potentially discriminatory impacts of technological advancement have prompted members of the computing research field to reflect on their duty to actively predict and mitigate negative consequences of their work. In 2018, Hecht et al. proposed changes to the peer-review process attending to the computing research community’s responsibility for impacts on society. In requiring researchers and reviewers to expressly consider the positive and negative consequences of each study, the hope is that our community can earnestly shape more ethical innovation and inquiry. We question whether most researchers have sufficient historical context and awareness of activist movements to recognize crucial impacts to marginalized populations. Drawing from the work of feminist theorists and critical disability scholars, we present case studies in leveraging “situated knowledges” in the analysis of research ethics.
The Continued Prevalence of Dichotomous Inferences at CHI
Dichotomous inference is the classification of statistical evidence as either sufficient or insufficient. It is most commonly done through null hypothesis significance testing (NHST). Although predominant, dichotomous inferences have proven to cause countless problems. Thus, an increasing number of methodologists have been urging researchers to recognize the continuous nature of statistical evidence and to ban dichotomous inferences. We wanted to see whether they have had any influence on CHI. Our analysis of CHI proceedings from the past nine years suggests that they have not.
Of Mice and Pants: Queering the Conventional Gamer Mouse for Cooperative Play
Within the fields of HCI and game design, conventional design practices have been criticised for perpetuating the status quo and marginalising users beyond the norm [11], [1], e.g. through genderized assumptions about user interaction [13]. To address this problem of conservatism in HCI, one recommended strategy has been queering: the use of mischievous, spaceful, and oblique design principles [13]. This contribution focuses on the conventional computer mouse within videogames as an example of a conventional input device optimised for a limited set of interactions. The article first reviews HCI discourses on the mouse within technology studies, game culture, and queer game studies. In these three domains, the mouse has been consistently reduced to its functionality as a high-precision point-and-click device, constructing it as conservative and seemingly hard-wired to cater to male-centred pleasures. We then discuss three experimental game design strategies to queer the mouse controller in The Undie Game, a cooperative wearable mouse-based installation game by the Copenhagen Game Collective. The Undie Game speculates about ways to confront and disrupt conventional expectations about gaming by fa"silly"tating interaction for two players who wear a mouse controller in their panties and collectively steer a 3D high-definition tongue on screen to achieve a mutual high score. By creating a social, silly, and potentially daunting play experience, The Undie Game reinterprets the affordances of the computer mouse to bring subjects like consent, failure, and ambiguity into the picture.
Let Us Say What We Mean: Towards Operational Definitions for Techno-Spirituality Research
Recent years have seen a dramatic increase in HCI research on the use of technology in spiritual practices and environments. Some of these works cover spiritual/transcendent experiences associated with these contexts, but strikingly few of them describe in any way the experiences they studied or aimed to support, let alone give definitions of the terms they use for those experiences. Even fewer papers cite any literature on the relevant experiences. We have to ask: How do the authors understand the experiences their work is aiming to observe, invite, or support? How do they know when and whether they have observed, invited, or supported the kinds of experiences they target? How do they know what they are studying? This paper discusses the presence and absence of definitions of terms for spiritual/transcendent experiences in HCI research, and of citations of relevant literature. It speculates about possible reasons for the oversight, proposes some definitions aimed at filling the gap, and suggests an approach to operationalizing some of the proposed definitions.
Inaction as a Design Decision: Reflections on Not Designing Self-Tracking Tools for Menopause
This reflective essay documents an attempt to design self-tracking technologies for menopause. This process culminated in the decision to not design. The contribution of this essay is the knowledge produced through reflecting on inaction. From an investigation into current examples, it became clear that applying self-tracking to menopause was fundamentally inappropriate. These technologies were also found to risk resulting in more harm than good, both in essentializing and medicalizing a non-medical process, and in perpetuating notions of the bodily experience of the menopausal transition as a negative experience.
SESSION: Awards
SIGCHI Lifetime Research Award Talk: Making Digital Tangible
Today’s mainstream Human-Computer Interaction (HCI) research primarily addresses functional concerns – the needs of users, practical applications, and usability evaluation. Tangible Bits and Radical Atoms are driven by vision and carried out with an artistic approach. While today’s technologies will become obsolete in one year, and today’s applications will be replaced in 10 years, true visions – we believe – can last longer than 100 years. Tangible Bits (3, 4) seeks to realize seamless interfaces between humans, digital information, and the physical environment by giving physical form to digital information and computation, making bits directly manipulatable and perceptible both in the foreground and background of our consciousness (peripheral awareness). Our goal is to invent new design media for artistic expression as well as for scientific analysis, taking advantage of the richness of human senses and skills we develop throughout our lifetime interacting with the physical world, as well as the computational reflection enabled by real-time sensing and digital feedback. Radical Atoms (5) leaps beyond Tangible Bits by assuming a hypothetical generation of materials that can change form and properties dynamically, becoming as reconfigurable as pixels on a screen. Radical Atoms is the future material that can transform its shape, conform to constraints, and inform the users of their affordances. Radical Atoms is a vision for the future of Human-Material Interaction, in which all digital information has a physical manifestation, thus enabling us to interact directly with it.
SIGCHI Lifetime Practice Award Talk: The Business of UX
After four decades of practice, User Experience design has reached a maturity level integral to the success of every business venture. Whether the product or service provided competes in the consumer, enterprise or medical sector, UX quality is known to directly impact effectiveness, efficiency and satisfaction, the combination of which determines consumer acceptance. However, great design alone is not sufficient to achieve meaningful impact. Products with high usability lab ratings have been rejected in the crucible of real-life usage because they don’t add sufficient value for either the consumer or company that delivers them to market. The failure of these so called “great designs” reduces them at best to museum or portfolio pieces. True impact is only achieved when the designed artifact reaches a critical level of market adoption. The service benchmarks today are Facebook with over two billion active users and Google with 1.2 trillion searches a year. Achieving significant market adoption is difficult. It requires not only delightfully fulfilling users’ needs but also a UX strategy and design optimization to fit corporate business models and marketing channels, both characterized by substantial financial risk. If there is no ROI for the product, then by association there is no ROI for design or the UX team itself. UX earns a “seat at the table” by simultaneously delivering value for both the business and the user. Owning the Business of UX role contains strategy and management challenges. Mastering them can bring UX to corporate parity with the more established engineering and marketing professions.
Inclusive and Engaged Research: Community Based Scholarship in HCI
Science and design should be relevant and accessible to everyone. HCI has a long history of service, engagement, and connection with people with higher risk for educational, physical, and social challenges. Previous winners of this award are at the forefront of those efforts toward support for these diverse and often vulnerable populations. On this 15th anniversary of this award, we have a moment to reflect on these advances. Increasingly, the CHI community is connected to our worlds outside of research and scholarship. However, we see that connection is not enough. We must instead seek true engagement. What does it mean to be a true partner, to take small steps to increase engagement in projects, in designs, and in scholarly work? Building on an existing ethos of service, three years ago, the CHI community undertook an effort to positively impact the cities we visit: Day of Service. This step is just one in a long line of efforts on the part of a responsible, committed group of scholars to leave this world better than we found it. However, these efforts also represent some of the challenges of our own privilege. How can we go beyond service to true collaboration? How can we bring to bear our vast resources while listening to the community and valuing their expertise and lived experiences? Finally, the CHI community has always been a place of greater diversity than some similar and surrounding academic communities. Recent efforts have focused on expanding that diversity further still: diversity of thought, diversity of experience and physical bodies, and diversity of racial, ethnic, and gender boundaries. What happens once we have recruited this diverse community? How do we ensure long-term inclusion in all activities and in the highest levels of leadership? The CHI Social Impact Award is an incredible honor and the talk an excellent platform. In this talk, I will reflect alongside the community. I will describe research focused on empowering people who are not typically represented in the design process as well as the requisite inclusive and democratic approaches to design. In particular, I will focus on the ways in which thought and action are deeply intertwined [2] and the generation of knowledge through participatory cycles of action and reflection [1]. I will also go beyond these specific research projects and practices to discuss the progress of the CHI community as a whole and our work to create an environment that is engaged, collaborative, and inclusive.
SIGCHI Outstanding Dissertation Award: A Quantified Past
A ‘data-driven life’ has become an established feature of present and future technological visions. This dissertation interrogates the human experience of a data-driven life, by conceptualising, investigating, and speculating about personal informatics tools as new technologies of memory. I argue that the prevalence of quantified data and metrics is creating fundamentally new and distinct records of everyday life: a ‘quantified past’. To address this, I conducted qualitative and idiographic fieldwork — with long-term self-trackers, and subsequently with users of ‘smart journals’ — to investigate how this data-driven record mediates the experience of remembering. Further, I undertook a speculative and design-led inquiry to explore the context of a ‘quantified wedding’. Adopting a context where remembering is centrally valued, this Research through Design project demonstrated opportunities for the design of data-driven tools for remembering. Crucially, while speculative, this project maintained a central focus on individual experience, and introduced an innovative methodological approach ‘Speculative Enactments’ for engaging participants meaningfully in speculative inquiry.
SIGCHI Outstanding Dissertation Award: Assignment Problems for Optimizing Text Input
Text input methods are an integral part of our daily interaction with digital devices. However, their design poses a complex problem: for any method, we must decide which input action (a button press, a hand gesture, etc.) produces which symbol (e.g., a character or word). With only 26 symbols and input actions, there are already more than 10^26 distinct solutions, making it impossible to find the best one through manual design. Prior work has shown that we can use optimization methods to search such large design spaces efficiently and automatically find a good user interface with respect to the given objectives [6]. However, work in the text entry domain has been limited mostly to the performance optimization of (soft-)keyboards (see [2] for an overview). The Ph.D. thesis [2] advances the field of text-entry optimization by enlarging the space of optimizable text-input methods and proposing new criteria for assessing their optimality. Firstly, the design problem is formulated as an assignment problem for integer programming. This enables the use of standard mathematical solvers and algorithms for efficiently finding good solutions. Then, objective functions are developed for assessing their optimality with respect to motor performance, ergonomics, and learnability. The corresponding models extend beyond interaction with soft keyboards, to consider multi-finger input, novel sensors, and alternative form factors. In addition, the thesis illustrates how to formulate models from prior work in terms of an assignment problem, providing a coherent theoretical basis for text entry optimization. The proposed objectives are applied in the optimization of three assignment problems: text input with multi-finger gestures in mid-air [8], text input on a long piano keyboard [4], and – for a contribution to the official French keyboard standard – input of special characters via a physical keyboard [3]. Combining the proposed models offers a multi-objective optimization approach able to capture the complex cognitive and motor processes during typing...
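As a toy illustration of the assignment-problem view only (not the thesis’s actual objective functions), the sketch below assigns the 26 letters to 26 hypothetical key slots so that frequent letters land on low-cost slots, solving the assignment with the Hungarian algorithm via SciPy; the frequencies and slot costs are placeholders.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

letters = [chr(c) for c in range(ord("a"), ord("z") + 1)]

# Hypothetical inputs: relative letter frequencies and a per-slot access cost
# (e.g., expected movement time to reach that key). Both are placeholders.
rng = np.random.default_rng(1)
freq = rng.random(26)
freq /= freq.sum()
slot_cost = np.sort(rng.random(26))       # slot 0 is the cheapest to reach

# Cost of assigning letter i to slot j: expected cost of typing that letter there.
cost = np.outer(freq, slot_cost)

rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
layout = {letters[i]: int(j) for i, j in zip(rows, cols)}
print(sorted(layout.items(), key=lambda kv: kv[1])[:5])  # letters on the cheapest slots
```

Real formulations add further objective terms (ergonomics, learnability) and constraints, which is where integer-programming solvers replace the plain Hungarian algorithm.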
SIGCHI Outstanding Dissertation Award: On-World Computing
Computers are now ubiquitous. However, computers and digital content have remained largely separate from the physical world – users explicitly interact with computers through small screens and input devices, and the “virtual world” of digital content has had very little overlap with the practical, physical world. My thesis work is concerned with helping computing escape the confines of screens and devices, spilling digital content out onto the physical world around us. In this way, I aim to help bridge the gap between the information-rich digital world and the familiar environment of the physical world and allow users to interact with digital content as they would ordinary physical content. I approach this problem from many angles: from the low-level work of providing high-fidelity touch interaction on everyday surfaces, easily transforming these surfaces into enormous touchscreens; to high-level questions surrounding the interaction design between physical and virtual realms. To achieve this end, building on my prior work, I developed two physical embodiments of this new mixed-reality design: a tiny, miniaturized projector and camera system providing the hardware basis for a projected on-world interface, and an augmented-reality head-mounted display modified to support touch interaction on arbitrary surfaces.
SESSION: Case Studies
SENSEI: Harnessing Community Wisdom for Local Environmental Monitoring in Finland
The way people participate in decision making has radically changed over the last few decades. Technology has facilitated the sharing of knowledge, ideas and opinions across social structures and has allowed grass-roots initiatives to flourish. Participatory civic technology has helped local communities to embrace civic action on matters of shared concern. In this case study, we describe SENSEI, a year-long participatory sensing movement. Local community organisations, decision makers, families, individuals and researchers worked together to co-create civic technologies to help them address environmental issues of shared interest, such as invasive plant species, abandoned items in the forests and nice places. Over 240 local participants took part in the different stages of this year-long process, which included ten community events and workshops. As a result, over a hundred concrete ideas about issues of common interest were generated and nearly thirty civic tech prototypes were designed and developed, along with hundreds of environmental observations. In this paper, we describe the process of orchestrating this initiative and present key reflections from it.
Effects of Participatory Evaluation – A Critical Actor-Network Analysis
In previous work, we have developed the theoretical concept of Critical Experience and the Participatory Evaluation with Autistic ChildrEn (PEACE) method. We grounded both in a series of separate case studies, which allowed us to understand how to gather more and richer insights from the children than previously. This is crucial for child-led research projects. In this paper, we present additional cases in more detail that demonstrate the applicability of our concept of Critical Experience to cases in which PEACE was used. This provides new insights into how Critical Experience handles child-led evaluation strategies and how it can be applied and potentially transferred to different contexts, guiding other researchers and practitioners in evaluating participatory processes.
Towards Metrics of Meaningfulness for Tech Practitioners
HCI and the tech industry are increasingly interested in designing products that afford meaningful user experiences. Yet while several metrics of meaningfulness have been suggested, their utility and relevance for industry is unclear. We conducted workshops with 9 welfare technology companies and presented them with different metrics from existing literature in HCI, psychology, and industry, to evaluate their product and consider how relevant designing for meaningfulness is for them in their practice. We point to four metrics which companies considered particularly relevant, and suggest that further defining metrics of meaningfulness in HCI would be beneficial to both academia and industry.
Lessons Learned from Research via Private Social Media Groups
As research methods evolve to provide a voice to understudied, distributed communities, we explore our experiences running and analyzing Asynchronous Remote Communities (ARC) studies. Our experiences stem from four separate Facebook-based ARC studies with people who experience: rare disease, pregnancy, miscarriage, or HIV. We delve into these studies’ methods, and present updated guidelines focused on improved study design, data collection, and analysis plans for ARC studies.
Factitious: Large Scale Computer Game to Fight Fake News and Improve News Literacy
This case study describes a game designed to serve as a news literacy education tool and a playful polling system for researching audience perceptions. The game underwent two primary design iterations. As a result of design changes and renewed political chatter about fake news, the game’s second iteration gathered more than 500,000 plays. The data collected reveals useful patterns in understanding news literacy and the perception of play experiences. This data, from more than 45,000 players, indicates that the older the person, the better they are at identifying fake news, up to approximately age 70. It also indicates that higher education correlates with better performance at identifying real news from fake, although the time it takes to do so varies. This case study demonstrates the potential for such game designs to collect data useful to non-game contexts.
ROOT: A Multidisciplinary Approach to Urban Health Challenges with HCI
With the rise of chronic diseases as the number one cause of death and disability among urban populations, it has become increasingly important to design for healthy environments. There is, however, a lack of interdisciplinary approaches and solutions to improve health and well-being through urban planning and design. This case study offers an HCI solution and approach to design for healthy urban structures and dynamics in existing neighborhoods. We discuss the design process and design of ROOT, an interactive lighting system that aims to stimulate walking and running through supportive, collaborative and social interaction. We exemplify how multidisciplinary HCI approaches in a hackathon setting can contribute to real life urban health challenges. This case study concludes that the experimental and collaborative nature of a hackathon facilitates the rapid exchange of perspectives and fosters interdisciplinary research and practice in urban planning and design.
Providing Access to VR Through a Wheelchair
Individuals may use their wheelchair to play VR games, explore three-dimensional, visual worlds and take part in virtual social events, even if they do not master the hand or head inputs that are common in VR. We present the development of a low-cost, do-it-yourself wheelchair locomotion device that allows navigation in VR. More than 50 people, including 9 wheelchair users, participated in the evaluations of three prototypes and a number of games developed for them. Initially, cybersickness turned out to be a problem, but when we switched from a manual to an electric wheelchair and fine-tuned the controls, this discomfort was markedly reduced. We suggest using this device for gaming, training, interaction design, accessibility studies and the operation of robots.
When Users Assist the Voice Assistants: From Supervision to Failure Resolution
We conducted an in situ study of six households in domestic and driving situations in order to better understand how voice assistants (VA) are used and evaluate the efficiency of vocal interactions in natural contexts. The filmed observations and interviews revealed activities of supervision, verification, diagnosis and problem-solving. These activities were not only costly in time, but they also interrupted the flow in the inhabitants’ other activities. Although the VAs were expected to facilitate the accomplishment of a second, simultaneous task, they in fact were a hindrance. Such failures can cause abandonment, but the results nevertheless revealed a paradox of use: the inhabitants forgave and accepted these errors, while continuing to appropriate the vocal system.
Identifying the Intersections: User Experience + Research Scientist Collaboration in a Generative Machine Learning Interface
Creative generative machine learning interfaces are stronger when multiple actors bearing different points of view actively contribute to them. User experience (UX) research and design involvement in the creation of machine learning (ML) models helps ML research scientists to more effectively identify human needs that ML models will fulfill. The People and AI Research (PAIR) group within Google developed a novel program method in which UXers are embedded into an ML research group for three months to provide a human-centered perspective on the creation of ML models. The first full-time cohort of UXers was embedded in a team of ML research scientists focused on deep generative models to assist in music composition. Here, we discuss the structure and goals of the program, challenges we faced during execution, and insights gained as a result of the process. We offer practical suggestions for how to foster communication between UX and ML research teams and recommended UX design processes for building creative generative machine learning interfaces.
Young Children’s Reading and Learning with Conversational Agents
Young children increasingly interact with voice-driven interfaces, such as conversational agents (CAs). The social nature of CAs makes them good learning partners for children. We have designed a storytelling CA to engage children in book reading activities. This case study presents the design of this CA and investigates children’s interactions with and perception of the CA. Through observation, we found that children actively responded to the CA’s prompts, reacted to the CA’s feedback with great affect, and quickly learned the schema of interacting with a digital interlocutor. We also discovered that the availability of scaffolding appeared to facilitate child-CA conversation and learning. A brief post-reading interview suggested that children enjoyed their interaction with the CA. Design implications for dialogic systems for young children’s informal learning are discussed.
The Tesserae Project: Large-Scale, Longitudinal, In Situ, Multimodal Sensing of Information Workers
The Tesserae project investigates how a suite of sensors can measure workplace performance (e.g., organizational citizenship behavior), psychological traits (e.g., personality, affect), and physical characteristics (e.g., sleep, activity) over one year. We enrolled 757 information workers across the U.S. and measured heart rate, physical activity, sleep, social context, and other aspects through smartwatches, a phone agent, beacons, and social media. We report challenges that we faced with enrollment, privacy, and incentive structures while setting up such a long-term multimodal large-scale sensor study. We discuss the tradeoffs of remote versus in-person enrollment, and show that directly paid, in-person enrolled participants are more compliant overall than remotely enrolled participants. We find that providing detailed information regarding privacy concerns up-front is highly beneficial. We believe that our experiences can benefit other large sensor projects as this field grows.
Social Media as a Passive Sensor in Longitudinal Studies of Human Behavior and Wellbeing
Social media serves as a platform to share thoughts and connect with others. The ubiquitous use of social media also enables researchers to study human behavior as the data can be collected in an inexpensive and unobtrusive way. Not only does social media provide a passive means to collect historical data at scale, it also functions as a “verbal” sensor, providing rich signals about an individual’s social ecological context. This case study introduces an infrastructural framework to illustrate the feasibility of passively collecting social media data at scale in the context of an ongoing multimodal sensing study of workplace performance (N=757). We study our dataset in its relationship with demographic, personality, and wellbeing attributes of individuals. Importantly, as a means to study selection bias, we examine what characterizes individuals who choose to consent to social media data sharing vs. those who do not. Our work provides practical experiences and implications for HCI researchers who seek to conduct similar longitudinal studies that harness the potential of social media data.
Multimodal Speech-based Dialogue for the Mini-Mental State Examination
We present a system-initiative multimodal speech-based dialogue system for the Mini-Mental State Examination (MMSE). The MMSE is a questionnaire-based cognitive test, which is traditionally administered by a trained expert using pen and paper and afterwards scored manually to measure cognitive impairment. By using a digital pen and speech dialogue, we implement a multimodal system for the automatic execution and evaluation of the MMSE. User input is evaluated and scored in real-time. We present a user experience study with 15 participants and compare the usability of the proposed system with the traditional approach. Our experiment suggests that both modes perform equally well in terms of usability, but the proposed system has higher novelty ratings. We compare assessment scorings produced by our system with manual scorings made by domain experts.
Participatory Design of a Virtual Reality-Based Reentry Training with a Women’s Prison
This study examines the participatory design process of a virtual reality (VR) reentry training program with a women’s prison. Conceptually drawing on previous work in VR exposure training, this prototype consists of guided, first-person 3D-360° video episodes that depict psychologically stressful situations that women commonly face when returning home. Critical story and production elements, including screenplay, acting, and narration, were created with incarcerated and formerly incarcerated women. The institutional, technological, and cultural restrictions of prison, combined with the tensions of making media with often exploited groups, forced adaptations of participatory design methods. The inclusion of incarcerated female voices resulted in an immersive narrative that reflects this group’s specific challenges. The next phase is to evaluate its efficacy against non-immersive comparative trainings for reentry-related anxieties.
Understanding Abusive Behaviour Between Online and Offline Group Discussions
Online discussion platforms can face multiple challenges of abusive behaviour. In order to understand why such behaviour persists, we need to understand how users behave inside and outside a community. In this paper, we propose a novel methodology to generate a dataset from offline and online group discussion conversations. We advocate an empirical approach to exploring the space of abusive behaviour. We conducted a user study (N = 15) to understand what factors facilitate or amplify forms of behaviour in online conversation that are less likely to be tolerated face-to-face. The preliminary analysis validates our approach to analysing large-scale conversation datasets.
From the Lab to the OB Truck: Object-Based Broadcasting at the FA Cup in Wembley Stadium
While traditional live-broadcasting is typically comprised of a handful of well-defined workflows, these become insufficient when targeting multiple screens and interactive companion devices on the viewer side. In this case study, we describe the development of an end-to-end system enabling immersive and interactive experiences using an object-based broadcasting approach. We detail the deployment of this system during the live broadcast of the FA Cup Final at Wembley Stadium in London in May 2018. We also describe the trials and interviews we ran in the run-up to this event, the infrastructure we used, the final software developed for controlling and rendering on-screen graphics and the system for generating and configuring the live broadcast-objects. In this process, we learned about the workflows inside an OB truck during live productions through an ethnographic study and the challenges involved in running an object-based broadcast over the Internet, which we discuss alongside other gained insights.
Designing an Informal Learning Curriculum to Develop 3D Modeling Knowledge and Improve Spatial Thinking Skills
We report on the design and implementation of a 3-week long summer academy introducing high school students to 3D modeling and 3D printing experiences. Supporting youth in developing 3D modeling knowledge can enhance their capacity to effectively use an array of emerging technologies such as Virtual Reality, Augmented Reality, and digital fabrication. We used tools and practices from both formal and informal education, such as storylining, to inform the design of the curriculum. We collected data through surveys, artifacts, observations, screen recordings, and group videos. Our findings suggest that (1) emphasizing curricular coherence as a design goal and (2) providing youth with multiple avenues for engaging in 3D modeling can help to: spark youth interest in 3D printing/modeling, maintain youth engagement in learning activities over the course of several weeks, and provide youth with opportunities to develop their spatial thinking skills.
A Change of Perspective: Designing the Automated Vehicle as a New Social Actor in a Public Space
With the rise of automated vehicles, a new road user had to be designed: an autonomous system that needs to integrate into an ecosystem of human-human interaction. Traditionally, automotive UX has focused on the interaction between the driver and the vehicle. This new design challenge, however, required a change of perspective: from driver/inside to road user/outside, and from a system steered by a human being to an intelligent system that proactively makes decisions in a public space. A new approach was necessary to handle this change of perspective in the design process and to instill it in the minds of the stakeholders. We modified a user-centered process to meet the challenge of designing the automated vehicle as a social actor. For example, we designed for acceptance by defining a character based on the hopes and concerns of the public. The flow of communication was analyzed, and intent-based visual and acoustic signals were designed and evaluated in a purpose-built simulator. The lessons we learned from this process might also be applicable to the design of other autonomous, public-facing systems.
Lab Hackathons to Overcome Laboratory Equipment Shortages in Africa: Opportunities and Challenges
Equipment shortages in Africa undermine Science, Technology, Engineering and Mathematics (STEM) Education. We have pioneered the LabHackathon (LabHack): a novel initiative that adapts the conventional hackathon and draws on insights from the Open Hardware movement and Responsible Research and Innovation (RRI). LabHacks are fun, educational events that challenge student participants to build frugal and reproducible pieces of laboratory equipment. Completed designs are then made available to others. LabHacks can therefore facilitate the open and sustainable design of laboratory equipment, in situ, in Africa. In this case study we describe the LabHackathon model, discuss its application in a pilot event held in Zimbabwe and outline the opportunities and challenges it presents.
Should I Interfere? AI-Assistants’ Interaction with Knowledge Workers: A Case Study in the Oil and Gas Industry
Artificial Intelligence (AI) assistants have been a hot topic for a few years. Popular solutions – such as Google Assistant, Microsoft’s Cortana, Apple’s Siri, and Amazon Alexa – are becoming resourceful AI-assistants for general users. Apart from some mishaps, those assistants have a successful history of supporting people’s everyday tasks. The same cannot be said of industry-specific scenarios, in which AI-assistants are still a bet. Companies that combine AI with human expertise and experience can stand out in their industry. This is particularly important for industries whose strategic decision-making processes rely on knowledge workers’ actions. More than just another system, AI-assistants are new players in human-computer interaction. But how and when should an AI-assistant interfere in a knowledge worker’s task? In this paper, we present findings from a case study using the Wizard of Oz approach in an oil and gas company. Our findings begin to answer that question by identifying what kinds of interference knowledge workers in that domain would accept from an AI-assistant.
Translation, Tracks & Data: an Algorithmic Bias Effort in Practice
Potential negative outcomes of machine learning and algorithmic bias have gained deserved attention. However, there are still relatively few standard processes for assessing and addressing algorithmic biases in industry practice. Practical tools that integrate into engineers’ workflows are needed. As a case study, we present two efforts to create tools that teams can use in practice to address algorithmic bias. Both intend to increase understanding of data, models, and outcome measurement decisions. We describe the development of 1) a prototype checklist based on existing literature frameworks; and 2) dashboarding for quantitatively assessing outcomes at scale. We share both technical and organizational lessons learned about checklist perceptions, data challenges, and interpretation pitfalls.
Building Together: When Research Went Viral at Uber
In late 2017, Uber was nearly a year into a complete redesign of its driver-facing mobile app. This case study describes the research program we executed to support the app’s global beta launch, which aimed to “Build Together” with drivers across different geographies. With the goal of minimizing the time-space-cognitive distance between beta drivers and the product team, we deployed researchers in 7 cities for a 3-week research sprint, combining four high-touch ethnographic methods to understand drivers’ reactions to the product. Unusually, we used an internal Google+ social media site to post a continual stream of raw, unsynthesized “atomic evidence” from research activities. The G+ site unexpectedly went viral, creating extremely high engagement, impact, and stakeholder sentiment. Here we discuss the pros, cons, and impact of our approach, and how success came from creating space for others to create, engage with, and act on raw evidence from the field.
HCI and Menopause: Designing With and Around the Aging Body
With growing concern for the intimate dimensions of technology development, HCI scholars have begun to grapple with who wields power in design around the body. However, beyond menstruation, few studies have examined the role of technology design in the later stages of life for menstruating people. This paper considers menopause as a central but overlooked life phase informing the design of future intimate technologies. We review the empirical analysis of menopausal experiences and the design provocations that emerged from our work. We end with a reflection on the opportunities and pitfalls around designing for menopause.
Triptech: A Method for Evaluating Early Design Concepts
Measuring user experience (UX) is an important part of the design process, yet there are few methods to evaluate UX in the early phases of product development. We introduce Triptech, a method used to quickly explore novel product ideas. We present how it was used to gauge the frequency and importance of user needs, to assess the desirability and perceived usefulness of design concepts, and to draft UX requirements for Now Playing, an on-device music recognition system for the Pixel 2. We discuss the merits and limitations of the Triptech method and its applicability to tech-driven innovation practices.
Designing Airport Interiors with 3D Visualizations
Understandings of user-centered design incorporate the need to include users and stakeholders in the design process from early on, employing visual and ‘enactment’ principles and approaches. Virtual Reality (VR) and 3D visualizations offer such opportunities for enhanced ‘enactments’ of proposed designs through immersion. Within the PASSME H2020 European project, 3D design visualizations for novel airport interior concepts were developed and tested early with users to identify, among the alternatives considered, the interior design principles that best reduce passenger stress and waiting times and improve overall Passenger Experience (PAX). Using the potential of VR, we tested the concepts with users and identified passengers’ emotional and design-driven responses to boarding gate and lounge visualizations to inform the iterative development of in-situ passenger-centric interventions. We elicited emotional, practical, and operational needs and requirements for improving PAX within airports and found that users can use VR to imagine interaction scenarios with the proposed designs.
Teaching Data Visualization and Storytelling with Data Comic Workshops
This paper presents a method for the hands-on creation of data comics in a workshop context and includes a description of the results, lessons learned, and future improvements. Data comics are a promising format for data-driven storytelling, leveraging the power of data visualization and visual storytelling with comics. However, authoring data comics requires a diverse range of skills that are both creative and analytical. Our workshop is aimed at developing a blueprint for future workshops and reflecting on challenges and potential improvements. Within a 3-week assignment for an illustration class, we ran three 3-hour sessions. Our design was informed by the experiences of previous data-comics workshops. Results show the creative potential of data comics. Challenges to learn from include when to introduce data visualizations and journalistic narratives, how to structure stories, and how to develop iterations of comic drafts. We close by reflecting on these challenges and how they can inform future improvements and adaptations.
Children’s Reflection in Action with DIY
The present case study describes and comments on an experimental activity with 9-11 year old children at a public school in Lecce (Italy) in August 2018. The pupils were asked to create computational tools using materials recycled from their own homes. We adopted a constructionist perspective; we wanted to foster reflection and discussion among the young participants on the amount of household waste produced and how it can be repurposed to create novel objects. To achieve this aim, we were guided by collapse informatics theory and research through design.
Explorations on Single Usability Metrics
A long-term summative evaluation program was undertaken at Microsoft. This program focused on generating a Single Usability Metric (SUM) score across products over time but encountered a number of issues including error rate reliability, challenges establishing objective time-on-task targets, and scale anchoring. These issues contributed to making SUM difficult to communicate, prompting exploration of an alternative single usability metric using simple thresholds developed from anchor text and inter-metric correlations.
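To make the idea of a single usability metric concrete, the sketch below shows one common way such scores are assembled: each component metric (completion, time, errors, satisfaction) is standardized against a target specification and the standardized values are averaged into a percentile-style score. The specification and standard-deviation numbers are illustrative assumptions, not the values used in the program described above.

```python
# Illustrative sketch (not the authors' exact method): combining several
# usability metrics into one score by standardizing each against a target
# specification and averaging, in the spirit of SUM-style single metrics.
from statistics import NormalDist

def standardized(value: float, spec: float, sd: float, higher_is_better: bool = True) -> float:
    """z-score of an observed metric against a target specification."""
    z = (value - spec) / sd
    return z if higher_is_better else -z

def sum_score(completion_rate: float, task_time_s: float, errors: float, satisfaction: float) -> float:
    """Average of standardized component metrics, mapped to a 0-100 percentile-style scale."""
    # Hypothetical targets and spreads, chosen only for illustration.
    zs = [
        standardized(completion_rate, spec=0.78, sd=0.15),
        standardized(task_time_s, spec=60.0, sd=20.0, higher_is_better=False),
        standardized(errors, spec=1.0, sd=0.8, higher_is_better=False),
        standardized(satisfaction, spec=4.0, sd=0.9),
    ]
    z_mean = sum(zs) / len(zs)
    return 100 * NormalDist().cdf(z_mean)

print(round(sum_score(0.90, 48.0, 0.4, 4.3), 1))  # one task's combined score
```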
Towards Novel Urban Planning Methods — Using Eye-tracking Systems to Understand Human Attention in Urban Environments
Data on how humans perceive the attractiveness of their (urban) environments has mainly been gathered with qualitative methods, including workshops, interviews, and group discussions. Qualitative methods help us understand the phenomenon, albeit at the cost of detail; we may end up confirming something that we as researchers have been ‘programmed’ to find. Here we take a complementary approach, having collected eye-tracking data from two case experiments. The participants in these experiments were professional urban planners and non-professionals, respectively. We asked them to view planning-related artefacts comprising architectural illustrations, photographed landscapes, and planning sketches. After analysing the findings from these experiments, we draw guidelines for using eye-tracking systems in urban planning processes to gather human perceptions of attractive urban environments.
"TechShops" Engaging Young Adults with Intellectual Disability in Exploratory Design Research
This case study presents “TechShops”, a collaborative workshop-based approach to learning about technologies with Young Adults with Intellectual Disability (YAID) in exploratory design research. The “TechShops” approach emerged because we found it difficult to engage YAID in traditional contextual interviews. Hence, we offered a series of “TechShops”, which we found useful in: enabling engagement with participants, their families and support staff; fostering relationships; and gaining research access. We explain the context of “TechShops”, and reflect upon the opportunities and challenges that the approach offers for both researchers and YAID in exploratory design research.
How to Carry Out Usability Studies with Visually Impaired Children
Usability tests help us obtain quantitative and qualitative data from real users who perform actual tasks with a product. Usability tests were carried out to evaluate a product designed for a Student Design Competition (SDC). This document relates the process of adapting usability tests to visually impaired children, who were the target audience of the project. In interacting with the children, we learned how to help them understand some of the concepts involved in the product more quickly. This interaction resulted in a reliable device whose characteristics fit directly with users’ needs.
SESSION: Courses
Anticipating the Future of HCI by Understanding Its Past and Present
This course is for students, practitioners, and academics who are interested in planning their careers. The rapid pace of change leaves some tools and technologies behind, and we must focus our attention primarily on current developments. Why look in the rear-view mirror? Some topics that are being actively explored now will soon be of less interest, so career planning benefits from thoughtfully identifying trajectories. To make use of relevant information in other fields, you must understand how their terminologies, priorities, and methods evolved. We will cover the history of several HCI fields and discuss opportunities and challenges that lie ahead. Software evolved from passively reacting to human input to today’s dynamic partnership. In some areas HCI advanced steadily; elsewhere it reached dead ends or seemed to go in circles. Understanding these patterns will prepare you to respond to unexpected developments in years to come. The forces that shaped HCI in computer science, human factors, information systems, information science, and design are covered, with examples and implications for our new era.
Making with Fabric: Foundations of Soft Goods and E-Textiles Fabrication
Wearable technologies afford more pervasive access to the human user: higher bandwidth for communication in multiple modalities, and better context-awareness through sensing the user’s body and environment. However, stand-alone devices like wristbands and clip-ons are limited in the body areas they can simultaneously access. Clothing and textiles provide a useful platform for distributed systems, but present unique challenges in design and fabrication. This course provides an introduction to the tools, methods, and techniques of designing and fabricating with soft goods, including patternmaking and construction techniques for different material types at multiple scales, and e-textile methods and materials.
Modern Vision Science for Designers: Making Designs Clear at a Glance
Why do some interfaces allow users to find what they need easily while others do not? What information can the visual system effortlessly extract, and what requires slower, more cognitive processes? What does eye-tracking data tell us about what users perceive? Vision scientists have recently made ground-breaking progress in understanding many aspects of vision that are key to good design. A critical determining factor is what information is available at a glance. This course reviews state-of-the-art vision science, including a computational model that visualizes the available information. We will demonstrate use of this model in evaluating and guiding visual designs.
Sketching in HCI: Hands-on Course of Sketching Techniques
Freehand sketching is a valuable process, input, output, and tool, often used to communicate and express ideas, as well as to document, explore, and describe concepts among researchers, users, and clients. Sketches are fast, easy to create, and – by varying their fidelity – can be used in all areas of HCI. This course will explore and demonstrate themes around sketching in HCI with the aim of producing tangible outputs. Attendees will leave the course with the confidence to engage actively with sketching on an everyday basis in their research practice.
How to Write CHI Papers (Third Edition)
We base everything that we do as researchers on what we write. For graduate students and young researchers in particular, it is hard to turn a research project into a successful CHI publication. The struggle continues for postdocs and young professors trying to author excellent reviews for the CHI community that pinpoint flaws and improvements in research papers. This third edition of the successful CHI paper writing course offers hands-on advice and more in-depth tutorials on how to write papers with clarity, substance, and style. It is structured into three 80-minute units with a focus on writing CHI papers.
Introduction to Human-Computer Interaction
The objective of this course is to provide newcomers to Human-Computer Interaction (HCI) with an introduction and overview of the field. Attendees often include practitioners without a formal education in HCI, and those teaching HCI for the first time. This course includes content on theory, cognition, design, evaluation, and user diversity.
Empirical Research Methods for Human-Computer Interaction
In this two-session course, attendees learn how to conduct empirical research in human-computer interaction (HCI). This course delivers an A-to-Z tutorial on designing a user study and demonstrates how to write a successful CHI paper. It would benefit anyone interested in conducting a user study or writing a CHI paper. Only general HCI knowledge is required.
Professional Presentation Training: Improve the User Experience of your CHI Presentation
Come learn from Bloomberg UX designers how to apply professional design and presentation skills to your CHI presentation to ensure you make the biggest impact on your audience in the limited time and space you have. In part 1 you will learn how to convey your information and message visually: first by finding the key story you are trying to tell and then using principles of visual hierarchy to make that story pop! In part 2 you will learn how to convey your information orally: effectively getting and keeping your audience’s attention so they remember your message. http://www.beproatchi.com/
Insights in Experimental Data through Intuitive and Interactive Statistics
It is not unusual for empirical scientists, who are often not specialists in statistics, to have only limited trust in the statistical analyses that they apply to their data. The claim of this course is that improved human-computer interaction with statistical methods can be accomplished by providing a simple mental model of what statistics does, and by supporting this model through well-chosen visualizations and interactive exploration. To support this proposed approach, an entirely new program for performing interactive statistics, called ILLMO, was developed. This course will use examples of frequent statistical tasks such as hypothesis testing, linear regression, and clustering to introduce the key concepts underlying intuitive and interactive statistics.
Ethnographic Methods for Human Factors Researchers: Collecting and Interweaving Threads of HCI
This course offers an introduction to ethnography for Human Factors Research. It covers relevant topics along the research process, from arguments for choosing the method and study design, up to data collection, analysis, and interpretation. Ethical questions will include the researcher’s role(s) in the field and modes of data presentation. The collection of multi-dimensional sets of data – a trademark of high-quality ethnographic work – enables inter-weaving threads of HCI perspectives in complex human factors and user research contexts. To achieve this, a comprehensive toolbox of ethnographic methods is introduced along with practical hands-on sessions to familiarize participants with these methodological instruments.
Make This! Introduction to Electronics Prototyping Using Arduino
This course is a hands-on introduction to interactive electronics prototyping for people with a variety of backgrounds, including those with no prior experience in electronics. Familiarity with programming is helpful, but not required. Participants learn basic electronics, microcontroller programming and physical prototyping using the Arduino platform, then use digital and analog sensors, LED lights and motors to build, program and customize a small paper robot.
Rapid Prototyping of Augmented Reality & Virtual Reality Interfaces
This course introduces participants to rapid prototyping techniques for augmented reality and virtual reality interfaces. Participants will learn about both physical prototyping with paper and Play-Doh as well as digital prototyping via new visual authoring tools for AR/VR. The course is structured into four sessions. After an introduction to AR/VR prototyping principles and materials, the next two sessions are hands-on, allowing participants to practice new physical and digital prototyping techniques. These techniques use a combination of new paper-based AR/VR design templates and smartphone-based capture and replay tools, adapting Wizard of Oz for AR/VR design. The fourth and final session will allow participants to test and critique each other’s prototypes while checking against emerging design principles and guidelines. The instructor has previously taught the techniques to broad student audiences with a wide variety of non-technical backgrounds, including design, architecture, business, medicine, education, and psychology, who shared a common interest in user experience and interaction design. The course is targeted at non-technical audiences including HCI practitioners, user experience researchers, and interaction design professionals and students. A useful byproduct of the course will be a small portfolio piece of a first AR/VR interface designed iteratively and collaboratively in teams.
Balancing Interaction Design
Over the last two decades, creative, lean and strategic design approaches have become increasingly prevalent in the development of interactive technologies, but tensions exist with longer established approaches such as human factors engineering and user-centered design. These tensions can be harnessed productively by first giving equal status in principle to creative, business and agile engineering practices, and then supporting this with flexible critical approaches and resources that can balance and integrate across a range of multidisciplinary design practices.
Conceptual Models: Core to Good Design
A crucial step in designing a user interface for a software application is to design a coherent, task-focused conceptual model (CM). With a CM, designers design better, developers develop better, and users learn and use better. Unfortunately, this step is often skipped, resulting in incoherent, arbitrary, inconsistent, overly-complex applications that impede design, development, learning, understanding, and use. This course covers what CMs are, how they help, how to develop them, and provides hands-on experience.
Design for Wellbeing – Tools for Research, Practice and Ethics
Any move towards more ethical design and technologies that genuinely improve our lives requires that those technologies respect our psychological needs. Currently, there is no systematic integration of wellbeing science into tech development, and the many technology-induced harms to mental health, reported in the media daily, attest to this deficit. But the status quo is changing. A demand for more “Humane Technologies” [12] is forcing companies to rethink digital business as usual. Fortunately, recent research has uncovered new ways to make psychologically respectful technologies possible. Just as we can design ergonomically to support physical wellness, we can design psycho-ergonomically to support psychological health. By integrating well-evidenced theory and methods from multiple disciplines, we can design and develop new technologies to “do no harm” and even increase psychological wellbeing [1]. In this course we will introduce frameworks for designing technologies that respect human values and wellbeing [6,7,8,9,10] together with an established ethical framework within which to situate this design for flourishing [11]. We also provide practical tools for ideation, design, and the evaluation of the psychological impact of products.
Computational Interaction with Bayesian Methods
This course introduces computational methods in human–computer interaction. Computational interaction methods use computational thinking (abstraction, automation, and analysis) to explain and enhance interaction. The course introduces the theory and practice of computational interaction by teaching Bayesian methods for interaction across four areas of wide interest when designing computationally-driven user interfaces: decoding, adaptation, learning, and optimization. The lectures center on hands-on Python programming interleaved with theory and practical examples grounded in problems of wide interest in human-computer interaction.
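As a flavor of the kind of exercise such a course involves, here is a minimal Bayesian "decoding" sketch in Python: inferring which on-screen target a noisy touch was aimed at by combining a prior over targets with a Gaussian likelihood. The target positions, prior, and noise level are made-up values for illustration, not course material.

```python
# Minimal Bayesian decoding sketch: P(target | touch) ∝ P(touch | target) * P(target).
import math

# Hypothetical targets (pixel centers), prior usage frequencies, and touch noise.
targets = {"Save": (100.0, 40.0), "Cancel": (180.0, 40.0), "Help": (260.0, 40.0)}
prior = {"Save": 0.6, "Cancel": 0.3, "Help": 0.1}
sigma = 25.0  # assumed touch noise in pixels

def posterior(touch_xy):
    """Posterior probability of each target given one noisy touch location."""
    x, y = touch_xy
    unnorm = {}
    for name, (tx, ty) in targets.items():
        d2 = (x - tx) ** 2 + (y - ty) ** 2
        likelihood = math.exp(-d2 / (2 * sigma ** 2))   # isotropic Gaussian
        unnorm[name] = likelihood * prior[name]
    z = sum(unnorm.values())
    return {name: p / z for name, p in unnorm.items()}

print(posterior((150.0, 45.0)))  # an ambiguous touch between Save and Cancel
```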
Designing with the Mind in Mind: The Psychological Basis for UI Design Guidelines
UI design rules and guidelines are not simple recipes. Applying them effectively requires determining rule applicability and precedence and balancing trade-offs when rules compete. By understanding the underlying psychology, designers and evaluators enhance their ability to apply design rules. This two-part (160-minute) course explains that psychology.
Avoiding and Mitigating Ethical Traps in Technocentric Fieldwork
We are witnessing an increase in fieldwork within HCI, particularly involving marginalized or under-represented populations. This poses ethical challenges for researchers during such field studies, with “ethical traps” not always identified during the planning stages. This is often aggravated by inconsistent policy guidelines, training, and application of ethical principles. We ground the course in our collective experiences with ethically-difficult research and frame it within principles that are shared across many disciplines and policy guidelines, representative of the instructors’ diverse and international backgrounds.
Building Economic Models of Human Computer Interaction
Economics provides an intuitive and natural way to formally represent the costs and benefits of interacting with applications, interfaces, and devices. By using economic models it is possible to reason about interaction and make predictions about how changes to the system will affect performance and behavior. In this course, we provide an overview of relevant economic concepts and then show how economics can be used to model human-computer interaction and generate hypotheses about interaction that can inform design and guide experimentation. As a case study, we demonstrate how various interactions with search and recommender applications can be modeled, before concluding the day with a hands-on modeling session using example and participant problems.
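As an illustration of the style of reasoning, the toy model below treats result inspection in a search interface as a cost-benefit trade-off: a user keeps examining ranked results while the expected marginal gain exceeds the inspection cost. The gain function, decay rate, and cost values are assumptions chosen for the example, not a model from the course.

```python
# Toy cost-benefit model of search interaction: inspect results while the
# expected marginal gain of the next item exceeds its inspection cost.
def expected_gain(rank: int, p_relevant_top: float = 0.5, decay: float = 0.8) -> float:
    """Assumed relevance probability that decays geometrically with rank."""
    return p_relevant_top * (decay ** (rank - 1))

def optimal_stopping_rank(inspection_cost: float, max_rank: int = 50) -> int:
    """Deepest rank still worth inspecting under the toy model."""
    deepest = 0
    for r in range(1, max_rank + 1):
        if expected_gain(r) < inspection_cost:
            break
        deepest = r
    return deepest

# Hypothesis-style prediction: raising the per-item cost (e.g. a slower UI)
# should make users examine fewer results.
print(optimal_stopping_rank(inspection_cost=0.05))  # cheap inspection -> deeper
print(optimal_stopping_rank(inspection_cost=0.20))  # costly inspection -> shallower
```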
Tangible Ecosystem Design: Developing Disruptive Services for Digital Ecosystems
The epoch of the platform economy has arrived. Companies face the question of how to build disruptive digital services with disruptive business models and establish a digital ecosystem. Traditional UCD methods concentrate on the conception of particular services. Nevertheless, most of these are isolated solutions of single companies. Building services for the digital transformation era requires additional methods for devising end-to-end consumer experiences and sustainable business models that span the boundaries of single companies and benefit consumers. In this course participants learn the “Tangible Ecosystem Design” (TED) method, which supports the conception of digital ecosystems using tangible elements.
Introduction to Legal Issues in Human-Computer Interaction
The objective of this course is to provide an overview of legal issues in HCI. The course will focus on five different areas: accessibility, privacy, intellectual property, telecommunications, and requirements in using human participants in research.
Bespoke Data Visualization using R and ggplot2
Being able to visualize data in consistent high-quality ways is a useful skill for HCI researchers and practitioners. In this course, attendees will learn how to produce high quality plots and visualizations using the ggplot2 library for the R statistical computing language. There are no prerequisites and attendees will leave with scripts to get them started as well as foundational knowledge of free open-source tools that they can build on to produce complex, even interactive, visualizations.
Design for User Interaction with Intelligent Systems
Intelligence is now a widely accepted part of the systems we interact with every day, but it comes in many forms as far as the user is concerned. Interaction designers need an understanding of the nature of intelligent systems and the ability to categorise different types of intelligence as they appear to the user. This course will give attendees an appreciation of what ‘intelligence’ or ‘smartness’ within computer systems is, discuss how it is currently perceived, and describe the enablers and barriers to its effective use. The course will categorise different kinds of intelligence capability. It will offer interaction design guidelines or heuristics for the design of user interfaces with intelligent features, enabling them to be more effective partners with humans. It will use a mixture of teaching techniques combining presentation, discussion, and class exercises.
Eye Tracking Methodology in Screen-based Usability Testing
Eye tracking is an important tool in usability testing of screen-based user interfaces. Though eye tracking has been used in usability testing for quite a while, challenges remain. For example, how do we accurately calibrate gaze points? How do we interpret a scan pattern? In this tutorial, we will introduce the basics of the human oculomotor system, the role of eye tracking in studying cognition, eye-tracking recording techniques, and data analysis methods. Upon completion of this tutorial, students will have a basic understanding of the physiological and psychological mechanisms underlying eye tracking, data collection techniques, and data analysis methods.
User Experience (UX) Research in Games
This course will allow participants to understand the complexities of games user research methods for evaluating user experience in games. We have put together three course sessions at CHI (80 minutes each) on applying different user research methods in games evaluation and playtesting exercises, helping participants turn player feedback into actionable design recommendations. This course consists of three interactive face-to-face units during CHI 2019.
Conversation Design: Principles, Strategies, and Practical Application
With the rise of digital assistants, chatbots, and other conversational interfaces, there’s a huge demand for detail and instruction for Conversation Design. This course provides a focused walkthrough of the principles, strategies, and process of Conversation Design. Topics include understanding users, defining persona, analyzing conversation components, dialog writing strategies, and the detailed process of creating natural dialog. Interactive components at each stage engage participants with individual worksheets, small group exercises and reviews, and a team project. Participants will gain an understanding of the complexity and challenges of Conversation Design, and learn about resources and tools for doing it well.
Prototyping Transparent and Flexible Electrochromic Displays
This course is a hands-on introduction to the fabrication of flexible, transparent free-form displays based on electrochromism for an audience with a variety of backgrounds, including artists and designers with no prior knowledge of physical prototyping. Besides prototyping using screen printing or ink-jet printing of electrochromic ink and an easy assembly process, participants will learn essentials for designing and prototyping electrochromic displays.
Intro to the Human Body: Wearability and Human Factors of Wearable Systems
The traditional “human” model in human-computer interaction prioritizes the human brain, with physical and sensory interaction as secondary emphases. As wearable technologies proliferate and mature, the user experience and human factors of the rest of the body become increasingly important. This course will provide an overview of the basic foundations of wearability and human factors of wearable systems, from anatomy and physiology to body schema, physiological experience of on-body artifacts, and the ways in which dress affects and communicates identity and social relationships.
SESSION: Interactivity
ScaleDial: A Novel Tangible Device for Teaching Musical Scales & Triads
The teaching of harmonic foundations in music is a common learning objective in many education systems. However, music theory is often considered a non-interactive subject that requires considerable effort to understand. With this work, we contribute a novel tangible device, called ScaleDial, that makes use of the relations between geometry and music theory to provide interactive, graspable, and playful learning experiences. We introduce an innovative tangible cylinder and demonstrate how harmonic relationships can be explored through a physical set of digital manipulatives that can be arranged and stacked on top of an interactive chromatic circle. Based on the tangible interaction and rich visual and auditory output capabilities, ScaleDial enables a better understanding of scales, pitch constellations, triads, and intervals. Further, we describe the technical realization of our advanced prototype and show how we fabricated the magnetic, capacitive, and mechanical sensing.
ClassBeacons: Enhancing Reflection-in-Action of Teachers through Spatially Distributed Ambient Information
Reflection-in-action (RiA) refers to teachers’ reflections on their teaching performance during busy classroom routines. RiA is a demanding competence for teachers, but little is known about how HCI systems could support teachers’ RiA during busy and intensive teaching. To bridge this gap, we design and evaluate an ambient information system named ClassBeacons. ClassBeacons aims to help teachers intuitively reflect-in-action on how to divide time and attention over pupils throughout a lesson. ClassBeacons subtly depicts teachers’ division of time and attention over pupils through multiple light-objects distributed over students’ desks. Each light-object indicates how long the teacher has cumulatively been around it (helping an adjacent student) by shifting color. A field evaluation with eleven teachers showed that ClassBeacons enhanced teachers’ RiA by supporting their sensemaking of ongoing performance and modification of upcoming actions. Furthermore, ClassBeacons was experienced as fitting unobtrusively into teachers’ routines without overburdening teaching in progress.
Physical Programming for Blind and Low Vision Children at Scale
There is a dearth of appropriate tools for young learners with mixed visual abilities to engage with computational learning. Addressing this gap, we present Project Torino, a physical programming language for teaching computational learning to children ages 7-11 regardless of level of vision. To create code, children connect and manipulate tactile objects to create music, audio stories, or poetry. Designed to be made and deployed at scale, Project Torino (along with a scheme of work) has been successfully used by 30 non-specialist teachers with 75 children across the UK over three months.
StringTouch: A Scalable Low-Cost Concept for Deformable Interfaces
This paper describes a demo prototype of a tangible user interface (TUI) concept that is derived from the expressive play of musical string instruments. We translated this interaction paradigm to an interactive demo which offers a novel gesture vocabulary (strumming, picking, etc.). In this work we present our interaction concepts, prototype description, technical details and insights on the rapid and low-cost manufacturing and design process. (Video demonstration: https://vimeo.com/309265370)
Tangible Interactions with Acoustic Levitation
Acoustic levitation can hold millimetric objects in mid-air without any physical contact. This capability has been exploited to create displays, since being able to position mid-air physical voxels enables rich data representations. However, most of the time other interesting features of acoustic levitation are not exploited. Acoustic levitation is harmless, and sound diffracts around objects, so we can insert our hand inside the levitator and touch the levitated particles without harm. In this demo, we showcase more tangible interactions with acoustically levitated particles: passing acoustically-transparent structures through the particles, manipulating particles in mid-air with wearable levitators, or moving multiple particles with direct manipulation. We hope that this demo provides a more tangible experience of acoustic levitation. Since all the presented devices are Do-It-Yourself, we encourage visitors to experiment further with acoustic levitation.
Three-in-one: Levitation, Parametric Audio, and Mid-Air Haptic Feedback
Ultrasound enables new types of human-computer interfaces, ranging from auditory and haptic displays to (visual) levitation. We demonstrate these capabilities with an ultrasonic phased array that allows users to interactively manipulate levitating objects with mid-air hand gestures whilst also receiving auditory feedback via highly directional parametric audio and haptic feedback via ultrasound focused onto their bare hands. This demo therefore presents the first ultrasound rig that conveys information to three different sensory channels while levitating small objects simultaneously.
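For readers curious how a phased array steers its output, the sketch below computes, under textbook assumptions, the per-transducer phase offsets that make all emitted waves arrive in phase at a chosen focal point; the same principle underlies levitation traps and focused haptic feedback. The array geometry and drive frequency are illustrative values, and this is not the demo's actual driver code.

```python
# Phased-array focusing sketch: drive each transducer with a phase that cancels
# its propagation delay to the focal point, so all waves arrive in phase there.
import math

SPEED_OF_SOUND = 343.0   # m/s in air
FREQUENCY = 40_000.0     # 40 kHz, typical for air-borne ultrasound arrays

def focus_phases(transducer_positions, focal_point):
    """Per-transducer phase offsets (radians) for a focus at focal_point."""
    phases = []
    for position in transducer_positions:
        distance = math.dist(position, focal_point)       # metres to the focus
        delay = distance / SPEED_OF_SOUND                  # travel time in seconds
        phase = (-2 * math.pi * FREQUENCY * delay) % (2 * math.pi)
        phases.append(phase)
    return phases

# Hypothetical 4 x 4 array with 10 mm pitch, focusing 10 cm above its centre.
array = [(i * 0.01, j * 0.01, 0.0) for i in range(4) for j in range(4)]
print([round(p, 2) for p in focus_phases(array, (0.015, 0.015, 0.10))])
```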
Demonstration of Refinity: An Interactive Holographic Signage for New Retail Shopping Experience
This demo presents Refinity – an interactive holographic signage for the new retail shopping experience. In our demo, we show a concept of a futuristic shopping experience with a tangible 3D mid-air interface that allows customers to directly select and explore realistic virtual products using an autostereoscopic 3D display combined with mid-air haptics and finger tracking. We also present an example of an in-store shopping scenario for natural interactions with 3D content. This shopping experience engages users by merging digital and physical interactions to produce a memorable in-store experience.
Multimodal Representation of Complex Spatial Data
For blind users, spatial information is often presented in non-spatial form such as electronic speech. We explore the possibility of representing spatial data on refreshable tactile graphic displays in combination with audio feedback, utilizing both static and dynamic tactile information. We demonstrate an implementation of a New York Times style crossword puzzle, providing interactions to query location and stored data, ask for clues in the across and down directions, edit and fill in the crossword puzzle using a Perkins-style braille keyboard or a typewriter-style keyboard, and verify answers. Through our demonstration, we explore tradeoffs related to the available tactile real estate and overcrowding of the tactile image, with a view toward reducing the cognitive workload involved in retaining a working mental model of the active grid and the time to complete a letter placement task.
Brick: A Synchronous Multiplayer Augmented Reality Game for Mobile Phones
Multiplayer augmented reality (AR) games allow players to inhabit a shared physical environment populated with interactive digital objects. However, currently available games fall short because of either limited synchronicity or limited opportunities for player movement. Here, we present Brick, a synchronous multiplayer AR game at the room scale. Brick’s players collaborate to fill in a pattern of empty slots using digital “bricks” scattered about the room. This paper provides an overview of Brick from a design and technical perspective. It also discusses how Brick extends the current scope of AR games to include collaborative gameplay.
Towards Evidence-informed Design Principles for Adaptive Reading Games
This demonstration presents the design principles of the Navigo games for reading. By reflecting on our design tools and processes, we explore the way theory, empirical evidence, best practice, and expertise have informed our design. We look into the reciprocal role of theory and design and provide transferable lessons for the design of educational technologies in the context of HCI.
‘In the Same Boat’: A Game of Mirroring Emotions for Enhancing Social Play
Social closeness is important for an individual’s health and well-being, and it is especially difficult to maintain over a distance. Games can help with this, connecting and strengthening relationships or creating new ones by enabling shared playful experiences. The proposed demo is ‘In the Same Boat’, a two-player game we designed to foster social closeness between players over a distance. We leverage the synchronization of both players’ physiological data (heart rate, breathing, facial expressions), mapped to an input scheme, to control the movement of a canoe down a river.
Slackliner 2.0: Real-time Training Assistance through Life-size Feedback
In this demo, we present Slackliner 2.0, an interactive slackline training assistant which features head and skeleton tracking and real-time feedback through life-size projection. As in other sports, proper training leads to a faster increase in skill and lessens the risk of injuries. We chose a set of exercises from the slackline literature and implemented an interactive trainer which guides the user through the exercises, giving feedback on whether they were executed correctly. Based on lessons learned from our study and prior demonstrations, we present a revised version of Slackliner that uses head tracking to better guide the user’s attention and movements. Additionally, a new visual indicator informs the trainee about her arm posture during the performance; this is also included in an updated post-analysis view that provides the trainee with more detailed feedback about her performance. The present demo showcases an interactive sports training system that provides in-situ feedback while following a well-guided learning procedure.
Playing Beyond the Front Room: Designing for Social Play in Ola De La Vida
Ola De La Vida is a three-player cooperative game installation designed to harness the qualities of a social play environment such as an arcade or a play party (an event which mixes games, music, dance, and socializing). Ola De La Vida uses physical and digital design techniques that consider the unique qualities of a social play space while being sensitive to the complex personal, social, and interpersonal aspects each player may experience. The game installation uses its scale to heighten the visibility of the game in a crowded play space, custom control methods to lower barriers to entry, and costume to promote teamwork and collaboration and to lower social anxieties. These techniques, in partnership with the digital-physical nature of the game play, aim to entice players and spectators in a social environment to participate. Through this installation we hope to encourage discussion around designing for participation and the challenges of social play.
Crushed it!: Interactive Floor Demonstration
We introduce Crushed It!, an interactive game on a sensor floor. This floor is combined with a multiple-projector system to reduce occlusions from players’ interactions with the floor. Individual displays, an HTC Vive to track player position, and smartwatches were added to provide an extra layer of interactivity. We created this interactive experience to explore collaboration between people when interacting with large displays. We contribute a novel combination of different technologies for this game system, and our studies showed that the game is both entertaining and motivates players to stay physically active. We believe presenting at Interactivity would benefit both our research and the attendees of CHI 2019.
FoldTronics Demo: Creating 3D Objects with Integrated Electronics Using Foldable Honeycomb Structures
We present FoldTronics, a 2D-cutting based fabrication technique for integrating electronics into 3D folded objects. The key idea is to cut and perforate a 2D sheet using a cutting plotter to make it foldable into a 3D honeycomb structure; before folding, users place the electronic components and circuitry onto the sheet. The fabrication process takes only a few minutes, allowing users to rapidly prototype functional interactive devices. The resulting objects are lightweight and rigid, thus allowing for weight-sensitive and force-sensitive applications. Due to the nature of honeycombs, the created objects can be folded flat along one axis and thus can be efficiently transported in this compact form factor.
Demonstrating Kyub: A 3D Editor for Modeling Sturdy Laser-Cut Objects
We present an interactive editing system for laser cutting called kyub. Kyub allows users to create models efficiently in 3D, which it then unfolds into the 2D plates laser cutters expect. Unlike earlier systems, such as FlatFitFab, kyub affords construction based on closed box structures, which allows users to turn very thin material, such as 4mm plywood, into objects capable of withstanding large forces, such as chairs users can actually sit on. To afford such sturdy construction, every kyub project begins with a simple finger-joint “boxel”, a structure we found to be capable of withstanding over 500kg of load. Users then extend their model by attaching additional boxels. Boxels merge automatically, resulting in larger, yet equally strong structures. While the concept of stacking boxels allows kyub to offer the strong affordance and ease of use of a voxel-based editor, boxels are not confined to a grid and readily combine with kyub’s various geometry deformation tools. In our technical evaluation, objects built with kyub withstood hundreds of kilograms of load. We demonstrate the kyub software to the CHI audience and allow them to experience the resulting models firsthand.
Digital Fabrication of Soft Actuated Objects by Machine Knitting
With recent interest in shape-changing interfaces, material-driven design, wearable technologies, and soft robotics, digital fabrication of soft actuatable material is increasingly in demand. Much of this research focuses on elastomers or non-stretchy air bladders. In this work, we explore a series of design strategies for machine knitting actuated soft objects by integrating tendons with shaping and anisotropic texture design.
Painting with CATS: Camera-Aided Texture Synthesis
We present CATS, a digital painting system that synthesizes textures from live video in real-time, short-cutting the typical brush- and texture-gathering workflow. Through the use of boundary-aware texture synthesis, CATS produces strokes that are non-repeating and blend smoothly with each other. This allows CATS to produce paintings that would be difficult to create with traditional art supplies or existing software. We evaluated the effectiveness of CATS by asking artists to integrate the tool into their creative practice for two weeks; their paintings and feedback demonstrate that CATS is an expressive tool which can be used to create richly textured paintings.
Flowboard: A Visual Flow-Based Programming Environment for Embedded Coding
With Maker-friendly environments like the Arduino IDE, embedded programming has become an important part of STEM education. But learning embedded programming is still hard, requiring both coding and basic electronics skills. To understand if a different programming paradigm can help, we developed Flowboard, which uses Flow-Based Programming (FBP) rather than the usual imperative programming paradigm. Instead of command sequences, learners assemble processing nodes into a graph through which signals and data flow. Flowboard consists of a visual flow-based editor on an iPad, a hardware frame integrating the iPad, an Arduino board and two breadboards next to the iPad, letting learners connect their visual graphs seamlessly to the input and output electronics. Graph edits take effect immediately, making Flowboard a live coding environment.
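To give a sense of the paradigm shift, the sketch below implements a toy flow-based graph in Python: tiny processing nodes are wired together and a signal flows from a (simulated) light sensor through a threshold node to an LED. The Node class and the simulated hardware endpoints are our own illustration, not Flowboard's implementation.

```python
# Toy flow-based programming sketch: nodes are wired into a graph and data
# flows through it, instead of being driven by an imperative command sequence.
from typing import Callable, List

class Node:
    def __init__(self, fn: Callable[..., object]):
        self.fn = fn
        self.inputs: List["Node"] = []

    def wire(self, *sources: "Node") -> "Node":
        """Connect upstream nodes whose outputs feed this node."""
        self.inputs = list(sources)
        return self

    def evaluate(self):
        """Pull values through the graph by evaluating upstream nodes first."""
        return self.fn(*(src.evaluate() for src in self.inputs))

# Simulated endpoints (hypothetical stand-ins for Arduino pins on the breadboards).
light_sensor = Node(lambda: 712)                                # analog read, 0-1023
threshold    = Node(lambda value: value > 600).wire(light_sensor)
led          = Node(lambda on: f"LED {'ON' if on else 'OFF'}").wire(threshold)

print(led.evaluate())   # signal flows sensor -> threshold -> LED
```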
Live Programming By Example
Live programming is a novel approach to programming practice in which programmers are given real-time feedback while writing code, traditionally via a graphical user interface. Despite live programming’s practical value, such as providing an easier overview of code and a better understanding of its structure, it is not yet widely used. In this work, we extend live programming to general-purpose code editors, making it usable in everyday development and providing new interfaces for understanding and changing the functionality of code. To achieve this we extended a fully-featured IDE with the ability to show input/output examples of code execution as the programmer is writing code. Furthermore, we integrate programming by example (PBE) synthesis into our tool by allowing the user to change the shown output and have the code update automatically. Our goal is to use live programming to give novice programmers a new way to interact with and understand programming, while also providing a useful development tool for more advanced programmers.
Dynamic Depth-of-Field Projection for 3D Projection Mapping
We demonstrate dynamic depth-of-field projection mapping onto a moving 3D object. Conventional projection mapping is limited to 2D surfaces due to the narrow depth-of-field range of projectors. Our system combines a high-speed projector, a high-speed variable-focus lens, and a stereo-camera depth sensor, so that the depth information is detected and then serves as feedback to correct the focal length of the projection. As a result, the projection mapping stays well focused on a dynamically moving 3D object.
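A minimal sketch of the feedback idea, assuming a thin-lens model and a simple proportional correction: each frame, the measured object depth is converted into the lens power needed to keep the projection in focus. The offset, gain, and sample values are hypothetical; this is not the authors' control code.

```python
# Depth-feedback focus control sketch: map measured depth to lens power each
# frame and correct the current power toward that target.
def required_lens_power(depth_m: float, projector_offset_m: float = 0.1) -> float:
    """Focal power (diopters) for an object at depth_m from the depth sensor,
    with an assumed fixed offset between sensor and projector optics."""
    distance = depth_m + projector_offset_m
    return 1.0 / distance            # thin-lens approximation: P = 1 / d

def control_loop(depth_samples_m):
    """Yield the lens power commanded after each high-speed frame."""
    current_power = 0.0
    gain = 0.8                       # proportional correction per frame (assumed)
    for depth in depth_samples_m:
        target = required_lens_power(depth)
        current_power += gain * (target - current_power)
        yield round(current_power, 3)

# Object moving from 1.0 m to 0.5 m over a few frames.
print(list(control_loop([1.0, 0.9, 0.8, 0.7, 0.6, 0.5])))
```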
Demonstration of Springlets: Expressive, Flexible and Silent On-Skin Tactile Interfaces
We present Springlets, expressive, non-vibrating mechanotactile interfaces on the skin. Embedded with shape memory alloy springs, we implement Springlets as thin and flexible stickers to be worn on various body locations, thanks to their silent operation even on the neck and head. We present a technically simple and rapid technique for fabricating a wide range of Springlet interfaces. We developed six modular Springlets: a pincher, a directional stretcher, a presser, a puller, a dragger, and an expander (Fig. 1). In our hands-on demonstration, we show our modular Springlets and several Springlet interfaces for tactile social communication, physical guidance, health interfaces, navigation, and virtual reality gaming. Attendees can wear the interfaces and explore their expressive variable force profiles and spatiotemporal patterns.
A Sensing Technique for Data Glove Using Conductive Fiber
We demonstrate a sensing technique for data gloves using conductive fiber. This technique enables us to estimate hand shapes (the bend of a finger and contact between fingers) and to differentiate a grabbed tag. To estimate how far each finger bends, the electrical resistance of the conductive fiber is measured; this resistance decreases as the finger bends because the surface of the glove short-circuits. To detect contact between fingers, we apply alternating currents with different frequencies to each finger and measure the signal propagation between the fingers. The same principle is used to differentiate a grabbed tag (each tag carries an alternating current with a unique frequency). We developed a prototype data glove based on this technique.
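The two sensing principles can be sketched in a few lines, under our own simplifying assumptions: bend is estimated from how far the fiber's resistance has dropped, and contact is detected by checking which drive frequencies appear in the measured signal. The resistance range, drive frequencies, and simulated signal are hypothetical values for illustration, not the prototype's firmware.

```python
# Sketch of the two sensing ideas: bend from resistance, contact from which
# drive frequencies are present in the measured signal.
import math

def bend_from_resistance(r_ohm: float, r_straight: float = 1000.0, r_bent: float = 400.0) -> float:
    """Map measured resistance to a 0 (straight) .. 1 (fully bent) estimate;
    resistance drops as the glove surface short-circuits when the finger bends."""
    frac = (r_straight - r_ohm) / (r_straight - r_bent)
    return max(0.0, min(1.0, frac))

def tone_power(samples, freq_hz: float, sample_rate_hz: float) -> float:
    """Power of one frequency component (a single DFT bin by correlation)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq_hz * i / sample_rate_hz) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq_hz * i / sample_rate_hz) for i, s in enumerate(samples))
    return (re * re + im * im) / n

FS = 10_000                                    # sampling rate in Hz (assumed)
FINGER_FREQS = {"index": 500, "middle": 700}   # per-finger drive frequencies (assumed)

# Simulated measurement on the thumb: only the index finger (500 Hz) is touching it.
signal = [math.sin(2 * math.pi * 500 * i / FS) for i in range(1000)]
contacts = {finger: tone_power(signal, hz, FS) > 10.0 for finger, hz in FINGER_FREQS.items()}
print(bend_from_resistance(640.0), contacts)   # ~0.6 bend, index in contact
```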
LUNE: Representing Lunar Day by Displayed Lighting Object
LUNE is a lighting object that represents time by displaying a real-time visualization of the moon phase. People can recognize the date of the month through an abstract image of the moon. The product was developed to investigate the use of lighting as a metaphor for representing time. A diverse set of research and design initiatives related to time, temporality, and slowness has emerged in the DIS and HCI communities, and an important area of this work is the representation of time. The primary objective of this research is to suggest a new perspective on time, which we call a sense of time. First, we examine how people perceive time and trace recent research on perspectives of time in HCI. Second, we designed artifacts and investigated the use of the moon phase as a material for representing time in a new way.
Bear & Co: Simulating Value Conflicts in IoT Development
Bear & Co. is a fictitious immersion into the world of being part of an IoT start-up. We invite visitors to join the company and facilitate their journey through various ethical conundrums as they become part of it. First, they must state their values: what they will bring to the company and care most about. Then, we test those values through unexpected scenarios and problems that do not have easy answers. Finally, we debrief our visitors and invite them to peruse explanations of various ethical approaches, presented as maps and diagrams, where they can interrogate their own decisions against three different philosophical viewpoints.
An Exploration of Responsive and Emotive Wearables through Research Prototyping
Responsive and emotive wearables are concerned with the visualisation of environmental and physiological data. Four research prototypes were created for doctoral research to investigate the possibility that wearable technology could be used to create new forms of nonverbal communication using physiological data from the wearer’s body. The research investigated who the audience might be for these research prototypes and what their concerns and requirements were. Through the lens of how these artifacts might be used in social or formal contexts, the research gathered data from fifty potential users of emotive wearables and examined the usage and user preferences of such devices. Findings reflected the concerns of potential users, from aesthetics and functionality to ethical and privacy issues.
Being-in-the-Gallery
Being-in-the-Gallery is an immersive experience which explores the embodied nature of virtual reality and the implications this has for contemporary sculptural practice and our encounters with both. This interactive artwork is experienced through the HTC Vive headset, with movement and touch as key elements of the work’s aesthetic. It combines a physical and a digital sculpture and in doing so creates a mixed reality that plays on a disconnect between what we can see and what we can feel.
Hypercept: Speculating the Visual World Intervened by Digital Media
Human perception has long been influenced by technological breakthroughs. An intimate mediation of technology lies between our direct perceptions and the environment we perceive. Through three extreme ideal types of perceptual machines, this project defamiliarizes and questions the habitual ways in which we interpret, operate in, and understand the visual world intervened by digital media. The three machines create: hyper-sensitive vision, a speculation on social media’s amplification effect and our filtered communication landscape; hyper-focused vision, an analogue version of searching behavior on the Internet; and hyper-commoditized vision, a monetized vision that meditates on the omnipresent advertisements targeted all over our visual field. The site of intervention is the visual field in a technologically augmented society. All three machines have both internal states and external signals.
Twenty Years of The Mixed Reality Laboratory
The mixed reality lab has now been a staple of the CHI community for twenty years. From its founding in 1999 through to today, we have placed our relationship with art and artists at the forefront of our research methods. In this retrospective exhibition, we present some of our most recent and exciting work, alongside some of our archived works, and ask viewers to consider twenty years of CHI research and innovation – not just from our lab, but from the whole CHI community. Back in 1999 when we started, Virtual Reality was the exciting new technology. A lot has changed since then.
Come Hither to Me: Performance of a Seductive Robot
Come Hither to Me is an interactive robotic performance, which examines the emotive social interaction between an audience and a robot. Our interactive robot attempts to communicate and flirt with audience members in the gallery. The robot uses feedback from sensors, auditory data, and computer vision techniques to learn about the participants and inform its conversation. The female robot approaches the audience, picks her favorites, and starts charming them with seductive comments, funny remarks, backhanded compliments, and personal questions. We are interested in evoking emotions in the participating audience through their interactions with the robot. Come Hither to Me strives to invert gender roles and stereotypical expectations in flirtatious interactions. This performative piece explores the dynamics of social communication, objectification of women, and the gamification of seduction. The robot reduces flirtation to an algorithm, codifying pick-up lines and sexting paradigms.
Eyes: Iris Sonification and Interactive Biometric Art
“Eyes” is an interactive biometric data artwork that transforms a human’s iris data into musical sound and 3D animated images. The idea is to allow the audience to explore their own identities through the unique visuals and sounds generated from their iris patterns using iris recognition and image processing techniques. Selected iris images are printed as 3D sculptures, and the sound and animated images are replayed on the sculptures. This research-based artwork has an experimental system that generates a distinct sound for each iris using visual features such as the colors, patterns, brightness, and size of the iris. It has the potential to lead to new ways of interpreting complicated datasets through audiovisual output. Moreover, the aesthetically beautiful, mesmerizing, and uncanny artwork can create a personalized art experience and multimodal interaction. Multisensory interpretations of this data art open a new opportunity to reveal users’ narratives and create their own “sonic signature.”
Hybrid Dandelion: Visual Aesthetics of Performance Through Bionic Mechanism with Data from Biometric Facial Recognition
Hybrid Dandelion is an interactive real-time animation art installation that uses bionic mechanisms and algorithmic design to study generative rules for mimicking biological form. It combines algorithmic data structures from distributed computing, fractal trees, and L-systems to investigate the growth of dandelion-like morphology. The work creates an interaction scenario that uses facial recognition to scan the audience’s biometrics as a means of ‘decoding’ their genetic data, then inserts these traits into the dandelion model to modify its generative rules, as a metaphor for creating a genetically modified hybrid. It allows the audience to experience a unique data-driven artificial life form: a dandelion embodied and possessed through their facial features, heartbeat signal, and emotional expression.
Death Ground
Death Ground is a competitive musical installation-game for two players. The work provides a framework in which the players/participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player’s avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons spawn in an enclosed area during the performance and can be used by whoever collects them first. There is a multitude of such power-ups, each with different properties, such as speed boosts, additional damage, ground traps, and so on. All of these weapons affect the sound and sonic textures that each avatar produces. Additionally, the players can use elements of the environment, such as platforms, obstructions, and elevation, to gain a competitive advantage or to position themselves strategically to reach spawned power-ups first.
Keycube is a Kind of Keyboard (k3)
Alternate realities delivered through headsets, such as augmented, mixed, and virtual reality, are becoming part of people’s everyday lives. Except in some limited contexts, conventional keyboards are ill-suited to these technologies, and alternative text-entry interfaces must be explored. In this paper we present the keycube, a general-purpose cubic handheld device that goes beyond a text-entry interface by including multiple keys, a touchscreen, an inertial unit with six degrees of freedom, and vibrotactile feedback. Owing to its form factor and affordances, the keycube offers advantages with regard to mobility, comfort, learnability, privacy, and playfulness. This combination creates a novel text-entry interface suitable for many use cases across the whole reality-virtuality continuum.
Demonstration of Transcalibur: A VR Controller that Presents Various Shapes of Handheld Objects
We demonstrate Transcalibur, a handheld VR controller that can render 2D shapes by changing its mass properties over a 2D planar area. We built a computational perception model using a data-driven approach, from collected pairs of mass properties and perceived shapes. This enables Transcalibur to easily and effectively provide convincing shape perception based on complex illusory effects. Our user study showed that the system succeeded in providing the perception of various desired shapes in a virtual environment. In the demonstration, users can explore a VR application in which they feel the sensation of wielding a sword, a shield, and a crossbow, and use them to fight a dragon.
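The paper's model is data-driven; as a rough sketch of the general idea (not the authors' implementation), one could fit a linear map from mass properties to perceived shape parameters and then search the controller's configuration space for the mass placement whose predicted percept best matches a target shape. All numbers below are made up for illustration:

```python
import numpy as np

# Hypothetical training pairs: mass properties (weight position x/y, moment of inertia)
# versus perceived shape parameters (perceived length, perceived width).
X = np.array([[0.00, 0.0, 0.010], [0.10, 0.0, 0.014], [0.20, 0.0, 0.020],
              [0.10, 0.05, 0.015], [0.20, 0.05, 0.022]])
Y = np.array([[0.30, 0.05], [0.40, 0.05], [0.55, 0.05],
              [0.42, 0.09], [0.57, 0.10]])

# Least-squares fit of a linear perception model: Y ~ [X, 1] @ W.
X_aug = np.hstack([X, np.ones((len(X), 1))])
W, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)

def predict_percept(mass_props):
    return np.append(mass_props, 1.0) @ W

def best_configuration(target_shape, candidates):
    # Pick the candidate mass configuration whose predicted percept is closest.
    errs = [np.linalg.norm(predict_percept(c) - target_shape) for c in candidates]
    return candidates[int(np.argmin(errs))]

candidates = [np.array([x, y, 0.01 + 0.05 * x]) for x in np.linspace(0, 0.2, 5)
              for y in np.linspace(0, 0.05, 3)]
print(best_configuration(np.array([0.5, 0.08]), candidates))
```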
VRChairRacer: Using an Office Chair Backrest as a Locomotion Technique for VR Racing Games
Locomotion in Virtual Reality (VR) is an important topic because of the mismatch between the size of a virtual environment and the physically available tracking space. Although many locomotion techniques have been proposed, research on VR locomotion has not yet concluded. In this demonstration, we contribute to the area of VR locomotion by introducing VRChairRacer. VRChairRacer presents a novel mapping of the backrest of an office chair onto the velocity of a racing cart. Further, it maps the user’s rotation onto the steering of the virtual racing cart. VRChairRacer demonstrates this locomotion technique to the community through an immersive multiplayer racing demo.
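A minimal sketch of this mapping idea, assuming the chair reports a backrest lean angle and a seat yaw; the sensor ranges and scaling constants are assumptions, not values from the demo:

```python
def chair_to_cart(backrest_lean_deg: float, chair_yaw_deg: float,
                  max_speed_mps: float = 30.0, max_steer_deg: float = 35.0):
    """Map office-chair posture to racing-cart controls.
    backrest_lean_deg: 0 = upright, 45 = fully reclined (assumed range)."""
    lean = min(max(backrest_lean_deg, 0.0), 45.0)
    velocity = max_speed_mps * (lean / 45.0)                        # lean back to accelerate
    steer = max(-max_steer_deg, min(max_steer_deg, chair_yaw_deg))  # rotate the chair to steer
    return velocity, steer

print(chair_to_cart(backrest_lean_deg=20.0, chair_yaw_deg=-10.0))
```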
Demonstrating VRBox: A Virtual Reality Augmented Sandbox
We present VRBox, an interactive sandbox for playful and immersive terraforming that combines the approach of augmented sandboxes with virtual reality technology and mid-air gestures. Our interactive demonstration offers a virtual reality (VR) environment containing a landscape, which the user shapes by interacting with real sand while wearing a VR head-mounted display (HMD). Whereas real sandboxes have been combined with augmented reality before, our approach of using sand in VR offers novel interactive features, such as exploring the sand landscape from a first-person perspective. In this demo, users can experience our VR-sandbox system, which consists of a box of sand, multiple Kinect depth sensors, an HMD, and hand tracking, as well as an interactive world simulation.
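A sketch of the depth-to-terrain step, assuming a Kinect-style top-down depth frame arrives as a NumPy array in millimetres; the calibration constants and smoothing choice are assumptions, not the system's actual pipeline:

```python
import numpy as np

def depth_to_heightmap(depth_mm: np.ndarray,
                       sensor_to_box_floor_mm: float = 900.0,
                       max_sand_height_mm: float = 200.0) -> np.ndarray:
    """Convert a top-down depth frame over the sandbox into a normalized heightmap.
    Shorter distances to the sensor mean taller sand."""
    height = sensor_to_box_floor_mm - depth_mm.astype(np.float32)
    height = np.clip(height, 0.0, max_sand_height_mm) / max_sand_height_mm
    # Light 3x3 box-filter smoothing to suppress depth noise.
    padded = np.pad(height, 1, mode="edge")
    smoothed = sum(padded[dy:dy + height.shape[0], dx:dx + height.shape[1]]
                   for dy in range(3) for dx in range(3)) / 9.0
    return smoothed

frame = np.random.uniform(700, 900, size=(424, 512))  # fake depth frame in mm
print(depth_to_heightmap(frame).shape)
```

The resulting heightmap could then be fed to the VR terrain renderer each frame.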
A Virtual Reality Experience for Learning Languages
This demo showcases an interactive virtual reality experience for language learning that allows users to enter a virtual world and explore and interact with their surroundings while learning Spanish. Through immersive gameplay on the Oculus Rift, users explore Spanish translations of everyday household items in a search-and-find format, scoring points when they correctly identify and select objects. Users are able to put what they learn into practice in real time. Study participants who tried the experience found this method of language learning more enjoyable than traditional methods of studying because of its gamification and because it did not “feel like studying.” As virtual reality headsets continue to become more accessible to the public, this approach addresses the cost limitations of traveling overseas to achieve immersion in a foreign language. The application can be expanded to most real-world scenarios and locations, and it can be applied to any language.
Experiencing a Mirrored World with Geotagged Social Media in Geollery
We demonstrate the online deployment of Geollery, a mixed reality social media platform. We introduce an interactive pipeline to reconstruct a mirrored world at two levels of detail – the street level and the bird’s-eye view. Instead of using offline 3D reconstruction approaches, our system streams and renders a mirrored world in real time, while depicting geotagged social media as billboards, balloons, framed photos, and virtual gifts. Geollery allows multiple users to see, chat, and collaboratively sketch with spatial context in this mirrored world. We demonstrate a wide range of use cases including crowdsourced tourism, interactive audio guide with immersive spatial context, and meeting remote friends in mixed reality. We envision Geollery will be inspiring and useful as a standalone social media platform for those looking to explore new areas or looking to share their experiences. Please refer to https://geollery.com for the paper and live demos.
Egocentric Smaller-person Experience through a Change in Visual Perspective
This paper explores how human perceptions, actions, and interactions can be changed through an embodied and active experience of being a smaller person in a real-world environment, which we call an egocentric smaller-person experience. We developed a wearable visual translator that provides the perspective of a smaller person by shifting the wearer’s eye level down to their waist using a head-mounted display and a stereo camera module, while allowing field-of-view control through head movements. We investigated how the developed device can modify the wearer’s body representation and experiences through a field study conducted at a nursing school and museums, and through lab studies. Using this device, designers and teachers can understand the perspective of a smaller person, such as a child, in an existing environment.
Demonstration of SeeingVR: A Set of Tools to Make Virtual Reality More Accessible to People with Low Vision
Current virtual reality applications do not support people who have low vision. We present SeeingVR, a set of 14 tools that enhance a VR app for people with low vision by providing visual and audio augmentations. A user can select, adjust, and combine different tools based on their preferences. We demonstrate the design of SeeingVR in this paper.
Immersive VR Exergames for Health and Wellbeing
This paper describes a Virtual Reality (VR) exergame platform used to investigate player motivation and the experience of physical activity through variations of game design. The VR-Rides platform combines a desk cycle, real-world imagery, an HTC VIVE head-mounted display (HMD) with controllers, and a Microsoft wristband, so that the player can navigate locations in a safe, immersive virtual environment. Panorama images come from Google Street View. Two games were developed to explore motivation, enjoyment, and their link to players’ perceived experiences in an immersive VR exergaming setup. The platform acts as a test-bed to iteratively design and evaluate theory-driven immersive VR game designs that support activities pertaining to players’ health and wellbeing goals.
Multisensory Virtual Environment for Fire Evacuation Training
Growing organizational safety awareness is propelling interest in the development of novel solutions for fire training. Significant effort in this area has previously gone into designing various immersive virtual reality (VR) systems, in the hope of enabling safe exploration and letting trainees experience the consequences of their actions in scenarios that would be too hazardous in real life. Yet the fact that VR generally lacks the sensory feedback of a real-life fire has been found to impede the sense of realism and the validity of such experiences. As part of our efforts to mitigate this issue, we present a new prototype fire evacuation simulator designed to deliver not just audiovisual feedback but also real-time heat and scent stimulation.
3D Positional Movement Interaction with User-Defined, Virtual Interface for Music Software: MoveMIDI
This paper describes progress made in the design and development of a new digital musical instrument (MIDI controller), MoveMIDI, and highlights its unique 3D positional movement interaction design, which differs from recent orientational and gestural approaches. A user constructs and interacts with MoveMIDI’s virtual 3D interface using handheld position-tracked controllers to control music software, as well as non-musical technology such as stage lighting. MoveMIDI’s virtual interface helps solve problems that are difficult to address with hardware MIDI controller interfaces, such as customized positioning and instantiation of interface elements and accurate, simultaneous control of independent parameters. MoveMIDI’s positional interaction mirrors interaction with some physical acoustic instruments and provides visualization for an audience. Beta testers of MoveMIDI have created emergent use cases for the instrument.
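A sketch of the positional-mapping idea, assuming each user-defined interface element is a fader spanning a segment of tracking space and the hand's position along its axis becomes a MIDI control-change value; the element definition, axis convention, and numbers are assumptions, not MoveMIDI's implementation:

```python
from dataclasses import dataclass

@dataclass
class VirtualFader:
    cc_number: int          # MIDI CC this element controls
    low: float              # world-space start of the fader axis (metres)
    high: float             # world-space end of the fader axis (metres)
    axis: int = 1           # 0 = x, 1 = y (up), 2 = z (assumed convention)

def fader_value(fader: VirtualFader, hand_pos: tuple[float, float, float]) -> int:
    """Map the hand's position along the fader axis to a 0-127 CC value."""
    t = (hand_pos[fader.axis] - fader.low) / (fader.high - fader.low)
    return max(0, min(127, int(round(t * 127))))

volume = VirtualFader(cc_number=7, low=0.8, high=1.6)   # a vertical fader from 0.8 m to 1.6 m
print(volume.cc_number, fader_value(volume, hand_pos=(0.1, 1.2, 0.2)))
# Sending the computed value via a MIDI library (e.g., mido) is omitted here.
```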
Are Drones Meditative?
Meditative movement involves regulating attention to the body while moving, to create a state of meditation. This can be difficult for beginners; we propose that drones can facilitate it, as they can move with, and give feedback on, whole-body movements. We present a demonstration that explores various ways drones could facilitate meditative movement by drawing attention to the body. We designed a two-handed control mapping for the drone that engages multiple parts of the body, a light foam casing that gives the impression that the drone is floating, and an onboard light that gives feedback on the speed of the movement. The user experiences both leading and following the drone to explore the interplay between mapping, form, feedback, and instruction. The demonstration relates to an expansion of the attention regulation framework, which is used to inform the design of interactive meditative experiences and human-drone interactions.
iScream!: Towards the Design of Playful Gustosonic Experiences with Ice Cream
In this demonstration, we present iScream!, a novel gustosonic experience that generates unique digital sounds as a result of eating ice cream. The system uses capacitive sensing to detect eating actions and, based on these actions, plays six different playful sounds. Our aim is to support a playful way of eating, because we believe that interactive technology offers unique opportunities to facilitate novel, engaging eating experiences. Ultimately, we aim to inspire and guide designers working on interactive, playful gustosonic experiences, which open up new possibilities for experiencing eating as play.
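A minimal sketch of the sensing-to-sound logic, assuming a capacitive reading rises when the tongue or lips touch the ice cream; the threshold, sound file names, and sensor/audio calls are placeholders rather than the demo's actual code:

```python
import random
import time

LICK_THRESHOLD = 600        # assumed raw capacitance units
SOUNDS = ["giggle.wav", "pop.wav", "sparkle.wav",
          "boing.wav", "chime.wav", "whoosh.wav"]   # six playful sounds (placeholders)

def read_capacitance() -> int:
    # Placeholder for reading the capacitive sensor embedded in the cone or holder.
    return random.randint(0, 1023)

def play(sound_file: str) -> None:
    # Placeholder for an audio backend call.
    print("playing", sound_file)

def run(duration_s: float = 3.0) -> None:
    licking = False
    end = time.time() + duration_s
    while time.time() < end:
        touching = read_capacitance() > LICK_THRESHOLD
        if touching and not licking:        # rising edge = a new lick
            play(random.choice(SOUNDS))
        licking = touching
        time.sleep(0.05)

run()
```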
Augmenting Circle Dance with Physical Computing
Pusing Tiang (Malay; loosely translated as Round-a-Pole) is an installation that augments circle dance by turning it into a game. It was pilot tested in a school with 16 primary schoolchildren. The results show that the installation encourages schoolchildren to synchronise their movement through play. The installation matches the CHI 2019 theme of weaving the threads of CHI, and playing the game at CHI symbolises the Celtic knot logo of strength and friendship.
Movie+: Towards Exploring Social Effects of Emotional Fingerprints for Video Clips and Movies
Collaborative movie viewing with loved ones increases connectedness and social bonds among family members and friends. With the rapid adoption of personal mobile devices, people often engage in this activity while geographically separated. However, conveying our feelings and emotions about a recently watched movie or video clip is often limited to a post on social media or a short blurb in an instant messaging app. Drawing on the popular interest in the quantified self, which envisions people collecting and sharing biophysical information from everyday routines (e.g., workouts), we have designed and developed Movie+, a mobile application that uses personal biophysical data to construct an individual’s “emotional fingerprint” while viewing a video clip. Movie+ allows selective sharing of this information through different visualization options, as well as rendering others’ emotional fingerprints over the same clip. In this submission, we outline the design rationale and briefly describe our application prototype.
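The abstract does not define the fingerprint's structure; one hedged reading is a time-binned, normalized summary of a biophysical signal aligned to the clip, sketched below (the choice of heart rate as the signal and the bin count are assumptions):

```python
import numpy as np

def emotional_fingerprint(timestamps_s, heart_rate_bpm, clip_length_s, n_bins=20):
    """Average a heart-rate stream into fixed time bins over the clip, then normalize
    to [0, 1] so fingerprints from different viewers are visually comparable."""
    bins = np.linspace(0, clip_length_s, n_bins + 1)
    idx = np.clip(np.digitize(timestamps_s, bins) - 1, 0, n_bins - 1)
    binned = np.array([np.mean(heart_rate_bpm[idx == b]) if np.any(idx == b) else np.nan
                       for b in range(n_bins)])
    binned = np.nan_to_num(binned, nan=np.nanmean(binned))
    lo, hi = binned.min(), binned.max()
    return (binned - lo) / (hi - lo) if hi > lo else np.zeros(n_bins)

t = np.linspace(0, 120, 600)                      # a 2-minute clip sampled at 5 Hz
hr = 70 + 8 * np.sin(t / 15) + np.random.randn(600)
print(emotional_fingerprint(t, hr, clip_length_s=120).round(2))
```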
PhotoFlow in Action: Picture-Mediated Reminiscence Supporting Family Socio-Connectivity
Family connections are maintained through sharing reminiscences, often supported by family photographs which easily prompt memories. This is increasingly important as we age, as picture-based reminiscence has been shown to reduce older adults’ social isolation. However, there is a gap between sharing memories from physical pictures and the limited support for oral social reminiscence afforded by digital tools. PhotoFlow supports older adults’ picture-mediated social storytelling of family memories using an intuitive metaphor mirroring sharing physical family pictures on a table top. The app uses the speech of oral storytelling to automatically organize pictures based only on what has been said. This simplifies the overall process of family picture interactions by leveraging one enjoyable aspect to ease a more effortful one. In particular, the familiar table top interaction metaphor has the potential to bridge the gap between physical picture reminiscence and managing digital picture collections.
SESSION: Doctoral
Designing for the Self: Enabling People to Design Behavioral Interventions
I investigate how individuals can employ human centered design techniques to adopt positive behaviors in their everyday life. I have designed and built a tool that supports individuals in planning physical activity using design processes such as understanding other people’s behavioral needs, ideating behavioral interventions and prototyping behavioral solutions using evidence-based techniques. In my dissertation I will demonstrate how tools can support iterative behavioral design by evaluating behavioral solutions, drawing insights from testing, and making iterations to a behavioral solution. This work will result in new tools and design methods that enable people to adapt evidence-based techniques in a way that fits with their everyday needs.
Bridging Social Critique and Design: Building a Health Informatics Tool for Transgender Voice
This project aims to develop a voice training application for transgender people. Voice training is typically conducted by a speech therapist and consists of personalized sessions that support individuals in changing their voices (such as modifying pitch, resonance, or speech patterns). The reasons why people may pursue voice training are varied, but often include discomfort with a voice that is misaligned with their gender identity. Training with a speech therapist may be inaccessible due to health disparities; thus, a technological solution, as I propose in my research, is necessary. This project will address existing constraints to design a novel voice training application in partnership with community members, using a participatory research methodology and combining the fields of speech science, feminist and queer theory, and HCI.
Bodily Connectedness: Designing Affective, Movement-based Communication Media
This is practice-based PhD research that explores inter-affective, movement-based communication as an approach to mediated connectedness (an immediate, felt experience of closeness to another person) over distance in dyads. Inspired by recent socio-cognitive theories (e.g., enactive intersubjectivity [8] and the synergistic approach [16]), this thesis views communication as a dynamic coordination between two holistic living bodies, rather than two abstract minds transmitting information from sender to receiver. The main hypothesis is that coordinated inter-affective movement can facilitate the feeling of connectedness in mediated settings. I will creatively explore this assumption by developing a sequence of experimental design artifacts.
HCI for Participatory Futuring in Sustainable Communities: Reconciling Visions with Everyday Practice
Current food consumption patterns are unsustainable. The food system is globalized and dominated by a few large organisations, which disempowers people from making changes to it. However, grassroots communities are important in engendering positive change from the bottom up. Long-term thinking is key to empowering these communities in transitioning towards sustainable food systems. This research is concerned with practices of “futuring” in grassroots communities and with how HCI can facilitate openness, participation, and coordination in constructing visions of the future and in reconciling these visions with the everyday practices of the communities.
Reconnecting the Body and the Mind: Technology to Support Mindfulness for Stress
Mindfulness meditation has the potential to help practitioners cope with their stress. Currently, projects often use corrective feedback models to help people understand when they are out of a mindfulness state. My dissertation uses research by design to build a technology intervention for mindfulness meditation that adopts a strategy of gently guiding and supporting the user’s in-the-moment experience of practising meditation through a natural soundscape that responds to the user’s brainwave activity collected from the Muse EEG Headset.
User Adaptation to Constant Change in Algorithmically-Driven Social Platforms
Social platforms present a challenge for self-presentation and identity management by obscuring audiences behind algorithmic mechanisms. Users are increasingly aware of this and actively adapting through folk theorization, but we do not know how users are coping with the constant change endemic to these platforms, or how we can assist users in this coping process. My dissertation will examine how users perceive and adapt to the constantly-changing platform space using self-presentation and audience management as an illustrative case.
Algorithms, Oppression, and Mental Illness on Social Media
My research explores how individuals with mental illness express themselves online and off. Through digital ethnography, including interviews with Instagram users and manual collection of public content on Instagram, I have holistically examined the experience of mental illness as expressed through social media. This user-centric approach reveals and addresses the limitations of computational techniques, which my dissertation work will address by combining qualitative methods with generative algorithms to explore new ‘ways of seeing’ mental illness. I will create new tools enabling users to generate representations from their own posts in support of creating new representations of mental illness, advancing algorithmic fairness, and confronting technological forms of oppression online.
Languages & Visualizations to Enable Effective End User Programming
Programming requires expertise to employ effectively. My research aims to help end user programmers more effectively author, understand, and reuse code and data through the design of new languages and program visualization tools. New programming languages can raise the level of abstraction to focus on relevant domain-specific details. Improved tools can better align with and enrich end user programmers’ mental models. Visualizing program state and behavior promotes program understanding, and can proactively surface surprising or incorrect results. My future work proposes to explore new visualization techniques and languages to facilitate understanding of constraint programming systems.
Player Types of Gamers: Critical Evaluation
This extended abstract describes the research plan for, and preliminary findings of, a dissertation thesis on player types. The four studies described aim to answer what player types are, how such categorizations can be improved, and whether player typologies based on self-reports can be validated by emotional responses during play.
Designing Visual Communication of Everyday Illness Experiences in Pediatric Care
In complex chronic care, patients’ ongoing awareness of their health status and their ability to articulate health needs are vital to active participation in care, yet patients face various challenges that can thwart their potential to engage in such participation. My research explores how design methods in HCI can evolve to meet these challenges by engaging both adolescents and family caregivers throughout the process of tracking the patients’ illness experiences and co-designing rich representations that are expected to support adolescents’ communication of these experiences in care. This thesis will contribute 1) a critical understanding of the ways in which human-centered design can address the primary challenges that adolescents face when engaging in care, 2) a novel method for conducting co-design research with chronically ill patient families, and 3) a family-centered mobile health technology that demonstrates the feasibility of engaging pediatric patient families.
Designing for Visual Data Exploration in Multi-Device Environments
Multi-device environments have enormous potential to enable more flexible workflows in our daily work. At the same time, visual data exploration is characterized as a fragmented sensemaking process requiring a high degree of flexibility. In my thesis, I aim to bring these two worlds into symbiosis, specifically for sensemaking with multivariate data visualizations and graph visualizations. This involves three main objectives: (i) understanding the devices’ roles in dynamic device ensembles and their relations to exploration patterns, (ii) identifying mechanisms for adapting visualizations to different devices while preserving consistent perception and interaction, and, finally, (iii) supporting users and developers in designing such distributed visualization interfaces, e.g., through specific guidelines. As specific contributions, it is planned that (i) and (ii) will emerge into a design space, while (iii) will lead to a set of heuristics. So far, I have worked extensively on the first objective and have begun to address the other two.
Expressive Biosignals: Authentic Social Cues for Social Connection
My research introduces expressive biosignals as a novel social cue to improve interpersonal communication. Expressive biosignals are sensed physiological data revealed between people to provide a deeper understanding of each other’s psychological states. My prior work has shown the potential for these cues to provide authentic and validating emotional expression, while fostering awareness and social connection between people. In my proposed research, I expand on this work by exploring how social responses to biosignals can benefit communication through empathy-building and social support. This work will scope the design space for expressive biosignals and inform future interventions for a variety of social contexts, including interpersonal relationships and mental health.
Inventive Scaffolds Catalyze Creative Learning
Creative problem-solving requires both exploratory and evaluative thinking skills. The contextual, open-ended nature of creative tasks makes them uniquely challenging to teach and learn. People tend to under-explore in problem-solving, using the most available representation of a problem and hindering potentially more creative solutions. My dissertation examines how inventive scaffolds provide feedback between the exploration and evaluation processes of creative problem-solving, potentially amplifying creativity of solutions. I investigate this through two interventions. First, interactive guidance and adaptive suggestions embodied in the CritiqueKit system to improve critique and evaluation of creative work. Second, problem-framing scaffolds to reduce fixation and enhance exploration. My research demonstrates methods for increasing human inventiveness with relevance in creative education and the design of creativity support interfaces.
Mobile Persuasive Technology: Promoting Positive Waste Management Behaviors in Developing African Nations
My research examines new ways of persuading citizens to change their behavior towards waste management and to protect the environment. As a first step, I conducted a user-based study to find out which strategies could motivate citizens to adopt positive waste disposal behaviors. By mapping the results to matching persuasive technology techniques and operationalizing them in a mobile web platform, I show how mobile persuasive system interventions could be designed to promote positive, environmentally responsible behaviors and protect the environment from pollution.
Technology Meets Fashion: Exploring Wearables, Fashion Tech and Haute Tech Couture
The introduction of technology into the worlds of fashion and haute couture has made it possible for fashion designers and technologists to create and experiment with garments and wearables in a variety of novel and expressive forms. Several of these haute couture garments infused with technology are shown on international runways and can ultimately influence the design of consumer fashion and wearable products. Within this context, I describe my dissertation, which aims to explore and understand the role of technology throughout the process of design and fabrication in the haute tech couture domain and to uncover broader implications for the design of wearables.
Collaborative Video Game Design Work and Diversity
This overview describes my ongoing dissertation research on diversity within collaborative video game design. First, I explain why research into daily work within this field is needed, especially with a focus on diversity. Next, I briefly review previous research and identify three key areas for considering diversity in the field: participation of underrepresented and marginalized groups, the structure of organizations, and collaborative work tool selection and use. I then outline my qualitative research approach of conducting semi-structured interviews with video game designers. Finally, I present some preliminary results and expected contributions for this research.
Exploring Future IoT for Families through End User Development: Applying Do-It-Together Practises to Reveal Family Dynamics in Technology Adoption
Industry and research increasingly explore opportunities to make our homes smart, e.g., through the Internet of Things (IoT). Technological developments nurture this rise of smart products, seemingly corresponding to households’ needs. Yet these domestic environments remain a complex domain to study or design for. This work explores the understudied complexity of families’ needs and values in relation to connected and smart technology, in particular as a multi-user group. By leveraging participatory and do-it-yourself practices, I aim to engage families in discussion and empower them to externalize and reflect upon their views. I can then study their reflective practices to reveal (tacit) understandings and (latent) needs that inform future developments in smart home technologies.
Designing for Long-term Digital Data Management
Digital data is a pervasive component of modern society, with people managing a growing number of data types across many devices. My research explores people’s choices on what to keep over the long-term and aims to design personalized data management tools. In a first study, I characterized individual differences in data preservation behaviors. I plan to use interviews, a survey, and probing methods to further extend this characterization and define a design space for long-term data management. Then, I plan to build and evaluate a prototype that synthesizes findings from all my studies.
Exploring Socially-Focused Technologies that Can Help Children with Cancer Feel More Like Children Despite their Disease, Treatment and Environment
This abstract describes the background and motivation for dedicating my PhD to the exploration of socially-focused technologies for childhood cancer patients. Very little work has been done, especially in the fields of human-computer and child-computer interaction, to explore the ways in which the hospital context, in conjunction with the cancer experience, impacts children’s social and emotional well-being during middle childhood (ages 6-12), and in turn how technology could improve their experience. My research seeks to (1) empower children with cancer by providing a platform for them to voice their own experiences with isolation, loneliness, and the loss of a normal childhood, as well as how technology may better support their needs, (2) contribute design knowledge about how to support meaningful social interaction and play that is age- and ‘ability’-appropriate, and (3) provide insight for future design and evaluation studies by better understanding the constraints and opportunities for socially-focused technologies intended for use in a real-world pediatric hospital environment.
Co-Designing Digital Technologies to Support Minimally-Verbal Children on the Autism Spectrum
This doctoral work considers how to best co-design with minimally-verbal children on the autism spectrum in classroom contexts. It focuses on 1) leveraging personal interests and individual strengths to foster engagement, social interaction and self-expression through novel technologies and 2) child-centred, holistic methodological approaches to co-design work. This research questions how integrating these may better engage and include minimally-verbal children on the spectrum in the co-design of digital technologies.
SESSION: Panels
Moving Beyond “The Great Screen Time Debate” in the Design of Technology for Children
Despite pervasive messaging about the dangers of “screen time,” children and families remain avid consumers of digital media and other technologies. Given competing narratives heralding the promise or the peril of children’s technology, how can designers best serve this audience? In this panel, we bring together world experts from: children’s media and communications, pediatrics and human development, HCI and design, and industry product development to debate the validity of pundits’ concerns and discuss designers’ opportunities and obligations with respect to creating products for this user group. Panelists bring diverse–and sometimes conflicting–perspectives on the conceptual frameworks that are most appropriate for understanding family technology use, the ethical considerations designers should bring to this space, and the most pressing needs for future research. Grounding the conversation in guidance from the audience, panelists will share their visions for a research agenda that separates moral panics from credible concerns and promotes the design of positive digital experiences for children and families.
The Future of Tangible User Interfaces
Tangible user interfaces have a rich history in HCI research ever since their introduction two decades ago. But what are the practical implications, the commercial potential, and the future of this influential paradigm? This panel starts by looking into the importance of tangible interaction and its current role. It will then draw on the expertise of both the panelists and the audience to speculate about its future and about new opportunities for the field. The panelists represent a variety of perspectives from both industry and academia and include some of the most well-known innovators in the field. The format builds on the CHI 2006 panel The state of tangible interfaces: projects, studies, and open issues, which shared some of the same organizers.
Careers in HCI and UX: The Digital Transformation from Craft to Strategy
Since its inception in the early 1980s, HCI and UX have sought wider recognition and influence. Now digital transformation, a pervasive shift in the role of information technology, will offer both practitioners and researchers far more influential roles in organizations. This shift, which is taking place in industry, education, and government, is part of a larger shift to a global, digitally connected society.
This panel builds on the theme of CHI’19 – Weaving the Threads – through its focus on how HCI/UX, research and practice will be integrated as organizations transform into digitally-driven entities.
The panel brings together thought leaders with backgrounds in both academia and industry. With extensive audience participation, we will explore the implications of digital transformation on the roles of HCI/UX, the challenges, and the new skills needed to support a culture change and collaborate with a wider range of stakeholders.
It is our hope that this panel can become a jumping-off point for future work, built on our belief that HCI/UX are vital drivers of new technologies and of beneficial societal transformations. Working through national agencies and, perhaps in time, becoming part of the UN’s Sustainable Development Agenda would position HCI/UX to fulfill its role as a key driver of both business and social transformation.
Rigor, Relevance and Impact: The Tensions and Trade-Offs Between Research in the Lab and in the Wild
As an interdisciplinary field, CHI embraces diverse research practices ranging from controlled lab experiments to field studies in homes and communities. While quantitative research in the lab emphasizes scientific rigor for testing hypotheses, qualitative research in the wild maximizes the understanding of the contexts in which technologies are used. Furthermore, each type of research makes its impact in different ways. This panel invites researchers with varied backgrounds to discuss the tensions and trade-offs between research in the lab and in the wild, with respect to scientific rigor, real-world relevance, and impact. The goal is to enhance mutual understanding between researchers with divergent values and practices within the CHI community.
SIGCHI Research Ethics Town Hall
An ongoing challenge within the diverse HCI and social computing research communities is understanding research ethics in the face of evolving technology and methods. Building upon successful town hall meetings at CHI 2018, GROUP 2018, and CSCW 2018, this panel will be structured to facilitate audience discussion and to collect input about current challenges and processes. It will be led by members of the ACM SIGCHI Research Ethics Committee. We will pose open questions and invite audience discussion of practices centered on recent “hot topic” issues. For this year’s town hall, the primary focus will be on paths to balancing the often-competing regulatory frameworks under which we operate (some of which have recently undergone significant revisions) with our community’s efforts to reveal ethical challenges posed by new interactive technologies and new contexts of use. We will engage the audience in discussions on whether there is a non-colonial role for ethics education within the broad HCI community, how that role may capture the cultural and disciplinary differences woven into CHI’s fabric, and how research ethics issues should be handled in the SIGCHI paper submission and review process.
Moving Towards a Journal-centric Publication Model for CHI: Possible Paths, Opportunities and Risks
As a scholarly field, the ACM SIGCHI community maintains a strong focus on conferences as its main outlet for scholarly publication. Historically, this originates in how the field of computer science adopted a conference-centric publication model as well as in the organizational focus of ACM. Lately, this model has become increasingly challenged for a number of reasons, and multiple alternatives are emerging within the SIGCHI community as well as in adjacent communities. Through revisiting examples from other conferences and neighboring communities, this panel explores alternative publication paths and their opportunities and risks.
SESSION: SIG Abstracts
Transformative Experience Design: Designing with Interactive Technologies to Support Transformative Experiences
Some life experiences can generate profound and long-lasting shifts in core beliefs and attitudes, including subjective transformation. These experiences can change what individuals know and value and their perspective on the world and on life, helping them grow as persons. For these reasons, transformative experiences are gaining increasing attention in psychology, neuroscience, and philosophy. One potentially interesting question is how transformative experiences can be invited by means of interactive technologies. This question lies at the center of a new research program, transformative experience design, which has two aims: (1) to investigate phenomenological and neurocognitive aspects of transformative experiences, as well as their implications for individual growth and psychological well-being; and (2) to translate such knowledge into tentative design principles for developing experiences that aim to support meaning in life and personal growth. Our goal for this SIG is to discuss challenges and opportunities for transformative experiences in the context of interactive technologies.
SIG: Spatiality of Augmented Reality User Interfaces
Augmented reality and spatial information manipulation are increasingly being used as part of environment-integrated form factors and wearable devices such as head-mounted displays. The integration of this exciting technology into many aspects of people’s lives is transforming the way we understand computing, pushing the boundaries of Spatial Interfaces into virtual but embedded environments. We think that the time is ripe for a renewed discussion about the role of Augmented Reality within Spatial Interfaces. With this SIG we want to expand the discussion of Spatial Interfaces and the way they shape interaction with the world in two areas. First, we aim to critically discuss the definition of Spatial Interfaces and outline the common components that build such interfaces today. Second, we would like the community to reflect on the path ahead and focus on what kinds of experiences Spatial Interfaces can achieve today.
Digital Housekeeping, Gender and Domestic Work
This SIG meeting will examine the domestic technologies and routines of diverse households as well as the role of gender in the use and maintenance of these technologies. Our aim is to bring together domestic technology experts and social scientists who study the domestic environment across a range of socio-economic groups to discuss the present and the future of domestic technologies, including their impacts on the lives of those who are often unvoiced, such as paid domestic workers.
MatHealthXB: Designing Across Borders for Global Maternal Health
The proliferation of digital technologies has facilitated the adoption of innovative approaches to addressing global maternal health challenges. Worldwide, HCI researchers – from both resource-constrained and resource-rich countries – are actively engaged in developing novel responses to an ever-evolving maternal health landscape. However, opportunities for these researchers to interact and engage in sustained dialogue and collaboration are limited. The purpose of this Special Interest Group (SIG) is to bring these professionals together to support an active global network of maternal health researchers and facilitate collaboration across borders.
Cooperativism and Human-Computer Interaction
If social, economic, and environmental sustainability are linked, then support for the increasing number of non-profit groups and member-owned organizations offering what Trebor Scholz has called “platform cooperativism” [17] has never been more important. Together, these organizations not only tackle issues their members identify in the world of work, but also provide network-driven collections of shared things (e.g., books, tools) and resources (e.g., woodworking spaces, fab labs) that benefit local communities, potentially changing not just the use of resources at the community level but also socio-economic structures on the ground (e.g., [15]). In contrast to the for-profit services often associated with the sharing economy (e.g., Uber, Airbnb), platform co-ops advocate ecological, economic, and social sustainability, with the goal of promoting a fairer distribution of goods and labor and ultimately creating a stronger sense of community. While some HCI sub-communities (e.g., CSCW) have started to explore this emergent phenomenon, especially through ethnographic research methods, researchers have called for more diverse HCI approaches to address the growing scope of challenges within platform co-ops, member-driven exchange systems, and cooperativism more broadly. This SIG aims to bring together researchers from different HCI sub-communities to identify future research directions in HCI around cooperativism and platforms.
ARC: Moving the Method Forward
Asynchronous Remote Communities (ARC) methodology has been used to explore HCI topics in a range of contexts. This innovative methodology takes advantage of the technological tools and platforms that are often the subject of HCI research to extend existing methods of data collection, pushing methodology beyond historical modes and allowing better connection with populations who have previously been left out of the research process. This SIG will make space for researchers and practitioners who are interested in using ARC methodology to connect with people who have already used ARC and discuss challenges and opportunities with this methodology, and how to extend similar kinds of innovative, distributed computing based methods into new contexts.
Expecting the Unexpected in Participatory Design
Participatory Design (PD) provides unique benefits in designing technology with and for specific target audiences. However, it can also be an intensive and difficult process, with unexpected situations which can arise at any stage. In this Special Interest Group (SIG), we propose that PD researchers may exchange “war stories” about their unexpected and difficult experiences with PD. This will facilitate reflective discussions and the identification of possible solutions, and enable future PD research to plan for similar situations, thereby making difficulties a little less unexpected.
Evaluating Technologies with and for Disabled Children
Due to policies supporting the inclusion of disabled children in mainstream schools and the use of technologies to enable personalized schooling, there are broad research incentives and opportunities to design technologies for disabled children in educational contexts. A workshop at CHI 2018 with researchers and practitioners working on accessible and assistive technologies for children in educational settings raised two on-going challenges in this area: (1) Very few assistive technologies proposed in research are evaluated in context, notably because of the many practical constraints on evaluation when working with these small communities of diverse individuals; (2) The scholars turning their attention to context raise new design preoccupations, such as interdependence, for which we do not yet have a community consensus regarding the suitable approaches to evaluation. Although this workshop was conducted with researchers working with children with visual impairments, these challenges apply more widely to the field of technologies for children with disabilities in educational settings. The purpose of this SIG is to bring together researchers and practitioners to encourage the homogenization of evaluation approaches for accessible and assistive technologies in schools.
Learning, Education, and HCI
In this SIG, we propose a gathering of researchers and practitioners thinking about HCI in learning and educational contexts to foster an ongoing Learning and Education community at CHI. With the recent increase in CHI submissions relating to learning (40% more submissions than previous CHI), this SIG is an opportunity to foster an inclusive dialogue on designing and studying phenomena, tools, and processes related to learning and education. This SIG will bring together researchers, educators, and practitioners with three goals in mind: (1) discussing more inclusive cross-disciplinary perspectives on learning; (2) defining future directions and standards for learning and education contributions in CHI; and (3) building community across research/practice boundaries.
Child-Computer Interaction SIG: Designing for Refugee Children
The global refugee crisis is a significant current challenge affecting millions of children. The process of refugee migration comes with major immediate as well as long-term risks to children’s physical and mental health, education, and prospects. Despite the multiple dangers and challenges during migration, most refugee families have access to and make use of interactive technologies, prior to, during, and after migration. This SIG meeting is an opportunity to discuss novel potential roles for technologies to alleviate some of the challenges faced by child refugees.
Queer(ing) HCI: Moving Forward in Theory and Practice
The increasing corpus on queer research within HCI, which started by focusing on sites such as location-based dating apps, has begun to expand to other topics such as identity formation, mental health and physical well-being. This Special Interest Group (SIG) aims to create a space for discussion, connection and camaraderie for researchers working with queer populations, queer people in research, and those using queer theory to inform their work. We aim to facilitate a broad-ranging, inclusive discussion of where queer HCI research goes next.
SketCHI 2.0: Hands-On Special Interest Group on Sketching in HCI
Sketching is universal. It enables us to work through problems, communicate complexity, work with people who have diverse needs, and document the work processes we employ within Human-Computer Interaction. Increased interest in sketching as a methodology within HCI has led to increased attendance at interactive courses, meet-ups, and discussion groups, from complete beginners to seasoned researchers with the skills and knowledge to support others. By bringing these individuals together, we are able to advance the understanding of how sketching underpins research and how we might work with sketching as technology advances. SketCHI 2.0 aims to support ongoing discussions and collaborations around sketching in HCI and to further build the Sketching HCI community. As well as on-location drawing, feedback, and discussion, we will form collaborative working groups to further our collective interest in this area and conduct high-level discussions about the practical applications and outputs of sketching in HCI.
Mini Living Lab: Improving Retention and Success for Women in Tech and Diverse Teams Through Redesigning the Critique Process
Gender disparity in high tech is a long-standing challenge. The number of women in tech is lower than it was 30 years ago; women leave the field 50% more often than men; and attrition costs companies money and talent. This SIG addresses the issue of gender and retention by changing key work practices. In a maker community, critique – giving and receiving feedback on one’s work – is a necessary and everyday experience. Yet women in particular lose self-esteem when criticized. Since HCI professionals are key participants in critique, it is a good place to start improving interactions within diverse teams. In this SIG we engage the community in a critical examination and reinvention of the critique process. Using a Mini Living Lab format, participants share experiences of their critique practices, brainstorm improvements, and try them out on a practice problem. The organizers share insights from their Living Lab company-research partnerships addressing gender and retention.
Refugees & HCI SIG: Situating HCI Within Humanitarian Research
Currently, the United Nations High Commissioner for Refugees estimates that there are around 65.8 million forcibly displaced people worldwide [16]. As digital technologies have become more available, humanitarian researchers and organizations have begun to explore how technologies may be used to address refugee needs under the umbrella of Digital Humanitarianism. Interest in refugee and humanitarian contexts has also been expressed within the HCI community through the organization of workshops at conferences. While previous engagements within the HCI community have focused on our experiences of working within refugee contexts and on developing a common research agenda, we have yet to explore how HCI research fits within wider humanitarian research and how it relates to digital humanitarianism. This SIG invites HCI researchers to engage in discussions on situating HCI research within humanitarian research and response.
SESSION: Student Design Competition
NeighBoard: Facilitating Community Policing with Embodied Tech Design
Augmenting grassroots community policing (CP) efforts with technologies that assist citizens is a promising strategy for reducing real and perceived fear of crime. We used a human-centered design approach, working with residents of the St. Paul Summit-University neighborhood, to discern abstract functionalities for developing new CP technology. We then created and evaluated NeighBoard (Figure 1), which aims to enhance the social fabric of communities by letting citizens implement their own strategies for preventing crime and maintaining safety in their neighborhood.
Co ed: A Classroom Setup for Enhancing Cooperative Learning and Digital Literacy
The current Indian education system promotes competition among students, placing heavy emphasis on ranks and marks. As a consequence, students tend to drift away from a collaborative mindset towards a competitive one. The learning gap is even larger in schools catering to lower-income groups, owing to the absence of digital infrastructure and digital knowledge. As a result, a large section of Indian students is cut off from a major source of knowledge, the internet, widening the gap between students in society. With co ed, we attempt to bridge these gaps by providing a platform that fosters mutual learning and seamlessly combines digital media with long-established pen and paper.
Drawxi: An Accessible Drawing Tool for Collaboration
Visual impairment can profoundly impact well-being and social advancement. Current solutions for accessing graphical information fail to provide an affordable, user-friendly collaborative platform for visually impaired and sighted people to work together. As a result, sighted users tend to have low expectations of visually impaired people when working in a team, visually impaired people feel discouraged from participating in mixed collaborative environments, and their generative capabilities remain devalued. In this paper, we propose an audio-haptic tool (Drawxi) for free-form sketching and sharing simple diagrams (processes, workflows, ideas, perspectives, etc.). It provides a common platform for visually impaired and sighted people to work together by communicating each other’s ideas visually, enabling the discovery of generative capabilities in a hands-on way. We relied on participatory research methods (contextual inquiry, co-design) involving visually impaired participants throughout the design process. We evaluated our proposed design through usability testing, which revealed that collaboration between visually impaired and sighted people benefits from the use of common tools and platforms, enhancing the degree of their participation in a collaborative environment and the quality of co-creation activities.
Shadoji: View Body Shapes from a Different Angle
Body shape anxiety is a common problem distressing people around the world. While people constantly suffer from body shape anxiety, they often do not realize that beauty standards are not absolute truths: they change and depend on time and place. To help free people from the constraints imposed by social beauty standards and to connect those who have concerns over their body shape, we devised a technological solution named “Shadoji,” which offers people a new perspective on their body shapes, gives them a chance to create a unique body shape emoji, and lets them explore the diverse body shapes of users around the world.
Give Me a Break: Design for Communication Among Family Caregivers and Respite Caregivers
This study focuses on solutions to issues that arise from gaps in communication between primary family caregivers of older adults and respite caregivers. We collected data through 18 semi-structured interviews with primary family and respite caregivers and qualitatively analyzed the interviews to extract common needs. Participants identified three main needs that our designs address: building trust through status updates, learning routines & care management, and accessing technology. Based on those needs, we designed a prototype of an application which connects primary family caregivers with respite caregivers and facilitates communication between the involved parties. This design can serve as a framework for future work designed to improve elder care in general, the well-being of caregivers, and the effectiveness of respite care.
Amor: Supporting Emerging Adult Couples to Manage Finances for a Common Goal
A healthy romantic relationship is a life goal for many couples. Sharing feelings, emotions, thoughts, activities, and even finances enables couples to spend quality time with each other. In particular, financial management plays an important role in the relationship quality of emerging adult couples. We present Amor, a mobile application that helps emerging adult couples pool and spend their money together towards a common goal. Amor also allows couples to explore activities that harmonize with their characteristics and encourages the two to achieve their desired goals.
SENTIŌ: Reconnect with Yourself to Better Connect with Others
Occidental societies are becoming increasingly demanding. Citizens often feel overwhelmed and feel the need to rest, which they often do by isolating themselves from others. Individuals who use this time alone to reflect on their situation find that this reflection helps create deeper connections with others afterwards. With this in mind, we developed Sentiō, a solution that encourages people to take time for themselves in the hope that it will lead them to be more open-minded when interacting with others. The solution helps people focus on bodily sensations and creates a comfortable and effortless time for introspection. Sentiō uses the subconscious benefits of perceiving one’s own heartbeat to foster personal reflection.
Haptic Remembrance Book Series
While having strong social networks is essential for good quality of life, socializing gets more difficult as we age even if we are surrounded by people in a nursing home setting. Care staff and family members want to be able to connect with residents/loved ones but have limitations (time constraints, unsure how), and it can be challenging to feel a connection to another resident who has differing abilities. We propose that through sharing strong positive memories, members of a nursing home community (residents, care staff, and families) will be able to build empathy for one another and thus strengthen their community bonds. By applying multi-modal technologies to a familiar medium, the book, we provided a means by which intergenerational users, from various backgrounds, could interact with and gain meaning from the content.
HelloBox: Creating Safer and Kid-friendly Communities
Children’s ability to walk to school or play around their neighborhoods without parental supervision has severely declined in the past decade. Loss of local activities and mobility may leave children less prepared for the transition to adulthood, make them less independent, and have negative health effects. In this paper, we discuss the findings of our research on children’s unsupervised play in their neighborhoods. We propose HelloBox, a system comprising an app, wearable RFID tags, and check-in stations that keeps track of children when they are outside without an adult. HelloBox lets parents know where their children are without restricting their independence and builds community between local families. The project focuses on improving children’s independent mobility and encouraging social interactions between families within neighborhoods.
HearU: An Integrated System to Raise Awareness of Hearing Loss
Hearing loss is a prevalent public health crisis that often goes unnoticed in the public eye because of its invisibility. People who are hard of hearing (HH) often experience social isolation caused by a lack of understanding from the hearing community. Our research revealed a communication gap between the hearing and HH communities. The current state of the art tends to focus on helping HH people participate in conversations, which places more responsibility on the HH community, while members of the hearing community take on little responsibility. However, communication works in two directions; it requires both parties to take responsibility. In this paper, we present HearU, an integrated, multichannel system that includes public interaction. Through an immersive experience that increases people’s awareness of hearing loss, we aim to use HearU as a medium to weave these two communities together as a whole.
PALS: Patching ALS through Crowdsourced Advice, Social Links & Public Awareness
Amyotrophic Lateral Sclerosis (ALS) is a serious and poorly understood disease, impacting 50,000 people a year globally. Our research found that people with ALS express a lack of connection with others who have the disease, and that the general public lacks awareness about ALS. We also identified an engagement problem with the currently available resources for connecting and supporting people with ALS. To address these issues, we introduce ‘PALS’, an accessible crowdsourcing and connection quilt, first hung like a tapestry in the ALS clinic and later used as an interactive public display. The quilt offers the opportunity to access crowdsourced information about individual experiences of ALS. Our work offers three primary contributions: 1) adding to the limited HCI research concerning the ALS community by establishing its needs, 2) applying the ‘PALS’ quilt design solution to these needs, and 3) combining three modalities, crowdsourcing, tangible tapestry displays, and interactive waiting-room education, in a unique way.
TechBuddies: Engaging Students to Teach Retirees about Technology
Retirees suffer from impaired mobility, loss of friends and family, and loneliness. Although these problems could be mitigated through the use of digital communication devices and the internet, many retirees lack the skill and confidence to use them effectively. The existing community initiatives that aim to help retirees understand technology often lack volunteers to teach them. This is why we developed TechBuddies, a two-component system aimed at engaging more students as volunteers. The first component consists of an interactive display that raises awareness about volunteering opportunities by simulating an interaction between a retiree and the potential volunteer. The second component involves an app-based platform that facilitates communication about events and encourages long-term engagement. TechBuddies raises awareness about and provides a platform for inter-generational interactions between students and retirees who are a part of the same local community.
SESSION: Student Research Competition
Can Changes in Heart Rate Variability Represented in Sound be Identified by Non-Medical Experts?
Heart rate variability (HRV) has become a widely studied measure for investigating the health and stress states of individuals. This paper explores the effectiveness of representing HRV measures with modalities other than visual displays, such as audio or haptics. We undertook a preliminary study in which we applied a parameter-mapping sonification approach to transform the HRV signal into an audible form, and we sought to evaluate human perception of the resulting auditory interface. A dataset of interbeat interval measurements of individuals experiencing changes in mental state during meditation was selected as the basis of the study. The HRV parameters of the dataset were mapped to acoustic features using a linear mapping technique. The feasibility of the system was assessed by measuring learnability, performance, latency, and confidence. The results suggest great potential for incorporating auditory displays in the analysis of HRV: participants were able to distinguish the different meditation states and types with minimal training time. However, further studies should be conducted with a larger population to verify the findings of this preliminary study.
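As a rough illustration of the parameter-mapping sonification idea described above, the sketch below (not the study's actual implementation; the interval-to-pitch ranges and tone length are assumptions) linearly maps each interbeat interval to the frequency of a short sine tone and renders the sequence to a WAV file.

```python
# Minimal illustrative sketch of parameter-mapping sonification (not the authors' code).
# Assumption: interbeat intervals (IBIs, in ms) are linearly mapped to pitch, so that
# shorter intervals (higher heart rate) produce higher tones.
import math
import struct
import wave

def linear_map(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale value from [in_lo, in_hi] to [out_lo, out_hi], clamping at the edges."""
    value = max(in_lo, min(in_hi, value))
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def ibi_to_frequency(ibi_ms, ibi_range=(600, 1200), freq_range=(880.0, 220.0)):
    """Map an interbeat interval to a tone frequency (shorter IBI -> higher pitch)."""
    return linear_map(ibi_ms, ibi_range[0], ibi_range[1], freq_range[0], freq_range[1])

def sonify(ibis_ms, path="hrv.wav", sample_rate=22050, tone_s=0.25):
    """Render one short sine tone per interbeat interval into a mono 16-bit WAV file."""
    frames = bytearray()
    for ibi in ibis_ms:
        freq = ibi_to_frequency(ibi)
        for n in range(int(sample_rate * tone_s)):
            sample = 0.5 * math.sin(2 * math.pi * freq * n / sample_rate)
            frames += struct.pack("<h", int(sample * 32767))
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(sample_rate)
        wav.writeframes(bytes(frames))

if __name__ == "__main__":
    # Example: beats drifting toward a slower, more meditative rhythm.
    sonify([720, 735, 750, 780, 820, 870, 900, 940])
```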
Muse: Scaffolding Metacognitive Reflection in Design-Based Research
Learning science research has explored approaches to improving students’ problem-solving skills by introducing tools that support metacognitive reflection, but this work has focused on problems with clear solutions. It has not addressed how metacognitive reflection can help students develop their self-regulation skills, which allow them to understand and control their learning environment through planning, practice, and self-evaluation. This paper presents Muse, an in-action chatbot interface that prompts students to reflect metacognitively on their self-direction process while working on independent research projects. Students participate in the Design, Technology, and Research (DTR) program, which gives undergraduates the opportunity to self-direct an independent research project using the socio-technological model Agile Research Studios (ARS). Results from a case study suggest that Muse helps students identify time-consuming habits and set aside less important tasks by giving them the opportunity to act on their reflections and adjust aspects of their process that they deem less effective.
Guidelines for Combining Storytelling and Gamification: Which Features Would Teenagers Desire to Have a More Enjoyable Museum Experience?
While museums are often designed to engage and interest a wide variety of audiences, teenagers are a neglected segment. This PhD research in Digital Media explores how digital technologies can help natural history museums create immersive museum experiences for teenagers (15-18 years old), especially through digital storytelling and gamification frameworks. The contribution will be a set of guidelines to aid in designing interactive experiences inside these museums. So far, we have involved a total of 155 teens in co-design sessions, 130 in focus groups, and 98 in usability studies, as well as 3 museums, 12 curators, and 17 master’s students. Through qualitative analysis, our preliminary findings suggest that teenagers value gamification and storytelling elements when thinking about enjoyable museum tours, while curators consider story-based narratives the most prominent method for providing an enjoyable museum experience for teens. Based on these findings, and in collaboration with the Madeira-ITI, two interactive mobile experiences targeted at teenagers were developed for the Natural History Museum of Funchal, Portugal.
Understanding the Occupational Therapists Method to Inform the Design of Technologies for People with Dementia
This paper describes a PhD research project involving people with dementia and practitioners who work primarily with people with dementia to support engagement in meaningful activities and activities of everyday living. The aim of this work is to develop a technology that adapts to the changing cognitive demands of people with dementia in order to facilitate continuous engagement in meaningful activities. In-depth, semi-structured interviews were conducted with practitioners to understand their methods for personalizing activities and the implications for the design of future adaptive technologies. Preliminary results from interviews with Occupational Therapists are presented.
Lake: A Digital Wizard of Oz Prototyping Tool
Mobile app designers aim to develop the best mobile software interfaces in the least amount of time, and rely on testing ideas with prototypes in lieu of building costly, fully functioning applications. Yet, designers cannot effectively prototype some complex app experiences, including augmented reality applications like Pokémon GO, because existing tools lack the needed features, or because prototyping in them is too time intensive to be feasible. To solve this problem, we introduce Lake, a mobile application prototyping tool that enables the creation of complex mobile applications with the same ease as paper prototyping. By leveraging the Wizard of Oz technique used in paper prototyping in our digital medium, we enable designers to prototype at the same low cost as paper, but at a much higher fidelity. Through a pilot study (N=6), we find that designers are able to gather organic in-context feedback from complex prototypes made with Lake.
Automatic Speech Recognition Services: Deaf and Hard-of-Hearing Usability
Speech is becoming a more common, if not standard, interface to technology, as seen in technology trends over recent years. Increasingly, voice is used to control programs, appliances, and personal devices within homes, cars, workplaces, and public spaces through smartphones and home assistant devices using Amazon’s Alexa, Google’s Assistant, Apple’s Siri, and other proliferating technologies. However, most speech interfaces are not accessible to Deaf and Hard-of-Hearing (DHH) people. In this paper, we evaluate the performance of current Automatic Speech Recognition (ASR) services on the voices of DHH speakers. ASR has improved over the years and can reach Word Error Rates (WER) as low as 5-6% [1][2][3], with the help of cloud computing and machine learning algorithms that take in custom vocabulary models. We use a custom vocabulary model and evaluate the significance of the improvement for DHH speech.
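For readers unfamiliar with the metric, the Word Error Rate cited above is the word-level edit distance between a reference transcript and the ASR hypothesis, normalized by the length of the reference. The sketch below is a minimal, generic implementation of the metric, not code from this paper or from any particular ASR service.

```python
# Minimal sketch of the Word Error Rate (WER) metric (not the authors' code).
# WER = (substitutions + deletions + insertions) / number of reference words,
# computed here with a standard Levenshtein alignment over word tokens.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = minimum edits turning the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i          # all deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j          # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1      # substitution cost
            dp[i][j] = min(dp[i - 1][j] + 1,                 # deletion
                           dp[i][j - 1] + 1,                 # insertion
                           dp[i - 1][j - 1] + cost)          # match or substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

if __name__ == "__main__":
    # One deletion ("on") and one substitution ("kitchen" -> "kitten") over 5 reference words: WER = 0.4
    print(word_error_rate("turn on the kitchen light", "turn the kitten light"))
```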
Affordable Smart Wheelchair
Power wheelchairs (PWs) are an example of an assistive technology: they are used to increase, maintain, or improve the functional capabilities of persons with disabilities [6]. As seen in [5] [3], commercially available products do not provide any assistance beyond enhanced mobility. Furthermore, existing PW research fails to comprehensively address individuals’ challenges, such as navigating through narrow passages or fixing broken wheelchairs; instead, it focuses mostly on novel interaction methods such as BCI, head, and gaze control [9] [15]. In this paper, we explore these individual needs and show that PWs have the potential to become smart wheelchairs at an affordable price. Our research follows the double diamond (DD) process [2] and relies on the participatory design (PD) methodology [7], which addresses all stakeholders involved. Namely, we consider individuals who use wheelchairs, the assistive technology research community, and the PW industry. For further insight we also contacted medical doctors, healthcare professionals, and non-profit organizations. We spent time getting to know these communities through interviews, surveys, demonstrations, and continuous user input, aligning our work with PD tools. We found that individuals using wheelchairs primarily desire safety, accessibility, and a durable design. Guided by these results, we designed a proof-of-concept (POC) system called the Affordable Smart Wheelchair (ASW) for indoor use. This kit implements full autonomy in the form of voice-controlled indoor navigation from one room to another and to predetermined docking locations. It also has semi-autonomous functions in the form of manual joystick control augmented with real-time collision avoidance and staircase detection.
Math Graphs for the Visually Impaired: Audio Presentation of Elements of Mathematical Graphs
The sense of sight plays a dominant role in learning mathematical graphs, and most visually impaired students drop out of mathematics because the necessary content is inaccessible. Sonification and auditory graphs have been the primary methods of representing data through sound; however, the representation of the mathematical elements of graphs is still unexplored. The experiments in this paper investigate optimal methods for representing mathematical elements of graphs with sound. The results indicate that the methods designed in this study are effective for describing mathematical elements of graphs, such as axes, quadrants, and differentiability. These findings can help visually impaired learners become more independent and can facilitate further studies on assistive technology.
I Am What You Eat: Effects of Social Influence on Meal Selection Online
The availability of mHealth technologies has increased exponentially, particularly fitness and calorie-tracking applications. Recent studies and anecdotal evidence have highlighted the potential of these technologies to reinforce unhealthy eating behavior because of their focus on self-monitoring and calorie counting. The current research investigates the potential of using social-orienting features of technology, specifically bandwagon and identity cues, to incentivize food-based nutrition (FBN) rather than a calorie-only approach. For this purpose, a 2 x 2 mixed factorial online experiment was conducted with bandwagon cue as a within-subject factor and identity cue as a between-subject factor. Results reveal that 67.6% of participants selected high-bandwagon-cue meals regardless of their nutritional value. Bandwagon perception was the only significant predictor of meal selection, indicating that a one-unit increase in bandwagon perception improved the odds of an individual choosing a high-bandwagon meal by 69%.
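To make the reported effect size concrete: in a logistic regression, a 69% increase in the odds per one-unit increase in a predictor corresponds to an odds ratio of 1.69. The worked equations below illustrate this reading; only the 69% figure comes from the study, the rest is standard logistic-regression algebra.

```latex
% Worked reading of the reported effect (illustrative; only the 69% figure is from the paper).
% In a logistic regression, the odds of the outcome are an exponential function of the predictors,
% and the odds ratio for a predictor is the exponential of its coefficient:
\[
\frac{\Pr(\text{high-bandwagon meal})}{1 - \Pr(\text{high-bandwagon meal})}
  = e^{\beta_0 + \beta_1 x_{\text{bandwagon}}},
\qquad
\mathrm{OR} = e^{\beta_1}.
\]
% A 69% increase in the odds per one-unit increase in bandwagon perception therefore corresponds to
\[
\mathrm{OR} = 1.69
\;\Longrightarrow\;
\beta_1 = \ln(1.69) \approx 0.525 .
\]
```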
Designing Compassion Cultivating Interactions for New Mothers
This paper describes a research project that aims to examine an unexplored design space: compassion cultivation for wellness. The project seeks to understand the components of compassionate interactions between new mothers and their proximate and primary supporters, their partners, in order to inform the design of compassion-cultivating interventions for maternal wellness. A discussion of research activities undertaken thus far and preliminary results are presented.
Family-Centered Exploration of the Benefits and Burdens of Digital Home Assistants
Parents receive conflicting information on the benefits and burdens of children’s technology use, especially of novel technologies such as digital home assistants. To understand parents’ views, we analyzed relevant Amazon Echo product reviews posted to Amazon.com and deployed a User Benefit and Burden Survey to 131 parents on Amazon Mechanical Turk to explore their perspectives on Amazon Echo digital home assistants. Our work explores parents’ perceptions of the devices with regard to their children and families, in terms of attributes such as benefits and burdens. This study contributes an empirical, family-centered understanding of, and design opportunities for, whole-home personal assistants in support of a diversity of families.
Privacy Therapy with Aretha: What If Your Firewall Could Talk?
The rapid adoption of smart home devices has brought with it a widespread lack of understanding amongst users over where their devices send data. Smart home ecosystems represent complex additions to existing wicked problems around network privacy and security in the home. This work presents the Aretha project, a device which combines the functionality of a firewall with the position of voice assistants as the hub of the smart home, and the sophistication of modern conversational voice interfaces. The result is a device which can engage users in conversation about network privacy and security, allowing for the forming and development of complex preferences that Aretha is then able to act upon.
ParaXplore: Demystifying the Exploration of Large Design Spaces
Generative design tools together with large-screen displays give designers an opportunity to explore large numbers of design alternatives. There are numerous design studies on exploring multiple simultaneous designs, but few present interface solutions and system features for such exploration. To the best of my knowledge, no study probes the exploration of a large design space with multiple simultaneous states. The premise of my research is that, if designers can work directly with large numbers of designs using new representations and tools as part of the design workflow, we should expect new patterns and strategies to emerge and change the design process. Such task environments may give rise to novel actions, task sequences, methods, and techniques. What new actions and techniques would enable working seamlessly with multiple designs? My research aims to answer this (and similar) questions and, more specifically, to uncover how designers augment their work through spatial structuring of the task environment to reduce the cognitive cost of working with multiple simultaneous designs on a large work surface. I conducted a lab experiment with nine designers. The results suggest design features for new front-end gallery interfaces for managing a large set of design variations while enabling simultaneous editing.
Sense of Familiarity: Improving Older Adults’ Adaptation to Exergames
Exergames have proven effective in helping older adults improve their physical and mental capabilities. However, older adults may fail to adapt to exergames because of the complexity of game tasks and interfaces. In this work, we show that familiarity can improve older adults’ adaptation to exergames. The results of our first study indicate that older adults with a higher level of familiarity with an exergame exhibit higher levels of motivation and ability. To maximize the effectiveness of exergames, it is therefore helpful to provide older adults with exergames they are more familiar with. To evaluate the familiarity level of exergames, we propose a novel familiarity model with five factors: prior experience, positive emotion, repeated time, level of processing, and retention rate. Results from our second study show that the five identified factors have significant positive correlations with familiarity and that there is a high positive correlation between familiarity levels and participants’ satisfaction with the exergames.
GPK: An Efficient Special Symbol Input Method for Keyboards
We introduce a novel typing technique for special symbols in keyboard-only environments. The technique, called GPK (Gliding on Physical Keyboard), consists of two steps for entering special symbols: first, the user draws the special symbol on the keyboard by gliding over keys; second, the user selects the desired symbol from the predictions generated by GPK. Users can also switch from this mode back to normal typing. We also present a web-browser-based application of this input technique. A user study with nine participants who are familiar with keyboard input demonstrated the input efficiency of GPK, and we compared its typing efficiency with that of other special-symbol typing methods. The method could be deployed in office environments where users have desktop computers with only a keyboard, and it could inspire future work integrating it into word processors, document preparation systems, and web environments.
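The abstract does not detail the prediction step, but the general idea of matching a glide path against per-symbol key templates can be sketched as follows; the symbol set, key templates, and similarity measure below are illustrative assumptions and are not the GPK implementation.

```python
# Illustrative sketch of glide-based symbol prediction (not the GPK implementation).
# Assumption: each special symbol is represented by a template sequence of keys that roughly
# traces its shape on a QWERTY layout; candidates are ranked by sequence similarity between
# the user's glide path and each template. All templates below are made up for illustration.
from difflib import SequenceMatcher

SYMBOL_TEMPLATES = {
    "→": ["a", "s", "d", "f", "g"],        # horizontal stroke to the right
    "↑": ["b", "g", "t"],                  # vertical stroke upward
    "√": ["z", "x", "d", "r"],             # down-up check shape
    "∑": ["t", "f", "v", "b", "g", "t"],   # zig-zag shape
}

def rank_symbols(glide_path, templates=SYMBOL_TEMPLATES, top_k=3):
    """Return the top_k candidate symbols for a glide path (a list of keys crossed)."""
    scored = []
    for symbol, template in templates.items():
        similarity = SequenceMatcher(None, glide_path, template).ratio()
        scored.append((similarity, symbol))
    scored.sort(reverse=True)
    return [symbol for _, symbol in scored[:top_k]]

if __name__ == "__main__":
    # A glide that mostly follows the home row should suggest the rightward arrow first.
    print(rank_symbols(["a", "s", "d", "g"]))
```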
SESSION: Video Showcase
Digital Fabrication of Soft Actuated Objects by Machine Knitting
With recent interest in shape-changing interfaces, material-driven design, wearable technologies, and soft robotics, digital fabrication of soft actuatable material is increasingly in demand. Much of this research focuses on elastomers or non-stretchy air bladders. Computationally controlled machine knitting offers an alternative fabrication technology that can rapidly produce soft textile objects with a very different character: breathable, lightweight, and pleasant to the touch. These machines are well established and optimized for the mass production of garments, but have received much less attention as general-purpose fabrication devices compared to other digital fabrication techniques such as CNC machining or 3D printing. In this work, we explore a series of design strategies for machine knitting actuated soft objects by integrating tendons with shaping and anisotropic texture design.
Gamified Ads: Bridging the Gap Between User Enjoyment and the Effectiveness of Online Ads
Often, online ads are disruptive and annoying. As a consequence, ad blockers are used to prevent ads from appearing on a website. However, web service providers lose more than 35 billion dollars per year because of this development. As an alternative, we investigate the user enjoyment and the advertising effectiveness of playfully deactivating online ads. This video showcase illustrates the research method and the most interesting results of our previous research. Here, we assessed the perception of eight game concepts allowing users to playfully deactivate ads and implemented three well-perceived ones. These were found to be more enjoyable than deactivating ads without game elements, with one game concept being even preferred over using an ad blocker. We also found positive effects on ad effectiveness as compared to the baseline.
Deep Reality: Towards Increasing Relaxation in VR by Subtly Changing Light, Sound and Movement Based on HR, EDA, and EEG
We present an interactive Virtual Reality (VR) experience that uses biometric information for reflection and relaxation. We monitor brain activity in real time using a modified version of the Muse EEG and track heart rate (HR) and electrodermal activity (EDA) using an Empatica E4 wristband. We use this data to procedurally generate 3D creatures and change the lighting of the environment so that an underwater audiovisual composition reflects the internal state of the viewer. The creatures are designed to unconsciously influence the body signals of the observer via subtle pulses of light, movement, and sound: we aim to decrease heart rate and respiration, and thereby increase relaxation, through almost imperceptible light flickering, sound pulsations, and slow movements of these creatures.
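A minimal sketch of the kind of biosignal-to-environment mapping described above is given below; the specific slowdown factor, EDA range, and brightness rule are assumptions made for illustration and are not taken from the authors' system.

```python
# Minimal sketch of a biofeedback mapping for a relaxing VR scene (not the authors' implementation).
# Assumption: to nudge the viewer toward relaxation, the creatures' light pulses are driven
# slightly slower than the measured heart rate, and scene brightness is scaled by arousal (EDA).
def creature_parameters(heart_rate_bpm, eda_microsiemens, slowdown=0.9,
                        eda_range=(0.5, 10.0)):
    """Return pulse frequency (Hz) and brightness (0..1) for the procedural creatures."""
    pulse_hz = (heart_rate_bpm / 60.0) * slowdown      # pulse a little slower than the heart
    lo, hi = eda_range
    arousal = min(max((eda_microsiemens - lo) / (hi - lo), 0.0), 1.0)
    brightness = 1.0 - 0.6 * arousal                   # calmer viewer -> brighter scene
    return pulse_hz, brightness

if __name__ == "__main__":
    # Example reading: 72 bpm and moderate arousal yields ~1.08 Hz pulses at reduced brightness.
    print(creature_parameters(heart_rate_bpm=72, eda_microsiemens=4.2))
```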
Hyper Sensorial — Human Computed Neurodivergent Poem
In this video artwork, the author looks at the interaction between neurodiversity and media design, focusing on his acoustic condition of hyperacusis as well as his neurodivergent condition of Asperger’s. The hyper-sensorial acoustic disorder brings many inner thoughts, and the author creates poems as an output, interrelating art and technology in the cognitive process and underlining a cross-connection between data and emotions.
ComunicArte: A Public Speaking Trainer in Virtual Reality
Project ComunicArte is a virtual reality videogame for training public speaking. It is built as an environment where the speaker confronts a virtual audience that reacts in real time to the speaker’s features, such as voice, gestures, and biometric parameters (heart rate or skin conductivity, among others). The novelty of this videogame is its focus on the audience: in real life, the only feedback we receive when we speak in public is that of our listeners, and from their reactions we can determine whether our communication is effective. For that purpose, the game includes a virtual audience, based on agents, that provides feedback to speakers in real time so that they can react and adapt their speech accordingly.
Connected Resources – Empowering Older People to Age Resourcefully
With this video, we showcase the possibilities of the Internet of Things and machine learning to support improvisation by older people. Available smart products for older people tend to be developed with narrowly predefined use scenarios, based on stereotypes of older people as passive, immobile, and technologically incompetent. Such products may restrict older people’s existing capabilities for resourcefulness and autonomy [1]. As an alternative approach, we created Connected Resources, a family of combinable sensors and actuators for older people that encourages them to negotiate and situate use according to their personal and changing circumstances with a high level of freedom. The objects learn older people’s unique ways of using these artifacts and share them through an online platform to encourage the development of new strategies. In this way, Connected Resources celebrates older people’s creativity, instead of providing normative solutions or enforcing compliance with certain behaviors.
AttentivU: A Biofeedback System for Real-time Monitoring and Improvement of Engagement
It is increasingly hard for adults and children alike to be attentive given the growing amount of information and distractions surrounding us. We have developed AttentivU: a device, in the socially acceptable form factor of a pair of glasses, that a person can put on in moments when they want or need to be attentive. The AttentivU glasses use electroencephalography (EEG) and electrooculography (EOG) sensors to measure a person’s attention in real time and provide either audio or haptic feedback when their attention is low, thereby nudging them to become engaged again. We have tested this device in workplace and classroom settings with over 80 subjects, running experiments with people studying or working by themselves, viewing online lectures, and listening to classroom lectures. The results show that our device makes a person more attentive and produces improved learning and work performance outcomes.
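The abstract does not specify how attention is estimated, but a threshold-based feedback loop of the kind described could look like the sketch below; the engagement index (beta power over alpha plus theta power), window size, and threshold are assumptions for illustration, not the AttentivU implementation.

```python
# Illustrative sketch of a threshold-based attention feedback loop (not the AttentivU implementation).
# Assumption: attention is approximated by the common EEG "engagement index" beta / (alpha + theta),
# smoothed over a sliding window; feedback fires when the smoothed index stays below a threshold.
from collections import deque

class AttentionMonitor:
    def __init__(self, threshold=0.6, window=20):
        self.threshold = threshold
        self.history = deque(maxlen=window)   # recent engagement samples

    def update(self, alpha_power, beta_power, theta_power):
        """Add one EEG sample; return True if an audio/haptic nudge should be delivered."""
        engagement = beta_power / (alpha_power + theta_power + 1e-9)
        self.history.append(engagement)
        if len(self.history) < self.history.maxlen:
            return False                      # not enough data yet to judge
        smoothed = sum(self.history) / len(self.history)
        return smoothed < self.threshold

if __name__ == "__main__":
    monitor = AttentionMonitor()
    for _ in range(25):
        # Simulated drowsy segment: low beta, high alpha/theta -> nudges fire once the window fills.
        if monitor.update(alpha_power=8.0, beta_power=3.0, theta_power=6.0):
            print("nudge the wearer")
```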
TrussFormer: 3D Printing Large Kinetic Structures
TrussFormer is an integrated end-to-end system that allows users to 3D print large-scale kinetic structures, i.e., structures that involve motion and deal with inertial forces. TrussFormer builds on TrussFab, from which it inherits the ability to create static large-scale truss structures from 3D-printed connectors and PET bottles. TrussFormer adds movement to these structures by placing linear actuators into them: either manually, wrapped in reusable components called assets, or by demonstrating the intended movement. TrussFormer verifies that the resulting structure is mechanically sound and will withstand the dynamic forces resulting from the motion. To fabricate the design, TrussFormer generates the underlying hinge system, which can be printed on standard desktop 3D printers. We demonstrate TrussFormer with several example objects, including a six-legged walking robot and a 4 m tall animatronic dinosaur with 5 degrees of freedom.
Plunder Planet: An Adaptive Fitness Game Setup for Children
Childhood obesity is one of the greatest public health challenges of the 21st century, although it could easily be prevented by regular physical activity. Exergames have been applauded for their potential to counteract this tendency in a playful and motivating manner. However, current solutions often lack a user-centered, interdisciplinary design approach covering both the physical and the virtual design levels; consequently, the motivational factors, attractiveness, and effectiveness of these fitness games remain limited. To contribute to this topic, we, a team of sports scientists and game designers, developed the adaptive exergame Plunder Planet (Fig. 1) with and for children and young adolescents [1]. It can be played as a single-player or cooperative multiplayer game [2] with two different motion-based controllers that require either haptic or gesture-based input movements (Fig. 2) and trigger different (social) gameplay experiences. Based on the player’s heart rate and in-game performance, the game’s difficulty and complexity are continuously adjusted to provide a maximally attractive and effective experience.
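A minimal sketch of heart-rate- and performance-driven difficulty adjustment of the kind described above is given below; the target heart-rate zone, hit-rate thresholds, and step sizes are illustrative assumptions rather than Plunder Planet's actual tuning.

```python
# Minimal sketch of adaptive exergame difficulty (not the Plunder Planet implementation).
# Assumption: difficulty rises when the player is under-challenged (low heart rate, high hit rate)
# and drops when the player is over-challenged or exhausted.
def adjust_difficulty(current_level, heart_rate_bpm, hit_rate,
                      hr_zone=(120, 150), max_level=10):
    """Return the next difficulty level, steering the player toward the target heart-rate zone."""
    level = current_level
    lo, hi = hr_zone
    if heart_rate_bpm < lo and hit_rate > 0.8:
        level += 1        # under-challenged: raise obstacle density / speed
    elif heart_rate_bpm > hi or hit_rate < 0.4:
        level -= 1        # over-challenged or exhausted: ease off
    return max(1, min(max_level, level))

if __name__ == "__main__":
    # A skilled player below the target zone gets bumped from level 4 to level 5.
    print(adjust_difficulty(current_level=4, heart_rate_bpm=112, hit_rate=0.9))
```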
Slappyfications: Towards Ubiquitous Physical and Embodied Notifications
Despite emerging trends of notifying people through ubiquitous technologies such as ambient light, vibrotactile, or auditory cues, none of these technologies is truly ubiquitous, and they have proven easy to miss or ignore. In this work, we propose Slappyfications, a novel way of sending unmissable embodied and ubiquitous notifications through a palm-based interface. Our prototype enables users to send three types of Slappyfications: poke, slap, and the STEAM-HAMMER. Through a Wizard-of-Oz study, we show the applicability of our system in real-world scenarios. The results reveal a promising trend, as none of the participants missed a single Slappyfication.
Creating Worlds: Exploring Animated Videos as a Tool for Contextualization in User Research
Research indicates that personal adoption of emerging ubicomp technologies is notoriously hampered by a variety of critical issues including trust, privacy, and security. Issues such as these cannot be studied and understood by evaluating computer systems in isolation, but rather by taking a ‘big picture’ approach and examining their synergy with the broader social context. Traditional low-fidelity prototyping methods, such as interface mockups, are however poorly equipped to convey such broader settings. Video-based scenarios, on the other hand, are uniquely qualified to portray rich socio-technical ecosystems. By creating a set of provocative video scenarios that contextualize and provide a backdrop for prospective technologies, we seek to draw attention to the potentially important role that worldbuilding strategies might play in the future of low-fidelity prototyping.
Feeling Fireworks: An Inclusive Tactile Firework Display
Fireworks are enjoyed throughout the world, but are primarily a visual experience. To make fireworks more inclusive for Blind and Low-Vision (BLV) persons, we have developed a large-scale interactive tactile display that produces tactile fireworks. Fast dynamic tactile effects are created at high spatial resolution, using directable nozzles that spray water jets onto the rear of a flexible screen. The screen furthermore has back-projected visual content and touch interaction.
A BLV focus group provided input during the development process, and a user study with BLV users showed that Feeling Fireworks is an enjoyable and meaningful experience. Quotes from blind users include “First time to get the feeling of what’s happening in the sky. Fountain – awesome, first time had a feeling of that is what is a fountain.”, “My mom always told me about fireworks but now I understand it.” and “Now I know why people like fireworks.”
Beyond the Feeling Fireworks application, this is a novel approach for scalable tactile displays with potential for broader use.
Cyborg Botany: Augmented Plants as Sensors, Displays and Actuators
Nature has myriad plant organisms, many of which carry unique sensing and expression abilities. Plants can sense the environment and other living entities, and can regenerate, actuate, or grow in response. Our interaction mechanisms and communication channels with such organisms in nature are subtle, unlike our interactions with digital devices. We propose a new convergent view of interaction design in nature by merging and powering our electronic functionalities with the existing biological functions of plants.
Cyborg Botany is a design exploration of deep technological integration within plants. Each desired synthetic function is grown, injected, or carefully placed in conjunction with a plant’s natural functions. With a nanowire grown inside the xylem of a plant [1, 2, 3, 4], we demonstrate its use as a touch sensor, motion sensor, antenna, and more. We also demonstrate software through which a user clicks on a plant’s leaves to individually control their movement [6], and we explore the use of plants as a display [5]. Our goal is to make use of plants’ own sensing and expressive abilities for our interaction devices. Merging synthetic circuitry with a plant’s own physiology could pave the way to making these lifeforms responsive to our interactions and to their ubiquitous, sustainable deployment.
Painting with CATS: Camera-Aided Texture Synthesis
We present CATS, a digital painting system that synthesizes textures from live video in real time, short-cutting the typical brush- and texture-gathering workflow. Through the use of boundary-aware texture synthesis, CATS produces strokes that are non-repeating and blend smoothly with each other. This allows CATS to produce paintings that would be difficult to create with traditional art supplies or existing software. We evaluated the effectiveness of CATS by asking artists to integrate the tool into their creative practice for two weeks; their paintings and feedback demonstrate that CATS is an expressive tool for creating richly textured paintings.
Mobeybou – A Digital Manipulative for Multicultural Narrative Creation
Mobeybou is a digital manipulative (DM) that uses physical blocks to interact with digital content. It aims to create an environment for promoting the development of language and narrative competences, as well as digital literacy, among preschool and primary school children. It offers a variety of characters, objects, and landscapes from various cultures around the world and can be used to create multicultural narratives. An interactive app developed for each country provides additional cultural and geographical information about each represented culture.
4D Printing A-line
In this video, we showcase “A-line”, a 4D printing system for designing and fabricating morphing three-dimensional shapes out of simple linear elements. A-line integrates a method of bending-angle control in up to eight directions for a single printed line segment, using one type of thermoplastic material. A software platform supports the design, simulation, and tool path generation of various A-line structures. The video showcases the design space of A-line, including the unique properties of line sculpting, its suitability for compliant mechanisms, and its ability to travel through narrow spaces and self-deploy or self-lock on site.
Showcasing ElectroDermis
In this video, we showcase “ElectroDermis”, a fabrication approach that simplifies the creation of highly functional, stretchable wearable electronics that are conformal and fully untethered. It discretizes rigid circuit boards into individual components, which are wired together with stretchable electrical wiring and assembled on a spandex-blend fabric to provide high functionality in a robust, reusable form factor. We describe a series of example applications that illustrate the feasibility and utility of our system.
SESSION: Symposia
Symposium: WISH – Workgroup on Interactive Systems in Healthcare
The Workgroup on Interactive Systems in Healthcare (WISH) brings together industry and academic researchers in human-computer interaction, biomedical informatics, health informatics, mobile health, and other disciplines to develop a cross-disciplinary research agenda that will drive future innovations in health care. We propose a Symposium at CHI 2019 to host WISH with the goal of facilitating a common space to share and discuss methods, study designs, and dissemination in a collaborative fashion. The symposium also aims to actively provide mentoring opportunities to junior and new health technology researchers – from undergraduates to mid-career researchers who want to focus on interactive systems in Healthcare. This will be the eighth WISH meeting in a series of successful workshops bringing together different research communities and practitioners around challenges of designing, implementing, and evaluating interactive health technologies.
HCI Across Borders and Intersections
The HCI Across Borders (HCIxB) community has been growing in recent years, thanks in particular to the Development Consortium at CHI 2016 and the HCIxB Symposia at CHI 2017 and 2018. This year, we propose an HCIxB symposium that continues to build scholarship potential of early career HCIxB researchers, strengthening ties between more and less experienced members of the community. We especially invite scholarship with a focus on intersections, examining and/or addressing multiple forms of marginalization (e.g. race, gender, class, among others).
EduCHI 2019 Symposium: Global Perspectives on HCI Education
At CHI 2018, a workshop on developing a community of practice to support global HCI education was held, building on six years of research and collaboration in the area of HCI education. Many themes emerged from the workshop activities and discussions. Two particularly stood out: creating channels for discussions related to HCI education and providing a platform for sharing HCI curricula and teaching experiences. To that end, we are organizing a CHI 2019 symposium dedicated exclusively to HCI education: EduCHI 2019: Global Perspectives on HCI Education. The symposium will focus on the canons of HCI education in 2019 and beyond. It will offer a venue for HCI educators across disciplinary and geographical borders to discuss, dissect, and debate HCI teaching and learning. Through keynote addresses, paper presentations, and a panel discussion, we aim to discuss current and future HCI education trends, curricula, pedagogies, teaching practices, and diverse and inclusive HCI education. Post-symposium initiatives will aim to document and publish the discussions from the symposium.
CHI 2019 Early Career Development Symposium
The first few years after completing a PhD can be challenging to navigate. Job hunting, interviewing, navigating new contexts such as a junior academic position, applying for funding as a first time project investigator, learning to adapt to the culture of an industry-based workplace, supervising graduate students or full-time employees – these are just a few of the scenarios recent PhD graduates find themselves in. Within HCI, one may encounter more discipline-specific challenges, such as keeping up with the CHI publication cycles while taking on new administrative duties. The CHI community, however, strives to be collectively supportive and inclusive of researchers at all stages of their career – this is even more important as many of our design approaches are rooted in empathy for and empowerment of participants. By more actively supporting each other as researchers in our career paths, we can better grow as a community, and reflect it back into our collective body of practice. The Early Career Development Symposium has been proposed (and held yearly since 2016) to provide a more formal mentoring venue that reflects our aims as a community to more meaningfully support each other.
4th Symposium on Computing and Mental Health: Designing Ethical eMental Health Services
With psychiatric conditions like depression now the leading global cause of disability, the need for innovative solutions is apparent. The promise of mental health care delivered through technology (eMentalHealth) to provide personalized care offers a promising solution that has galvanized interest worldwide. However, in order to ensure that eMentalHealth is scalable and sustainable, service delivery and health and social care policies need to be integrated into the design of technology interventions. This will require new forms of interdisciplinary collaboration that we hope to foster in this symposium. Thus, in this fourth in our series of Symposia on Computing and Mental Health, the focus will be on the intersection of the communities innovating in this space: patients, designers, data scientists, clinicians, researchers, computer scientists, developers, and entrepreneurs, guided by core medical ethical principles including respect for persons, beneficence, and justice. Our aim is thus to enable the vision of better mental health through ethical innovation powered by the right collaborations.
Asian CHI Symposium: Emerging HCI Research Collection
This symposium showcases the latest work from Asia on interactive systems and user interfaces that address under-explored problems and demonstrate unique approaches. In addition to circulating ideas and sharing a vision of future research in human-computer interaction, this symposium aims to foster social networks among academics (researchers and students) and practitioners and to create a fresh research community in the Asian region.
SESSION: Workshops
iHDI: International Workshop on Human-Drone Interaction
Commercial drones have recently been developed to encompass use cases beyond aerial photography and videography. Researchers have explored wider applications of drones, including using drones as social companions, as key components in virtual environments, as assistive tools for people with disabilities, and even as sport companions. However, the uptake of research in Human-Drone Interaction (HDI) has also brought forth a plethora of challenges that are unique to this platform. While drones were initially considered flying robots, recent work has shown that traditional Human-Robot Interaction (HRI) methodologies cannot simply be applied to HDI. For example, how do we deal with privacy and safety concerns associated with drones in public space? What is the appropriate methodology to evaluate HDI applications? How do the size, altitude, and speed of drones influence how they are perceived? The aim of this workshop is to bring together researchers and practitioners from both academia and industry to identify: 1) novel HDI applications, and 2) key challenges in this area to drive research in the coming decade. The long-term goal is to create a strong interdisciplinary research community that includes researchers and practitioners from HCI, HRI, Ubiquitous Computing, Interaction Techniques, User Privacy, and Design.
With an Eye to the Future: HCI Practice and Research in the Arab World
The ArabHCI initiative was inaugurated at a CHI'17 SIG meeting that brought together more than 45 Arab and non-Arab HCI researchers and practitioners who conduct, or are interested in, HCI within Arab communities. The meeting started an ongoing dialogue that recognizes that HCI is still in its infancy in the Arab world and explores challenges and opportunities for shaping the future of HCI in the region. It was followed by three successful meetings at SIGCHI-sponsored events that included general discussions about the state of HCI research in Arab countries and a thematic discussion on exploring participatory design practices with Arab communities. In this workshop, we build on the momentum generated by our previous meetings and attempt to draw a roadmap for HCI research and practice in the Arab world. Our goal is to bring together researchers and practitioners to discuss case studies from their own work, share experiences and lessons learned, and envision the future of the field in this area. We plan to share the results of our discussions and research agenda with the wider CHI community through various social and scholarly channels.
Unpacking the Infrastructuring Work of Patients and Caregivers around the World
Many healthcare systems around the world are noted as fragmented, complex, and low-quality, leaving patients and caregivers with no choice but to engage in “infrastructuring work” to make them function. However, the work patients do to construct functioning health service systems often remains invisible, with very few resources and technologies available to assist them. This workshop aims to bring together researchers, health practitioners, and patients to examine, discuss, and brainstorm ways to re-envision our healthcare service systems from the perspective of patients and caregivers. We aim to unpack the types of work that patients and caregivers do to reconfigure, reconstruct, and adapt the healthcare infrastructure, and to brainstorm design solutions that can provide better infrastructuring assistance to them.
HCI in China: Research Agenda, Education Curriculum, Industry Partnership, and Communities Building
Human-Computer Interaction (HCI) is experiencing explosive growth in both Chinese industry and academia. We propose to organize an international workshop to coordinate and unite these ongoing efforts and to facilitate collaboration between the local community and the global HCI community. This extended abstract describes the background, goals, organizers, themes, and plans of the proposed workshop.
Technology to Mediate Role Conflict in Motherhood
Today, new mothers are experiencing parenthood differently. Digital resources can provide a wealth of information, present opportunities for socialising, and even assist in tracking a baby’s development. However, women are often juggling the role of motherhood with other commitments, such as work. The aim of this workshop is to understand the digital support needs and practices during parenthood from the perspective of employed mothers. We are interested in exploring the ways that women utilise the technologies which have been designed to support mothers, and specifically, the importance of work-life balance and the various roles that mothers play. There is a need to better understand and identify which technologies are being used to support working women through their motherhood journey, and ensure a healthy transition to support women’s changing identities.
DC2S2: Designing Crowd-powered Creativity Support Systems
Supporting creativity has been considered one of the grand challenges of Human-Computer Interaction. Creativity lies within people, and crowdsourcing is a powerful approach for tapping into the collective insights of diverse crowds; it therefore has great potential to support creativity. In this workshop, we brainstorm new crowdsourcing systems and concepts for supporting creativity by bringing together researchers and industry professionals for a full day. The workshop consists of discussions of ideas contributed by the participants and hands-on group brainstorming sessions for ideating new crowd-powered systems that support creativity. We center the workshop around two themes: supporting the individual and facilitating creativity in groups.
HCI and Aging: Beyond Accessibility
Despite improvements in the accessibility of digital technologies and growing numbers of tools designed specifically for older adults, adoption of such tools remains low for this demographic. This workshop aims to explore the contextual factors that contribute to reduced uptake among older adults in order to understand how to design digital technologies that will be appealing to and work for them, fitting with recent calls for more holistic approaches to designing for older adults. Going beyond standard accessibility considerations, and aiming to inform design of technologies for the general population rather than the design of senior-friendly variants of such tools, we will generate a set of principles for developing tools that older adults can and will use.
Looking into the Future: Weaving the Threads of Vehicle Automation
Automated driving is one of the most discussed disruptive technologies of this decade. It promises to increase drivers’ safety and comfort, improve traffic flow, and lower fuel consumption. This has a significant impact on our everyday life and mobility behavior. Beyond the passengers of the vehicle, it also impacts others, for example by lowering the barriers to visit distant relatives. In line with the CHI2019 conference theme, our aim is to weave the threads of vehicle automation by gathering people from different disciplines, cultures, sectors, communities, and backgrounds (designers, researchers, and practitioners) in one community to look into concrete future scenarios of driving automation and its impact on HCI research and practice. Using design fiction, we will look into the future and use this fiction to guide discussions on how automated driving can be made a technology that works for people and society.
Where is the Human?: Bridging the Gap Between AI and HCI
In recent years, AI systems have become both more powerful and increasingly promising for integration in a variety of application areas. Attention has also been called to the social challenges these systems bring, particularly in how they might fail or even actively disadvantage marginalised social groups, or how their opacity might make them difficult to oversee and challenge. In the context of these and other challenges, the roles of humans working in tandem with these systems will be important, yet the HCI community has been only a quiet voice in these debates to date. This workshop aims to catalyse and crystallise an agenda around HCI’s engagement with AI systems. Topics of interest include explainable and explorable AI; documentation and review; integrating artificial and human intelligence; collaborative decision making; AI/ML in HCI Design; diverse human roles and relationships in AI systems; and critical views of AI.
Hacking Blind Navigation
Independent navigation in unfamiliar and complex environments is a major challenge for blind people. This challenge motivates a multi-disciplinary effort in the CHI community aimed at developing assistive technologies to support the orientation and mobility of blind people, including related disciplines such as accessible computing, cognitive sciences, computer vision, and ubiquitous computing. This workshop intends to bring these communities together to increase awareness of recent advances in blind navigation assistive technologies, benefit from diverse perspectives and expertise, discuss open research challenges, and explore avenues for multi-disciplinary collaborations. Interactions are fostered through a panel on Open Challenges and Avenues for Interdisciplinary Collaboration, Minute-Madness presentations, and a Hands-On Session where workshop participants can hack (design or prototype) new solutions to tackle open research challenges. An expected outcome is the emergence of new collaborations and research directions that can result in novel assistive technologies to support independent blind navigation.
Emerging Perspectives in Human-Centered Machine Learning
Current Machine Learning (ML) models can make predictions that are as good as or better than those made by people. The rapid adoption of this technology puts it at the forefront of systems that impact the lives of many, yet the consequences of this adoption are not fully understood. Therefore, work at the intersection of people’s needs and ML systems is more relevant than ever. This area of work, dubbed Human-Centered Machine Learning (HCML), re-thinks ML research and systems in terms of human goals. HCML gathers an interdisciplinary group of HCI and ML practitioners, each bringing their unique yet related perspectives. This one-day workshop is a successor to the Gillies et al. 2016 CHI workshop and focuses on recent advancements and emerging areas in HCML. We aim to discuss different perspectives on these areas and articulate a coordinated research agenda for the 21st century.
The Challenges of Working on Social Robots that Collaborate with People
Advances in robotics offer exciting opportunities for robots to become socially collaborative technologies. But are we ready for this, and are robots capable of enabling a good level of interaction and user experience? How can the CHI community work with the Human-Robot Interaction (HRI) community to share best practices and methods in order to continue to advance research that crosses methodological and cultural boundaries between HRI and HCI? This workshop will bring together key researchers working in and across both HCI and HRI to share existing challenges and opportunities to advance the field of Socially Collaborative Robotics. We will share our recent research experiences and practices in order to build capacity at the intersection of HCI and HRI.
Doing Things with Research through Design: With What, with Whom, and Towards What Ends?
This workshop provides a venue within CHI for research through design (RtD) practitioners to present their work and discuss how, with whom, and why it is used. Building on the success of prior RtD and design research workshops at CHI, this workshop will focus on how RtD artifacts are used, with the goal of connecting diverse works with broader methodologies in HCI and Design.
Standing on the Shoulders of Giants: Exploring the Intersection of Philosophy and HCI
The aim of this one-day workshop is to provide a forum for HCI researchers to discuss a wide range of issues at the intersection of philosophy and HCI. The participants will reflect on how philosophy influenced the development of HCI in the past, how philosophical insights are being utilized in current HCI research, and how philosophy can help HCI identify and address the emerging challenges facing the field. The main objectives of the workshop are to bring together HCI researchers interested in philosophy and produce an agenda for future research bringing HCI and philosophy closer together.
Human-Centered Study of Data Science Work Practices
With the rise of big data, there has been an increasing need to understand who is working in data science and how they are doing their work. HCI and CSCW researchers have begun to examine these questions. In this workshop, we invite researchers to share their observations, experiences, hypotheses, and insights, in the hopes of developing a taxonomy of work practices and open issues in the behavioral and social study of data science and data science workers.
Troubling Innovation: Craft and Computing Across Boundaries
Craft practices such as needlework, ceramics, and woodworking have long informed and broadened the scope of HCI research. Whether through sewable microcontrollers or programs of small-scale production, they have helped widen the range of people and work recognised as technological and innovative. However, despite this promise, few organisational resources have successfully drawn together the disparate threads of scholarship and practice attending to HCI craft. In this workshop, we propose to gather a globally distributed group of craft contributors whose work reflects crucial but under-valued HCI positions, practices, and pedagogies. Through historically and politically engaged work, we seek to build community across boundaries and meaningfully broaden what constitutes innovation in HCI to date.
Designing for Digital Wellbeing: A Research & Practice Agenda
Traditionally, many consumer-focused technologies have been designed to maximize user engagement with their products and services. More recently, many technology companies have begun to introduce digital wellbeing features, such as for managing time spent and for encouraging breaks in use. These are in the context of, and likely in response to, renewed concerns in the media about technology dependency and even addiction. The promotion of technology abstinence is also increasingly widespread, e.g., via digital detoxes. Given that digital technologies are an important and valuable feature of many people’s lives, digital wellbeing features are arguably preferable to abstinence. However, how these are defined and designed is something that needs to be explored further. In this one-day workshop we welcome both industry and academic participants to discuss what digital wellbeing means, who is responsible for it, and whether and how we should design for it going forward.
Designing for Outdoor Play
There is widespread societal concern regarding the reduction in the amount of time that we all spend playing outdoors. Outdoor play can be important for our social and physical well-being and moreover helps us to connect to space, place and environment. Of course, the CHI community continues to explore play across many contexts; however, specifically designing for outdoor play remains underexplored. This workshop aims to bring together those who are interested in technological, social and design aspects of outdoor play for all ages. We will use participants’ insights, energies and expertise to explore the challenges and focus on how we can build a community to share innovative designs, generate knowledge and make actionable research in this context.
Challenges Using Head-Mounted Displays in Shared and Social Spaces
Everyday mobile usage of AR and VR Head-Mounted Displays (HMDs) is becoming a feasible consumer reality. The current research agenda for HMDs has a strong focus on technological impediments (e.g. latency, field of view, locomotion, tracking, input) as well as perceptual aspects (e.g. distance compression, vergence-accommodation). However, this ignores significant challenges in the usage and acceptability of HMDs in shared, social and public spaces. This workshop will explore these key challenges of HMD usage in shared, social contexts; methods for tackling the virtual isolation of the VR/AR user and the exclusion of collocated others; the design of shared experiences in shared spaces; and the ethical implications of appropriating the environment and those within it.
Mapping Theoretical and Methodological Perspectives for Understanding Speech Interface Interactions
The use of speech as an interaction modality has grown considerably through the integration of Intelligent Personal Assistants (IPAs, e.g. Siri, Google Assistant) into smartphones and voice-based devices (e.g. Amazon Echo). However, there remain significant gaps in using theoretical frameworks to understand user behaviours and choices and how they may be applied to specific speech interface interactions. This part-day multidisciplinary workshop aims to critically map out and evaluate theoretical frameworks and methodological approaches across a number of disciplines and establish directions for new paradigms in understanding speech interface user behaviour. In doing so, we will bring together participants from HCI and other speech-related domains to establish a cohesive, diverse and collaborative community of researchers from academia and industry with an interest in exploring theoretical and methodological issues in the field.
Interaction Design & Prototyping for Immersive Analytics
Immersive Analytics is concerned with the design and evaluation of interactive next-generation interfaces that support human understanding, data analysis, and decision making. New immersive technologies present many opportunities for enhancing humans’ experiences with data interaction, but also present many challenges, a subset of which are specific to the analytics domain. This workshop is centered around a set of group prototyping sessions, aimed at identifying new approaches to existing design challenges. In addition to giving perspective on opportunities and difficulties faced by future designers, these exercises will also explore new prototyping methods and tools for the design of interactive data-centric interfaces. This part-day workshop aims to build new ties between the existing immersive analytics community and researchers across many disciplines of the CHI community.
HCI for Accurate, Impartial and Transparent Journalism: Challenges and Solutions
While new media technologies hold the potential to serve journalism’s dual goals of informing and engaging the public, these technologies also challenge the journalistic norms of accuracy, impartiality and transparency. The key question in this workshop is: How can HCI support accurate, impartial and transparent journalism? This question is ever more timely as the need for accurate and credible journalism is growing amid the proliferation of disinformation and opinion manipulation. In this workshop, we will identify challenges and solutions in the design of user interfaces, user experiences and production processes in journalism. We bring together researchers and practitioners designing, deploying and studying new technologies in journalism. The goal of the workshop is to harness the potential of HCI for supporting accurate, impartial and transparent journalism.
All the World (Wide Web)’s a Stage: A Workshop on Live Streaming
With the rise of live streaming and esports in recent years, it becomes increasingly important for the HCI community to understand this phenomenon. The organizers encourage people to submit papers with novel interfaces they wish to explore in a live streaming context. In this workshop, participants will discuss different facets of the live streaming experience and obtain a greater understanding of the culture that exists on streaming platforms like Twitch. They will then participate in a design exercise, forming groups and iterating on a design together. Discussion/design topics will include ideas encouraging audience participation, moderation of toxicity, and other such topics. Participants will leave the workshop with ideas about how they can better design games and other experiences taking the live streaming ecosystem into consideration.
Towards a Responsible Innovation Agenda for HCI
In recent years, responsible innovation has gained significant traction and can be seen to adorn a myriad of research platforms, education programs, and policy frameworks. In this workshop, we invite HCI researchers to discuss the relations between the CHI community and responsible innovation. The workshop aims to build provocations and principles for and with HCI researchers who are, or wish to become, responsible innovators. It will do this by asking attendees to think about the social, environmental, and economic impacts of ICT and HCI and to explore how research innovation frameworks speak to responsible HCI innovation. Through the workshop we will examine five questions to develop a set of provocations and principles, which will help encourage HCI and computer science researchers, educators, and innovators to reflect on the impact of their research and innovation.
Everyday Automation Experience: Non-Expert Users Encountering Ubiquitous Automated Systems
Automated systems and their interfaces are increasingly merging with our ambient environment, leading to a heightened impact on our everyday leisure and work experiences. While automation systems have been a realm for highly specialized tasks and trained experts until recently, now more and more non-expert users encounter automated systems in their everyday life. The deployment of these systems fundamentally changes practices and experiences in various domains. The overall goal of this workshop is to investigate the requirements and design criteria for automation that are experienced in everyday situations. In particular we will strive to come up with a set of principles for three key areas of everyday automation experience: intelligibility, experienced control, and capturing automation experience. This way, the workshop provides a first forum for knowledge exchange and networking across usage domains and contexts.
Computational Modeling in Human-Computer Interaction
We propose a workshop on the rapidly emerging topic of Computational Modeling in HCI to address the challenges posed by the increasing complexity of the human behaviors we are able to track and collect today. The goal of this workshop is to reconcile two seemingly competing approaches to computational modeling: theoretical modeling, which seeks to explain behaviors, vs. algorithmic modeling, which seeks to predict behaviors. The workshop will address: 1) convergence of the two approaches at the intersection of HCI, 2) updates to theoretical and methodological foundations, 3) bringing disparate modeling communities to CHI, and 4) sharing datasets, code, and best practices. This workshop seeks to establish Computational Modeling as a theoretical foundation for work in Human-Computer Interaction (HCI) to model the human accurately across domains and support design, optimization, and evaluation of user interfaces to solve a variety of human-centered problems.
CHInclusion: Working Toward a More Inclusive HCI Community
HCI has a growing body of work regarding important social and community issues, as well as various grassroots communities working to make CHI more international and inclusive. In this workshop, we will build on this work: first reflecting on the contemporary CHI climate, and then developing an actionable plan towards making CHI2019 and subsequent SIGCHI events and sister conferences more inclusive for all.
The Future of Work
We invite scholars, designers, developers, policymakers and provocateurs to explore non-standard, global and virtual work futures, to reflect on the impact of new sites and temporal patterns of work, and to consider emerging interpersonal and person-machine dynamics within work. We will frame these discussions with a consideration of the relationship between the future of work and existing modes of labor and political economy, with a view to identifying possibilities for both technological innovation and systemic change.
CHI4EVIL: Creative Speculation on the Negative Impacts of HCI Research
The HCI community is experiencing a resurgence of interest in the ethical, social, and political dimensions of HCI research and practice. Despite increased attention to these issues, it is not always clear that our community has the tools or training to adequately think through some of the complex issues that these commitments raise. In this workshop, we will explore the creative use of HCI methods and concepts such as design fiction or speculative design to help anticipate and reflect on the potential downsides of our technology design, research, and implementation. How can these tools help us to critique some of the assumptions, metaphors, and patterns that drive our field forward? Can we, by intentionally adopting the personas of would-be evil-doers, learn something about how better to accomplish HCI for Good?
Addressing the Challenges of Situationally-Induced Impairments and Disabilities in Mobile Interaction
Situationally-induced impairments and disabilities (SIIDs) make it difficult for users of interactive computing systems to perform tasks due to context (e.g., listening to a phone call when in a noisy crowd) rather than as a result of a congenital or acquired impairment (e.g., hearing damage). SIIDs are a great concern given the ubiquitousness of technology across a wide range of contexts. Considering our daily reliance on technology, and mobile technology in particular, it is increasingly important that we fully understand and model how SIIDs occur. Similarly, we must identify appropriate methods for sensing and adapting technology to reduce the effects of SIIDs. In this workshop, we will bring together researchers working on understanding, sensing, modelling, and adapting technologies to ameliorate the effects of SIIDs. This workshop will provide a venue to identify existing research gaps, new directions for future research, and opportunities for future collaboration.
Mid-Air Haptic Interfaces for Interactive Digital Signage and Kiosks
Digital signage systems are transitioning from static displays to rich, dynamic interactive experiences while new enabling technologies that support these interactions are also evolving. For instance, advances in computer vision and face, gaze, facial expression, body and hand-gesture recognition have enabled new ways of distal interactivity with digital content. Such possibilities are only just being adopted by advertisers and retailers, yet they face important challenges, e.g. the lack of a commonly accepted gesture, facial-expression, or call-to-action set. Another common issue is the absence of active tactile stimuli. Mid-air haptic interfaces can help alleviate these problems and aid in defining a gesture set, informing users about their interaction via haptic feedback loops, and enhancing the overall user experience. This workshop aims to examine the possibilities opened up by these technologies and discuss opportunities in designing the next generation of interactive digital signage kiosks.
The Body as Starting Point: Applying Inside Body Knowledge for Inbodied Design
Inbodied design is an emerging area in HCI that focuses on using knowledge of the body’s internal systems and processes to better inform em-bodied and circum-bodied design spaces. The current challenge in developing an inbodied approach to HCI research/design is domain expertise: accessing sufficient and appropriate information about how the body itself works and how the body’s different systems interact dynamically. In this workshop, we review and build on last year’s introduction to inbodied foundations, focusing on applying inbodied knowledge to design challenges to explore (1) the foundational pillars of the inbodied design approach, and (2) how inbodied knowledge can affect or alter our understanding of em-bodied and circum-bodied design challenges and better inform design decisions. Our aim with this hands-on and cross-domain workshop is for HCI researchers to create innovative designs taking the body as a starting point.
Sensitive Research, Practice and Design in HCI
New research areas in HCI examine complex and sensitive research areas, such as crisis, life transitions, and mental health. Further, research in complex topics such as harassment and graphic content can leave researchers vulnerable to emotional and physical harm. There is a need to bring researchers together to discuss challenges across sensitive research spaces and environments. We propose a workshop to explore the methodological, ethical, and emotional challenges of sensitive research in HCI. We will actively recruit from diverse research environments (industry, academia, government, etc.) and methods areas (qualitative, quantitative, design practices, etc.) and identify commonalities in and encourage relationship-building between these areas. This one-day workshop will be led by academic and industry researchers with diverse methods, topical, and employment experiences.
Conversational Agents: Acting on the Wave of Research and Development
In the last five years, work on software that interacts with people via typed or spoken natural language, called chatbots, intelligent assistants, social bots, virtual companions, non-human players, and so on, has increased dramatically. Chatbots burst into prominence in 2016. Then came a wave of research, more development, and some use. The time is right to assess what we have learned from endeavouring to build conversational user interfaces that simulate quasi-human partners engaged in real conversations with real people. This workshop brings together people who have developed or studied various conversational agents, to explore themes that include what works (and what has not) in home, education, healthcare, and work settings, what we have learned from this about people and their activities, and social or ethical possibilities for good or risk.
New Directions for the IoT: Automate, Share, Build, and Care
As the IoT is taking hold in the home, in healthcare, factories, and industry, new challenges and approaches arise for HCI research and design. For example, HCI is exploring agency delegation and automation to support the user in managing the deluge of IoT data, make decisions, or even take actions on behalf of the user, while economic models are being proposed to drive sharing economy services. This creates new problems including how to design appropriate solutions for uncertain and dynamic human behaviour, how to ensure resources are distributed fairly, and how to ensure that the user can understand system actions and ultimately remains in control. These issues are becoming more pertinent as the IoT diversifies into safety-critical domains such as manufacturing and healthcare. This one-day workshop intends to bring together the CHI community to explore the interactional, socio-cultural, ethical, and practical challenges and approaches that these new domains raise for the IoT. With this, we want to consider how such approaches could be integrated to achieve more sustainable, inclusive, or effective interactions.
SESSION: Late-Breaking
Scratch Nodes ML: A Playful System for Children to Create Gesture Recognition Classifiers
Children are growing up in a Machine Learning-infused world, and it is imperative to provide them with opportunities to develop an accurate understanding of basic Machine Learning concepts. Physical gesture recognition is a typical application of Machine Learning, and physical gestures are also an integral part of children’s lives, including sports and play. We present Scratch Nodes ML, a system enabling children to create personalized gesture recognizers by: (1) creating their own gesture classes; (2) collecting gesture data for each class; (3) evaluating the classifier they created with new gesture data; and (4) integrating their classifiers into the Scratch environment as new Scratch blocks, empowering other children to use these new blocks as gesture classifiers in their own Scratch creations.
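As a rough illustration of this workflow (not the Scratch Nodes ML implementation), the sketch below trains a personalized gesture classifier from labeled accelerometer windows and classifies a new gesture; the feature set, window size, and data are hypothetical.

```python
# Illustrative sketch only: train a small gesture classifier from labeled
# sensor windows, then predict the class of a new gesture window.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def features(window: np.ndarray) -> np.ndarray:
    """Summarize a (samples, 3) accelerometer window with simple statistics."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

# Steps 1-2: children define gesture classes and record examples for each.
recorded = {
    "wave":  [np.random.randn(50, 3) for _ in range(10)],   # placeholder data
    "punch": [np.random.randn(50, 3) for _ in range(10)],
}
X = np.array([features(w) for label in recorded for w in recorded[label]])
y = [label for label in recorded for _ in recorded[label]]

# Step 3: fit the classifier (evaluation on held-out gestures is omitted here).
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# Step 4: a new gesture window can now be classified, e.g. behind a Scratch block.
new_window = np.random.randn(50, 3)
print(clf.predict([features(new_window)])[0])
```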
Cross-country User Connections in an Online Social Network for Music
Social connections and cultural aspects play important roles in shaping an individual’s preferences. For instance, people tend to select friends with similar music preferences. Furthermore, preferences and friending are influenced by cultural aspects. Recommender systems may benefit from these phenomena by using knowledge about the nature of social ties to better tailor recommendations to an individual. Focusing on the specificities of music preferences, we study user connections on Last.fm—an online social network for music. We identify those countries whose users are mainly connected within the same country, and those countries that are characterized by cross-country user connections. Strong cross-country connection pairs are typically characterized by similar cultural, historic, or linguistic backgrounds, or geographic proximity. The United States, the United Kingdom, and Russia are identified as countries having a large relative amount of user connections from other countries. Our results contribute to understanding the complexity of social ties and how they are reflected in connection behavior, and are a promising source for advancements of personalized systems.
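A minimal sketch of the kind of analysis described above, assuming an edge list of user connections annotated with each user’s country (column names and data are made up, not the Last.fm dataset):

```python
# Compute, per country, the share of its users' ties that stay within-country.
import pandas as pd

edges = pd.DataFrame({
    "country_a": ["US", "US", "UK", "RU", "BR"],
    "country_b": ["US", "UK", "UK", "US", "PT"],
})
edges["within"] = edges["country_a"] == edges["country_b"]

# Count each edge once per endpoint country so both ends contribute.
per_country = pd.concat([
    edges.rename(columns={"country_a": "country"})[["country", "within"]],
    edges.rename(columns={"country_b": "country"})[["country", "within"]],
])
share_within = per_country.groupby("country")["within"].mean()
print(share_within.sort_values())  # low values indicate mostly cross-country ties
```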
Character Alive: A Tangible Reading and Writing System for Chinese Children At-risk for Dyslexia
This paper presents Character Alive, a tangible system designed to improve early Chinese literacy acquisition for Mandarin-speaking children at-risk for dyslexia by enabling high-level interaction. Character Alive uses a multisensory training method to teach children the reading and writing of Chinese characters and words. The core design features of our system are augmented dynamic color cues, 2D radical cards and handwriting cards with tactile cues, and multimedia content such as character animations. Character Alive builds on our previous work on designing tangible and augmented reality reading and writing systems for children at-risk for dyslexia in English. That work demonstrated that dynamic color cues can draw children’s attention to key characteristics of letter-sound correspondences and that two-hand actions with tangible letters help children better solve spelling tasks. We present the design rationale, the design and implementation of the Character Alive system, and our plans for evaluating the system.
LookUnlock: Using Spatial-Targets for User-Authentication on HMDs
With head-mounted displays (HMDs), users can access and interact with a broad range of applications and data. Although some of this information is privacy-sensitive or even confidential, no intuitive, unobtrusive and secure authentication technique is available yet for HMDs. We present LookUnlock, an authentication technique for HMDs that uses passwords that are composed of spatial and virtual targets. Through a proof-of-concept implementation and security evaluation, we demonstrate that this technique can be efficiently used by people and is resistant to shoulder-surfing attacks.
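The following is a hypothetical sketch, not the LookUnlock implementation, of how a password composed of spatial and virtual targets could be checked: the enrolled sequence is stored as a salted hash and compared against the sequence of targets the wearer selects.

```python
# Spatial-target authentication sketch: a password is an ordered sequence of
# target identifiers; authentication succeeds when the selected targets match.
from hashlib import sha256

def digest(sequence: list[str]) -> str:
    """Store only a salted hash of the target sequence, never the sequence itself."""
    return sha256(("hmd-salt|" + "|".join(sequence)).encode()).hexdigest()

ENROLLED = digest(["door", "plant", "virtual_cube", "window"])  # hypothetical targets

def authenticate(selected_targets: list[str]) -> bool:
    return digest(selected_targets) == ENROLLED

print(authenticate(["door", "plant", "virtual_cube", "window"]))  # True
print(authenticate(["door", "plant", "window", "virtual_cube"]))  # False (wrong order)
```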
Weaving the Topics of CHI: Using Citation Network Analysis to Explore Emerging Trends
This paper provides a comprehensive and novel analysis of the annual conference proceedings of CHI to explore the structure and evolution of the community. Self-awareness is healthy for a diverse and dynamic community, allowing it to anticipate and respond to emerging themes. Instead of using a traditional topic modelling approach to analyze the text of the papers, we adopt a social network analysis approach by analyzing the citation network of papers. After constructing such a citation network, community detection is applied in order to cluster papers into different groups. The keywords of these groups are found to represent different research themes in human-computer interaction, while the growth or decline of these groups is visualized by their paper shares over time. Lastly, we contribute a visualization tool for exploring emerging trends within our community, which can be used to predict likely topics at future CHI conferences.
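A minimal sketch of the pipeline described above, using networkx to cluster a toy citation graph with community detection; the edges are placeholders, and keyword extraction per cluster is left out.

```python
# Build a citation network, detect communities, and inspect the resulting groups.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_edges_from([
    ("paperA", "paperB"), ("paperB", "paperC"),   # e.g. a visualization cluster
    ("paperD", "paperE"), ("paperE", "paperF"),   # e.g. an accessibility cluster
    ("paperC", "paperD"),
])

communities = greedy_modularity_communities(G)
for i, community in enumerate(communities):
    print(f"cluster {i}: {sorted(community)}")
# Keywords per cluster (e.g. from titles) and per-year paper shares would then
# characterize each research theme and its growth or decline over time.
```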
HandiFly: Towards Interactions to Support Drone Pilots with Disabilities
This paper describes two studies with people with sensory, cognitive and motor impairments flying drones as a leisure activity. We found that, despite several adaptations to existing technologies that match their abilities, flying remains very difficult due to the required perceptual and motor skills. We propose an adaptation space at the hardware, software and automation levels to facilitate drone piloting. Based on this adaptation space, we designed HandiFly, software for piloting drones that is adaptable to users’ abilities. Participants were able to fly and emphasized its ability to be tailored to specific needs. We conclude with future directions to make drone piloting a more inclusive activity.
My First Biolab: A System for Hands-On Biology Experiments
Biology labs routinely conduct direct experimentation with living organisms. However, most high-schools are not able to engage students in such experimentation due to multiple factors: sterility, cost of equipment, cost of skilled lab assistants, and difficulty measuring micro-scale processes. We present the design and implementation of My First Biolab (MFB), a lab in a box with a novel disposable fluidic vessel (experiment in a bag) using two sheets of Polyacrylamide-Polyethylene channeling liquids via paths created with a laser-cutter. The system implementation includes a 2D magnetic peristaltic pump, a spectral sensor, and a heat transfer plate. MFB is an affordable, safe, and sterile system for hands-on experimentation with live microorganisms. Our system supports temperature control, liquid circulation, measurement of optical density, and a web interface for remote control and monitoring. Our first experiment demonstrates the three phases of bacterial growth: initial lag phase, the rapid-growth log phase, and the stationary phase.
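As a worked illustration of the growth curve the first experiment demonstrates, the sketch below simulates optical-density readings under logistic growth and labels each reading as lag, log, or stationary; the parameters and thresholds are illustrative, not the MFB calibration.

```python
# Simulate an OD growth curve and label its phases from the growth rate.
import numpy as np

t = np.linspace(0, 12, 49)                  # hours
K, od0, r = 1.2, 0.02, 1.1                  # carrying capacity, initial OD, rate
od = K / (1 + ((K - od0) / od0) * np.exp(-r * t))

dod = np.gradient(od, t)                    # growth rate in OD per hour
phase = np.full(t.shape, "log", dtype=object)
phase[(dod < 0.05) & (od < K / 2)] = "lag"          # slow growth, low density
phase[(dod < 0.05) & (od >= K / 2)] = "stationary"  # slow growth, high density

for hour in (0, 2, 4, 8, 12):
    i = int(np.argmin(np.abs(t - hour)))
    print(f"t={t[i]:4.1f}h  OD={od[i]:.3f}  phase={phase[i]}")
```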
Bricoleur: A Tool for Tinkering with Programmable Video and Audio
This paper presents Bricoleur, a new tool designed primarily for children to create rich dynamic audiovisual projects on mobile devices. Building off of technology developed for the Scratch programming language, Bricoleur allows users to capture video and audio media, then use visual programming blocks to quickly construct complex, interactive, and multilayered projects. The programmability of video and audio assets enables new creative possibilities that do not exist in traditional timeline-based video editing software nor in existing programming platforms. Drawing from Seymour Papert’s assertion that all learning is an act of bricolage – i.e., a process of constructing knowledge while dialoguing back and forth with creative materials – we describe the design of an initial prototype of Bricoleur, which emphasizes tinkerability as a primary design goal.
Task Influence on Perceptions of a Person-Following Robot and Following-Angle Preferences
To improve the design of a person-following robot, this preliminary study evaluates the influence of user tasks on human preferences for the robot’s following angle and human perceptions of the robot’s behavior. Thirty-two participants were followed by a robot at three different following angles twice, once with an auditory task and once with a visual task, for a total of six walking trials. Results indicate that the type of user task influences participant preferences and perceptions. For the auditory task, as the following angle increased, participants were more satisfied with the robot’s following behavior. For the visual task, as the following angle increased, participants were less satisfied with the robot’s following behavior. In addition, participants were more perceptive of the robot’s following behavior for the auditory task compared to the visual task. Additional research is required to better understand whether human preferences and perceptions depend on task modality or task complexity.
Using Video-based Technology in Powerlifting Sport to Support Referees’ Decision Making
Referees’ decisions in sports, which must be made in less than a second, have an impact on a game’s outcome. The use of hardware and/or software solutions could contribute towards increased accuracy of referees’ decisions. Applying such solutions can be expensive, especially in the case of less popular sports. In this respect, we propose and evaluate a video-based system for helping referees in powerlifting make better decisions. Results reveal promising accuracy rates for the proposed system. This attempt is a first step towards supporting referees in the powerlifting domain, and further elaboration of the proposed system is required to achieve higher decision-making accuracy.
Detecting Demographic Bias in Automatically Generated Personas
We investigate the existence of demographic bias in automatically generated personas by producing personas from YouTube Analytics data. Despite the intended objectivity of the methodology, we find elements of bias in the data-driven personas. The bias is highest when performing an exact-match comparison, and it decreases when comparing at the age or gender level. The bias also decreases when increasing the number of generated personas. For example, a smaller number of personas resulted in the underrepresentation of female personas. This suggests that a higher number of personas gives a more balanced representation of the user population, while a smaller number increases bias. Researchers and practitioners developing data-driven personas should consider the possibility of algorithmic bias, even unintentional, in their personas by comparing the personas against the underlying raw data.
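One simple way to quantify such bias, sketched below with made-up data, is to compare the demographic distribution of the generated personas against the distribution in the underlying analytics data using total variation distance.

```python
# Compare persona demographics with the underlying audience distribution.
from collections import Counter

def distribution(items):
    counts = Counter(items)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def total_variation(p, q):
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0) - q.get(k, 0)) for k in keys)

audience = ["F25", "F25", "M25", "F35", "M35", "F45"]       # raw data segments
personas_small = ["M25", "M35"]                             # 2 generated personas
personas_large = ["F25", "M25", "F35", "M35", "F45"]        # 5 generated personas

print(total_variation(distribution(audience), distribution(personas_small)))  # ~0.67
print(total_variation(distribution(audience), distribution(personas_large)))  # ~0.13
# The larger persona set sits much closer to the underlying distribution.
```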
Different Specialties, Different Gaze Strategies: Eye Tracking Opportunities in Seismic Interpretation Context
Identifying how geoscientists interact with seismic images allows a deeper understanding of their rationale and the seismic interpretation process itself. Moreover, identifying nuances involving seismic interpreters performing slightly different roles opens new possibilities relative to how decision support systems could become part of the seismic interpretation. In this work, we detail an eye tracking study involving 11 seismic interpreters interacting with two seismic images. The results show that seismic interpreters, with different specialties, interact with seismic images differently. From the results, it is possible to better characterize gaze strategies from different seismic interpreters, which could also be used as input information for decision support systems.
HasAnswers: Development of a Digital Tool to Support Young People to Manage Independent Living
Many young people experience difficulty finding and keeping their first independent home, which can lead to homelessness or risk of homelessness. To help address this challenge, a young people’s service in Scotland (Calman Trust) is developing a digital tool called HasAnswers. This paper provides a brief description of HasAnswers, the results of iterative testing with 69 young people (40 male, 29 female) using paper and digital prototypes, and feedback from other services with a responsibility for supporting young people to achieve an independent adulthood, as a potential customer base for the future scaling up of HasAnswers to new geographical locations and organizations. While preliminary, the results and feedback have consistently confirmed the potential usefulness and acceptability of HasAnswers. Next steps include applying the results of the latest user testing, followed by pilot testing. The research contributes to the body of work within HCI on design for homelessness by providing a new digital tool with a greater emphasis on prevention and early intervention, informed by an iterative user-centred design process.
Understanding Turkish NGOs’ Digital Technology Use in Helping Refugees in Turkey
According to the UN Refugee Agency, there are about 22 million people considered refugees around the world. Since the beginning of the Syrian crisis in 2011 alone, approximately 11 million Syrians have left their homes and sought refuge in other countries, mostly nearby countries such as Lebanon and Turkey. There are more than 3.7 million Syrian refugees in Turkey. Non-governmental organizations (NGOs) have a critical role in refugees’ lives, their access to services, and their integration into communities. In this study, we investigated the barriers NGOs encounter in their work with refugees and the types of information communication technologies they use in the process. Semi-structured interviews with participants from eight Turkish NGOs revealed that digital technologies have a fundamental role in their activities. They use them in three main ways, including as tools that help them act as knowledge brokers between Turkish communities and refugee communities. Our findings provide insights on NGOs’ digital technology use and pave a path for future studies.
Engaging Pedestrians in Designing Interactions with Autonomous Vehicles
Driverless passenger shuttles have been operating as a public transport alternative in the town of Sion, Switzerland since June 2016, traversing the populated commercial and residential zones of the city center. The absence of a human driver and the lack of a dedicated AV-pedestrian interface make it challenging for road users (pedestrians, cyclists, etc.) to understand the intent or operational state of the vehicle and negotiate road usage. In this article, we present a co-design study aimed at informing the design of interactive communication means between pedestrians and autonomous vehicles (AVs). Conducted in two stages with the local community, which is accustomed to the AV ecosystem and has interacted with it on a daily basis, the study highlights the interactive experiences of road users and furnishes contextualized design guidelines to bridge communication with pedestrians.
Relevance by Play: An Integrated Framework for Designing Museum Experiences
The notion of relevance is often used as a concept to be considered for making a museum matter to its visitors. The term, however, is rarely operationalized for use by designers, practitioners, or scientists in their work on museum experiences. We propose an integrated framework for designing relevant museum experiences, in which we distinguish between four stages of seeding and growing relevance in new audiences, called “trigger”, “engage”, “consolidate” and “relate”. The framework proposes to see designing for relevance as developing ways of integrating meaning-making, play and acceptable visitor effort across all these stages. It is intended to provide sensitizing concepts for use in further research on designing for relevance, as well as in design-related activities such as crafting requirements for new museum experiences, analyzing existing museum experiences and developing new museum experiences.
Immersive Process Models
In many domains, real-world processes are traditionally communicated to users through abstract graph-based models like event-driven process chains (EPCs), i.e. 2D representations on paper or desktop monitors. We propose an alternative interface to explore EPCs, called immersive process models, which aims to transform the exploration of EPCs into a multisensory virtual reality journey. To make EPC exploration more enjoyable, interactive and memorable, we propose a concept that spatializes EPCs by mapping traditional 2D graphs to 3D virtual environments. EPC graph nodes are represented by room-scale floating platforms and explored by users through natural walking. Our concept additionally enables users to experience important node types and the information flow through passive haptic interactions. Complementarily, gamification aspects aim to support the communication of logical dependencies within explored processes. This paper presents the concept of immersive process models and discusses future research directions.
Towards Empathetic Car Interfaces: Emotional Triggers while Driving
Monitoring the emotions of drivers can play a critical role in reducing road accidents and enabling novel driver-car interactions. To help understand the possibilities, this work systematically studies the in-road triggers that may lead to different emotional states. In particular, we monitored the experience of 33 drivers during 50 minutes of naturalistic driving each. With a total of 531 voice self-reports, we identified four main groups of emotional triggers based on their originating source, with those related to human-machine interaction and navigation being the ones that most commonly elicited negative emotions. Based on the findings, this work provides some recommendations for potential future emotion-enabled interventions.
Public Engagement with Official-Source Content in Crisis
Authoritative online information is especially important in disaster, when social media users are seeking out time- and safety-critical information. In this work we investigate how people engage with posts by authoritative accounts that fall into five social roles: politicians, government agencies, media, weather experts, and humanitarian organizations. More specifically, we explore whether, in their disaster-time sensemaking activities, social media users engage with content from different types of authoritative sources differently, and why. We find that tweets by politicians garner the most replies and shares, but not due to the prevalence in them of tweet features that facilitate visibility and engagement: hashtags, URLs, and images. We find that while the higher popularity of political accounts plays a role in higher engagement, it does not fully explain the differences. Preliminary qualitative analysis suggests that politicized event-related posts by politicians, and politicized public responses to even their innocuous tweets, may affect engagement.
Machine-Crowd-Expert Model for Increasing User Engagement and Annotation Quality
Crowdsourcing and active learning (AL) have been combined in the past with the goal of reducing annotation costs. However, two issues arise when using AL and crowdsourcing: the quality of the labels and user engagement. In this work, we propose a novel machine -> crowd <- expert loop model where the forward connections of the loop aim to improve the quality of the labels and the backward connections aim to increase user engagement. In addition, we propose a research agenda for evaluating the model.
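A minimal sketch of the machine-to-crowd side of such a loop, using uncertainty sampling with a stand-in for crowd labels and a placeholder for expert review; none of this is the authors’ implementation.

```python
# Active-learning loop sketch: query the most uncertain items, "crowd"-label
# them (simulated here with hidden ground truth), retrain, and repeat.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(200, 2))
y_true = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)   # hidden ground truth

labeled = list(range(10))                                 # small seed set
model = LogisticRegression().fit(X_pool[labeled], y_true[labeled])

for _ in range(5):                                        # active-learning rounds
    proba = model.predict_proba(X_pool)[:, 1]
    uncertainty = -np.abs(proba - 0.5)                    # closest to 0.5 = most uncertain
    candidates = [i for i in np.argsort(uncertainty)[::-1] if i not in labeled][:10]

    crowd_labels = y_true[candidates]                     # stand-in for crowd answers
    # An expert-review step would resolve low-agreement items here (backward link).
    labeled.extend(candidates)
    model.fit(X_pool[labeled], y_true[labeled])

print("labeled items:", len(labeled))
```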
Step-by-Step: Exploring a Social Exergame to Encourage Physical Activity and Social Dynamics among Office Workers
This paper presents an exploratory study of a social exergame, called Step-by-Step, to help office workers initiate physical movements and social interactions in the work routine. In this project, we developed a mobile system for exploring a new mechanism of office vitality, through which a fitness task can be relayed from one co-worker to another in a workplace. Based on our prototypes, we evaluated the feasibility of Step-by-Step through a user study with five office workers and an expert interview with three senior designers. We discuss implications for the future development of the Step-by-Step system based on our qualitative findings.
Ways Into the Design Space of Butterflies in the Stomach
This work presents ways into a design space of butterflies in the stomach: a quale of belly-tingling sensation that can evoke pleasure, discomfort and heightened presence. Three design instances are presented. From and within those, three conceptual directions are drawn and exemplified. Conditional availability involves tuning the availability of an interaction to certain geographic locations, environmental conditions and times of day, in striving for particular aesthetics. Erratic and dubious presence is about making interactions unpredictable and/or feeding a doubt about whether the user is engaged in an interaction or not, in striving for confusion and startle. Sensorial evidence of interaction is a way of thinking about narratives within an interaction through elements of planning, exploration and suddenness, in striving for experiential qualities like anticipation, surprise, and the fascination of discovery. My felt experiences of a two-day camping trip were used as a design resource. Reflections on these experiences were used in the design and concept development through visualizations, textual narratives, technical implementation detailing, and thematic analysis. This work is a provocative step expanding on what human-computer interaction can be in the outdoors.
Idea Bits: A Tangible Design Tool to Aid Idea Generation for Tangible Manipulation
Tangible interaction design students often find it difficult to generate ideas for tangible manipulation. They often restrict their explorations to a few familiar possibilities. To our knowledge, there is no design tool that focuses on facilitating the exploration of a variety of manipulations and aiding the generation of ideas for manipulation. To address this gap, we designed Idea Bits, a tangible design tool consisting of interactive physical artifacts that enable users to experience a set of manipulations. These artifacts are coupled with digital examples of tangible systems and technical implementation guidance to help users understand how to implement the manipulations. Our work contributes knowledge about the generation of ideas for manipulation and will be useful to tangible interaction design students, instructors, practitioners, and researchers.
Stimulight: Exploring Social Interaction to Reduce Physical Inactivity among Office Workers
Prolonged sitting at the workplace is a growing public health concern. In this paper, we propose the activity-focused design framework, which provides an overview of recent work in HCI to stimulate physical activity or to reduce sedentary behavior. Next, based on this framework, we present Stimulight, an intelligent system designed to explore the effect of providing personal and/or social feedback on the activity pattern of office workers. To test the intuitiveness of the feedback modalities of our design, three different feedback conditions were explored in a lab study with 61 participants. Our results show a positive effect of visualizing and sharing physical activity patterns with co-workers. Based on our findings, we present design implications and offer perspectives for future work on how to use social feedback mechanisms to encourage social interaction in the workplace to enhance physically active behavior among office workers.
Your Period Rules: Design Implications for Period-Positive Technologies
In this work, we address the challenges of designing interactive technologies that approach menstruation in a positive way. Building on a Research through Design approach, we underline the tensions emerging from first-hand experiences and reflections in a design workshop. In order to maintain a positive approach, rather than asking participants what problems they encountered while on their period, we asked them what desires they had, and what experiences might help them cope with it. The results of the workshop emphasized the need for reflecting critically on how we perceive menstruation when designing and how viewing menstruation as a problem might perpetuate taboos and distance women’s experiences from their bodies. We aim to contribute to the ongoing discussion on designing for women’s health in HCI by suggesting implications for researchers and practitioners.
Call Me by My Name: Exploring Roles of Sci-fi Voice Agents
Voice Conversational Agents (VCAs) are increasingly becoming part of our daily life. Calling them by their names does more than solve a problem of efficiency in the interaction between users and VCAs; it can also cast them as actors capable of playing a role in a relationship. However, due to the immature state of the technology and its related services, such relationships between users and VCAs can be limited in practice. To broaden the scope of designing different relationships, this study explores VCAs depicted in sci-fi movies. Sci-fi is not purely based on fantasy; it is also a social reflection on technology, which in turn can inspire design researchers to understand and speculate about the complexity of VCAs. Through community sourcing with sci-fi enthusiasts, 43 sci-fi VCAs were identified. A movie event was also organized to discuss the role of VCAs. Finally, this paper presents several possible design insights for designing the role of VCAs.
Visual Quotes: Does Aesthetic Appeal Influence How Perceived Motivating Text Messages Impact Short-Term Exercise Motivation?
Visual Quotes, or the communication of motivational text messages in a visual format, are increasingly used across social media and online communities. While physical activity trackers could leverage visual quotes, empirical studies of activity tracking in HCI research have paid little attention to this phenomenon and their potential effects on motivation. In this work, we conducted an online experiment (129 participants) to evaluate the impact of aesthetic appeal in motivational text messages as it relates to extrinsic identified behavior regulation. This is the type of motivation linked to the initial adoption of exercise behavior. The results of our study demonstrate that a perceived motivating text message presented with different levels of aesthetic appeal – ugly, neutral, beautiful – has the same impact on the motivation linked to short-term exercise (extrinsic identified behavior regulation). In other words, the perceived aesthetic appeal did not influence the motivating capability of textual messages for encouraging physical activity.
Confabulation Radio: Reflexive Speculation in Counterfactual Soundscape
Material speculation through counterfactual artifacts has been an alternative approach to envisioning possible worlds in HCI. This paper further explores how false memory, or confabulation, can be a resource for enriching this constructive design approach. We conducted an explorative study to understand how personal experience of confabulation can contribute to the constitution of the body of possible worlds and to explore the potential of using mixed everyday soundscapes as a medium to trigger possible-world-experience. The result shows that these “counterfactual soundscapes” can create self-convinced experiences of counterpart self, which indicates a reflexive but fictional self across possible worlds. Based on these findings, we proposed the model of counterpart self that accounts for reflexive speculation in possible worlds and made a research artifact, the Confabulation Radio, as our attempt to inquire into this phenomenon.
Expected-Experience Entanglements: Reframing Morning Experience through Design Fiction and Sound Interaction
This research starts from breakfast to explore varied morning experiences and expectations, and aims to reframe the morning experience through diverse possibilities. A good morning keeps people in good psychological and physical condition, thus improving efficiency at work. In Taiwan, multiple ethnic groups and long working hours are common, which is also embodied in the choice of local breakfast. However, according to Herbalife research in 2018 [4], around 25% of Taiwanese lack the habit of eating breakfast every day. Our research explores the role breakfast plays in people’s morning routines, as well as the morning expectations and experiences of different lifestyles. The end goal is to create designs that give people a different perspective on the beginning of their day and invite them to wonder about how their morning could be, not just about breakfast but about the morning experience as a whole.
“What does your Agent look like?”: A Drawing Study to Understand Users’ Perceived Persona of Conversational Agent
Conversational agents (CAs) are becoming more popular and useful at home. Creating the persona is an important part of designing the conversational user interface (CUI). Since the CUI is a voice-mediated interface, users naturally form an image of the CA’s persona through its voice. Because that image affects users’ interaction with CAs while using a CUI, we sought to understand users’ perceptions via a drawing method. We asked 31 users to draw an image of the CA that communicates with them. Through a qualitative analysis of the collected drawings and interviews, we identified the various types of CA personas perceived by users and found design factors that influenced users’ perceptions. Our findings help us understand persona perception and provide designers with implications for creating an appropriate persona.
Sketch or Play?: LEGO® Stimulates Divergent Thinking for Non-sketchers in HCI Conceptual Ideation
Sketching is known to support divergent thinking during conceptual ideation. Yet, in HCI teams, non-designers are known to be reluctant to sketch. Looking for a tool that could support non-designers’ divergent thinking to creatively offset familiar solutions while providing starter suggestions, we hypothesized that LEGO pieces could replace sketching. In a comparative lab experiment, 36 participants did two conceptual ideations of Web interfaces, one using paper/pen, the other LEGO, in random sequence. The 72 resulting interfaces were assessed on their fluency, flexibility, elaboration and originality according to Guilford [6] and Torrance’s [9] divergent thinking framework. Our main finding is that LEGO could substitute for sketching for non-designers; the 3D figurative, constructive pieces provide a stimulating visual representation that supports divergent thinking by offering alternate meanings and generating a greater number of elements to react to, thus enhancing the use of analogies.
Electric Acoustic: Exploring Energy Through Sonic & Vibration Displays
‘Energy’ is an abstract concept, invisible except through its effects, yet with vast geopolitical and environmental consequences, while driving many everyday practices. It is a curious ‘material’ for designers to work with, with experiential properties that are underexplored. In Electric Acoustic, we are exploring both sonification and vibration (cymatic displays) as media for experiencing energy, specifically electricity use. These materializations potentially enable deeper engagement with the invisible systems and infrastructures of everyday life. This short paper reports on our preliminary experiments and some of the issues and considerations arising during this initial exploration.
AI Inspired Recipes: Designing Computationally Creative Food Combos
If chocolate and broccoli sound like a strange pairing, can you imagine a broccoli chocolate bar that combines them? As a matter of fact, the two ingredients share the highest number of flavour molecules, so their combination might not be as weird as it sounds. We applied computational creativity, that is, AI systems that enhance human creativity, to the food domain, with the main goal of feeding the mind of the creative professional in the food business with new unexpected combinations.
Functional Digital Nudges: Identifying Optimal Timing for Effective Behavior Change
Digital nudges hold enormous potential to change behavior. Despite calls to consider timing as a critical factor in the success of digital nudges, a comprehensive organizing framework to guide the design of digital nudges with the nudge moment in mind has yet to be provided. In this paper, we advance the theoretical model for designing digital nudges by incorporating three key components: (1) identifying the optimal digital nudge moment, (2) inferring this optimal moment, and (3) delivering the digital nudge at that moment. We further discuss the existing work and open research avenues.
Design-for-error for a Stand-alone Child Attachment Assessment Tool
Designing technology for problem-free operation is vital, but equally important is considering how a user may understand or act upon errors and various other ‘stuck’ situations if and when they occur. Little is currently known about what children think and want for overcoming errors. In this paper we report on design-for-error workshops with children (age 5-10) in which we staged 3 simulated errors with a health assessment technology. In our developmentally-sensitive study, children witnessed the errors via a puppet show and created low-fidelity models of recovery mechanisms using familiar ‘play-things’. We found the children were able to grasp the representational nature of the task. Their ideas were playful and inspired by magical thinking. Their work forced us to reflect on and revisit our own design assumptions. The tasks have had a direct impact on the design of the assessment.
Exploring Uses of Augmented Reality in Participatory Marketing
This paper is an exploration of Augmented Reality applications to Participatory Marketing and overviews initial findings in this area of research. Participatory Marketing is the concept of marketing with customers rather than at them, and can potentially turn AR users from passive consumers into (pro-)active co-creators of this future media. We conducted a preliminary investigation to focus on possible challenges and opportunities.
V.Ra: An In-Situ Visual Authoring System for Robot-IoT Task Planning with Augmented Reality
We present V.Ra, a visual and spatial programming system for robot-IoT task authoring. In V.Ra, programmable mobile robots serve as binding agents to link the stationary IoTs and perform collaborative tasks. We establish an ecosystem that coherently connects the three key elements of robot task planning (human-robot-IoT) with one single AR-SLAM device. Users can perform task authoring in an analogous manner with the Augmented Reality (AR) interface. Then placing the device onto the mobile robot directly transfers the task plan in a what-you-do-is-what-robot-does (WYDWRD) manner. The mobile device mediates the interactions between the user, robot and IoT oriented tasks, and guides the path planning execution with the SLAM capability.
Usability of Virtual Reality Application Through the Lens of the User Community: A Case Study
The increasing availability and diversity of virtual reality (VR) applications have highlighted the importance of their usability. Function-oriented VR applications pose new challenges that are not well studied in the literature. Moreover, user feedback has become readily available thanks to modern software engineering tools, such as app stores and open source platforms. Using Firefox Reality as a case study, we explored the major types of VR usability issues raised on these platforms. We found that 77% of usability feedback items can be mapped to Nielsen’s heuristics, while few were mappable to VR-specific heuristics. This result indicates that Nielsen’s heuristics could potentially help developers address the usability of this VR application in its early development stage. This work paves the road for exploring tools that leverage the community effort to promote the usability of function-oriented VR applications.
The Missing Interface: Micro-Gestures on Augmented Objects
Augmenting arbitrary physical objects with digital content leads to the missing interface problem, because those objects were never designed to incorporate such digital content and so they lack a user interface. A review of related work reveals that current approaches fail due to limited detection fidelity and spatial resolution. Our proposal, based on Google Soli’s radar sensing technology, is designed to detect micro-gestures on objects with sub-millimeter precision. Preliminary results with a custom gesture set show that Soli’s core features and traditional machine learning models (Random Forest and Support Vector Machine) do not lead to robust recognition accuracy, and so more advanced techniques should be used instead, possibly incorporating additional sensor features.
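For context, the sketch below shows the kind of Random Forest baseline the abstract refers to, trained on placeholder feature vectors standing in for Soli core features; with uninformative features it performs near chance, which loosely mirrors the reported difficulty.

```python
# Random Forest baseline over per-gesture radar feature vectors (placeholders).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))           # 300 gesture instances x 8 "core features"
y = rng.integers(0, 4, size=300)        # 4 micro-gesture classes (random labels)

clf = RandomForestClassifier(n_estimators=100, random_state=1)
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())  # with random labels this hovers near chance (~0.25),
                      # illustrating why richer temporal models may be needed
```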
Towards Computational Notebooks for IoT Development
Internet of Things systems are complex to develop. They are required to exhibit various features and run across several environments. Software developers have to deal with this heterogeneity both when configuring the development and execution environments and when writing the code. Meanwhile, computational notebooks have been gaining prominence due to their capability to consolidate text, executable code, and visualizations in a single document. Although they are mainly used in the field of data science, the characteristics of such notebooks could make them suitable to support the development of IoT systems as well. This work proposes an IoT-tailored literate computing approach in the form of a computational notebook. We present a use case of a typical IoT system involving several interconnected components and describe the implementation of a computational notebook as a tool to support its development. Finally, we point out the opportunities and limitations of this approach.
BrainShare: A Glimpse of Social Interaction for Locked-in Syndrome Patients
Locked-in syndrome (LIS) patients are partially or entirely paralyzed but fully conscious. Those patients report a high quality of life and desire to remain active in their society and families. We propose a system for enhancing social interactions of LIS patients with their families and friends with the goal of improving their overall quality of life. Our system comprises a Brain-Computer Interface (BCI), augmented-reality glasses, and a screen that shares the view of a caretaker with the patient. This setting targets both patients and caretakers: (1) it allows the patient to experience the outside world through the eyes of the caretaker and (2) it creates a way of active communication between patient and caretaker to convey needs and advice. To validate our approach, we showcased our prototype and conducted interviews that demonstrate the potential benefit for affected patients.
Improving Texture Discrimination in Virtual Tasks by using Stochastic Resonance
We investigate enhancing virtual haptic experiences by applying Stochastic Resonance or SR noise to the user’s hands. Specifically, we focus on improving users’ ability to discriminate between virtual textures modelled from nine grits of real sandpaper in a virtual texture discrimination task. We applied mechanical SR noise to the participant’s skin by attaching five flat actuators to different points on their hand. By fastening a linear voice-coil actuator and a 6-DOF haptic device to participants’ index finger, we enabled them to interact and feel virtual sandpapers while inducing different levels of SR noise. We hypothesize that SR will improve their discrimination performance.
Opportunities for In-Home Augmented Reality Guidance
The use of Augmented Reality (AR) systems has been shown to be beneficial in guiding users through structured tasks when compared to traditional 2D instructions. In this work, we begin to examine the potential of such systems for home improvement tasks, which present some specific challenges (e.g., operating at both large and small scales, and coping with the diversity in home environments). Specifically, we investigate user performance of a common low-level task in this domain. We conducted a user study where participants mark points on a planar surface (as if to place a nail, or measure from) guided only by virtual cues. We observed that participants position themselves so as to minimize parallax by kneeling, leaning, or side-stepping, and when doing so, they are able to mark points with a high degree of accuracy. In cases where the targets fall above one’s line of vision, participants are less able to compensate and make larger errors. We discuss initial insights from these observations and participant feedback, and present the first set of challenges that we believe designers and developers will face in this domain.
Methods for Investigating Mental Models for Learners of APIs
Despite almost all software development involving application programming interfaces (APIs), there is surprisingly little work on how people use APIs and how to evaluate and improve the usability of an API. One possible way of investigating the usability of APIs is through the user’s mental model of the API. Through discussions with the developers and UX practitioners at Google along with our own evaluations, a distributed data processing API called Apache Beam has been identified as difficult to use and learn. In our on-going study, we investigate methods for understanding users’ mental models of distributed data processing and how this understanding can lead to design insights for Beam and its documentation. We present our novel approach, which combines a background interview with two natural programming elicitation segments: the first designed for participants to express a high level mental model of a data processing API while the second asks questions contextualized to a data processing task to see how participants apply their conceptual understanding to a more specific situation. Our method shows promise as pilot participants expressed a “dataflow” mental model that matched one way that Beam has been described, resulting in a potential design modification.
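For reference, a tiny Apache Beam pipeline (Python SDK) of the kind such a study might discuss: data flows through a graph of transforms rather than explicit loops, which is one way the “dataflow” mental model can be made concrete. The example data and labels are ours, not from the study.

```python
# Minimal Beam pipeline: parse strings, keep odd numbers, sum, and print.
import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Read"   >> beam.Create(["3", "5", "8", "13"])   # stand-in for a real source
        | "Parse"  >> beam.Map(int)
        | "Filter" >> beam.Filter(lambda n: n % 2 == 1)
        | "Sum"    >> beam.CombineGlobally(sum)
        | "Print"  >> beam.Map(print)                      # 3 + 5 + 13 = 21
    )
```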
Emojilization: An Automated Method For Speech to Emoji-Labeled Text
Speech To Text (STT) plays a significant role in Voice User Interfaces (VUIs). While preserving the necessary semantic information in the converted text, STT generally captures no or limited emotional information. In this paper, we present an emojilization tool that automatically attaches related emojis to STT-generated text by analyzing both textual and acoustic features in speech signals. For a given voice message, the tool selects the most representative emoji from the 64 most commonly used emojis. We conducted a pilot study with 34 participants, in which 159 utterances were labeled with emojis by our tool, and evaluated the emotion restoration effect. The results indicate that the proposed tool effectively compensates for the emotion loss.
MyTukxi: Low Cost Smart Charging for Small Scale EVs
As the electrification of the transportation sector grows, the electric grid must handle the new load resulting from electric vehicle (EV) charging. The integration of this new load into the grid has been studied in the smart-charging research field; however, while normal-sized EVs often offer chargers or other functions that support smart charging, smaller EVs do not, which can be problematic, especially since the consumption of small EVs can be significant when aggregated. This article presents the motivation for and development of MyTukxi, a hardware and software system that implements smart-charging algorithms for low-consumption electric vehicles and interacts with drivers to compensate for the lack of charging control in such vehicles.
InterPoser: Visualizing Interpolated Movements for Bouldering Training
Bouldering is an urban form of rock climbing that requires precise and complex movement. As in other sports, the simplest way to learn bouldering skills is to mimic a professional's motion. However, ordinary beginner boulderers rarely have access to coaches, so they learn on their own or from tutorial videos. Even with a trainer, bouldering poses a communication difficulty between trainee and trainer: climbers cannot mimic the trainer's movement while climbing in parallel. We therefore considered that a video feedback system would be useful for beginners and propose InterPoser, a novel visualization system for the intermediate motion between a beginner climber and a more experienced one. InterPoser receives two videos of different subjects climbing the same problem and generates an intermediate movement, which is then transferred into realistic images of the climber. The proposed system is expected to support beginners in developing a more detailed observation and understanding of the motion.
I Drive – You Trust: Explaining Driving Behavior Of Autonomous Cars
Driving in autonomous cars requires trust, especially in the case of unexpected driving behavior of the vehicle. This work evaluates the mental models that experts and non-expert users have of autonomous driving in order to provide an explanation of the vehicle's past driving behavior. We identified a target mental model that enhances the user's mental model by adding key components from the experts' mental model. To construct this target mental model and to evaluate a prototype of an explanation visualization, we conducted interviews (N=8) and a user study (N=16). The explanation consists of abstract visualizations of different elements representing the autonomous system's components. We explore the relevance of the explanation's individual elements and their influence on the user's situation awareness. The results show that displaying the detected objects and their predicted motion was most important for understanding a situation. After seeing the explanation, the user's level of situation awareness increased significantly.
BalloonFAB: Digital Fabrication of Large-Scale Balloon Art
We propose an interactive system that allows ordinary users to build large-scale balloon art based on a spatial augmented reality solution. The proposed system provides fabrication guidance that illustrates the differences between the depth maps of the target three-dimensional shape and the current work in progress. Instead of using color gradients to convey depth differences, we adopt a high-contrast black-and-white projection of numbers, which suits the balloon texture. To increase user immersion in the system, we propose a shaking animation for each projected number. Using the proposed system, the unskilled users in our case study were able to build large-scale balloon art.
Extending Discrete Verbal Commands with Continuous Speech for Flexible Robot Control
Speech is a direct and intuitive method to control a robot. While natural speech can capture a rich variety of commands, verbal input is poorly suited to finer grained and real-time control of continuous actions such as short and precise motion commands. For these types of operations, continuous non-verbal speech is more suitable, but it lacks the naturalness and vocabulary breadth of verbal speech. In this work, we propose to combine the two types of vocal input by extending the last vowel of a verbal command to support real-time and smooth control of robot actions. We demonstrate the effectiveness of this novel hybrid speech input method in a beverage-pouring task, where users instruct a robot arm to pour specific quantities of liquid into a cup. A user evaluation reveals that hybrid speech improves on simple verbal-only commands.
Towards the Development of a Display Filter Algorithm for Command and Control (C2) Maps for Operators of Unmanned Aerial Systems
Operators of military unmanned aerial vehicles (UAVs) work in highly dynamic environments. They have to complete numerous tasks, sometimes simultaneously, while maintaining high situational awareness (SA) and making rapid decisions. Their main focus is on mission management via the UAV's payload, yet they continuously interact with the command and control (C2) map to obtain SA and make decisions. C2 maps, shared among forces in the environment, are cluttered and overloaded with information. We aim to develop a machine-learning-based spatio-temporal display-filter algorithm that identifies the information items most relevant to the UAV operator and delivers the right visualized information on the C2 map at the right time. Towards the algorithm's development, experiments for collecting user-based importance data were conducted and analysed. For this, a designated UAV C2 Experimental System (UCES) was developed. Results show high feasibility for the prediction model, allowing us to move forward with the next steps of the algorithm's development.
Eating Ads With a Monster: Introducing a Gamified Ad Blocker
Online ads are often annoying. Ad blockers are a way to prevent ads from appearing on a web page; as a result, web service providers lose more than 35 billion dollars per year, and freely available content on the web is at risk. Taking the interests of both web service providers and users into account, we present a gamified ad blocker that allows users to drag a virtual monster over ads to eat them and make them disappear. For each deactivated ad, users receive ad-free time that they can take whenever they want. We report findings from a pre-study establishing requirements for the implementation of the ad blocker, as well as the results of a usability test of our prototype. As a next step, we will release the extension in the Chrome Web Store for upcoming in-the-wild studies.
Using Gameplay Design Patterns to Support Children’s Collaborative Interactions for Learning
Co-located games that bring players together have strong potential for supporting children's collaborative competencies. However, it is a challenge to make the results of such research within the Child-Computer Interaction (CCI) field easily transferable to future CCI research. Pursuing this challenge, we combined levels of Collaborative Activity (CA) with the design tool of gameplay design patterns (GDPs). This combination was used to support comparative play tests of a co-located game with children who have learning difficulties. We report our observations on using this approach, arguing that the possibility of making patterns based on CA concepts such as Reflective Communication points towards collaborative GDPs. Furthermore, this study presents an exemplar of how GDPs, as a flexible and extensible tool, can be used with different theories and models in the CCI field.
The Space Journey Game: An Intergenerational Pervasive Experience
There is a need to re-design entertainment systems for older adults, incorporating this age group into digital culture. With this aim in mind, this work presents an intergenerational experience carried out in an Interactive Space where tangible and gestural interaction are used to participate in pervasive gaming experiences. The experience makes use of a game initially designed just for children, but in a flexible way so that it can be tailored to different players' characteristics. Family groups made up of one or two grandparents and one or two grandchildren played The Fantastic Journey together, fulfilling all the missions either on tangible tabletops, by moving around the space, or by interacting through gestures. The experience was positively valued by both age groups; they were indeed happy with the opportunity to play together in a challenging game. Nevertheless, the difficulty of designing experiences that engage both age groups points to a challenging research area.
Deep Player Behavior Models: Evaluating a Novel Take on Dynamic Difficulty Adjustment
Finding and maintaining the right level of challenge with respect to the individual abilities of players has long been a focus of game user research (GUR) and game development (GD). The right difficulty balance is usually considered a prerequisite for motivation and a good player experience. Dynamic difficulty adjustment (DDA) aims to tailor difficulty balance to individual players, but most deployments are limited to heuristically adjusting a small number of high-level difficulty parameters and require manual tuning over iterative development steps. Informing both GUR and GD, we compare more traditional strategies for determining non-player character actions with an approach based on deep player behavior models, which are trained automatically to match a given player and can encode complex behaviors. Our findings indicate that deep learning has great potential in DDA.
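To make the comparison above concrete, the following is a minimal Python sketch of the general idea behind a player behavior model used for DDA: a classifier is trained on logs of one player's state-action pairs and then queried to pick actions for a non-player character. It is not the authors' implementation; the feature layout, the MLP classifier, and the function names are illustrative assumptions.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def train_player_model(state_features, player_actions):
        # Learn to predict a specific player's chosen action from game-state features.
        model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
        return model.fit(state_features, player_actions)

    def choose_npc_action(model, current_state):
        # Drive a non-player character with the learned model of the human player,
        # so that its behavior roughly mirrors that player's skill and style.
        return model.predict(np.asarray(current_state, dtype=float).reshape(1, -1))[0]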
Using Pirate Plunder to Develop Children’s Abstraction Skills in Scratch
Scratch users often struggle to detect and correct ‘code smells’ (bad programming practices) such as duplicated blocks and large scripts, which can make programs difficult to understand and debug. These ‘smells’ can be caused by a lack of abstraction, a skill that plays a key role in computer science and computational thinking. We created Pirate Plunder, a novel educational block-based programming game, that aims to teach children to reduce smells by reusing code in Scratch. This work describes an experimental study designed to measure the efficacy of Pirate Plunder with children aged 10 and 11. The findings were that children who played the game were then able to use custom blocks (procedures) to reuse code in Scratch, compared to non-programming and programming control groups.
Analyzing Hate Speech Toward Players from the MENA in League of Legends
We analyze hate speech toward players from the MENA region as a form of toxic behavior in League of Legends in-game and forum chats. We find that this kind of toxicity: (1) is initiated by one or two players; (2) sparks from criticizing the skills of team members; (3) can be elevated by frustration with game elements and hardware; and (4) can turn into personal clashes. There is also non-toxic use of abusive language, which stresses the importance of context-aware analysis (i.e., interpreting what is actually toxic). Finally, we find evidence that the type of toxicity varies by server location, suggesting that gaming companies consider the location of players when setting up policies to mitigate hate speech.
MakerArcade: Using Gaming and Physical Computing for Playful Making, Learning, and Creativity
The growing maker movement has created a number of hardware and construction toolkits that lower the barriers to entry into programming for youth and others, using a variety of approaches such as gaming or robotics. Among constructionist-like kits that use gaming, many focus on designing and programming single-player games, and few explore physical and craft-like approaches that move beyond the screen and single-player experiences. Moving beyond the screen to incorporate physical sensors into the creation of gaming experiences provides new opportunities for learning about concepts in a variety of areas in computer science and making. In this early work, we elucidate our design goals and prototype for a mini-arcade system that builds upon principles in constructionist gaming (making games to learn programming) as well as physical computing.
Translating Affective Touch into Text
This paper presents a game-like experience that translates tactile input into text, which captures the emotional qualities of that touch. We describe the experience and the system that generates it: a plush toy instrumented with pressure sensors, a machine learning method that acquires a mapping from touch data into a feature vector of affect values, and a mechanism that transcribes that feature vector into text. We conclude by discussing the range of novel interactions that such a nuanced tactile interface can support.
Artificial Playfulness: A Tool for Automated Agent-Based Playtesting
User testing is commonly employed in games user research (GUR) to understand the experience of players interacting with digital games. However, recruitment and testing with human users can be laborious and resource-intensive, particularly for independent developers. To help mitigate these obstacles, we are developing a framework for simulated testing sessions with agents driven by artificial intelligence (AI). Specifically, we aim to imitate the navigation of human players in a virtual world. By mimicking the tendency of users to wander, explore, become lost, and so on, these agents may be used to identify basic issues with a game's world and level design, enabling informed iteration earlier in the development process. Here, we detail our progress in developing a framework for configurable agent navigation and simple visualization of simulated data. Ultimately, we hope to provide a basis for the development of a tool for simulation-driven usability testing in games.
Explorations of Voice User Interfaces for 3 to 4 Year Old Children
The design of Voice User Interfaces (VUIs) has mostly focused on applications for adults, but VUIs provide potential advantages to young children in enabling concurrent interactions with the physical and social world. Current applications for young children focus on media playing, answering questions, and highly-structured activities. There is an opportunity to go beyond these applications by using VUIs to support high-quality, creative social play. In this paper, we describe our first step in pursuing this opportunity with 24 design sessions guided by a partnership with eight 3 to 4 year old children. In a social play setting, we learned that children wanted to physically interact with the voice agents and VUIs could redirect behaviors and promote social interactions.
A Good Scare: Leveraging Game Theming and Narrative to Impact Player Experience
Game narrative and theming are ways in which game designers can affect player experience. In this work, we incorporate theories of emotion into game design, to explore the relationship between aesthetic elements and player experience. We designed and playtested two differently themed variants of the game Outbreak, a ‘Horror’ and a ‘Sanitized’ version. We present preliminary findings about playing differently themed versions of the same game which suggest that scary content can sustain interest throughout play and transform players’ emotional response to uncertainty.
Designing Performative, Gamified Cultural Experiences for Groups
Envisioning cultural institutions as "social places", where visitors can "create, share, and connect to each other around the cultural heritage content" (as defined by Nina Simon), we explore how to design cultural group experiences that combine personal moments of reflection with social encounters. In previous work we proposed a storytelling game where visitors conceive and narrate stories about the artworks, orchestrating group interactions according to the game phases. Playtesting with physical materials revealed promising potential, cultivating theatrical narrations, lively discussions, and fruitful social interactions. Here we present a mobile-based group experience design for gamified cultural visits with performative elements, leveraging the trajectories HCI framework. We highlight the key role, interface, and space transitions encountered in the experience, elaborate on the adopted design choices, and reflect on the main challenges and future directions.
EyeR: Detection Support for Visually Impaired Users
Lack of adequate support in navigation and object detection can limit the independence of visually impaired (VI) people in their daily routines. Common solutions include white canes and guide dogs. White canes are useful for object detection but require physically touching objects with the cane, which may be undesired. Guide dogs allow navigation without touching objects in the vicinity but cannot help with object detection. Addressing this gap with a user-centric research approach, we aim to find a solution that improves the independence of VI people. We began by gathering requirements through online questionnaires. Working from these, we built a prototype of a glove, which we call EyeR, that alerts its user when an obstacle is detected at the pointed position. Lastly, we evaluated EyeR with VI users and found that, in use, our prototype provides real-time feedback and is helpful in navigation. We also report recommendations for future VI prototypes from our participants, who would additionally like the device to recognise objects.
MYND: A Platform for Large-scale Neuroscientific Studies
We present a smartphone application for at-home participation in large-scale neuroscientific studies. Our goal is to establish user-experience design as a paradigm in basic neuroscientific research to overcome the limits of current studies, especially in rare neurological disorders. The presented application guides users through the fitting procedure of the EEG headset and automatically encrypts and uploads recorded data to a remote server. User feedback and neurophysiological data from a pilot study with eighteen subjects indicate that the application can be used outside of a laboratory, without the need for external guidance. We hope to inspire future work at the intersection of basic neuroscience and human-computer interaction as a promising paradigm to accelerate research on rare neurological diseases and assistive neurotechnology.
ReMiND: Improving Emotional Awareness for Persons in Recovery
This paper introduces ReMiND, a wearable for emotional awareness and mindfulness that targets individuals in recovery. Through a human-centered design process, working in conjunction with twelve-step fellowship groups, we conceived ReMiND as a tool to help those in recovery improve emotional awareness, reduce isolation, form new habits, and positively cope with new challenges.
Entertainment for All: Understanding Media Streaming Accessibility
Despite the increased demand, popularity, and cultural significance of streaming media and digital entertainment, many individuals with disabilities are unable to experience this content. Specifically, many video streaming technologies require input devices and content browsers that are inaccessible to individuals with sensory and physical impairments and do not work with their current assistive technologies. Our team of engineers, designers, and clinicians took an inclusive approach to assessing and redesigning these streaming service products, with the aim of creating more universally accessible experiences. We recruited 9 participants with diverse abilities to evaluate the accessibility of a large telecommunication company's commercially available video streaming products. This evaluation revealed significant accessibility barriers and informed the design of a participatory design activity to create accessible remote controls, an onboarding assistance prototype, and a content browsing prototype that is screen reader accessible and supports audio descriptions. We evaluated these accessible prototypes with 11 additional participants and found they were more accessible, flexible, and enjoyable to use compared to the off-the-shelf products. In this paper we summarize these findings and discuss how future streaming technology must support customization and follow established accessibility guidelines and standards.
Development and Initial Validation of a Scale to Measure Engagement with eHealth Technologies
In eHealth, engagement is viewed as an important factor in explaining why interventions are beneficial to some and not to others. However, a shared understanding of what engagement is and how to measure it is missing. This paper presents the set-up for the development and initial validation of a new scale to measure engagement with eHealth interventions, based on scientific literature and interviews with users and experts. Furthermore, it presents the preliminary results of a systematic review, 11 interviews with engaged users, and the first version of the new engagement scale. It is expected that the final scale, which will be based on theoretical and empirical research and focus on the different components of engagement, will enable researchers to investigate which features and forms of technology influence individuals' engagement and thereby pave the way to creating more engaging technologies.
BEYES: A Shopping Solution for Independent Clothing Experiences of the Visually Impaired
While clothing is one of the requisites of human life, visually impaired people face limitations when shopping and need to rely on acquaintances or shop assistants to choose and purchase clothes. Given that fashion is a means of self-expression, their independence in expressing themselves has been limited. In this work, we sought to provide a technological solution that helps visually impaired people distinguish the colors of clothes while shopping. After conducting exploratory user studies, we derived design requirements and developed a voice-based mobile application that guides users through recognizing and choosing the colors of clothing.
Older Adults and Voice Interaction: A Pilot Study with Google Home
In this paper we present the results of an exploratory study examining the potential of voice assistants (VA) for some groups of older adults in the context of Smart Home Technology (SHT). To investigate older adults' interaction with voice user interfaces (VUI), we organized two workshops and gathered insights concerning possible benefits of, and barriers to, the use of VA combined with SHT by older adults. Apart from evaluating the participants' interaction with the devices during the two workshops, we also discuss some improvements to the VA interaction paradigm.
AUFLIP – An Auditory Feedback System Towards Implicit Learning of Advanced Motor Skills
How can people learn advanced motor skills such as front flips and tennis swings without starting from a young age? The answer, following the work of Masters et al., we believe, is implicitly. Implicit learning is associated with higher retention and knowledge transfer, but it produces knowledge that cannot be explicitly articulated as a set of rules. Implicit learning is difficult to achieve, but it may be induced using obscured feedback, that is, feedback that provides so little information that a user does not overfit a mental model of their target action. We have designed an auditory feedback system, AUFLIP, that describes high-level properties of an advanced movement using a simplified and validated physics model of the flip. We further detail the implementation of a wearable system, an optimized placement procedure, and a takeoff capture strategy to realize this model. With an audio cue pattern that conveys this high-level, obscured objective, the system is integrated into a gymnastics-training environment where professional coaches teach novice adults how to perform front flips. We perform a pilot user study training front flips, evaluated using a matched-pair comparison.
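The paper's validated flip model is not reproduced here, but as a rough illustration of what a simplified physics model of a flip can provide, the Python sketch below (an assumption-laden toy model, not AUFLIP's) derives airborne time from vertical takeoff velocity and the average rotation rate needed to complete one full rotation.

    import math

    G = 9.81  # gravitational acceleration, m/s^2

    def flip_requirements(takeoff_vertical_velocity):
        # Ballistic flight time for a jump that lands at takeoff height,
        # and the average angular velocity needed for one full (2*pi) rotation.
        flight_time = 2 * takeoff_vertical_velocity / G
        required_omega = 2 * math.pi / flight_time
        return flight_time, required_omega

    # Example: a 3 m/s vertical takeoff gives roughly 0.61 s in the air,
    # so the average rotation rate must be about 10.3 rad/s or more.
    print(flip_requirements(3.0))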
DryNights: A Self-powered Bedwetting Alarm for Children
This extended abstract describes the concept of DryNights, a bedwetting behaviour change support system consisting of a self-powered sensor and a mobile application, developed in collaboration with LifeSense Group (a spinoff of Imec and Holst Centre) and the Eindhoven University of Technology. The sensor uses the principle of an electrochemical cell to generate its own electricity and to power a harmless, wireless signal transmission from the sensor to a mobile device. The mobile application has been designed in collaboration with children (N=75) from the target group. The reliability of the signal transmission and the range of the sensor have been successfully evaluated in a small-scale experiment. Trials with children wearing the sensor to sleep are currently under way and suggest that DryNights is comfortable and that children experience it positively.
Distributed User-Generated Card Based Co-Design: A Case-Study
Involving end-users is considered key to the successful design of technology. It can be challenging, however, to involve end-users when designing healthcare technology, due to the limited availability of patients because of their condition or treatment. This is especially difficult when co-designing healthcare technology, which often requires several end-users to collaborate in group activities such as ideation exercises and brainstorms. In an exploration of co-design methods that do not require participants to be co-located, this paper describes initial results from a small-scale ideation workshop in the context of fertility treatment. Preliminary data analysis suggests that user-generated card-based ideation could be used to inspire ideation while transferring knowledge and ideas between participants who are not co-located. This approach could benefit researchers in healthcare technology settings that use co-design, and in other domains where the availability of participants is limited.
Augmented Reality for Early Alzheimer’s Disease Diagnosis
Alzheimer's disease (AD), the most common type of dementia, is characterised by gradual memory loss. There is an increasing global research effort into strategies for early clinic-based diagnosis at the stage where patients present with mild memory problems. Initiating treatment at this stage would slow the progression of the condition and enable more years of good quality life. This paper presents the ongoing development of an augmented reality system using HoloLens that is designed to test for early onset of Alzheimer's disease. The most important aspects of early AD diagnostics are the symptoms connected with early memory loss, in particular spatial memory. The ability to store and retrieve the memory of a particular event, involving an association between items such as the place and the object properties, is incorporated into a game environment.
The Point-of-Choice Prompt or the Always-On Progress Bar?: A Pilot Study of Reminders for Prolonged Sedentary Behavior Change
Prolonged sedentary behavior contributes to many chronic diseases. An appropriate reminder could help screen-based workers reduce their prolonged sedentary behavior. The fixed-duration point-of-choice prompt has been frequently used in related work, but this prompting approach has several drawbacks. In this paper, we propose the SedentaryBar, a context-aware reminding system that uses an always-on progress bar to show the duration of a working session, as an alternative to the prompt. The system uses both the user's keyboard/mouse events on the computer and a state-of-the-art computer vision algorithm with the webcam to detect the user's presence, which makes it more accurate and intelligent. Our evaluation study compared the SedentaryBar and the prompt using subjective and objective measurements. After using each method for a week, more participants preferred the SedentaryBar, and participants' ratings of perceived interruption and usefulness also favored it. However, the logged data of the participants' working durations indicated the prompt was more effective in reducing their sedentary behavior.
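As an illustration of the always-on progress bar idea, here is a minimal Python sketch of session-timer logic of the kind such a system might use; it is not the authors' code, the thresholds are arbitrary assumptions, and presence detection is reduced to a stub that combines recent keyboard/mouse input with a hypothetical webcam flag.

    import time

    SESSION_LIMIT_S = 50 * 60   # assumed threshold for a "prolonged" sitting session
    IDLE_RESET_S = 5 * 60       # assumed absence length that resets the session

    def user_present(last_input_ts, webcam_detects_face):
        # Presence heuristic: recent keyboard/mouse input OR a face seen by the webcam.
        return webcam_detects_face or (time.time() - last_input_ts) < 60

    def update_progress(session_start, absent_since, last_input_ts, webcam_detects_face):
        # Return progress in [0, 1] for the always-on bar, resetting after a long break.
        now = time.time()
        if user_present(last_input_ts, webcam_detects_face):
            absent_since = None
        else:
            absent_since = absent_since or now
            if now - absent_since >= IDLE_RESET_S:
                session_start = now  # the user took a break; restart the session
        progress = min(1.0, (now - session_start) / SESSION_LIMIT_S)
        return progress, session_start, absent_since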
SMART 2.0: A Multimodal Weight Loss Intervention for Young Adults
A significant number of young Americans are vulnerable to excess weight gain, especially during the college years. While technology-based weight loss interventions have the potential to be very engaging, short-term approaches showed limited success. In our work we aim to better understand the impact of long-term, multimodal, technology-based weight loss interventions, and study their potential for greater effect among college students. In this paper we lay the basis for our approach towards a multimodal health intervention for young adults: we present formative work based on interviews and a design workshop with 26 young adults. We discuss our intervention at the intersection of user feedback, empirical evidence from previous work, and behavior change theory.
Designing Family-Centered Aids for the Intensive Care Unit
Family member involvement has been shown to be key to the well-being and recovery of patients in an Intensive Care Unit (ICU), but family members often find themselves overwhelmed and in an emotionally heightened state. ICU care teams, especially nurses, are typically considered to be in the best position to help and support family members of patients. However, the heavy workload, lack of time, and personal interaction styles can make it difficult for them to be receptive to family member needs. To understand how current aids in the ICU are used and the challenges associated with them, we conducted 22 interviews with both family members and the care team. We also created prototypes of family-centered aids through a co-design session to reveal the opportunities for technology to facilitate family member support in the ICU without placing additional burdens on the care team.
Facebook for Support versus Facebook for Research: The Case of Miscarriage
Researchers use Asynchronous Remote Communities (ARC) to reach out to target populations who may find it hard to meet in person, or make time for telephone interviews. So far, ARC studies have been conducted using closed and secure groups on Facebook, because most participants are active members of this social network. However, it is not clear how participants’ Facebook usage might affect their engagement with an ARC study. In this paper, we report a secondary analysis of a recent ARC study of women who had experienced at least one miscarriage that focused on their information and social support needs. We find participants tend to be comfortable with seeking emotional support on Facebook, and even those who say they rarely post to Facebook engage with most group activities. We discuss implications for choosing platforms for ARC studies.
Socializing via a Scarf: Individuals with Intellectual and Developmental Disabilities Explore Smart Textiles
Smart textiles, a subset of wearable technologies, inspire novel human interactions that leverage people's physical and cognitive actions. However, people with intellectual and developmental disabilities (IDD) have different communication, social, and sensory experiences than non-disabled people. We empirically investigate how adults with IDD determine the functionality of smart textiles and which smart textile qualities impact comfort. We demonstrate a research design method that elicits social interaction among adults with IDD. We found that smart textiles facilitate embodied and social interactions among adults with IDD. We also present an emerging design space based on smart textile capabilities and user needs.
MojiBoard: Generating Parametric Emojis with Gesture Keyboards
Inserting emojis can be cumbersome when users must switch between panels. From our survey, we learned that users often use a series of consecutive emojis to convey rich, nuanced non-verbal expressions such as emphasis, changes of expression, or micro stories. We introduce MojiBoard, an emoji entry technique that enables users to generate dynamic parametric emojis from a gesture keyboard. With MojiBoard, users can switch seamlessly between typing and parameterizing emojis.
3D Positional Movement Interaction with User-Defined, Virtual Interface for Music Software: MoveMIDI
This paper describes progress in the design and development of a new digital musical instrument (MIDI controller), MoveMIDI, and highlights its unique 3D positional movement interaction design, which differs from recent orientational and gestural approaches. A user constructs and interacts with MoveMIDI's virtual 3D interface using handheld position-tracked controllers to control music software, as well as non-musical technology such as stage lighting. MoveMIDI's virtual interface helps solve problems that are difficult to address with hardware MIDI controller interfaces, such as customized positioning and instantiation of interface elements and accurate, simultaneous control of independent parameters. MoveMIDI's positional interaction mirrors interaction with some physical acoustic instruments and provides visualization for an audience. Beta testers of MoveMIDI have created emergent use cases for the instrument.
The ‘Magic Paradigm’ for Programming Smart Connected Devices
We are surrounded by an increasing number of smart and networked devices. Today much of this technology is enjoyed by gadget enthusiasts and early adopters, but in the foreseeable future many people will become dependent on smart devices and Internet of Things (IoT) applications, desired or not. To support people with various levels of computer skills in mastering smart appliances as found, e.g., in smart homes, we propose the 'magic paradigm' for programming networked devices. Our work can be regarded as a playful 'experiment' towards democratizing IoT technology. It explores how we can program interactive behavior through simple pointing gestures using a tangible 'magic wand'. While the 'magic paradigm' removes barriers to programming by forgoing conventional coding, it simultaneously raises questions about complexity: what kinds of tasks can be addressed by this kind of 'tangible programming', and can people handle it as tasks become complex? We report the design rationale of a prototypical instantiation of the 'magic paradigm', including preliminary findings of a first user trial.
Exploring Configuration of Mixed Reality Spaces for Communication
Mixed Reality (MR) enables users to explore scenarios not realizable in the physical world. This allows users to communicate with the help of digital content. We investigate how different configurations of participants and content affect communication in a shared immersive environment. We designed and implemented side-by-side, mirrored face-to-face and eyes-free configurations in our multi-user MR environment and conducted a preliminary user study for our mirrored face-to-face configuration, evaluating with respect to one-to-one interaction, smooth focus shifts and eye contact within a 3D presentation using the interactive Chalktalk system. We provide experimental results and interview responses.
Ohmic-Sticker: Force-to-Motion Type Input Device for Capacitive Touch Surface
We propose "Ohmic-Sticker", a novel force-to-motion input device that extends capacitive touch surfaces. It realizes various types of force-sensitive input when attached to commercial capacitive touch surfaces. A simple force-sensitive-resistor (FSR)-based structure enables thin (less than 2 mm) form factors and battery-less operation. The applied force vector is detected as the leakage current from the corresponding touch surface electrodes using "Ohmic-Touch" technology. Ohmic-Sticker can be used to add force-sensitive interactions to touch surfaces, such as analog push buttons, TrackPoint-like pointing devices, and full 6-DoF controllers for navigating virtual spaces.
One-handed Rapid Text Selection and Command Execution Method for Smartphones
We present a one-handed rapid text selection and command execution method for smartphones, which we term Press & Slide. The user can perform caret navigation or text selection by sliding a finger on the software keyboard after pressing a key. Then, by releasing the key, a command such as "copy the selected text" is executed; the command is specified by the key that was pressed. The user therefore does not need to touch the text, so the fat-finger problem does not arise, and the user does not need to change his or her grip on the smartphone.
Assessing the Feasibility of Speech-Based Activity Recognition in Dynamic Medical Settings
We describe an experiment conducted with three domain experts to understand how well they can recognize types and performance stages of activities using speech data transcribed from verbal communications during dynamic medical teamwork. The insights gained from this experiment will inform the design of an automatic activity recognition system to alert medical teams to process deviations in real time. We contribute to the literature by (1) characterizing how domain experts perceive the dynamics of activity-related speech, and (2) identifying the challenges associated with system design for speech-based activity recognition in complex team-based work settings.
Are You There?: Identifying Unavailability in Mobile Messaging
Delays in response to mobile messages can cause negative emotions in message senders and can affect an individual's social relationships. Recipients, too, feel a pressure to respond even during inopportune moments. A messaging assistant that could respond with relevant contextual information on behalf of individuals while they are unavailable might reduce the pressure to respond immediately and help put the sender at ease. By modelling attentiveness to messaging, we aim to (1) predict instances when a user is not able to attend to an incoming message within reasonable time and (2) identify what contextual factors can explain the user's attentiveness, or lack thereof, to messaging. In this work, we investigate two approaches to modelling attentiveness: a general approach, in which data from a group of users is combined to form a single model for all users; and a personalized approach, in which an individual model is created for each user. Evaluating both models, we observed that on average, with just seven days of training data, the personalized model can outperform the generalized model in terms of both accuracy and F-measure for predicting inattentiveness. Further, we observed that in the majority of cases, the messaging patterns identified by the attentiveness models varied widely across users. For example, the top feature in the generalized model appeared in the top five features of only 41% of the individual personalized models.
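A minimal Python sketch of the two modelling approaches, assuming each user's context features X and binary attentiveness labels y have already been extracted; the random-forest classifier and the data layout are illustrative assumptions rather than the authors' implementation.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score

    def generalized_model(train_by_user, test_by_user):
        # One model trained on all users' pooled context features.
        X = np.vstack([X_tr for X_tr, _ in train_by_user.values()])
        y = np.concatenate([y_tr for _, y_tr in train_by_user.values()])
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        return {u: f1_score(y_te, clf.predict(X_te)) for u, (X_te, y_te) in test_by_user.items()}

    def personalized_models(train_by_user, test_by_user):
        # One model per user, trained only on that user's own (e.g., first seven days of) data.
        scores = {}
        for u, (X_tr, y_tr) in train_by_user.items():
            clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
            X_te, y_te = test_by_user[u]
            scores[u] = f1_score(y_te, clf.predict(X_te))
        return scores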
Older Adults as Makers of Custom Electronics: Iterating on Craftec
Researchers have designed technologies for and with older adults to help them age in place, but there is an opportunity to support older adults in creating customized smart devices for themselves through electronic toolkits. We developed a plan for iterating on Craftec – one of the first electronic toolkits designed for older adults – informed by the results of a participatory design workshop and user evaluation. We focused on supporting older adults to create exemplar artifacts, such as medication adherence systems. We contribute the exemplars and the current plan for components of the Craftec system as a way to support older adults to design technology for themselves.
Takeover and Handover Requests using Non-Speech Auditory Displays in Semi-Automated Vehicles
Since non-speech sounds can convey urgency well, they have been used as alerts in the vehicle context, including for control transitions (handover and takeover) in automated vehicles. However, their potential for use in international standards has not been fully investigated. To contribute to such standards, the present paper investigated the effects of various non-speech displays in order to further refine auditory variables. Twenty-four young drivers drove in a driving simulator with both handover and takeover transitions between manual and automated modes while performing a secondary task. The reaction times for handover and takeover, along with the results of a sound user experience questionnaire, are reported together with discussion and future work.
Multimodal Displays for Take-over in Level 3 Automated Vehicles while Playing a Game
Take-over is one of the most crucial user interactions in semi-automated vehicles. To improve communication between driver and vehicle, research has been conducted on various take-over request displays, yet their potential has not been fully investigated. The present paper explored the effects of adding auditory displays to visual text. Earcon and speech showed the best performance and acceptance, and spearcon the least. This study is expected to provide basic data and guidelines for future research and design practice.
From Motions to Emotions: Classification of Affect from Dance Movements using Deep Learning
This work investigates the classification of emotions from full-body MoCap data using Convolutional Neural Networks (CNNs). Rather than addressing regular day-to-day activities, we focus on a more complex type of full-body movement: dance. For this purpose, a new dataset was created which contains short excerpts of the performances of professional dancers who interpreted four emotional states: anger, happiness, sadness, and insecurity. Fourteen minutes of motion capture data are used to explore different CNN architectures and data representations. The results of the four-class classification task are up to 0.79 (F1 score) on test data from other performances by the same dancers. Hence, through deep learning, this paper proposes a novel and effective method of emotion classification, which can be exploited in affective interfaces.
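The paper's exact architectures are not given here; the following is a minimal PyTorch sketch of the general approach, a 1D convolutional network over fixed-length windows of joint coordinates with four emotion classes. The layer sizes, window length, and joint count are illustrative assumptions.

    import torch
    import torch.nn as nn

    class MoCapEmotionCNN(nn.Module):
        # Minimal 1D CNN over fixed-length windows of full-body joint coordinates.
        # Input shape: (batch, channels = n_joints * 3, time_steps); output: 4 emotion logits.
        def __init__(self, n_channels=60, n_classes=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.classifier = nn.Linear(128, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).squeeze(-1))

    # Example: a batch of 8 windows, 20 joints x 3 coordinates, 120 frames each.
    logits = MoCapEmotionCNN()(torch.randn(8, 60, 120))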
WONDER — Enhancing VR Training with Electrical Muscle Stimulation
Training employees on workplace procedures in virtual environments (VEs) is becoming popular since it reduces cost and risk. Although haptic enhancements with force feedback make such VEs more realistic and increase performance, such enhancements are only available for 'spatial' scenarios. One potential enhancement for low-cost VEs is electrical muscle stimulation (EMS), but it remains open how EMS can be used to support trainees. We therefore present WONDER, a virtual training environment with an EMS feedback enhancement layer. In an initial study, we show the feasibility of the approach and that it can successfully support trainees in remembering workflows. We test feedback that supports participants by pushing their hand towards a button or pulling their hand away from it. Participants preferred a combination of both feedback types.
Exploring Word-gesture Text Entry Techniques in Virtual Reality
Efficient text entry is essential to any computing system. However, text entry methods in virtual reality (VR) currently lack the predictive aid and physical feedback that allow users to type efficiently. State-of-the-art methods, such as physical keyboards with tracked hand avatars, require a complex setup that might not be accessible to the majority of VR users. In this paper, we propose two novel ways to enter text in VR: (1) word-gesture typing using six-degrees-of-freedom (6DOF) VR controllers and (2) word-gesture typing using pressure-sensitive touchscreen devices. Our early-stage pilot experiment shows that users were able to type at 16.4 WPM and 9.6 WPM with the two techniques respectively without any training, while an expert's typing speed reached up to 34.2 WPM and 22.4 WPM. Users subjectively preferred the VR controller method over the touchscreen one in terms of usability and task load. We conclude that both techniques are practical and deserve further study.
YourAd: A User Aligned, Personal Advertising System
Advertisers have optimized the periphery of our attention to drive complex purchasing behavior, typically using persuasive or rhetorical techniques to promote decisions that are agnostic to our best interests. Instead of serving the ambitions of companies with large marketing budgets, what if these techniques were used to reinforce the behaviors and attachments we choose for ourselves? YourAd is an open-source browser extension and design tool that allows users to supplant their internet ads with custom replacements designed by and for themselves. YourAd incorporates industry best practices into a platform to facilitate experimentation with user-aligned advertisement ecosystems, probe the limits of their influence, and optimize their design in support of an end user's personal aspirations.
Paralympic VR Game: Immersive Game using Virtual Reality and Video
In the last few years, interest in virtual reality has been increasing, partly due to the emergence of cheaper and more accessible hardware and the growth in available content. One possible application of virtual reality is to lead people to see situations from a different perspective, which can help change beliefs. The work reported in this paper uses virtual reality to help people better understand paralympic sports by allowing them to experience the sports' world from the athletes' perspective. For the creation of the virtual environment, both computer-generated elements and 360 video are used. This work focused on wheelchair basketball, and a simulator of this sport was created using a game engine (Unity 3D). For the development of this simulator, computer-generated elements were built, and the interaction with them was implemented. User studies were conducted to evaluate the sense of presence, motion sickness, and usability of the system developed. The results were positive, although some aspects could still be improved.
Using Social Platforms to Increase Engagement in Teaching Computer Programming
Programming has the potential to bring to life even the most minute products of the imagination. Imagination it will remain, however, if no appropriate intervention is made to facilitate the learning of programming. Research studies show that the traditional, face-to-face method of teaching does not provide an enabling environment for learning programming; hence, outside-classroom interventions are called for. Previous studies have built new tools to support such interventions, but there is a need to study the use of existing, familiar, and relaxed environments, such as social media, for this purpose. In this paper, we investigate the capability of a social media platform to support the learning of programming among learners in the developing world. We chose the WhatsApp platform as the starting point to uncover these design needs, and we reflect on the lessons learnt from this intervention.
Cyborg Botany: Exploring In-Planta Cybernetic Systems for Interaction
Our traditional interaction possibilities have centered around our electronic devices. In recent years, progress in electronics and materials science has enabled us to go beyond the chip layer and work at the substrate level. This has helped us rethink form, sources of power, hosts, and in turn new interaction possibilities. However, the design of such devices has mostly been ground-up and fully synthetic. In this paper, we discuss the analogy between artificial functions and natural capabilities in plants. Through two case studies, we demonstrate bridging unique natural operations of plants with the digital world. Each desired synthetic function is grown, injected carefully, or placed in conjunction with a plant's natural functions. Our goal is to make use of the sensing and expressive abilities of nature for our interaction devices. Merging synthetic circuitry with a plant's own physiology could pave the way to making these lifeforms responsive to our interactions and to their ubiquitous, sustainable deployment.
Towards Understanding Interactions with Multi-Touch Spherical Displays
Interactive spherical displays offer unique educational and entertainment opportunities for both children and adults in public spaces. However, designing interfaces for spherical displays remains difficult because we do not yet fully understand how users naturally interact with and collaborate around spherical displays. This paper reports current progress on a project to understand how children (ages 8 to 13) and adults interact with spherical displays in a real-world setting. Our initial data gathering includes an exploratory study in which children and adults interacted with a prototype application on a spherical display in small groups in a public setting. We observed that child groups tended to interact more independently around the spherical display, whereas adult groups interacted with the sphere in a driver-navigator mode and did not tend to walk around the sphere. This work will lay the groundwork for future research into designing interactive applications for spherical displays tailored towards users of all age groups.
Controlling Temporal Change of a Beverage’s Taste Using Electrical Stimulation
In this paper, we discuss concepts for improving the beverage experience using electrical stimulation in terms of the requirements for the procedure (design space), previous studies (completion), and the limitations of conventional technologies. Electrical stimulation has been proposed as a method for changing a beverage's taste to improve the beverage experience. However, the taste of a beverage changes over time: its taste during drinking differs from its taste after swallowing. There are time-series methods for evaluating the taste of a beverage, such as the Time-Intensity (TI) method and the Temporal Dominance of Sensations (TDS) method. It is therefore important to focus on the temporal change in a beverage's taste when improving the beverage experience. As a first step, we focus on the taste before and after swallowing. Based on this review, we propose the concept of controlling the temporal change of a beverage's taste using electrical stimulation.
Ways of Qualitative Coding: A Case Study of Four Strategies for Resolving Disagreements
The process of qualitative coding often involves multiple coders coding the same data to ensure reliable codes and a consistent understanding of the codebook. One aspect of qualitative coding includes resolving disagreements, where coders discuss differences in coding to reach a consensus. We conduct a case study to evaluate four strategies of disagreement resolution and understand their impact on the coding process. We find that an open discussion and the n-ary tree metric lead coders to focus more on the disagreement of a particular data instance, whereas kappa values and Code Wizard direct coders to compare code definitions. We discuss opportunities for using different strategies at different stages of the coding process for more effective disagreement resolution.
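For readers unfamiliar with the kappa-based strategy mentioned above, Cohen's kappa corrects raw inter-coder agreement for the agreement expected by chance. A small self-contained Python computation, where the codes and labels are made up purely for illustration:

    from collections import Counter

    def cohen_kappa(codes_a, codes_b):
        # Agreement between two coders, corrected for chance agreement.
        n = len(codes_a)
        observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
        freq_a, freq_b = Counter(codes_a), Counter(codes_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        if expected == 1:
            return 1.0  # coders always used the same single code
        return (observed - expected) / (1 - expected)

    # Two coders labeling ten data instances with codes from a shared (hypothetical) codebook.
    coder_1 = ["barrier", "coping", "coping", "barrier", "support",
               "coping", "barrier", "support", "coping", "barrier"]
    coder_2 = ["barrier", "coping", "support", "barrier", "support",
               "coping", "coping", "support", "coping", "barrier"]
    print(round(cohen_kappa(coder_1, coder_2), 2))  # 0.8 raw agreement, ~0.7 kappa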
Feeling-of-Safety Slider: Measuring Pedestrian Willingness to Cross Roads in Field Interactions with Vehicles
Can interactions between automated vehicles and pedestrians be evaluated in a quantifiable and standardized way? In order to answer this, we designed an input device in the form of a continuous slider that enables pedestrians to indicate their willingness to cross a road and their feeling of safety in real time in response to an approaching vehicle. In an initial field study, 71% of the participants reported that they were able to use the device naturally and indicate their feeling of safety satisfactorily. The feeling-of-safety slider can consequently be used to evaluate and benchmark interactions between pedestrians and vehicles, and compare communication interfaces for automated vehicles.
The Impact of Placebic Explanations on Trust in Intelligent Systems
Work in social psychology on interpersonal interaction has demonstrated that people are more likely to comply with a request if they are presented with a justification, even if this justification conveys no information. In light of the many calls for explaining the reasoning of interactive intelligent systems to users, we investigate whether this effect holds true for human-computer interaction. Using a prototype of a nutrition recommender, we conducted a between-groups lab study (N=30) with three conditions (no explanation, placebic explanation, and real explanation). Our results indicate that placebic explanations for algorithmic decision-making may indeed invoke perceived levels of trust similar to real explanations. We discuss how placebic explanations could be considered in future work.
Can Privacy-Aware Lifelogs Alter Our Memories?
The abundance of automatically-triggered lifelogging cameras is a privacy threat to bystanders. Countering this by deleting photos limits relevant memory cues and the informative content of lifelogs. An alternative is to obfuscate bystanders, but it is not clear how this impacts the lifelogger’s recall of memories. We report on a study in which we compare viewing 1) unaltered photos, 2) photos with blurred people, and 3) a subset of the photos after deleting private ones, on memory recall. Findings show that obfuscated content helps users recall a lot of content, but it also results in recalling less accurate details, which can sometimes mislead the user. Our work informs the design of privacy-aware lifelogging systems that maximizes recall and steers discussion about ubiquitous technologies that could alter human memories.
Robots are Always Social: Robotic Movements are Automatically Interpreted as Social Cues
Physical movement is a dominant element of robot behavior. We evaluate whether robotic movements are automatically interpreted as social cues, even if the robot has no social role. 24 participants performed the Implicit Association Test, classifying robotic gestures into direction categories ("to-front" or "to-back") and words into social categories (willingness or unwillingness for interaction). Our findings show that social interpretation of the robot's gestures is an automatic process. The implicit social interpretation influenced both classification tasks and could not be avoided even when it decreased participants' performance. This effect is of importance for the HCI community: designers should consider that even if a robot is not intended for social interaction (e.g., a factory robot), people will not be able to avoid interpreting its movement as social cues. Interaction designers should leverage this phenomenon and consider the social interpretation that will be automatically associated with their robots' movement.
The Role of HCI in Reproducible Science: Understanding, Supporting and Motivating Core Practices
The reproducibility crisis refers to the inability to reproduce scientific experiments and is one of science’s great challenges. Alarming reports and growing public attention are leading to the development of services and tools that aim to support key reproducible practices. In the face of this rapid evolution, we envision the unique opportunity for Human-Computer Interaction to impact scientific practice through the systematic study of requirements and moderating effects of technology on research reproducibility. In this paper, we report on the current state of technological and human factors in reproducible science and present challenges and opportunities for both HCI researchers and practitioners to understand, support and motivate core practices.
Accessible Instruments in the Wild: Engaging with a Community of Learning-Disabled Musicians
Disabled people face many barriers to access in all areas of life, including creative expression. In music making, a lack of accessible instruments can be a major barrier, as can environmental factors. The Strummi is an accessible instrument based on the guitar, designed as a technology probe to explore the technical and cultural role of guitar-like design and interaction modality. We have been collaborating with Heart n Soul, an arts charity that works with young people and adults with learning disabilities. In this paper, we share findings from the first year of this collaboration and reflect on the implications for doing HCI research with learning-disabled communities. We took a longitudinal, situated approach with an intentionally simple technology, inspired by in-the-wild and technology probe methodologies, allowing interest in the Strummi to grow organically.
Designing the Next Generation of Activity Trackers for Performance Sports: Insights from Elite Tennis Coaches
Wearable sport technologies and activity trackers help sportspeople by providing physiological information on their performance. However, professional sportspeople find this information irrelevant due to their high-performance training. They want these devices to provide real-time assistive feedback on their performance, despite the formidable limitations suggested by previous research on giving such feedback. On the other hand, sport coaches already give performance feedback to their sportspeople during their performance.
We speculated that some of their approaches might offer clues for designing activity trackers with useful real-time performance feedback. Consequently, we interviewed six elite tennis coaches to explore their approaches to communicating performance information to their players during tennis games. In this paper, we discuss the findings by comparing them with related work and form two design insights for giving real-time performance feedback that might lead to novel approaches for activity trackers.
DIY Community WiFi Networks: Insights on Participatory Design
This paper presents a first version of a set of insights developed collaboratively by researchers during a three-year participatory design project spread across four European locations. The MAZI project explored potential uses of a “Do-It-Yourself” WiFi networking technology platform. Built using low-cost Raspberry Pi computer hardware and specially developed, open-source software, this toolkit has the potential to enable hyper-local applications and services to be developed and maintained within a host community for its own use. The nine insights are a distillation and articulation of the collective reflections of the project partners gained from their experiences of working in diverse settings with varied communities and stakeholders. In this paper, we discuss the reflective process, we present the insights to the CHI community in order to gain feedback, and we situate our findings within previous literature.
Trail of Hacks: Poster Co-Design as a Tool for Collaborative Reflection
This paper describes a process of collaborative sense-making through co-designing a reflective poster. We used this method in the ‘Empowering Hacks’ project, which brought together two non-academic individuals with disabilities (authors 2 & 3) and a non-disabled HCI researcher (author 1) around DIY/Making by, for and with people with disabilities. To collectively review the achievements and challenges we experienced in this project, we designed a timeline which allowed us to equally engage in reflective thinking and curatorial discussions on how to present and explain identified key moments. We see the instance of this co-created poster as an opportunity to discuss with the CHI community the potential and relevance of including research participants in analysis processes.
Chats with Bots: Balancing Imitation and Engagement
Advances in AI are paving the way towards more natural interactions, blurring the line between bot and human. We present findings from a two-week diary study exploring users’ interactions with the chatbot Replika. In particular, we focus on how users anthropomorphize chatbots and how this influences their engagement. We find that failing to adhere to social norms and glaring signs of humanity leads to decreased engagement unless balanced appropriately.
Wake-Up Task: Understanding Users in Task-based Mobile Alarm App
Popular alarm apps offer task-based alarms that do not allow the user to dismiss an alarm unless they complete a specific task (e.g., solving math problems). Because such wake-up tasks cause discomfort, their usefulness and necessity may vary across individuals and contexts. In this work, we aimed to understand the characteristics of Alarmy (a task-based alarm app) users who like or dislike wake-up tasks in terms of alarm set usage. We grouped 8,500 US users into three groups according to the proportion of task selection and investigated group-wise usage differences. We found significant usage differences among the groups in terms of (1) set frequency, (2) set time, and (3) set consistency, possibly caused by consistent needs and task difficulty. The results suggest promising directions for research on inconvenient interaction and behavior change.
Push Away the Smartphone: Investigating Methods to Counter Problematic Smartphone Use
There is a growing need to support people in countering problematic smartphone use. We analyse related research on methods to address problematic usage and identify a research gap in off-device retraining. We ran a pilot to address this gap, targeting automatic approach biases for smartphones, delivered on a tabletop surface. Our quantitative analysis (n=40) shows that self-report and response-time based measures of problematic smartphone usage diverge. We found no evidence that our intervention altered reaction time-based measures. We outline areas of discussion for further research in the field.
"I Almost Fell in Love with a Machine": Speaking with Computers Affects Self-disclosure
Listening and speaking are tied to human experiences of closeness and trust. As voice interfaces gain mainstream popularity, we ask: is our relationship with technology that speaks with us fundamentally different from technology we use to read and type? In particular, will we disclose more about ourselves to computers that speak to us and listen to our answer? We examine this question through a controlled experiment where a conversational agent asked participants closeness-generating questions common in social psychology through either text-based or voice-based interfaces. We found that people skipped more invasive questions when reading-typing compared to speaking-listening. Surprisingly, typing in their answers seemed to increase the propensity for self-disclosure. This research has implications for the future design of voice-based conversational agents and deepens concerns about user privacy.
Go for GOLD: Investigating User Behaviour in Goal-Oriented Tasks
Building adaptive support systems requires a deep understanding of why users get stuck or face problems during a goal-oriented task and how they perceive such situations. To investigate this, we first chart a problem space, comprising different problem characteristics (complexity, time, available means, and consequences). Secondly, we map them to LEGO assembly tasks. We apply these in a lab study (N=22) equipped with several tracking technologies (i.e., smartwatch sensors and an OptiTrack setup) to assess which problem characteristics lead to measurable consequences in user behaviour. Participants rated the problems they encountered after each task. With this work, we suggest first steps towards a) understanding user behaviour in problem situations and b) building upon this knowledge to inform the design of adaptive support systems. As a result, we provide the GOLD dataset (Goal-Oriented Lego Dataset) for further analysis.
Designing for Preregistration: A User-Centered Perspective
The replication crisis—a failure to replicate foundational studies—has sparked a conversation in psychology, HCI, and beyond about scientific reliability. To address the crisis, researchers increasingly adopt preregistration: the practice of documenting research plans before conducting a study. Done properly, preregistration should reduce bias from taking exploratory findings as confirmatory. It is crucial to treat preregistration, often an online form/template, as a user-centered design problem to ensure it achieves its intended goal. To understand preregistration in practice, we conducted 14 semi-structured interviews with preregistration users (researchers) who ranged in seniority and experience. We identified two main purposes researchers have for using preregistration, in addition to different user roles and adoption barriers. With the ultimate goal of improving the reliability of scientific findings, we suggest opportunities to explicitly support the different aspects of preregistration use based on our findings.
You Can’t Go Your Own Way: Social Influences on Travelling Behavior
Travel planning is increasingly done using assistive travel planning technologies. These technologies, however, tend to focus on the traveller as an individual, while travelling can often be a social endeavour involving other people. In order to explore the influence of other people on travelling behaviour, nineteen participants from the city of Ghent, Belgium, took part in a diary study and a subsequent interview. Our results show that the social context of certain travelling behaviours can influence the three main components that make up a displacement (i.e. the route, the departure time and the mode of transportation). Additionally, other aspects of the displacement, such as activities during the displacement, can also be influenced by a social travelling context. We propose that travel planning and travel assistance software could benefit from efforts to incorporate the social aspects of travelling into their systems and offer some suggestions.
Hyper Typer: A Serious Game for Measuring Mobile Text Entry Performance in the Wild
In this paper we introduce Hyper Typer, a serious Android game for collecting text entry performance data on a large scale in an unsupervised manner. Publishing the game on the Google Play Store resulted in a total of 1,917 usable transcribed phrases with 58,829 keystrokes over an eleven-week period. By analyzing the data, we demonstrate the feasibility of the method and give preliminary results regarding the overall performance and error rate of players. Moreover, the collected data allows us to compare two of the most commonly used Android soft-keyboards.
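The abstract above reports transcribed phrases and keystroke counts; the conventional measures computed from such data are typing speed in words per minute and an uncorrected error rate based on the minimum string distance between presented and transcribed phrases. The sketch below is illustrative only (it is not the authors’ analysis code) and uses hypothetical example values.

```python
# Illustrative sketch (not the authors' code): standard text-entry metrics
# that a dataset of transcribed phrases and keystrokes typically supports.

def words_per_minute(transcribed: str, seconds: float) -> float:
    """WPM convention: one 'word' = 5 characters, including spaces."""
    return (len(transcribed) / 5.0) / (seconds / 60.0)

def levenshtein(a: str, b: str) -> int:
    """Minimum string distance between presented and transcribed phrases."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def uncorrected_error_rate(presented: str, transcribed: str) -> float:
    return levenshtein(presented, transcribed) / max(len(presented), len(transcribed))

# Example with made-up values:
print(words_per_minute("the quick brown fox", seconds=6.2))
print(uncorrected_error_rate("the quick brown fox", "the quick brwn fox"))
```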
A/P(rivacy) Testing: Assessing Applications for Social and Institutional Privacy
The way information systems are designed has a crucial effect on users’ privacy, but users are rarely involved in Privacy-by-Design processes. To bridge this gap, we investigate how User-Centered Design (UCD) methods can be used to improve the privacy of systems’ designs. We present the process of developing A/P(rivacy) Testing, a platform that allows designers to compare several privacy design alternatives by eliciting end-users’ privacy perceptions of a tested system or feature (Figure 1). We describe three online experiments, with 959 participants, in which we created and validated the reliability of a scale for Users’ Perceived Systems’ Privacy (UPSP), and used it to compare privacy design alternatives using scenarios and different variants. We show that A/B testing is applicable for privacy purposes and that our scale differentiates between designs that are perceived as legitimate and designs that may violate users’ expectations.
HappyPermi: Presenting Critical Data Flows in Mobile Application to Raise User Security Awareness
Malicious Android applications can obtain users’ private data and silently send it to a server. Android permissions are currently not sufficient to ensure the security of users’ sensitive information. For a sufficient permission model, it is important to account for the target of the outgoing data flow. On the other hand, permission dialogues often contain relevant information, but most users do not understand the implications, or the visualization fails to guide their attention to it. It is important to empower users by providing applications that show them who can access their private data and who might send this data to the outside. In order to raise user awareness regarding Android permissions, we developed HappyPermi, an application that visualizes which user information is accessible through the granted permissions. Our evaluation (n=20) shows that most users are not aware of the sensitive data that their installed applications have access to. Our results also suggest how users feel about their sensitive data being accessed once they are aware of its outgoing destinations.
Picture Passwords in Mixed Reality: Implementation and Evaluation
We present HoloPass, a mixed reality application for the HoloLens wearable device, which allows users to perform user authentication tasks through gesture-based interaction. In particular, this paper reports the implementation of picture passwords for mixed reality environments, and highlights the development procedure, lessons learned from common design and development issues, and how they were addressed. It further reports a between-subjects study (N=30) which compared usability, security, and likeability aspects of picture passwords in mixed reality vs. traditional desktop contexts, aiming to investigate and reason about the viability of picture passwords as an alternative user authentication approach for mixed reality. This work can be of value for enhancing and driving future implementations of picture passwords in mixed reality, since initial results are promising for pursuing such a research line.
Towards a Graph of (American) Tech Companies: A Prototype Visualization Tool for Research on Technology and Users
A number of large technology companies, or so-called “tech giants”, such as Alphabet/Google, Amazon, Apple, Facebook, and Microsoft, are increasingly dominant in people’s daily lives, and critically studied in fields such as Science, Technology and Society (STS) studies, with an emphasis on technology, data, and privacy. This project aims to contribute to research at the intersection of technology and society with a prototype visualization tool that shows the vast spread and scope of these large technology companies. In this paper, a prototype graph visualization of notable American technology companies, their acquisitions, and services is presented. The potential applications and limitations of the visualization tool for research are explored. This is followed by a discussion of applying the visualization tool to research on personal data and privacy concerns and possible extensions. In particular, difficulties of data collection and representation are emphasized.
Engaging Users with Educational Games: The Case of Phishing
Phishing continues to be a difficult problem for individuals and organisations. Educational games and simulations have been increasingly acknowledged as versatile and powerful teaching tools, yet little work has examined how to engage users with these games. We explore this problem by conducting workshops with 9 younger adults and reporting on their expectations for cybersecurity educational games. We find a disconnect between casual and serious gamers, where casual gamers prefer simple games incorporating humour while serious gamers demand a congruent narrative or storyline. Importantly, both demographics agree that educational games should prioritise gameplay over information provision — i.e. the game should be a game with educational content. We discuss the implications for educational game developers.
Metro-Viz: Black-Box Analysis of Time Series Anomaly Detectors
Millions of time-based data streams (a.k.a. time series) are being recorded every day in a wide range of industrial and scientific domains, from healthcare and finance to autonomous driving. Detecting anomalous behavior in such streams has become a common analysis task for which data scientists employ complex machine learning models. Analyzing the behavior and performance of these models is a challenge in its own right. While traditional accuracy metrics (e.g., precision/recall) are often used in practice to measure and compare the performance of different anomaly detectors, such statistics alone are insufficient to characterize and compare the algorithms in a systematic, human-interpretable way. In this extended abstract, we present Metro-Viz, a visual analysis tool to help data scientists and domain experts reason about commonalities and differences among anomaly detectors, and to identify their strengths and weaknesses.
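To make the abstract’s point concrete, the sketch below shows the kind of point-wise precision/recall computation it argues is insufficient on its own: two hypothetical detectors can score identically yet flag entirely different regions of the series, which is the sort of difference a visual tool is meant to expose. The data and detectors are invented for illustration.

```python
# Minimal sketch (assumes point-wise anomaly labels): the accuracy metrics
# the abstract argues are insufficient for comparing anomaly detectors.
from typing import Sequence

def precision_recall(predicted: Sequence[bool], truth: Sequence[bool]):
    tp = sum(p and t for p, t in zip(predicted, truth))
    fp = sum(p and not t for p, t in zip(predicted, truth))
    fn = sum(t and not p for p, t in zip(predicted, truth))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Both detectors get precision 0.67 and recall 0.50, yet they flag
# completely different parts of the (made-up) series.
ground_truth = [False, True, True, False, True, True, False, False]
detector_a   = [False, True, True, False, False, False, False, True]
detector_b   = [True, False, False, False, True, True, False, False]
print(precision_recall(detector_a, ground_truth))
print(precision_recall(detector_b, ground_truth))
```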
Using a Game to Explore Notions of Responsibility for Cyber Security in Organisations
Improving the cyber literacy of employees reduces a company’s risk of a cyber security breach. Game-based methods have been found to be more effective than traditional learning material such as videos and text in teaching users how to avoid fraudulent phishing links. This paper reports on the development of a mobile app designed to improve cyber literacy and provoke users’ perceptions of who is responsible for cyber security in organisations. Based on a preliminary trial with 17 participants, we investigated users’ perceptions of a tongue-in-cheek, provocative cyber security awareness game in which users’ jobs depend on their aptitude for protecting their organisations’ cyber security. Findings suggest that users accepted the high responsibility placed upon them in the game and that ludic elements hold promise for engagement and for increasing users’ cyber awareness.
PaxVis: Visualizing Peace Agreements
Peace is a universal concern involving a complex process of negotiations between select groups (i.e. policy makers, mediators, scholars and civil society groups). In this paper we present PaxVis, a platform of two interactive data visualizations for a large database of peace agreements (PA-X). We developed PaxVis to support comparative analysis of peace processes and to improve understandings of the complex dynamics behind the establishment of peace.
What is ‘Cyber Security’?: Differential Language of Cyber Security Across the Lifespan
People experience and understand cyber security differently. Our ongoing work aims to address the fundamental challenge of how we can understand a diverse range of cyber security experiences, attitudes and behaviours in order to design better, more effective cyber security services and educational materials. In this paper, we take a lifespan approach to study the language of cyber security across three main life stages – young people, working age, and older people. By applying text feature extraction and analysis techniques to lists of cyber security features generated by each age group, we illustrate the differential language of cyber security across the lifespan and discuss the implications for design and research in HCI.
"Jack-of-All-Trades": A Thematic Analysis of Conversational Agents in Multi-Device Collaboration Contexts
A growing number of conversational agents are being embedded into larger systems such as smart homes. However, little attention has been paid to user interactions with conversational agents in the multi-device collaboration context (MDCC), where multiple devices are connected to accomplish a common mission. The objective of this study is to identify the roles of conversational agents in the MDCC. Toward this goal, we conducted semi-structured interviews with nine participants who are heavy users of smart speakers connected with home IoT devices. We collected 107 rules (usage instances) and asked about the benefits and limitations of using those rules. Our thematic analysis found that, while smart speakers perform the role of voice controller in the single-device context, their role extends to automation hub, reporter, and companion in the MDCC. Based on the findings, we provide design implications for smart speakers in the MDCC.
A Preliminary Study of the Role of Language in Home Network Troubleshooting
We present the results of a preliminary study into the usability of troubleshooting terminology around home computer networks. Forty-seven participants classified 29 terms, selected from interview transcripts and online help forums, in an open card sort. We analyzed words participants explicitly indicated as unfamiliar as well as words that participants misclassified. The study serves as a proof of concept for a broader study to determine whether certain technical terms and/or their colloquial counterparts are understandable by technical novices and intermediates. Our findings indicate that participants found technical and colloquial terms equally problematic. These findings have implications for the design of troubleshooting tools and systems as well as the design of technical support scripts and training.
The Impact of “Cosmetic” Changes on the Usability of Error Messages
Programmatic errors are often difficult to resolve due to the poor usability of error messages. Applying theories of visual perception and techniques from visual design, we created three visual variants of a representative error message in a modern UI framework. In an online experiment, we found that the visual variants led to substantial improvements over the original error message in both error comprehension and resolution. Our results demonstrate that seemingly cosmetic changes to the presentation of an error message can have an outsized impact on its usability.
Bridging the Gap Between Business, Design and Product Metrics
The integration of User-Centered Design with Agile practices studies the interactions between designers and developers and the alignment of the design and development processes. However, beyond the interactions with the development team, designers are often required to operate within a wider business context, driven by goals set on high-level metrics, like Monthly Active Users, and to show how design-led initiatives and improvements address those metrics. In this paper we generalize learnings from prior work on applying usability improvements to Jira, a project tracking software tool created by Atlassian, and we describe a structured approach to bridging the gap between feature work and business metrics.
AR in the Gallery: The Psychogeographer’s Table
In recent years, museums have embraced the use of digital technologies to add interactivity to exhibits. New tools such as wireless beacons, QR codes and markerless trackers paired with powerful smartphones are used to implement applications ranging from guides that provide supplementary material as web pages or audio to spatially precise augmented reality (AR). In this work we explore the use of head-mounted (rather than the more common hand-held) AR in a museum space. Our goal is to explore visitor behaviour when using such technology, to inform the design of a future longitudinal study. We found that visitors enjoyed their experience with head-mounted AR and learned fairly quickly how to navigate and interact with virtual content.
Let’s Chat in Alipay: Understanding Social Function Usage in Task-oriented Apps
Recently, many mobile apps have started to incorporate social functions into their design, even task-oriented ones, i.e., those designed mainly to help users complete certain tasks. For instance, Taobao, a shopping app, includes online-sharing and instant-messaging functions. However, there is still a lack of research on how users accept and use these social functions. This paper aims to unveil user requirements for the social functions in task-oriented apps, and accordingly provide design suggestions for app developers. To this end, we conduct semi-structured interviews with 16 participants on how they use instant-messaging functions in three widely used task-oriented apps: the shopping app Taobao, the online payment app Alipay, and the entertainment app NetEase Cloud Music. Our findings demonstrate that the instant-messaging functions in these apps are not widely accepted, although they benefit user experience and facilitate users’ online social activities. We show that both the design and users’ stereotypes about the apps are important reasons. Finally, we suggest several design guidelines.
Magika, a Multisensory Environment for Play, Education and Inclusion
Magika is an interactive Multisensory Environment that enables new forms of playful interventions for children, especially those with Special Education Needs. Designed in cooperation with more than 30 specialists at local care centers and primary schools, Magika integrates digital worlds projected on the wall and the floor with a gamut of “smart” physical objects (toys, ambient lights, materials, and various connected appliances) to enable tactile, auditory, visual, and olfactory stimuli. The room is connected with an interface for educators that enables them to: control the level of stimuli and their progression; define and share a countless number of game-based learning activities; customize such activities to the evolving needs of each child. This paper describes Magika and discusses its potential benefits for play, education and inclusion.
One Metric for All: Calculating Interaction Effort of Individual Widgets
Automating usability diagnosis and repair can be a powerful assistance to usability experts and even less knowledgeable developers. To accomplish this goal, evaluating user interaction automatically is crucial, and it has been broadly explored. However, most works focus on long interaction sessions, which makes it difficult to tell how individual interface components influence usability. In contrast, this work aims to compare how different widgets perform for the same task, in the context of evaluating alternative designs for small components, implemented as refactorings. For this purpose, we propose a unified score to compare the widgets involved in each refactoring by the level of effort required by users to interact with them. This score is based on micro-measures automatically captured from interaction logs, so it can be automatically predicted. We show the results of predicting such a score using decision trees.
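The abstract names the modelling step (predicting an effort score from log-derived micro-measures with decision trees) without detail; the sketch below mirrors that step under stated assumptions. The feature names and data are hypothetical and not taken from the paper.

```python
# Illustrative sketch: predicting an interaction-effort score from
# hypothetical micro-measures with a decision tree, as the abstract describes.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

# Hypothetical micro-measures, one row per widget/task interaction.
logs = pd.DataFrame({
    "time_to_first_input_ms": [420, 900, 310, 1500, 640, 780],
    "corrections":            [0,   2,   0,   3,    1,   1],
    "hover_events":           [3,   7,   2,   11,   5,   6],
    "effort_score":           [0.2, 0.6, 0.1, 0.9,  0.4, 0.5],  # target
})

X = logs.drop(columns="effort_score")
y = logs["effort_score"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

model = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```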
Exploring Machine Teaching for Object Recognition with the Crowd
Teachable interfaces can enable end-users to personalize machine learning applications by explicitly providing a few training examples. They promise higher robustness in the real world by significantly constraining conditions of the learning task to a specific user and their environment. While facilitating user control, their effectiveness can be hindered by lack of expertise or misconceptions. Through a mobile teachable testbed in Amazon Mechanical Turk, we explore how non-experts conceptualize, experience, and reflect on their engagement with machine teaching in the context of object recognition.
Negative Emotions, Positive Experience: What Are We Doing Wrong When Evaluating the UX?
Since it was first conceptualized more than 20 years ago, much research has been carried out in the User eXperience (UX) field, with several evaluation methods being proposed. However, studies have pointed out conflicting results when evaluating UX. Users frequently evaluate their UX as positive, even when experiencing many negative emotions while interacting with a product. Moreover, variables such as the peak-end effect and the memory-experience gap may also have influenced the results, leading to misinterpretations about a product’s quality. In this context, this paper presents our work-in-progress research on the following research question: “What are we doing wrong when evaluating the UX?”. We discuss different variables that may have influenced users’ perceptions of their experiences in previous studies and highlight research opportunities. With this work, we expect to shed light on and bring reflection to current practices in UX evaluation in order to advance research in the UX field.
Investigating the Role of User’s English Language Proficiency in Using a Voice User Interface: A Case of Google Home Smart Speaker
Amazon’s Echo and Apple’s Siri have drawn attention from different user groups; however, these existing commercial VUIs support limited language options for users, including native and non-native English speakers. Also, the existing literature about usability differences between these two distinct groups is limited. Thus, in this study, we conducted a usability study of the Google Home Smart Speaker with 20 participants, including native and non-native English speakers, to understand their differences in using the device. The findings show that, compared with their counterparts, the native English speakers had better and more positive user experiences when interacting with the device. They also show that users’ English language proficiency plays an important role in interacting with VUIs. The findings from this study can provide insights for VUI designers and developers for implementing multiple language options and better voice recognition algorithms in VUIs for different user groups across the world.
Invisible Touch: How Identifiable are Mid-Air Haptic Shapes?
Mid-air haptic feedback constitutes a new means of system feedback in which tactile sensations are created without contact with an actuator. Though earlier research has already focused on its ability to enhance our experiences, e.g. by increasing a sense of immersion during art exhibitions, an elaborate study investigating people’s ability to identify different mid-air haptic shapes has not yet been conducted. In this paper, we describe a user study involving 50 participants, aged between 19 and 77 years, who completed a mid-air haptic learning experiment involving eight different mid-air haptic shapes. Preliminary results showed no learning effect throughout the task. Age was found to be strongly related to a decline in performance, and, interestingly, significant differences in accuracy rates were found for different types of mid-air haptic shapes.
A Learning Design Framework to Support Children with Learning Disabilities Incorporating Gamification Techniques
Gamification is increasingly being applied in education to engage and motivate learners. Yet the application of gaming elements can be problematic because it can have a negative effect on cognitive load (CL) and on working memory (WM). This is a particular issue for children with learning disabilities, who suffer from deficits in working memory. While studies have explored the relationship between gamification and cognitive load, there is little research addressing the management of cognitive load in gamified learning applications for children with learning disabilities. This study suggests a framework, based on existing guidelines derived from HCI concepts and cognitive load theories, for designing user-centered gamified applications for children with learning disabilities that work within their limited WM capacity and manage cognitive load.
Pseudo-haptic Controls for Mid-air Finger-based Menu Interaction
Virtual Reality (VR) is more accessible than ever these days. While topics like performance, motion sickness and presence are well investigated, basic topics such as VR User Interfaces (UIs) for menu control are lagging far behind. A major issue is the absence of haptic feedback and naturalness, especially for mid-air finger-based interaction in VR, when “grabbable” controllers are not available. In this work, we present and compare the following two visual approaches to mid-air finger-based menu control in VR environments: a planar UI similar to common 2D desktop UIs, and a pseudo-haptic UI based on physical metaphors. The results show that the pseudo-haptic UI performs better in terms of all tested aspects, including workload, user experience, motion sickness and immersion.
Relax Yourself – Using Virtual Reality to Enhance Employees’ Mental Health and Work Performance
This paper presents work-in-progress aiming to develop an actively adapting virtual reality (VR) relaxation application. Due to the immersive nature of VR technologies, people can escape from their real environment and get into a relaxed state. The goal of the application is to adapt to users’ physiological signals to foster this positive effect. So far, a first version of the VR application has been constructed and is currently being evaluated in an experiment. Preliminary results of this study demonstrate that people appreciate the immersion into the virtual environment and the escape from reality. Moreover, participants highlighted the option to adapt to users’ needs and preferences. Based on the final study data, the application will be enhanced with regard to adoption and surrounding factors.
Digital Doctors and Robot Receptionists: User Attributes that Predict Acceptance of Automation in Healthcare Facilities
Advances in artificial intelligence offer the promise of accessibility, precision, and personalized care in health settings. However, growth in technology has not translated to commensurate growth in automation of healthcare facilities. To gain a better understanding of user psychology behind the acceptance of automation in clinics, a 3 (Role: Receptionist, Nurse, Doctor) x 3 (Digital Agent Representation: Human, Avatar, Robot) factorial experiment (N = 283) was conducted. Results suggest that the digital nature of the interaction overpowers any individual role effects, with acceptance depending upon individual traits (belief in machine heuristic; power usage). Implications for theory and the design of digital healthcare facilities are discussed.
Effects of Visual Enhancements and Delivery Time on Receptivity of Mobile Push Notifications
Using real-world logs from 6,866 users who received relevant smartphone notifications, we show that visual elements in a notification influence its receptivity. Users responded significantly more to notifications that included an image or an icon compared to standard notifications, and to notifications including an action button compared to those without such a button. In addition, the timing of the notifications also had a significant effect on receptivity, with lower click rates during the morning hours and higher rates during the afternoon and evening hours.
CooperationCaptcha: On-The-Fly Object Labeling for Highly Automated Vehicles
In the emerging field of automated vehicles (AVs), many recent advancements coincide with different areas of system limitations. The recognition of objects like traffic signs or traffic lights is still challenging, especially under bad weather conditions or when traffic signs are partially occluded. A common approach to dealing with the system boundaries of AVs is to shift to manual driving, accepting human factors issues like post-automation effects. We present CooperationCaptcha, a system that asks drivers to label unrecognized objects on the fly, thereby maintaining automated driving mode. We implemented two different interaction variants to work with object recognition algorithms of varying sophistication. Our findings suggest that this concept of driver-vehicle cooperation is feasible, provides good usability, and causes little cognitive load. We present insights and considerations for future research and implementations.
Hands-On Math: A Training System for Children with Dyscalculia
Dyscalculia affects the comprehension of numerical mathematical problems, working with numbers, and arithmetic. We describe our work on a training system for an exercise that trains connections between verbal and numerical representations of numbers and finger counting. Fingers support embodied cognition and constitute a natural numerical representation. We describe the design rationale, the iterative development process, and first evaluation results for our system, which enables children to train without guidance and feedback from a trainer.
Informal STEAM Education Case Study: Child-Robot Musical Theater
STEAM education fuses arts with traditional STEM fields so that the diverse disciplines can broaden and inform each other. Our eight-week STEAM after school program exposed elementary school children to social robotics and musical theater. Approximately 25 children grades K-5 participated over the course of the program with an average of 12 children attending each week. The program covered acting, dancing, music, and drawing with the robots in two-week modules based around the fairy tale, “Beauty and the Beast”. The modular design enabled children who could come to only a few sessions to participate actively. The children demonstrated enthusiasm for both the robots and the musical theater activities and were engaged in the program. Efforts such as this can provide meaningful opportunities for children to explore a variety of arts and STEM fields in an enjoyable manner. The program components and lessons learned are discussed with recommendations for future research.
SolveDeep: A System for Supporting Subgoal Learning in Online Math Problem Solving
Learner-driven subgoal labeling helps learners form a hierarchical structure of solutions with subgoals, which are conceptual units of procedural problem solving. While learning with such a hierarchical structure of a solution in mind is effective for learning problem-solving strategies, developing an interactive feedback system to support subgoal labeling tasks at scale requires significant expert effort, making learner-driven subgoal labeling difficult to apply in online learning environments. We propose SolveDeep, a system that provides feedback on learner solutions using peer-generated subgoals. SolveDeep utilizes a learnersourcing workflow to generate the hierarchical representation of possible solutions, and uses a graph-alignment algorithm to generate a solution graph by merging the populated solution structures, which are then used to generate feedback on future learners’ solutions. We conducted a user study with 7 participants to evaluate the efficacy of our system. Participants did subgoal learning with two math problems and rated the usefulness of system feedback. The average rating was 4.86 out of 7 (1: Not useful, 7: Useful), and the system could successfully construct a hierarchical structure of solutions with learnersourced subgoal labels.
Land.Info: Interactive 3D Visualization for Public Space Design Ideation in Neighborhood Planning
In the contemporary practice of participatory neighborhood planning, planners leverage digital support tools with realistic, interactive 3D visualization to support perception processing and to increase engagement among diverse public stakeholders. However, capturing the aspirations of a community lacking design and planning expertise requires a more thorough evaluation and considered design of support tools. We present Land.Info, a proof-of-concept software that allows users to design open spaces with 3D visualization and see the subsequent costs and environmental consequences. To assess how the public engages in design discussion with 3D visualization, we organized three community design workshops for developing a vacant lot. We found that 3D visualization 1) promotes public ideation of user stories around objects, and 2) prohibits ideas beyond spatial design elements. Future research will investigate whether it is possible to aggregate more diverse public aspirations, whether or not visual realism sets expectations for designs, and the potential impacts of expanding the software user base for neighborhood planning cases.
SociaBowl: A Dynamic Table Centerpiece to Mediate Group Conversations
In this paper, we introduce SociaBowl, a dynamic table centerpiece to promote positive social dynamics in 2-way cooperative conversations. A centerpiece such as a bowl of food, a decorative flower arrangement, or a container of writing tools is commonly placed on a table around which people have conversations. We explore the design space for an augmented table and centerpiece to influence how people may interact with one another. We present an initial functional prototype to explore different choices in materiality of feedback, interaction styles, and animation and motion patterns. These aspects are discussed with respect to how they may impact people’s awareness of their turn-taking dynamics as well as provide an additional channel for expression. Potential enhancements for future iterations of its design are then outlined based on these findings.
Embodying Historical Learners’ Messages as Learning Companions in a VR Classroom
Online learning platforms such as MOOCs have become prevalent sources of self-paced learning. However, the lack of peer accompaniment and social interaction may increase learners’ sense of isolation. Prior studies have shown the positive effects of visualizing peer students’ appearances in VR learning environments. In this work, we propose to build virtual classmates, constructed by synthesizing previous learners’ time-anchored messages. Configurations of virtual classmates and their behavioral features can be adjusted. To build the characteristics of virtual classmates, we developed a technique called comment mapping to aggregate prior online learners’ comments to shape virtual classmates’ behaviors. We evaluated the effects of virtual classmates built with and without comment mapping, and of the number of virtual classmates rendered in VR.
NoteStruct: Scaffolding Note-taking while Learning from Online Videos
Note-taking activities in physical classrooms are ubiquitous and have been emerging in online learning. To investigate how to better support online learners to take notes while learning with videos, we compared free-form note-taking with a prototype system, NoteStruct, which prompts learners to perform a series of note-taking activities. NoteStruct enables learners to insert annotations on transcripts of video lectures and then engages learners in reinterpreting and synthesizing their notes after watching a video. In a study with a sample of Mechanical Turk workers (N=80), learners took longer and more extensive notes with NoteStruct, although using NoteStruct versus free-form note-taking did not impact short-term learning outcome. These longer notes were also less likely to include verbatim copied video transcripts, but more likely to include elaboration and interpretation. We demonstrate how NoteStruct influences note-taking during online video learning.
DetourNavigator – Using Google Location History to Generate Unfamiliar Personal Routes
With the ubiquity of turn-by-turn navigation on today’s smartphones, personal exploration of the unseen has drastically diminished. Such services make it less likely for users to conquer less frequented parts of their urban environment. In this paper we present DetourNavigator, a navigation service that creates routes based on Google Location History along areas that are unfamiliar to the user. Our preliminary user study indicates that these personalized graphs are well suited to generating routes that might lead to more holistic knowledge about the built environment.
An Energy Lifestyles Program for Tweens: A Pilot Study
Prior work has demonstrated that energy education programs designed for young children can influence the adoption of energy efficiency measures in the home. Here, we introduce the Know Your Energy Numbers (KYEN) program, an energy education program designed to teach an older audience of pre-teens, or tweens, about: (i) their energy consumption lifestyles, (ii) available residential energy tools, and (iii) methods to extract insights from their energy data. We also describe results from two pilots with 18 tweens from Girl Scout and Boy Scout troops living in Northern California. We report on how participants and their families reacted to our energy-based curricula, the benefits and challenges they perceived about using energy tools, and their preferences regarding the display of home energy data. We conclude with a brief discussion of the outcomes and limitations of this work before describing next steps for the program.
P300-Based 3D Brain Painting in Virtual Reality
Brain Painting is a brain-computer interface (BCI) application that gives users the ability to paint on a virtual canvas without requiring physical movement [1-2]. Brain Painting has been shown to improve the Quality of Life (QOL) of patients with Amyotrophic Lateral Sclerosis (ALS) by giving patients a way to express themselves and affect society through their art [1]. Although there is currently no known cure for ALS, through such outlets we can help mitigate the physical and psychological impairments of those living with ALS. This paper discusses the development and testing of an immersive Brain Painting application using a Google brush-like tool in a 3D environment, for able-bodied users. It also discusses how it can help provide a more immersive medium for users to express themselves creatively. In addition, we discuss feedback received from a preliminary study on how the brush and application can be improved to better allow users to paint in VR using their brain.
Persuading the Driver: A Literature Review to Identify Blind Spots
We present a review of persuasive systems in vehicles based on the Persuasion Interface Design in the Automotive context Framework (PIDAF). It integrates intents, cues, persuasive principles and design options for automotive persuasion. Our results show that most systems target safety and eco-driving using conscious cues to alert the driver. Most systems use self-monitoring, tailoring or suggestion as persuasive principles. Visual modalities are still much more popular than auditory or haptic ones. We identified blind spots to support designers and researchers in developing systems addressing areas which are less explored in automotive persuasion.
Analysis of Previsualization Tasks for Animation, Film and Theater
Previsualization (previs) is an essential phase in the visual design process of narrative media such as film, animation, and stage plays. In digital previs, complex 3D tools are used that are not specifically designed for the previs process, making them hard to use for creative people without much technical knowledge. To enable building dedicated previs software, we analyze the tasks performed in digital previs based on interviews with domain experts. In order to support creative people in their previs work, we propose the use of natural user interfaces and discuss which are suited to the specific previs tasks.
Youth Concerns and Responses to Self-Tracking Tools and Personal Informatics Systems
Though some work has looked at the implementation of personal informatics tools with youth and in schools, the approach has been prescriptive; students are pushed toward behaviour change intervention or otherwise use the data for prescribed learning in a particular curriculum area. This has left a gap around how young people may themselves choose to use personal informatics tools in ways relevant to their own concerns. We gave workshops on personal informatics to 13 adolescents at two secondary schools in London, UK. We asked them to use a commercial personal informatics app to track something they chose that they thought might impact their learning. Our participants proved competent and versatile users of personal informatics tools. They tracked their feelings, tech activity, physical activity, and sleep with many using the process as a system for understanding and validating aspects of their own lives, rather than changing them.
Guess or Not?: A Brain-Computer Interface Using EEG Signals for Revealing the Secret behind Scores
Examinations and scores currently serve as the main criteria for assessing a student’s academic performance. However, students use guessing strategies to improve the chances of choosing the right answer in a test. Therefore, scores do not reflect the actual levels of a student’s knowledge and skills. In this paper, we propose a brain-computer interface (BCI) to estimate whether a student guesses on a test question or has mastered it when s/he chooses the right answer in logic reasoning. To build this BCI, we first define “Guessing” and employ Raven’s Progressive Matrices as logic tests in an experiment to collect EEG signals; we then propose a sliding time-window with quorum-based voting (STQV) approach to recognize the state of “Guessing” or “Understanding”, together with FBCSP and end-to-end ConvNet classification algorithms. Results show that this BCI yields an accuracy of 83.71% and achieves good performance in distinguishing “Guessing” from “Understanding”.
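The abstract names a sliding time-window with quorum-based voting (STQV) over classifier outputs but does not give its parameters. The sketch below is a schematic illustration of the general voting idea only; the window length, step, quorum threshold, and upstream classifier outputs are all assumptions and are not taken from the paper.

```python
# Schematic sketch of quorum-based voting over sliding windows of per-epoch
# classifier outputs (1 = "Guessing", 0 = "Understanding"). Window length,
# step, and quorum threshold are assumed values, not the paper's STQV settings.

def quorum_vote(window_predictions, quorum=0.6):
    """Return 'Guessing' if at least `quorum` of the window's predictions agree."""
    guess_ratio = sum(window_predictions) / len(window_predictions)
    return "Guessing" if guess_ratio >= quorum else "Understanding"

def stqv(per_epoch_predictions, window_len=5, step=1, quorum=0.6):
    """Slide a window over per-epoch predictions and vote within each window."""
    decisions = []
    for start in range(0, len(per_epoch_predictions) - window_len + 1, step):
        window = per_epoch_predictions[start:start + window_len]
        decisions.append(quorum_vote(window, quorum))
    return decisions

# Example: noisy per-epoch outputs from an upstream classifier (e.g. FBCSP).
epoch_predictions = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]
print(stqv(epoch_predictions))
```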
Cross-platform Interactions and Popularity in the Live-streaming Community
Twitch, a live video-streaming platform, provides strong financial and social incentives for developing a follower base. While streamers benefit from Twitch’s own features for forming a wide community of engaged viewers, many streamers look to external social media platforms to increase their reach and build their following. We collect a corpus of Twitch streamer popularity measures and their behavior data on Twitch and third-party platforms. We test the community-proposed relationship between behavior on social media accounts and popularity by examining the timing of creation and use of social media accounts. We conduct these experiments by studying the correlations between streamer behaviors and two popularity measures used by Twitch: followers and average concurrent viewers. We find that we cannot yet determine which behaviors have statistically significant correlations with popularity, and propose future directions for this research.
Inferring User Engagement from Interaction Data
This paper presents preliminary results of a study designed to quantify users’ engagement levels with interactive media content through self-reported measures and interaction data. The broad hypothesis of the study is that interaction data can be used to predict the level of engagement felt by the user. The challenge addressed in this work is to explore the effectiveness of interaction data as a proxy for engagement levels and to reveal what that data shows about engagement with media content. Preliminary results suggest several interesting insights about participants’ engagement and behaviour. Crucially, temporal statistics support the hypothesis that participants’ use of the controls in the interactive, video-based experience positively correlates with higher engagement.
Characterizing Homelessness Discourse on Social Media
Social media allows us to connect and maintain relationships in spite of physical distance and barriers; as computers and the internet become more accessible, hard-to-reach populations are finding a voice on these platforms. One such group is those who are or have been homeless. Through a computational linguistic analysis of a large corpus of Tumblr blog posts, this paper provides preliminary insights to understand the unique ways homeless bloggers express their needs, frustrations and financial/social distress, connect with others, and seek emotional and practical support from others. We highlight future investigations, building upon this research, that can be pursued in HCI to assist an underserved population with the difficult life experience of homelessness.
Finsta: Creating "Fake" Spaces for Authentic Performance
Finsta is a “fake” Instagram account that some people maintain in addition to their real Instagram account (rinsta) for a more authentic performance. We draw on Goffman’s theatrical metaphor and use a mixed-methods approach to explore how and why people do the work of performing their identity across these distinct presentations of the self. We found that finsta users deliberately partition their audience and mostly maintain a small audience of close friends to avoid context collapse. Additionally, we discovered that finsta is a space where distinct norms shape performance: humor, authenticity, and “unfiltered” self-expression. Given that finsta users are mostly teenagers and young adults, we ask how an expectation for authentic performance by peers might itself increase pressure on users.
Availability and Boundary Management: An Exploratory Study
As technology is becoming more powerful and widespread, it is used in multiple areas and for diverse purposes making us available to others anytime and anyplace. Boundary management focuses on the organisation of domains in life and their borders (e.g., between work vs. non-work). Working parents of young children are facing particular challenges of orchestrating their life domains. We present the results of an interview study of parents of young children on their boundary management and availability across domains. The paper contributes an identification of life domains; a classification of availability statuses; and details on the status we call ad-hoc availability with a melange of a priori rules and spontaneous behaviours. Ad-hoc availability is not only determined by a general personal preference for connection, but very importantly by a practical information need from the parent towards the person wanting to connect.
Using a Conversational Agent to Facilitate Non-native Speaker’s Active Participation in Conversation
When a non-native speaker talks with a native speaker, he/she sometimes finds it hard to take speaking turns due to limited language proficiency. The resulting conversation between a non-native speaker and a native speaker is not always productive. In this paper, we propose a conversational agent to support a non-native speaker in his/her second-language conversation. The agent joins the conversation and intervenes following a simple script based on turn-taking rules to take its own turn, and then gives the next turn to the non-native speaker to prompt him/her to speak. An evaluation of the proposed agent suggested that it successfully facilitated the non-native speaker’s participation in over 30% of the agent’s interventions, and significantly increased the frequency of turn-taking.
Worker Demographics and Earnings on Amazon Mechanical Turk: An Exploratory Analysis
Prior research reported that workers on Amazon Mechanical Turk (AMT) are underpaid, earning about $2/h. However, that research did not investigate differences in wages due to worker characteristics (e.g., country of residence). We present the first data-driven analysis of the wage gap on AMT. Using work log data and demographic data collected via an online survey, we analyse the gap in wages due to different factors. We show that there is indeed a wage gap; for example, workers in the U.S. earn $3.01/h while those in India earn $1.41/h.
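The per-country hourly figures quoted above come down to simple arithmetic over work logs: total reward divided by total task time, grouped by a demographic attribute. The sketch below illustrates that computation with a hypothetical log schema and made-up numbers; it is not the authors’ dataset or pipeline.

```python
# Minimal sketch with a hypothetical work-log schema: hourly wage per group
# = total reward / total task time, grouped by self-reported country.
import pandas as pd

work_log = pd.DataFrame({
    "worker_country": ["US", "US", "IN", "IN", "US", "IN"],
    "reward_usd":     [0.50, 1.20, 0.30, 0.75, 0.90, 0.40],
    "task_seconds":   [600,  1500, 700,  2000, 1100, 900],
})

hourly = work_log.groupby("worker_country").agg(
    total_usd=("reward_usd", "sum"),
    total_seconds=("task_seconds", "sum"),
)
hourly["usd_per_hour"] = hourly["total_usd"] / (hourly["total_seconds"] / 3600)
print(hourly["usd_per_hour"])
```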
Alliance for My Idol: Analyzing the K-pop Fandom Collaboration Network
As the K-pop industry has rapidly expanded, the strength of K-pop fandoms has come under the spotlight. In particular, collaborations among fandoms to mutually support their artists have contributed to the success of K-pop artists. This paper investigates the current practice of fandom collaborations in K-pop. To this end, we first introduce the notion of the ‘fandom collaboration network’, which represents the collaborations among K-pop fandoms. By collecting and analyzing large-scale fandom activity data from DCinside, we investigate (i) to what extent fandom collaboration is prevalent in K-pop, (ii) how fandoms collaborate with other fandoms, and (iii) which fandoms play greater roles in fandom collaboration than others. We find that K-pop fandoms actively collaborate with other fandoms to mutually support their artists. By analyzing the structural properties of the fandom collaboration network, we show that fandom collaboration is largely based on reciprocity. However, we also show that the amount of collaboration between two fandoms is often uneven. Among all the active fandoms in our data, we find that there are a small number of fandoms that play significant roles in fandom collaborations in K-pop. We believe our work can provide important insights for K-pop stakeholders such as fans, agencies, artists, marketers, and broadcasting companies.
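The structural properties mentioned above (reciprocity and the unevenness of collaboration between pairs of fandoms) can be illustrated on a small directed, weighted graph. The sketch below is a hedged example only: the fandom names, edge weights, and the per-pair imbalance measure are invented for illustration and are not the paper’s data or metrics.

```python
# Hedged sketch (fandom names and weights invented): reciprocity and per-pair
# imbalance in a weighted, directed collaboration network.
import networkx as nx

G = nx.DiGraph()
# Edge weight = number of collaborations initiated by u in support of v.
G.add_weighted_edges_from([
    ("FandomA", "FandomB", 12), ("FandomB", "FandomA", 10),
    ("FandomA", "FandomC", 5),  ("FandomC", "FandomA", 1),
    ("FandomB", "FandomC", 3),
])

# Unweighted reciprocity: fraction of directed edges that have a reverse edge.
print("reciprocity:", nx.reciprocity(G))

# Per-pair imbalance: how uneven the two directions of a mutual pair are.
for u, v in G.edges():
    if G.has_edge(v, u) and u < v:
        w_uv, w_vu = G[u][v]["weight"], G[v][u]["weight"]
        imbalance = abs(w_uv - w_vu) / (w_uv + w_vu)
        print(f"{u} <-> {v}: imbalance = {imbalance:.2f}")
```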
Lithopia: Prototyping Blockchain Futures
Lithopia is a prototype of a blockchain-managed fictional village that uses satellite and drone data to trigger smart contracts on the open source blockchain platform Hyperledger. The project tests the possibility of anticipatory governance of emerging blockchain and distributed ledger technologies (DLTs) by involving stakeholders in the design process over templates. The goal is to question the promise that blockchain governance can happen through automation and smart contracts, and to offer an alternative to the misuses of emerging technologies in so-called predictive and anticipatory design. The prototype consists of a functional Node-RED dashboard used as an interface for the Hyperledger smart contracts and a design fiction movie about the lives of the Lithopians.
Crowdsourcing Perspectives on Public Policy from Stakeholders
Personal deliberation, the process through which people form an informed opinion on social issues, plays an important role in helping citizens construct rational arguments in public deliberation. However, existing information channels for public policies deliver only a few stakeholders’ voices, thus failing to provide a diverse knowledge base for personal deliberation. This paper presents an initial design of PolicyScape, an online system that supports personal deliberation on public policies by helping citizens explore diverse stakeholders and their perspectives on a policy’s effects. Building on the literature on crowdsourced policymaking and policy stakeholders, we present several design choices for crowdsourcing stakeholder perspectives. We introduce perspective-taking as an approach for personal deliberation by helping users consider stakeholder perspectives on policy issues. Our initial results suggest that PolicyScape can collect diverse sets of perspectives from the stakeholders of public policies and help participants discover unexpected viewpoints of various stakeholder groups.
How "True Bitcoiners" Work on Reddit to Maintain Bitcoin
After the meteoric rise in the price of, and subsequent public interest in, the cryptocurrency Bitcoin, a developing body of work has begun examining its impact on society. In recent months, as Bitcoin’s price has rapidly declined, uncertainty and distrust have begun to overshadow early enthusiasm. In this late-breaking work, we investigated one of the largest and most important Bitcoin online communities, the r/Bitcoin Reddit forum, where a vocal subgroup of users identify themselves as “true Bitcoiners” and justify their continued devotion to Bitcoin. These subreddit participants explained and justified their trust in Bitcoin in three primary ways: identifying characteristics of beneficial versus harmful Bitcoin users, diminishing the importance of problems, and describing themselves as loyal to Bitcoin over time.
Evaluating a Wearable Camera’s Social Acceptability In-the-Wild
With increasing ubiquity, wearable technologies are becoming part of everyday life, where they may cause controversy, discomfort and social tension. In particular, body-worn “always-on” cameras raise social acceptability concerns, as their form factors hinder bystanders from inferring whether they are “in the frame”. Screen-based status indicators have been suggested as a remedy, but have not been evaluated in-the-wild. At the same time, best practices for evaluating social acceptability in field studies are rare. This work contributes to closing both gaps. First, we contribute results of an in-the-wild evaluation of a screen-based status indicator, testing the suitability of the “displayed camera image” design strategy. Second, we discuss methodological implications for evaluating social acceptability in the field and cover lessons learned from collecting hypersubjective self-reports. We provide a self-critical, in-depth discussion of our field experiment, including study-related behavior patterns and prototype fidelity. Our work may serve as a reference for field studies evaluating social acceptability.
Morphing Design for Socially Interactive Autonomous Car by Multi-Material 3D-Printing
An autonomous car, also known as a robot car or self-driving car, is a vehicle that is capable of sensing its environment and moving with little or no human control. In most cases today, a driver can switch their car's driving mode between manual and autonomous. However, while the mode can be changed smartly, the car is not able to show its current driving mode to nearby pedestrians. This might become a source of anxiety for many ordinary people living in the era of the autonomous car. To overcome this issue, we propose creating a car with new expressive functions that make it communicative and interactive. Unlike conventional approaches that use displays and LEDs, our approach is to develop a 3D shape-transforming, morphable car body using multi-material 3D-printing. Our first trial with printed soft membranes was successful in seamlessly representing three different modes of a car. In this paper, we introduce our concept, core technologies and implementations, and discuss further possibilities and future work.
Social Microbial Prosthesis: Towards Super-Organism Centered Design
In this paper, I speculate on a future where our need to socialize physically exists solely to exchange bacteria. With our biological data in the hands of private companies and governments and our environments’ microbiomes becoming less diverse, our social systems, social identities and social interactions are redefined and reinvented to adapt to this new reality. In this world, everyone wears a “social microbial prosthesis” that analyzes their microbial composition from their breath and reveals sensitive information on their chest. The microbial prosthesis would be able to give off information not only on the microbial composition but also on the mental and physical health of a person. This second skin plays a role in controlling communication and interaction between people, where one is able, by inspecting surrounding people’s prostheses, to carefully consider whom to interact with and whom to avoid. Social Microbial Prosthesis is a critique of the race by private companies and governments to collect our biological data and of the role of commercializing such data in shaping and changing our social identities, as well as a response to the loss of microbial diversity in our environments due to our modern lifestyles and surroundings.
Howel: A Soft Wearable with Dynamic Textile Patterns as an Ambient Display for Cardio Training
In-situ exploration of heart rate (HR) zones during cardio training (CT) is important for training efficiency. However, approaches for monitoring HR either depend on complex visualizations on small screens (e.g., smartwatches) or on intrusive modalities (e.g., haptic, auditory) that can force attention to the information. We developed an early prototype, Howel, a novel wrist-worn soft wearable that displays HR zone information during CT. Our concept maps information onto dynamic patterns (color-changing stripes) as an easy-to-understand ambient display. To remain non-intrusive, it uses a non-emissive modality, heating thermochromic paints on its textile surfaces. Early feedback from three participants suggests that soft wearables with non-emissive dynamic patterns have potential (1) to embed information organically on the body, (2) to give easy-to-understand in-situ intensity information and (3) to keep the attention on the exercise instead of on performance measures.
Flow: Towards Communicating Directional Cues through Inflatables
Current research in wearable technologies has shown that we can use real-time tactile instructions to support the learning of physical activities through vibrotactile stimulation. While tactile cues based on vibration may indicate a direction, they do not convey the direction of movement itself. We propose the use of inflatables as an alternative form of actuation to express such information through pressure. Inspired by notions from embodied interaction and somaesthetic design, we present in this paper a research through design (RtD) project that substitutes directional metaphors with pushes against the body. The result, Flow, is a wearable designed to cue six movements of the wrist/forearm to support the training of elementary sensory-motor skills for physical activities such as foil fencing. We contribute a description of the design process and reflections on how to design for tactile motion instructions through inflatables.
"Nothing Comes Before Profit": Asshole Design In the Wild
Researchers in HCI and STS are increasingly interested in describing the ethics and values relevant to design practice, including the formulation of methods to guide value application. However, little work has addressed ethical considerations as they emerge in everyday conversations about ethics in venues such as social media. In this late-breaking work, we describe online conversations about a concept known as “asshole design” on Reddit, and the relationship of this concept to another practitioner-focused concept known as “dark patterns.” We analyzed 1002 posts from the subreddit ‘/r/assholedesign’ to identify the types of artifacts being shared and the interaction purposes that were perceived to be manipulative or unethical as a form of “asshole design.” We identified a subset of these posts relating to dark patterns, quantifying their occurrences using an existing dark patterns typology.
As If I Am There: A New Video Chat Interface Design for Richer Contextual Awareness
We introduce a novel video chat interface that uses a 360° photo as the background in order to offer richer contextual and background information. We conducted a preliminary user evaluation in a lab setting: paired participants were randomly assigned to one of two conditions, using either a regular interface or the 360° photo interface. Each pair video chatted, then completed a post-study survey and answered several questions about their experience. Participants reported less behavioral interdependence and more inclusion when using the 360° photo video chat interface. They also reported strong interest in employing it in long-distance intimate relationships, and made some suggestions for design iterations.
MementoKey: Keeping Passwords in Mind
In this paper, we introduce a novel system of password generation, MementoKey, consisting of private words that exist only in a user’s memory and a corresponding set of public (non-secret) words that facilitate users’ recall of the private words with which they are associated. We demonstrate how MementoKey offers a useful alternative to storing passwords in password managers or to using cryptographically weak, but memorable, passwords. We conducted a user study to evaluate the word-association technique for recalling passwords and the effectiveness of our prototype software training and checking system at guiding the user successfully through the memorization process. Our study involving 60 diverse participants indicates that our prototype can effectively lead users through a visualization and memorization technique to create a strong word-association memory between pairs of adjectives and nouns.
Dispelling the Blunt Perception of Social Technology
Participatory design prescribes intensive stakeholder involvement in the design process. One challenge in such projects is to enable stakeholders to develop an open mind for novel solutions to the design problem at hand. When designing social technology, this is further complicated by prejudices about technology being too blunt and inadequate to interfere with the sensitivity of social contexts. In this paper, we present a novel approach that supports a more neutral and open discussion of the benefits and pitfalls of social technology. The approach helps stakeholders see social technology in a broader perspective, which in turn enables the design of solutions with improved social sensitivity.
Opportunities of Quantified Self for Resocialisation of (Ex-)Convicts
Resocialisation is a guided process by which ex-convicts are reintroduced into society. An issue that arises in this process is that ex-convicts are behind on technological developments when they return to society. Here, we present work on how quantified self technology, as an alternative to the present-day ankle monitor, can be a helpful tool for gaining overview of and insight into their progress. In particular, we present a prototype that physically monitors stress levels as an indicator of behavioural patterns. Results from research with former convicts show how giving ownership over tracking data can help this user group understand their societal status and become more sovereign during their resocialisation process. Finally, we reflect on ethical questions regarding data gathering, the Quantified Other and privacy for ex-convicts.
Disconnect: A Proposal for Reclaiming Control in HCI
As the lines between the digital and analog worlds become increasingly blurred, it is nearly impossible to traverse modern life without creating a digital footprint. This integration is so deeply rooted in the fabric of society that if one chose to disconnect from today’s hyperconnected world, one would have to move away from civilization. Weiser’s vision of the omni-present, ubiquitous computer of the 21st century [21] has been realized, but at a cost. With invisible interfaces, we forego the ability to recognize when we are being watched, heard or influenced by external actors. This paper takes a bottom-up approach, using design fiction narratives to explore how to design mechanisms of control (MoC) that may help reinstate human control and agency over our data. Preliminary results show emergent themes pertaining to data access, governance and sharing; the forms of MoC; as well as methodological lessons.
Flight Chair: An Interactive Chair for Controlling Emergency Service Drones
In the future, emergency services will increasingly use technology to assist emergency service dispatchers and call takers with information during an emergency situation. One example could be the use of drones for surveying an emergency situation and providing contextual knowledge to emergency service call takers and first responders. The challenge is that drones can be difficult for users to maneuver in order to see specific items. In this paper, we explore the idea of a drone being controlled by an emergency call taker using embodied interaction on a tangible chair. The interactive chair, called Flight Chair, allows call takers to perform hands-free control of a drone through body movements on the chair, including tilting and turning of one’s body.
Designing Personalities of Conversational Agents
Recent years have seen numerous attempts to imbue conversational agents with marked identities by crafting their personalities. However, the question remains as to how such personalities can be systematically designed. To address this problem, this paper proposes a conceptual framework for designing and communicating agent personalities. We conducted two design workshops with 12 designers, discovering three dimensions of an agent personality and three channels through which to express it. The study results revealed that an agent personality can be crafted by designing common traits shared within a service domain, distinctive traits added for a unique identity, and neutral traits left intentionally undecided or user-driven. Such a personality can be expressed through how an agent performs services, what content it provides, and how it speaks and appears. Our results suggest a renewed view of the dimensions of conversational agent personalities.
Lucid Loop: A Virtual Deep Learning Biofeedback System for Lucid Dreaming Practice
Lucid dreaming, knowing one is dreaming while dreaming, is an important tool for exploring consciousness and bringing awareness to different aspects of life. We present a proof-of-concept system called Lucid Loop: a virtual reality experience in which one can practice lucid awareness via biofeedback. Visuals are creatively generated before the participant's eyes using a deep learning artificial intelligence algorithm to emulate the unstable and ambiguous nature of dreams. The virtual environment becomes more lucid or “clear” when the participant’s physiological signals, including brain waves, respiration, and heart rate, indicate focused attention. Lucid Loop enables the virtual embodied experience of practicing lucid dreaming where written descriptions fail. It offers a valuable and novel technique for simulating lucid dreaming without having to be asleep. Future developments will validate the system and evaluate its ability to improve lucidity by detecting and adapting to a participant's awareness.
Eve: A Sketch-based Software Prototyping Workbench
Prototyping involves the evolution of an idea through various stages of design until it reaches a certain level of maturity. These design stages include low-, medium- and high-fidelity prototypes. A workload analysis of prototyping using NASA-TLX showed an increase in workload, specifically in frustration, temporal demand and effort, and a decline in performance as participants progressed from low to high fidelity. Upon reviewing numerous commercial and academic tools that directly or indirectly support software prototyping in one aspect or another, we identified a need for a comprehensive solution that supports the entire software prototyping process. In this paper, we introduce Eve, a prototyping workbench that enables users to sketch their concept as a low-fidelity prototype. It generates the subsequent medium- and high-fidelity prototypes by means of UI element detection and code generation. We evaluated Eve using SUS with 15 UI/UX designers; the results indicate good usability and high learnability (usability score: 78.5). In future work, we aim to study the impact of Eve on the subjective workload experienced by users during software prototyping.
Auto-Inflatables: Chemical Inflation for Pop-Up Fabrication
This research aims to utilize an output method for zero energy pop-up fabrication using chemical inflation as a technique for instant, hardware-free shape change. By applying state-changing techniques as a medium for material activation, we provide a framework for a two-part assembly process starting from the manufacturing side whereby a rigid structural body is given its form, through to the user side, where the form potential of a soft structure is activated and the structure becomes complete. To demonstrate this technique, we created two use cases: firstly, a compression material for emergency response, and secondly a self-inflating packaging system. This paper provides details on the auto-inflation process as well as the corresponding digital tool for the design of pneumatic materials. The results show the efficiency of using zero energy auto-inflatable structures for both medical applications and packaging. This rapidly deployable inflatable kit starts from the assumption that every product can provide its own contribution by responding in the best way to a specific application.
Designing Narrative Learning in the Digital Era
This paper presents a first prototype of Mobeybou, a Digital Manipulative that uses physical blocks to interact with digital content. It aims to create an environment for promoting the development of language and narrative competences, as well as digital literacy, among preschool and primary school children. Mobeybou offers a variety of characters, objects and landscapes from various cultures around the world and can be used to create multicultural narratives. An interactive app developed for each country provides additional cultural and geographical information. A pilot study carried out with a group of 3rd graders showed that Mobeybou motivated and inspired them to actively and collaboratively create narratives integrating elements from the different countries. This may indicate Mobeybou’s potential to promote multiculturalism.
Material Sketching: Towards the Digital Fabrication of Emergent Material Effects
Designing for digital or robotic fabrication typically involves a virtual model in order to determine and coordinate the required operations of its construction. As a result, its creative design space becomes constrained to material expressions that can be predicted through digital modeling. This paper describes our preliminary thinking and first empirical results when this digital modeling phase is skipped and the designing occurs interactively ‘with’ the fabrication operations themselves. By analyzing the material responses of corrugated cardboard to simple linear cutting operations executed by a robotic arm, we demonstrate how emergent material effects can be discovered improvisationally. Such material effects cannot be virtually modeled; however, they can be recreated and controlled by the robotic manipulations. We believe this form of ‘material sketching’ broadens the advances in ‘human-fabrication interaction’ towards novel and unforeseeable expressions of physical form that require a much more direct, yet still digital, relationship with materiality.
BalBoa: A Balancing Board for Handstand Training
Balance is an essential physical skill to master, but a challenging one, given that it requires heightened body awareness to control, maintain and develop. In HCI physical training research, the design space of technology support for developing such body awareness remains narrow. Here, we introduce BalBoa, a balancing board to support balance training during handstands. We describe key highlights of the design process behind BalBoa and present a work-in-progress prototype, which we tested with handstand beginners and experts. We discuss feedback from our users and preliminary insights, and sketch the future steps towards a fully developed prototype.
Grasping the Future: Identifying Potential Applications for Mid-Air Haptics in the Home
Mid-air haptics is an emerging technology that can produce a sense of touch in mid-air using ultrasound. While the use of mid-air haptics has a lot of potential in various domains such as automotive, virtual reality or professional healthcare, we suggest that the home is an equally promising domain for such applications. We organized an ideation workshop with 15 participants preceded by a sensitizing phase to identify possible applications for mid-air haptics within the home. From the extensive set of ideas that resulted from this, five themes emerged: guidance, confirmation, information, warning and changing status. As general ‘application categories’, we propose that they can provide a useful basis for the future design and development of mid-air haptic applications in the home, and possibly also beyond.
What You Sketch Is What You Get: Quick and Easy Augmented Reality Prototyping with PintAR
Augmented Reality (AR) tools are currently targeted primarily at programmers, making designing for AR challenging and time-consuming. We developed an interactive prototype, PintAR, that enables the authoring and rapid prototyping of situated experiences by allowing designers to bring their ideas to life using a digital pen for sketching and a head-mounted display for visualizing and interacting with virtual content. In this paper, we explore the versatility such a tool could provide through case studies of a researcher, an artist, a ballerina, and a clinician.
Learning from Public Toilet Doors: Designing a Participatory Feedback Platform for a Connected Campus
Writing on walls in public spaces has been a way for people to communicate and express themselves. In this paper, we present the design of a participatory feedback gathering system inspired by this practice. By engaging the campus community in sharing their feedback on the use of spaces and facilities, we aim to encourage them to participate in co-creating the campus space. Our prototype combines a physical object’s affordance to attract attention with an internet forum to gather feedback. We document some key findings from our exploratory study and share ideas about future work.
Potentials and Challenges for User-generated Video Content in Public Libraries
The role of libraries is rapidly shifting, in large part as a consequence of digitization. In addition to providing access to collections of books and other physical media, public libraries today are embracing a new role as urban hubs in which a wide range of activities take place. In these activities, local knowledge is developed, exchanged, and disseminated. However, there are still very few digital services that support this new role. Here, we explore how to develop digital services for supporting and leveraging user-generated video content in library activities. Based on interviews and design scenarios used as probes, we describe the potentials and challenges for designing such services, as seen from the perspective of library staff. Our insights will inform the design of a new digital service for publics to participate in the collaborative production of videos to document, exchange, and disseminate local knowledge generated in library activities.
Designing Computational Materiality: A Preliminary Study to Explore the Lived Experience with transTexture Lamp
This paper articulates the design, manufacturing, and deployment of transTexture, a digital lamp featuring dynamic kinetic textures and changeable lighting effects. We deployed transTexture in the homes of two domain expert participants in a preliminary study to explore how the computational materiality of interaction can fit a changing everyday environment. We conducted two semi-structured interviews with the domain expert participants, at the beginning and end of the field study. Based on the lived experiences uncovered in our initial findings, we elaborate on two notions that describe types of user engagement with computational materiality: transformation and integration. We conclude with lessons learned from our study that will guide our future research on computational materiality.
ML-Process Canvas: A Design Tool to Support the UX Design of Machine Learning-Empowered Products
Machine learning (ML) is now widely used to empower products and services, but there is a lack of research on tools that involve designers in the entire ML process. Thus, designers who are new to ML technology may struggle to fully understand the capabilities of ML, users, and scenarios when designing ML-empowered products. This paper describes a design tool, ML-Process Canvas, which assists designers in considering the specific factors of the user, ML system, and scenario throughout the whole ML process. The Canvas was applied to a design project and was observed to contribute to the conceptual phase of UX design practice. In the future, we hope that the ML-Process Canvas will become more practical through continued use in design practice.
Growkit: Using Technology to Support People Growing Food at Home
The rapid growth of urban populations creates challenges for food production. One solution that is potentially more sustainable than current methods is localized production, in particular food production by individuals at home. Growing food at home is possible, but it is a process that requires motivation, knowledge and skills. Here, we present the design of a sensor platform aimed at helping individuals in urban environments grow food at home by informing them about the needs of their plants and, based on urban farming practices, by connecting them with a network of growers to share knowledge and produce.
Sign Language Recognition: Learning American Sign Language in a Virtual Environment
In this paper, we propose an approach for sign language recognition that makes use of a virtual reality headset to create an immersive environment. We show how features from data acquired by the Leap Motion controller, using an egocentric view, can be used to automatically recognize a user's signed gesture. The Leap features are used along with a random forest for real-time classification of the user's gesture. We further analyze which of these features are most important, in an egocentric view, for gesture recognition. To test the efficacy of our proposed approach, we test on the 26 letters of the alphabet in American Sign Language in a virtual environment with an application for learning sign language.
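A rough sketch of the classification step described above is given below. It assumes pre-extracted, fixed-length Leap Motion feature vectors stored in hypothetical files (leap_features.npy, letters.npy) and uses scikit-learn's random forest, which may differ from the authors' implementation; feature extraction from the Leap SDK is not shown.

```python
# Minimal sketch under assumptions: flattened Leap Motion hand features
# (e.g., fingertip positions and joint angles) classified into 26 ASL letters.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X = np.load("leap_features.npy")   # hypothetical: shape (n_samples, n_features)
y = np.load("letters.npy")         # hypothetical: labels "A".."Z"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("accuracy:", clf.score(X_te, y_te))
# Which features matter most in the egocentric view? Top-10 by importance.
print(np.argsort(clf.feature_importances_)[::-1][:10])
```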
ShadowLamp: An Ambient Display with Controllable Shadow Projection using Electrochromic Materials
In this paper we present ShadowLamp, a lamp concept supporting controllable shadow casting for displaying ambient information. The concept uses electrochromic displays to mask light and thereby allows switching of projected shadows. We implemented a prototype in a hexagonal frame with six separately controlled LEDs, compartmentalized to cast shadows at 60° angles. Alongside the LEDs, each compartment contains an electrochromic display for shadow control. As a use case, we fabricated displays for a children’s book and used them to change the shadows as the story progresses. The displays and LEDs are controlled by a Bluetooth-connected Android application.
Fibritary: Rotary Jet-Spinning for Personal Fiber Fabrication
The development of personal fabrication technologies has enabled end users to model and prototype desired objects. 3D printing technologies have eased our access to solid models; however, it is still a challenge to rapidly produce, at a personal level, the thin fibers that may help enrich the textures of models. We propose a system and method inspired by cotton candy making, which uses rotary jet-spinning to extract thin plastic fibers at high speed. We report our exploration of the proposed method, in which we studied various plastic materials, the effects of the rotation speed, and the hole size of the fiber exit. The method allows plastic fibers to be extracted at micro-scale, and we propose various example use cases. Our approach can be used in combination with traditional 3D printing techniques where soft and/or hairy models are required to design the texture of a 3D model.
Temporal Proximity Filtering
Users bundle the consumption of their favorite content in temporal proximity, according to their preferences and tastes. Thus, the underlying attributes of items implicitly match user preferences. However, current recommender systems largely ignore this fundamental driver when identifying matching items. In this work, we introduce a novel temporal proximity filtering method to enable item matching. First, we demonstrate that proximity preferences exist. Second, we present a temporal-proximity-induced similarity metric driven by user tastes, and third, we show that this induced similarity can be used to learn pairwise item similarity in attribute space. The proposed model does not rely on any knowledge outside users’ consumption and provides a novel way to build a recommender for novel items driven by user preferences and tastes.
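The toy function below illustrates one plausible reading of a temporal-proximity-induced similarity: counting how often two items are consumed by the same user within a fixed time window. The one-hour WINDOW and the counting scheme are arbitrary assumptions; this is a conceptual sketch, not the paper's exact metric.

```python
# Conceptual sketch: item pairs consumed close together in time, per user,
# accumulate similarity mass.
from collections import Counter
from itertools import combinations

WINDOW = 3600  # seconds; assumption for illustration

def temporal_proximity_similarity(events):
    """events: list of (user_id, item_id, timestamp) tuples."""
    pair_counts = Counter()
    by_user = {}
    for user, item, ts in events:
        by_user.setdefault(user, []).append((ts, item))
    for history in by_user.values():
        history.sort()
        for (t1, i1), (t2, i2) in combinations(history, 2):
            if i1 != i2 and abs(t2 - t1) <= WINDOW:
                pair_counts[frozenset((i1, i2))] += 1
    return pair_counts

sims = temporal_proximity_similarity([
    ("u1", "song_a", 0), ("u1", "song_b", 600),
    ("u2", "song_a", 0), ("u2", "song_b", 7200),
])
print(sims.most_common(5))
```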
Eevee: Transforming Images by Bridging High-level Goals and Low-level Edit Operations
There is a significant gap between the high-level, semantic manner in which we reason about image edits and the low-level, pixel-oriented way in which we execute these edits. While existing image-editing tools provide a great deal of flexibility for professionals, they can be disorienting to novice editors because of the gap between a user’s goals and the unfamiliar operations needed to actualize them. We present Eevee, an image-editing system that empowers users to transform images by specifying intents in terms of high-level themes. Based on a provided theme and an understanding of the objects and relationships in the original image, we introduce an optimization function that balances semantic plausibility, visual plausibility, and theme relevance to surface possible image edits. A formative evaluation finds that we are able to guide users to meet their goals while helping them to explore novel, creative ideas for their image edit.
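The snippet below sketches the kind of weighted objective the abstract describes, scoring candidate edits by semantic plausibility, visual plausibility, and theme relevance and surfacing the top-scoring ones. The component scores, weights, and candidate names are placeholders, not Eevee's actual optimization function.

```python
# Hedged sketch: rank candidate image edits by a weighted combination of the
# three criteria named in the abstract (weights are arbitrary).
def edit_score(edit, w_sem=0.4, w_vis=0.3, w_theme=0.3):
    return (w_sem * edit["semantic_plausibility"]
            + w_vis * edit["visual_plausibility"]
            + w_theme * edit["theme_relevance"])

candidates = [
    {"name": "add_snow", "semantic_plausibility": 0.9,
     "visual_plausibility": 0.7, "theme_relevance": 0.8},
    {"name": "recolor_sky", "semantic_plausibility": 0.6,
     "visual_plausibility": 0.9, "theme_relevance": 0.5},
]
ranked = sorted(candidates, key=edit_score, reverse=True)
print([c["name"] for c in ranked])  # edits surfaced to the user, best first
```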
Improving the Input Accuracy of Touchscreens using Deep Learning
Touchscreens combine input and output in a single interface. While this enables intuitive interaction and dynamic user interfaces, the fat-finger problem and the resulting occlusions still impact input accuracy. Previous work presented approaches to improve touch accuracy by involving visual features of the top side of fingers, as well as static compensation functions. While the former is not applicable to recent mobile devices, as the top side of a finger cannot be tracked, compensation functions do not take properties such as finger angle into account. In this work, we present a data-driven approach to estimating the 2D touch position on commodity mutual-capacitive touchscreens which increases the touch accuracy by 23.0% over recently implemented approaches.
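As a hedged illustration of such a data-driven estimator, the sketch below defines a small PyTorch CNN that regresses a 2D touch position from a capacitive image patch around the touch blob. The 8x8 patch size and the architecture are assumptions for illustration, not the model reported in the paper.

```python
# Minimal sketch (not the paper's model): regress (x, y) from a capacitance patch.
import torch
import torch.nn as nn

class TouchRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 2),               # predicted (x, y) offset
        )

    def forward(self, x):                   # x: (batch, 1, 8, 8) capacitance patch
        return self.net(x)

model = TouchRegressor()
loss_fn = nn.MSELoss()                      # trained against ground-truth positions
pred = model(torch.randn(4, 1, 8, 8))
print(pred.shape)                           # torch.Size([4, 2])
```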
Towards Narrative-Driven Atmosphere for Virtual Classrooms
In this paper, we propose the integration of audience atmosphere generation techniques into Interactive Storytelling (IS) engines to obtain more realistic and variable Virtual Reality (VR) training systems. We outline a number of advantages of this novel combination compared to current atmosphere generation techniques. The features of recent IS engines can be extended to automatically adapt the atmosphere produced by a group of virtual humans in response to user intervention while staying coherent with the unfolding story of the training scenario. This work is currently being developed in the context of a VR training system for teachers, in which they learn to manage a difficult classroom under the guidance of an instructor.
Interactive Visualizer to Facilitate Game Designers in Understanding Machine Learning
Machine Learning (ML) is a useful tool for modern game designers but often requires a technical background to understand. This knowledge gap can deter less technical game designers from employing ML techniques to evaluate designs or from incorporating ML into game mechanics. Our research aims to bridge this gap by exploring interactive visualizations as a way to introduce ML principles to game designers. We have developed QUBE, an interactive level designer that shifts ML education into the context of game design. We present QUBE’s interactive visualization techniques and their evaluation through two expert panels (n=4, n=6) with game design, ML, and user experience experts.
IPME Workbench: A Data Processing Tool for Mixed-Methodology Studies of Group Interactions
Today’s small group interactions often occur in multi-device, multi-artifact ecosystems. CHI researchers studying these group interactions may adopt a socio-behavioral approach, a sensing/data mining approach, or both. A mixed-methodology approach to studying group interactions in collocated settings requires collecting data from a range of sources, such as audio, video, multiple sensor streams, and multiple software logs. Analyzing these disparate data sources systematically, with opportunities to rapidly form and correct research insights, can help researchers who study group interactions. But engineering solutions that assimilate multiple data sources to support different methodologies, ranging from grounded theory to log analysis, are rarely found. To address this frequent and tedious problem of data collection and processing in mixed-methodology studies of group interactions, we introduce a workbench tool: Interaction Proxemics in Multi-Device Ecologies (IPME). The IPME workbench synchronizes multiple data sources, provides data visualization, and offers opportunities for data correction and annotation.
Interactive Lyric Translation System: Implementation and Experiments
We can listen to music from many different countries thanks to the evolution of the Internet. However, understanding lyrics written in foreign languages is still difficult, even though many international songs have been translated. In this paper, we propose an interactive lyric translation system and describe its implementation. Using the proposed system, users can modify lyrics by selecting a sample lyric from candidate translations and then freely editing it. The system also allows users to listen to their translation by applying a singing voice synthesizer and to search for related words. We conducted experiments with 12 participants to compare lyric translation using the proposed system to manual lyric translation. The translations produced with the proposed system received better evaluation results than the manual translations.
Experizone: Integrating Situated Scientific Experimentation with Teaching of the Scientific Method
Citizen Science projects ask their participants to contribute work to pre-defined topics, thereby typically rendering the participants as mere consumers of often narrowly defined tasks. In this work-in-progress paper, we present our work on an interactive experimentation platform that allows anybody – researchers as well as members of the crowd – to run experiments and test scientific hypotheses with a local crowd of volunteers. The platform also enforces a lightweight review process for teaching its users how to formulate valid scientific hypotheses and experimental designs.
A Dynamic Hierarchical Approach to Modelling and Orchestrating the Web of Things Using the DOM, CSS and JavaScript
There is a lot of work in progress by the W3C and others surrounding a Web-standards-compliant Web of Things (WoT), which it is hoped will unify the current Internet of Things infrastructure. Our contribution uses the Document Object Model (DOM) to represent complex physical environments, with a CSS-like syntax for storing and controlling the state of ‘things’ within them. We describe how JavaScript can be used in conjunction with these to create an approach that is familiar to Web developers and may help them transition more smoothly into WoT development. We share our implementation and explore some of the many potential avenues for future research, including rich WoT development tools and the possibility of content production for physical environments.
Coordi: A Virtual Reality Application for Reasoning about Mathematics in Three Dimensions
The goal of our research has been to create software that extends the benefits of virtual reality (VR) to mathematics education. We report on the design and evaluation of a VR application meant to support students’ reasoning about objects in three-dimensional (3D) coordinate systems and to explore the possibilities of the application for mathematics education in high school classrooms.
A Mouse (H)Over a Hotspot Survey: An Exploration of Patterns of Hesitation through Cursor Movement Metrics
This paper presents the results of an empirical exploration of 10 cursor movement metrics designed to measure respondent hesitation in online surveys. As a use case, this work considers an online survey aimed at exploring how people gauge the electricity consumption of domestic appliances. The cursor metrics were derived computationally from the mouse trajectories recorded while respondents rated the consumption of each appliance, and were analyzed using Multidimensional Scaling, Jenks Natural Breaks, and the Jaccard Similarity Index. The results show that, although the metrics measure different aspects of the mouse trajectories, they agree on the appliances that generated higher levels of hesitation. The paper concludes with an outline of future work to further understand how cursor trajectories can be used to measure respondent hesitation.
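The sketch below shows two illustrative hesitation metrics computed from a (t, x, y) cursor trajectory, plus a Jaccard comparison of the appliances each metric would flag. The metric definitions and thresholds are assumptions for demonstration, not the paper's set of ten.

```python
# Illustrative hesitation metrics over a cursor trajectory (rows of t, x, y).
import numpy as np

def hover_time(ts):
    """Total time (s) the cursor is nearly stationary."""
    t, x, y = ts[:, 0], ts[:, 1], ts[:, 2]
    speed = np.hypot(np.diff(x), np.diff(y)) / np.maximum(np.diff(t), 1e-9)
    return float(np.sum(np.diff(t)[speed < 5.0]))   # < 5 px/s counts as hovering

def direction_changes(ts):
    """Number of sign changes in horizontal movement direction."""
    dx = np.diff(ts[:, 1])
    signs = np.sign(dx[dx != 0])
    return int(np.sum(signs[1:] != signs[:-1]))

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

# e.g. agreement between the appliances each metric flags as "hesitant"
print(jaccard({"fridge", "kettle", "tv"}, {"kettle", "tv", "router"}))
```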
Self-Efficacy-Based Game Design to Encourage Security Behavior Online
Eliciting cybersecurity behavior change in users has been a difficult task. Although most users have concerns about their safety online, few take precautions. Transformational games offer a promising avenue for cybersecurity behavior change. To date, however, studies typically focus on entertainment value instead of investigating the effectiveness and design potential of games in cybersecurity. As a first step to filling this gap, we present the design of Hacked Time, a desktop game that aims to encourage cybersecurity behavior change by translating self-efficacy theory into the game’s design. As cybersecurity games are a relatively novel area, our design aims to serve as a prototype for mapping specific behavior change principles relevant to this area onto game design practice.
IPANDA: A Playful Hybrid Product for Facilitating Children’s Wildlife Conservation Education
In this paper, we introduce the concept of a hybrid product that combines a digital game with physical experiences, and we discuss practical recommendations for developing hybrid products in the domain of wildlife conservation for children. IPANDA, comprising hardware and software applications with sensing technology, gathers real-time environmental data and connects it to a virtual animal that experiences environmental challenges related to its living habits. Children who play with the product can learn about the environment around them and develop wildlife protection awareness. To evaluate our conceptual system, we created a preliminary prototype and conducted a user study using semi-structured interviews and the Smileyometer. Our findings reveal IPANDA to be a promising tool and groundwork for encouraging children to explore the physical environment and gain wildlife protection education.
Outstanding: A Perspective-Switching Technique for Covering Large Distances in VR Games
Room-scale virtual reality games allow players to experience an unmatched level of presence. A major reason is the natural navigation provided by physical walking. However, the tracking space is still limited, and viable alternatives or extensions are required to reach further virtual destinations. Our current work focuses on traveling over (very) large distances, an area where approaches such as teleportation are too exhausting and WIM teleportations potentially reduce presence. Our idea is to equip players with the ability to switch on demand from a first-person to a third-person god-mode perspective. From above, players can command their avatar similar to a real-time strategy game and initiate travel over large distances. In our first exploratory evaluation, we learned that the proposed dynamic switching is intuitive, increases spatial orientation, and allows players to maintain a high degree of presence throughout the game. Based on the outcomes of a participatory design workshop, we also propose a set of extensions to our technique that should be considered in the future.
Impact of Game Elements in Players Artistic Experience
This paper analyses the impact that introducing game elements can have on players’ artistic valuation of video games. We put forth the hypothesis that aesthetic experiences are incompatible with game elements (challenges and rewards/penalties). We tested it by allowing subjects (n=76) to experience two different variants of the same artistic video game, one with game elements and one without. Using a mixed methods approach, we studied results from self-reports and open-ended questionnaires. These indicate that subjects who played the game-element version reported being less focused on understanding the experience’s meaning and found it less meaningful, to a statistically significant degree. We therefore conclude that game designers seeking to mediate artistic experiences should be cautious when introducing game elements, as they can negatively impact the experience's value.
MirrorMe: Increasing Prosocial Behaviour in Public Transport
Public transport can be a place where commuters feel rushed or stressed. Missing a train, a delayed bus or crowdedness at the station does not induce happiness in most people. As a consequence, prosocial behaviour such as offering someone a seat is displayed less often. We discuss the design and design process of MirrorMe, a simple communal game to induce a positive mood in commuters. MirrorMe aims to increase prosocial behaviour through mimicry. Commuters are challenged to “make a face” and thereby connect with other commuters. MirrorMe will be installed on a public display close to a train station, enabling access for all commuters and passers-by. This work addresses the need for games and play in public settings to stimulate prosocial behaviour. It exemplifies how multidisciplinary HCI approaches in a game jam setting can contribute to real-life challenges. We conclude with open questions for impact evaluation in future work.
iGYM: A Wheelchair-Accessible Interactive Floor Projection System for Co-located Physical Play
Physical play opportunities for people with motor disabilities typically do not include co-located play with peers without disabilities in traditional sport settings. In this paper, we present a prototype of a wheelchair-accessible interactive floor projection system, iGYM, designed to enable people with motor disabilities to compete on par with, and in the same environment as, peers without disabilities. iGYM provides two key system features, peripersonal circle interaction and adjustable game mechanics (physics), that enable individualized game calibration and wheelchair-accessible manipulation of virtual targets on the floor. Preliminary findings from our pilot study with people with motor disabilities using power wheelchairs or manual wheelchairs, as well as people without disabilities, showed that the prototype system was accessible to all participants at higher than anticipated target speeds. Our work has implications for designing novel physical play opportunities in inclusive traditional sport settings.
Beyond Horror and Fear: Exploring Player Experience Invoked by Emotional Challenge in VR Games
Digital gameplay experience depends not only on the type of challenge that a game provides, but also on how the challenge is presented. With the introduction of a novel type of emotional challenge and the increasing popularity of virtual reality (VR), there is a need to explore the player experience invoked by emotional challenge in VR games. We selected two games that provide emotional challenge and conducted a 24-subject experiment to compare the impact of a VR and a monitor-display version of each game on multiple aspects of player experience. Preliminary results show that many positive emotional experiences were enhanced significantly with VR, while negative emotional experiences such as horror and fear were less influenced; participants’ perceived immersion and presence were higher when using VR than when using a monitor display. Our findings on VR’s expressive capability for emotional experiences may encourage more design and research with regard to emotional challenge in VR games.
FoBo: Towards Designing a Robotic Companion for Solo Dining
Despite the known benefits of commensal eating, eating alone is becoming increasingly common as people struggle to find time and manage geographical boundaries to enjoy a meal together. Eating alone, however, can be boring and less motivating, and has been shown to have a negative impact on a person's health and wellbeing. To remedy such situations, we take a celebratory view on robotic technology to offer unique opportunities for solo diners to feel engaged and indulged in dining. We present FoBo, a speculative design prototype for a mischievous robotic dining companion that acts and behaves like a human co-diner. Besides tackling solo dining, this work also aims to challenge the perception that robots always have to be infallible; they can be erroneous and clumsy, just as we humans are.
Designing an Escape Room in the City for Public Engagement with AI-enhanced Surveillance
Escape the Smart City is a critical pervasive game that uses an escape room format to help players develop an understanding of the implications of urban surveillance technologies. Set in downtown Amsterdam, players work together as a team of hackers to stop the mass deployment of an all-seeing AI-enhanced surveillance system. In order to defeat the system players need to understand its attributes and exploit its weaknesses. Novel gameplay elements include locating hidden surveillance cameras in the city, discovering and exploiting algorithmic biases in computer vision, and exploring new techniques to avoid facial recognition systems. This work makes two distinct contributions to the CHI community: first, it introduces critical pervasive games as an approach to engage the public in complex sociotechnical issues, and second, it experiments with the escape room format as a platform for critical play.
Moldy Ghosts and Yeasty Invasions: Glitches in Hybrid Bio-Digital Games
Hybrid bio-digital games integrate real, biological materials into computer systems. They offer a rich, playful space in which interactions between humans, computers, and non-human organisms can be explored. However, the concept of video game ‘glitching’ in hybrid bio-digital games, specifically glitches that result from interactions between the biological materials and the computer hardware and/or software, has not been explored in great detail. We report two instances of glitches observed during Mold Rush, a hybrid bio-digital game based on the growth patterns of living mold: the creation of an additional game character (Moldy Ghosts) and a gameplay freeze (a Yeasty Invasion). As we interpret our observations, we question the potential for glitches to become valuable tools for framing HCI investigations into designing productive and meaningful biological-digital interactions. The goal of this paper is to propose three testable routes by which glitches could be put to use: 1) glitch as a tool for learning, 2) glitch as a precursor for an experience-enhancing game component, and 3) glitch as an instigator of discourse on the ethical implications of bio-digital games.
HedgewarsSGC: A Competitive Shared Game Control Setting
Shared game control (SGC) is a multiplayer context that is considered within games user research. With the popular Twitch Plays Pokémon, settings of this type have also received broad media attention. In this paper, we introduce and describe HedgewarsSGC, our modifications to the open-source game Hedgewars for investigating different player roles in this shared game control context: besides considering competing groups who share control over their units via input aggregators, it also provides options for spectators who do not want to give up individual control. Thus, HedgewarsSGC is an approach to investigate SGC in such a scenario and additionally allows further reasoning about input aggregation.
Gamification of a To-Do List with Emotional Reinforcement
Gamification can change how and why people interact with software. A common approach is to use quantitative feedback to give users a feeling of progress or achievement. There are, however, other ways to provide users with motivation or meaning during normal computer interactions, such as using emotional reinforcement. This could provide a powerful new tool to allow the positive effects of gamification to reach wider contexts. This paper investigates the design and evaluation of a mobile to-do list application, ‘Tamu To-Do’, which utilises gamified emotional reinforcement, as seen in Figure 1. A week-long field study (N=9) recorded user activity and impressions of the application. The results support emotional reinforcement’s potential as a gamification strategy to improve user motivation and engagement.
Left-Handed Control Configuration for Side-Scrolling Games
Nowadays video games are more inclusive: children, disabled people, and seniors are considered. However, there are players who require a special configuration in order to enjoy playing equally; one such group is left-handed players. To determine whether to create a special configuration designed for the needs of a left-handed audience, we carried out a study in which left-handed players were recruited and offered two types of control: a standard right-handed configuration and a customized left-handed configuration. A significant main effect of the left-handed control configuration on player experience was demonstrated. The study reveals the importance of a catered control configuration for creating a fair, non-stressful and user-friendly environment for players.
FRVRIT: A Tool for Full Body Virtual Reality Game Evaluation
Virtual reality (VR) games continue to grow in popularity with the advancement of commercial VR capabilities such as full-body tracking. This means game developers can design for novel interactions involving a player’s full body rather than relying solely on controller input. However, existing research on evaluating player interaction in VR games primarily focuses on game content and inputs from game controllers or player hands. Current approaches for evaluating players' full-body interactions are limited to simple qualitative observation, which makes evaluation difficult and time-consuming. We present the Full Room Virtual Reality Investigation Tool (FRVRIT), which combines data recording and visualization to provide a quantitative solution for evaluating player movement and interaction in VR games. The tool facilitates objective data observation through multiple visualization methods that can be manipulated, allowing developers to better observe and record player movements in the VR space to improve and iterate on the desired interactions in their games.
Understanding Experiences of Blind Individuals in Outdoor Nature
Research shows that exposure to nature has benefits for people’s mental and physical health and that ubiquitous and mobile technologies encourage engagement with nature. However, existing research in this area is primarily focused on people without visual impairments and is not inclusive of blind and partially sighted individuals. To address this gap, we interviewed seven blind people (without remaining vision) about their experiences of exploring the outdoor natural environment, to gain an understanding of their needs and barriers and how these needs can be addressed by technology. In this paper, we present the three themes identified from the interview data: independence, knowledge of the environment, and sensory experiences.
Perceptions of Chatbots in Therapy
Several studies have investigated the clinical efficacy of remote-, internet- and chatbot-based therapy, but other factors, such as enjoyment and smoothness, are also important in a good therapy session. We piloted a comparative study of therapy sessions in which 10 participants interacted with human therapists and with a chatbot (simulated using a Wizard of Oz protocol). We found evidence to suggest that, compared against a human therapist control, participants find chatbot-provided therapy less useful and less enjoyable, and their conversations less smooth (a key dimension of a positively regarded therapy session). Our findings suggest that research into chatbots for cognitive behavioural therapy would be more effective if it directly addressed these drawbacks.
Preferred Appearance of Captions Generated by Automatic Speech Recognition for Deaf and Hard-of-Hearing Viewers
As the accuracy of Automatic Speech Recognition (ASR) nears human-level quality, it might become feasible as an accessibility tool for people who are Deaf and Hard of Hearing (DHH) to transcribe spoken language to text. We conducted a study using in-person laboratory methodologies to investigate requirements and preferences for new ASR-based captioning services when used in a small group meeting context. The open-ended comments reveal an interesting dynamic between caption readability (visibility of text) and occlusion (captions blocking the video contents). Our 105 DHH participants provided valuable feedback on a variety of caption-appearance parameters (strongly preferring familiar styles such as closed captions), and in this paper we start a discussion on how ASR captioning could be visually styled to improve text readability for DHH viewers.
Paper Prototyping Comfortable VR Play for Diverse Sensory Needs
We co-designed paper prototype dashboards for virtual environments for three children with diverse sensory needs. Our goal was to determine individual interaction styles in order to enable comfortable and inclusive play. As a first step towards an inclusive virtual world, we began by designing for three sensory-diverse children who have labels of neurotypical, ADHD, and autism, respectively. We focused on their leisure interests and their individual sensory profiles. We present the results of co-design with family members and of paper prototyping sessions conducted by family members with the children. The results contribute preliminary empirical findings for accommodating different levels of engagement and empowering users to adjust environmental thresholds through interaction design.
Designing Free-Living Reports for Parkinson’s Disease
Parkinson’s disease is a progressive neurodegenerative disorder characterized in part by motor fluctuations throughout the day. This makes clinical assessment hard to accomplish in a single appointment, as the patient's status at that time may be largely different from their condition two hours before. Clinicians can only evaluate patients from time to time, making symptom fluctuations difficult to discern. The emergence of wearable sensors has enabled the continuous monitoring of patients out of the clinic, in a free-living environment. Although these sensors exist and are being explored in research settings, there have been limited efforts to understand which information should be presented to non-technical people, clinicians (and patients), and how. To fill this gap, we started by conducting a focus group with clinicians to capture the information they would like to see derived from free-living sensors, and the different levels of detail they envision. Building on the insights collected, we developed a data-driven platform, DataPark, that presents usable visualizations of data collected from a wearable tri-axial accelerometer. It enables report parameterization and includes a battery of state-of-the-art algorithms to quantify physical activity, sleep, and clinical evaluations. A two-month preliminary deployment in a rehabilitation clinic showed that patients feel rewarded and included by receiving a report, and that the change in paradigm is not burdensome and adds information for clinicians to support their decisions.
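For a sense of what quantifying physical activity from a tri-axial accelerometer can involve, the sketch below computes an ENMO-like per-epoch activity measure (Euclidean norm minus one g, averaged per epoch). The sampling rate, epoch length, and metric choice are assumptions for illustration and not necessarily the algorithms used in DataPark.

```python
# Hedged sketch: per-epoch activity from tri-axial acceleration (in g).
import numpy as np

def activity_per_epoch(acc, fs=50, epoch_s=60):
    """acc: (n_samples, 3) array in g; returns mean max(|acc|-1, 0) per epoch."""
    magnitude = np.linalg.norm(acc, axis=1)
    enmo = np.clip(magnitude - 1.0, 0.0, None)       # remove the 1 g gravity offset
    samples_per_epoch = fs * epoch_s
    n_epochs = len(enmo) // samples_per_epoch
    trimmed = enmo[: n_epochs * samples_per_epoch]
    return trimmed.reshape(n_epochs, samples_per_epoch).mean(axis=1)

acc = np.random.normal([0, 0, 1], 0.02, size=(50 * 3600, 3))  # one simulated hour
print(activity_per_epoch(acc)[:5])
```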
Digitally Augmenting the Physical Ground Space with Timed Visual Cues for Crutch-Assisted Walking
This late-breaking work presents initial results regarding a novel mobile-projection system aimed at helping people learn how to walk with crutches. Existing projection-based solutions for gait training are based on walking over a fixed surface (usually a treadmill). In contrast, our solution projects visual cues (footprints and crutch icons) directly onto the floor, augmenting the physical space surrounding the crutches in a portable way. Walking with crutches is a learned skill that requires continuous repetition and constant attention to detail to make sure the crutches are being used correctly, avoiding negative consequences such as falls or injuries. We conducted expert consultation sessions and identified the main issues that patients face when walking with crutches. This informed the design of Augmented Crutches. We performed a qualitative evaluation and conclude with design implications: the importance of timing, self-assurance, and awareness.
Virtual Humans in Health-Related Interventions: A Meta-Analysis
Virtual humans are computer-generated characters designed to simulate key properties of human face-to-face conversation, both verbal and nonverbal. Their human-like physical appearance and nonverbal behavior set them apart from chatbot-type embodied conversational agents, and they have recently received significant interest as a potential tool for health-related interventions. As healthcare providers deliberate whether to adopt this new technology, it is crucial to examine the empirical evidence about their effectiveness. We systematically evaluated evidence from controlled studies of interventions using virtual humans for their effectiveness on health-related outcomes. Nineteen studies were included from a total of 3354 unique records. Although study objectives varied greatly, most targeted psychological conditions, such as mood, anxiety, and autism spectrum disorders (ASD). Virtual humans demonstrated effectiveness in improving health-related outcomes, more strongly when targeting clinical conditions, such as ASD or pain management, than general wellness, such as weight loss. We discuss the emerging differences when designing for clinical interventions versus wellness.
Family-Based Sleep Technologies: Opportunities and Challenges
Sleep is a critical component of overall wellness, and pervasive and ubiquitous computing technologies have shown promise for allowing individuals to track and manage their sleep quality. However, sleep quality is also affected by interpersonal factors, especially for families with young children. In this study, we adopted a family informatics approach to understand opportunities and challenges for sleep technologies at the family level. We conducted home-based interviews with 10 families with young children, asking them about their current practices, values, and perceived role for technology. We describe challenges across three phases: bedtime, nighttime, and waking. We show that family-based sleep technologies may have the greatest impact by supporting family activities and rituals, encouraging children’s independence, and providing comfort.
“Why is the Doctor a Man”: Reactions of Older Adults to a Virtual Training Doctor
Shared decision making (SDM) is increasingly considered as the best way to reach a treatment decision in a clinical environment. However, the use of SDM in practice can be obstructed by a number of factors, such as time constraints or lack of applicability due to patient characteristics. Our project, PrepDoc, explores how a Virtual Training Doctor (VTD) can help patients overcome some of these obstacles to experiencing effective SDM during doctor’s visits. In this paper, we report on user studies conducted with 19 participants in Scotland aged 65+. The goal of these studies was to identify the reactions of this audience to the PrepDoc system, evaluate its suitability within Scotland, and elicit suggestions to improve it. Our findings revealed that the idea of empowering people to participate in SDM using a virtual agent was positively received by all participants. However, the reactions to how this idea was implemented in the PrepDoc system varied greatly across participants. Based on this, our paper outlines recommendations for enhancing the user experience with VTDs, accommodating individual differences of older adults, and accounting for the national context.
HeliCoach: A Drone-based System Supporting Orientation and Mobility Training for the Visually Impaired
Orientation and mobility (O&M) training is essential for improving the independence and wellbeing of the visually impaired. However, the shortage of qualified trainers and unengaging training content limit the number of O&M training recipients. In this paper, we propose a drone-based intelligent tutor system, HeliCoach, to provide cost-effective and personalized O&M training. We first elaborate on the system design and potential scenarios of HeliCoach use. We then demonstrate the effectiveness of this concept using a preliminary user study. Finally, we discuss the implications and challenges of this system.
Designing for Reminiscence with People with Dementia
We investigate how technology can be used to support people with dementia to engage in Reminiscence Therapy. We used a participatory design approach carried out over three stages: scope, design and evaluation, involving five participants with dementia. We also engaged professionals and caregivers through a survey. We provide initial recommendations for engaging participants with dementia on how they wish to reminisce and what technology may support this.
MATY: Designing An Assistive Robot for People with Alzheimer’s
Caregiving for a person with Alzheimer’s can be a very demanding task, both physically and psychologically. Technological responses that support caregiving and improve the quality of life of people with Alzheimer’s and their caregivers are lacking. Using a research through design approach, we devised a robot focused on empowering people with Alzheimer’s and fostering their autonomy, from the initial sketch to a working prototype. MATY is a robot that encourages communication with relatives and promotes routines by prompting the person to take action, using a multisensorial approach (e.g., projecting biographical images, playing suggestive sounds, or emitting soothing aromas). The paper reports the iterative, incremental design process performed together with stakeholders. We share the first lessons learned in this process with HCI researchers and practitioners designing solutions, particularly robots, to assist people with dementia and their caregivers.
An N-of-1 Evaluation Framework for Behaviour Change Applications
Mobile behaviour change applications should be evaluated for their effectiveness in promoting the intended behaviour changes. In this paper we argue that the ‘gold standard’ form of effectiveness evaluation, the randomised controlled trial, has shortcomings when applied to mobile applications. We propose that N-of-1 (also known as single-case design) approaches have advantages. There is currently a lack of guidance for researchers and developers on how to take this approach. We present a framework encompassing three phases and two related checklists for performing N-of-1 evaluations. We also present our analysis of using this framework in the development and deployment of an app that encourages people to walk more. Our key findings are that there are challenges in designing engaging apps that automate N-of-1 procedures, and in collecting sufficient data of good quality. Further research should address these challenges.
Ergotact: Including Force-based Activities into Post-stroke Rehabilitation
Ergotact introduces the possibility of including force-based rehabilitation activities for the upper limb of post-stroke survivors. These activities are integrated into a dedicated game deployed on a tabletop. The patient interacts in the game with a tangible object that has to be moved, rotated, tightened/untightened, and lifted according to the gameplay. The surface of the object is equipped with a matrix of force sensors, which allows force-based activities to be introduced into the game; for the purpose of the game, the object also includes an accelerometer and a gyroscope. The paper presents the concept and first feedback from therapists.
Understanding and Correcting Inaccurate Calorie Estimations on Amazon Mechanical Turk
Current research on technology for fitness is often focused on tracking and encouraging healthy lifestyles. In contrast, we adopt an approach based on improving consumer knowledge of food energy. An interactive survey was distributed on Amazon Mechanical Turk to assess how well crowdworkers can judge the calories in a series of foods. Our subjects yielded results comparable to traditional participants, exhibiting well-known phenomena such as underestimating the energy contained in foods perceived to be healthy. Several techniques from the online education literature, such as prompts for reflection, were also investigated for their efficacy at increasing estimation accuracy. Although calories were more accurately judged after applying these methods on aggregate, the effects of individual techniques on our participants were inconclusive. A more thorough investigation is thus needed into effective educational methods for correcting calorie estimations on the Web.
Reaching Optimal Health: The Voice of Clinicians from a Roleplay Simulation
Helping patients reach optimal health entails a holistic approach of complex interventions, including clinical decision support systems, patient decision aids, and self-management tools. Understanding the human factors in technological interventions in real-world settings is at the core of HCI research; however, it requires a considerable amount of time to run experimental procedures, especially with patients with mental disorders. We conducted a roleplay simulation over a period of two weeks, comprising observations and semi-structured interviews with eight health care professionals who participated in the simulated use of a health optimization system. The study revealed the SWING model of enabling interventions towards optimal health: i) Sharing feelings, ii) Weaving of information, iii) Improving awareness, iv) Nurturing trust, and v) Giving support. This model establishes a common path from research to practice for researchers and practitioners in eHealth and HCI.
Nurse-led Design and Development of an Expert System for Pressure Ulcer Management
The use of Clinical Practice Guidelines (CPGs) is known to enable better care outcomes by promoting a consistent way of treating patients. This paper describes a user-centered design approach, involving nurses, to develop a prototype expert system for modelling CPGs for Pressure Ulcer management. The system was developed using Visirule, a software tool that uses a graphical approach to modelling knowledge. The evaluation with 5 staff nurses compared nurses’ time and accuracy in assessing a wound using CPGs accessed via the Intranet of an NHS Trust versus the expert system. A post-task qualitative evaluation revealed that nurses found the system usable with a systematic design, that it increased access to CPGs by reducing the time and effort required by other usual methods of access, that it provided opportunities for learning due to its interactive nature, and that its recommendations were more actionable than those provided by the usual static CPG documents.
Efficacy of Film for Raising Awareness of Diverse Users
Technology companies are increasingly acknowledging the need to make their products usable for diverse users, including people with disabilities and aging populations; as a result, educators need to consider how to include accessibility-related topics in college-level technology-based courses. In this paper, we present a study of the efficacy of short (10-minute) documentaries, created by student filmmakers, that portrayed three people with different disabilities. We evaluated the films with undergraduate and graduate students enrolled in technology-related courses to explore the films’ ability to raise awareness of accessibility-related concerns. We found that the films were effective at changing some incorrect assumptions about designing for diverse users and increasing recognition of the importance of designing for diversity.
Tangible Organs: Introducing 3D Printed Organ Models with VR to Interact with Medical 3D Models
Medical images contain important information for diagnosis and preoperative planning in modern medicine. Interacting with these images still happens mostly with a mouse, abstract gestures, or handles. In a focus group with five surgeons, we evaluated the possibilities of 3D printed organ models for interaction in VR for the use case of surgery planning. The surgeons rated the approach as highly useful and highlighted the advantage of more easily grasping spatial relations, which would greatly improve the planning phase of surgery.
Designing for Wellbeing-as-Interaction
This paper introduces the concept of wellbeing-as-interaction. Instead of designing and evaluating technologies that locate wellbeing in the individual, this paper presents early-stage work on designing technologies for people to collaboratively express, interpret, discuss and enact wellbeing. To explore this concept, we examined the wellbeing of six pairs of university students through a 7-day deployment of a technology probe ‘MoodCloud’. MoodCloud consisted of a mobile app and an ambient display to share wellbeing updates through colour. We observed three patterns of wellbeing interactions: updates, follow-ups, and message chains. Wellbeing interactions benefitted from the ambiguity of colour and a clearly defined target audience, but students also communicated through other channels to make sense of updates and to enact support. The concept of wellbeing-as-interaction seeks to offer an analytic lens for the CHI community as well as inspiration for novel wellbeing technologies that emphasise meaningful interactions with friends.
in5: a Model for Inbodied Interaction
The human body is a complex system itself composed of complex systems; its state influences all aspects of our health and wellbeing, including our cognitive and physical performance. In HCI, most of us are not well versed in how this complex system works. The following paper proposes in5, a model to help make that physiology accessible for design. The model has two parts: (1) the MEECS dichotomies: five fundamental-to-life, volitional processes – move, eat, engage, cogitate, sleep – that are affected by parameters of quality, quantity, time, and context, and (2) tuning: an approach to adjust the parameters of these dichotomies toward “dialing in” health, wellbeing, and performance. The paper also offers examples of how this model can be explored for design research.
Designing Efficacious Mobile Technologies for Anxiety Self-Regulation
This paper presents a step-by-step process developed primarily to extract design prerequisites for personalized mobile technologies that assist anxiety self-regulation. This process, which we regard as a preliminary framework, was developed, refined, and tested based on a multidisciplinary literature review and an exploratory study conducted with mental health professionals who treat anxiety disorders. The step-by-step nature of this framework draws from multiple disciplinary and stakeholder perspectives, integrates knowledge about efficacious psychological interventions, considers individual differences and specific challenges faced by patients, and addresses contextual needs. It also includes incremental and iterative refinements based on multidisciplinary sources to foster more evidence-based interface designs. Once it reaches maturity, this framework can potentially be applied to designing efficacious technologies for a range of mental health conditions. The expected future contributions and limitations of the framework are also discussed.
A Preliminary Investigation of the Role of Anthropomorphism in Designing Telehealth Bots for Older Adults
Autonomous virtual agents (VAs) are increasingly used commercially in critical information spaces such as healthcare. Existing VA research has focused on microscale interaction patterns such as usability and artificial intelligence. However, the macroscale patterns of users’ information practices and their relationship with the design and adoption of VAs have been largely understudied, especially when it comes to older adults (OAs), who stand to benefit greatly from VAs. We conducted a preliminary investigation to understand the role that design elements, such as anthropomorphic aspects of VAs, play in OAs’ perception of VAs and in OAs’ preferences for VAs’ participation within their health information practices. Some unexpected findings indicate that the fidelity of anthropomorphic features influences perception in ways that depend on the context of the information tasks. This suggests that research on improving the design and increasing the adoption of VAs should factor in the interplay between the fidelity of VA representation and information context.
SoundGlance: Briefing the Glanceable Cues of Web Pages for Screen Reader Users
Screen readers have become a core assistive technology for blind web users to browse web pages. Although screen readers can convey the textual information or structural properties of web pages, they cannot deliver a page’s overall impression. Such a limitation hinders blind web users from obtaining an overview of a website, which non-blind people can do in a short time. As such, we present SoundGlance, a novel application that briefly delivers an auditory summary of web pages. SoundGlance supports screen reader users by converting the important glanceable cues of the pages into sound. The feasibility of the prototype was examined in a pilot study with fourteen blind people. Several practical insights were derived from the experiment.
On-road Stress Analysis for In-car Interventions During the Commute
This paper focuses on the larger question of when to administer in-car, just-in-time stress management interventions. We look at the influence of driving-related stress to find the right time to provide personalized and contextually aware interventions. We address this challenge with a data-driven approach that takes into consideration driving-induced stress, driver (cognitive) availability, and indicators of risky driving behavior such as lane departures and high steering reversal rates. We ran a study with sixteen commuters during morning and evening traffic while applying in-situ experience sampling. During 45 minutes of driving through various scenarios, including city, highway, and neighborhood roads, we captured physiological measurements, video of participants and their surroundings, and CAN bus driving data. An initial review of the data shows that stress levels varied widely, between 2 and 9 on a scale from 0 (minimum) to 10 (maximum). We conclude with a discussion on how to prepare the data to train supervised algorithms that find the right time to intervene on stress while driving.
On-road Guided Slow Breathing Interventions for Car Commuters
This is the first on-road study testing the efficacy and safety of guided slow breathing interventions in a car. This paper presents design and experimental implications when evolving from prior simulator studies to on-road scenarios. We ran a controlled study (N=40) testing a haptic guided breathing system in a closed circuit under stressed and non-stressed driving conditions. Preliminary results validate prior findings about the efficacy and safety of the intervention. Initial qualitative analysis shows an overall positive acceptance and no safety-critical incidents (e.g., hard brakes or severe lane departures); all participants graded the intervention as safe for real traffic applications. Going forward, additional analysis is needed before exposing commuters to the intervention on public roads.
The Effect of Rotational Jitter on 3D Pointing Tasks
Even when a tracker is in a static position, data acquired from 6 Degrees of Freedom (6DoF) trackers is affected by noise, which is typically called jitter. In this study, we analyzed the effects of 3D rotational jitter on Virtual Reality (VR) controllers in a 3D Fitts’ law experiment, which explored how such jitter affects user performance. Eight subjects performed a Fitts’ law task with and without additional jitter on the cursor. Results show that while the error rate significantly increased above ±0.5° jitter and subjects’ effective throughput started to decrease significantly above ±1° jitter, there was no significant effect on users’ movement time. Further, the Fitts’ law movement time model was affected when ±2° jitter was applied to the tracker. According to these results, ±0.5° jitter on the controller does not significantly affect user performance for the tasks explored here. The results of our study can guide the design of 3D controllers and tracking systems for 3D user interfaces.
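For readers less familiar with the measures referenced above (the movement time model and effective throughput), the standard ISO 9241-9 style formulation of Fitts’ law is sketched below; this is the conventional form, not necessarily the exact variant fitted in the paper:
\[ MT = a + b\,\log_2\!\left(\frac{D}{W} + 1\right), \qquad TP = \frac{\log_2\!\left(\frac{D}{W_e} + 1\right)}{MT}, \qquad W_e = 4.133\,\sigma \]
where D is the distance to the target, W its width, W_e the effective width computed from the standard deviation σ of observed endpoints, and a, b are empirically fitted constants. Rotational jitter inflates σ (and hence W_e), which is why throughput can degrade even when raw movement time does not.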
Kirigami Keyboard: Inkjet Printable Paper Interface with Kirigami Structure Presenting Kinesthetic Feedback
We propose a DIY process to produce customized paper keyboards with kinesthetic feedback that interact with touchscreens. The process is built on two techniques: kirigami and printable double-layered circuits. Our goal is to improve the extensibility and usability of various interfaces made with 2D paper substrates. First, our kirigami structures provide kinesthetic sensations whose z-directional key stroke is comparable to that of traditional keyboards. In order to design keys with appropriate stroke and reaction force, we adopted the Rotational Erection System (RES). Second, printable double-layered circuits allow users to easily adjust input layouts. This easy-to-customize keyboard can be especially useful for those who have specific requirements for input devices.
A Left-Hand Advantage: Motor Asymmetry in Touchless Input
Touchless gesture is a common input type when interacting with large displays or virtual and augmented reality applications. In touchless input, users may alternate between hands or use bimanual gestures. But touchless performance in nondominant hands is little explored—even though cognitive science and neuroscience studies show cerebral hemispheric specialization causes performance differences between dominant and nondominant hands in lateralized individuals. Drawing on theories that account for between-hand differences in rapid-aimed movements, we characterize motor asymmetry in touchless input. Results from a controlled study (n = 20, right-handed) show freehand touchless input produces significantly smaller between-hand performance differences than a mouse in pointing and dragging. We briefly discuss the HCI implications of motor asymmetry in an input type.
Off-Surface Tangibles: Exploring the Design Space of Midair Tangible Interaction
Tangibles on interactive tabletops are tracked by the surface they are placed on and have been shown to benefit the interaction. However, they are tied to the surface. When picked up, they are no longer recognized and lose any connection to virtual objects shown by the table. We introduce the interaction concept of Off-Surface Tangibles that are tracked by the surface but continue to support meaningful interactions when lifted off the surface. We present a design space for Off-Surface Tangibles, and design considerations when employing them. We populate the design space with prior work and introduce possible interaction designs for further research.
Overgrown: Supporting Plant Growth with an Endoskeleton for Ambient Notifications
Ambient notifications are an essential element in supporting users in their daily activities. Designing effective and aesthetic notifications that balance the alert level while maintaining an unobtrusive dialog requires them to be seamlessly integrated into the user’s environment. In an attempt to employ the living environment around us, we designed Overgrown, an actuated robotic structure capable of supporting a plant as it grows over the structure. As a plant endoskeleton, Overgrown aims to engage human empathy towards living creatures to increase the effectiveness of ambient notifications while ensuring better integration with the environment. In a focus group, Overgrown was perceived as having personality, showed potential as a user’s ambient avatar, and was considered well suited for social experiments.
Towards Data-Driven Sword Fighting Experiences in VR
We present a data-driven animated character capable of blocking attacks from a user in a VR sword fighting experience. The system uses motion capture data and a machine learning model to recreate a believable blocking behaviour, suggesting the viability of full-featured data-driven interactive characters in VR. Our work is part of a larger vision of VR interaction as a two-level problem, separating spatial details from design concerns. In this context, here we provide the designers of the experience with a character from which a “blocking” behaviour can be requested without further spatial specifications. This puts down a first building block in the construction of a controllable data-driven VR sword fighter capable of multiple behaviours.
JeL: Connecting Through Breath in Virtual Reality
We present JeL, a bio-responsive immersive installation for interpersonal synchronization through breathing. In JeL, two users are immersed in a virtual underwater environment, where their individual breathing controls the movement of a jellyfish. As users synchronize their breathing, a virtual glass-sponge-like structure starts to grow, representing the users’ physiological synchrony. JeL explores a novel form of interpersonal interaction in virtual reality that aims to connect users to their physiological state through biofeedback, to each other through physiological synchronization, and to nature through connecting with a jellyfish and collaboratively growing a glass-sponge-inspired sculpture. This form of immersive, bio-responsive interaction could ultimately be used to encourage self-awareness, a feeling of connectedness, and consequently pro-social and pro-environmental attitudes. Here, we describe the motivation, inspiration, design elements, and future work involved in bringing this system to fruition.
Towards a Framework for Validating the Matching Between Notifications and Scents in Olfactory In-Car Interaction
Olfactory notifications have been proven to have a positive impact on drivers. This has motivated the use of scents to convey driving-relevant information. Research has proposed the use of such scents as lemon, peppermint, lavender and rose for in-car notifications. However, there is no framework to identify which scent is the most suitable for every application scenario. In this paper, we propose an approach for validating a matching between scents and driving-relevant notifications. We suggest a study in which the olfactory modality is compared with a puff of clean air, visual, auditory, and tactile stimuli while performing the same driving task. For the data analysis, we suggest recording the lane deviation, speed, time required to recover from the error, as well as the perceived liking and comfort ratings. Our approach aims to help automotive UI designers make better decisions about choosing the most suitable scent, as well as possible alternative modalities.
Augmented Visuotactile Feedback Support Sensorimotor Synchronization Skill for Rehabilitation
Augmented visual-audio feedback supports rhythmic motor performance in both sports training and sensorimotor synchronization practice. In home-based rehabilitation for minor stroke patients, training a fine motor skill using rhythms not only helps recover sophisticated motion ability but also increases patients’ confidence and supports mental health recovery. Auditory information has been shown to have advantages for improving rhythmic motion performance, but it can be masked by environmental noise and may be intrusive to bystanders. Under these circumstances, patients may be reluctant to practice actively because of difficulties hearing the auditory stimuli or concern about disturbing others. To address this issue, we explored an inconspicuous way of providing vibrotactile feedback through a wristband. To investigate the general feasibility of a sensorimotor synchronization task, we conducted a preliminary user study with 16 healthy participants, comparing visual-tactile feedback with visual-audio, visual-audio-tactile, and visual-only feedback. Results showed that rhythmic motion accuracy with visual-tactile feedback has a facilitatory effect equivalent to that of visual-audio feedback. In addition, visual-tactile feedback supports smoother movements than visual-audio feedback. In the future, after refinement with stroke patients, the system could support customization for different levels of sensorimotor synchronization training.
Modulating Personal Audio to Convey Information
A long-standing problem in the design of auditory displays is how to design audio feedback that is aesthetically appealing and comfortable to listen to. Many systems focus solely on function and do not consider these other factors. This can lead to annoyance for users or, more extremely, abandonment of the system entirely. Instead of communicating information through sound that is built into the system, an alternative method is to modulate acoustic parameters of a user’s own music to encode information. This method – music modulation – has successfully been used in systems for conveying navigational data while walking and listening to music. This paper discusses the potential of applying this method to other contexts and types of data. We present a number of acoustic parameters of music that could be used to encode information and discuss a number of factors affecting the design of sonification systems employing them, with the goal of sparking discussion and further research into this potentially effective method of conveying information through sound.
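To make the music-modulation idea above more concrete, the short Python sketch below maps one normalized data value onto a few acoustic parameters. The specific parameters and ranges (tempo factor, pitch shift, low-pass cutoff) are hypothetical choices for illustration only, not parameters proposed in the paper.

# Minimal sketch of music modulation: encode a single normalized data value
# as small changes to acoustic parameters of the user's own music.
# The parameter set and ranges here are illustrative assumptions only.
def modulate_music_parameters(value, lo=0.0, hi=1.0):
    # Clamp and normalize the incoming data value to [0, 1].
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return {
        "tempo_factor": 0.9 + 0.2 * t,             # slightly faster playback for higher values
        "pitch_semitones": -1.0 + 2.0 * t,         # pitch shift centred on the original pitch
        "lowpass_cutoff_hz": 2000.0 + 6000.0 * t,  # brighter timbre for higher values
    }

# Example: encode deviation from a walking route (0 = on course, 1 = far off course).
print(modulate_music_parameters(0.3))

In practice these parameters would drive an audio engine in real time; keeping the modulation range small is what preserves the aesthetics of the user's own music while still carrying information.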
Designing a Mobile Game That Develops Emotional Resiliency in Indian Country
Communities in Indian Country experience severe behavioral health inequities [11, 12]. Based on recent research investigating scalable behavioral health interventions and therapeutic best practices for Native American (NA) communities, we propose ARORA, a social and emotional learning intervention delivered over a networked mobile game that uses geosocial gaming mechanisms enhanced with augmented reality technology. Focusing on the Navajo community, we take a community-based participatory research approach to include NA psychologists, community health workers, and educators as co-designers of the intervention activities and gaming mechanisms. Critical questions involve the operation of the application across low-infrastructure landscapes as well as the scalability of design practices to be inclusive of the many diverse NA cultural communities in Indian Country.
TurtleTalk: An Educational Programming Game for Children with Voice User Interface
Interest in programming education for children is growing. This research explores the possibilities of utilizing a voice user interface (VUI) in children’s programming education. We designed an interactive educational programming game called TurtleTalk, which converts the various utterances of children into code using a neural network and displays the results on a screen (Figure 1). Through the VUI, children can move the turtle, the voice agent of the game, to the target location and learn the basic programming concepts of “sequencing” and “iteration.” We conducted a preliminary user study where eight children played the game and took part in a post-hoc interview. The results showed that voice interaction with TurtleTalk led children to be more immersed in the game and to understand the elements of programming with ease and confidence.
OrigamiSpeaker: Handcrafted Paper Speaker with Silver Nano-Particle Ink
In this study, we present an OrigamiSpeaker which can be handcrafted with silver nano-particle ink on a paper substrate. The OrigamiSpeaker is based on the “electrostatic loudspeaker” technique. The audio signal is amplified to a high voltage and applied to an electrode that vibrates to generate sound. By using Origami techniques, users are able to design various shapes of OrigamiSpeaker.
3DTactileDraw: A Tactile Pattern Design Interface for Complex Arrangements of Actuators
Creating tactile patterns for a grid or a 3D arrangement of a large number of actuators presents a challenge as the design space is huge. This paper explores two different possibilities of implementing an easy-to-use interface for tactile pattern design on a large number of actuators around the head. Two user studies were conducted in order to iteratively improve the prototype to fit user needs.
Audio-visual AR to Improve Awareness of Hazard Zones Around Robots
Navigating a space populated by fenceless industrial robots while carrying out other tasks can be stressful, as the worker is unsure about when she is invading the area of influence of a robot, which is a hazard zone. Such areas are difficult to estimate and standing in one may have consequences for worker safety and for the productivity of the robot. We investigate the use of multimodal (auditory and/or visual) head-mounted AR displays to warn about entering hazard zones while performing an independent navigation task. As a first step in this research, we report a design-research study (including a user study), conducted to obtain a visual and an auditory AR display subjectively judged to approach equivalence. The goal is that these designs can serve as the basis for a future modality comparison study.
First Steps Towards Walk-In-Place Locomotion and Haptic Feedback in Virtual Reality for Visually Impaired
This paper presents the first results of a user study in which people with visual impairments (PVI) explored a virtual environment (VE) by walking on a virtual reality (VR) treadmill. As recently suggested, we have now acquired the first results from our feasibility study investigating this walk-in-place interaction. This represents a new, more intuitive way of, for example, virtually exploring unknown spaces in advance. Our prototype consists of off-the-shelf VR components (i.e., treadmill, headphones, glasses, and controller) providing a simplified white cane simulation, and it was tested by six visually impaired subjects. Our results indicate that this interaction is still difficult, but promising, and an important step toward making VR more and better usable for PVIs. As an impact on the CHI community, we would like to make this research field known to a wider audience by sharing our intermediate results and suggestions for improvements, some of which we are already working on.
FabAuth: Printed Objects Identification Using Resonant Properties of Their Inner Structures
We present FabAuth, a method for identifying 3D-printed objects that exploits differences in their resonant properties. We focus on changing the internal structure of each object during the 3D printing process to assign it a unique resonant property, even when multiple objects have the same appearance. To identify an object, the method detects differences in resonant properties using vibration that passes through the 3D-printed object. The method can be applied even to low-filled 3D-printed objects as long as an acoustic wave can travel through the object from one sensor to another. To validate the method’s feasibility, we conducted a preliminary experiment to confirm whether it can be applied to low-filled 3D-printed objects and found that its average classification accuracy reached 92.2%.
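To give an intuition for this kind of identification, here is a minimal sketch of one plausible pipeline, assuming each object is enrolled by recording a vibration sweep passed through it: compute a coarse frequency-domain signature and match it against enrolled templates. The FFT-based feature and cosine-similarity matching are illustrative assumptions, not the authors' actual feature set or classifier.

# Illustrative resonance-fingerprint matching; not the paper's exact method.
import numpy as np

def spectral_signature(signal, n_bins=256):
    # Coarse, length-normalized magnitude spectrum of a vibration recording.
    spectrum = np.abs(np.fft.rfft(np.asarray(signal, dtype=float)))
    pooled = np.interp(np.linspace(0, len(spectrum) - 1, n_bins),
                       np.arange(len(spectrum)), spectrum)
    return pooled / (np.linalg.norm(pooled) + 1e-9)

def identify(recording, enrolled):
    # enrolled: dict mapping object_id -> previously computed signature.
    sig = spectral_signature(recording)
    return max(enrolled, key=lambda oid: float(np.dot(sig, enrolled[oid])))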
FingMag: Finger Identification Method for Smartwatch
Interacting with a smartwatch is difficult owing to its small touchscreen. A general strategy to overcome the limitations of the small screen is to increase the input vocabulary. A popular approach to do this is to distinguish fingers and assign different functions to them. As a finger identification method for a smartwatch, we propose FingMag, a machine-learning-based method that identifies the finger on the screen with the help of a ring. For this identification, the finger’s touch position and the magnetic field from a magnet embedded in the ring are used. In an offline evaluation using data collected from 12 participants, we show that FingMag can identify the finger with an accuracy of 96.21% in stationary geomagnetic conditions.
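As a rough illustration of the kind of classifier such a method might use, the sketch below trains a random forest on per-tap features (touch position plus a 3-axis magnetic field reading). The feature layout, stand-in data, and model choice are assumptions for illustration, not the authors' pipeline.

# Sketch of finger identification from touch position + ring-magnet field.
# Synthetic stand-in data; feature layout and classifier are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Per-tap features: [touch_x, touch_y, mag_x, mag_y, mag_z]
X = rng.normal(size=(600, 5))
# Labels: which finger touched (e.g., 0 = thumb, 1 = index, 2 = middle, 3 = ring)
y = rng.integers(0, 4, size=600)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("mean cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())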
Improving Two-Thumb Touchpad Typing in Virtual Reality
Two-Thumb Touchpad Typing (4T) using hand-held controllers is one of the common text entry techniques in Virtual Reality (VR). However, its performance is far below that of two-thumb typing on a smartphone. We explored the possibility of improving its performance focusing on the following two factors: the visual feedback of hovering thumbs and the grip stability of the controllers. We examined the effects of these two factors on the performance of 4T in VR in user experiments. Their results show that hover feedback had a significant main effect on the 4T performance, but grip stability did not. We then investigated the achievable performance of the final 4T design in a longitudinal study, and its results show that users could achieve a typing speed over 30 words per minute after two hours of practice.
Chinese Character Learning System
Because the formation structure of Chinese characters differs from that of Western languages, learners expend more effort grasping correct formation structure and pronunciation when studying Chinese characters. Various Chinese character learning systems have been proposed to help learners recognize characters, but they ignore handwriting and cultural background. We present a learning system that helps learners study Chinese characters more effectively. To increase the effectiveness of learning, our design combines pronunciation and character writing through the integrated use of a computer, projector, and camera. The system helps learners understand the meaning, cultural background, and formation structure of characters by using morphological and phonetic animation projection together with handwriting instruction. In comparison with screen-writing systems, it provides a more authentic learning experience by simulating the process of writing on real paper. We anticipate our system to be a starting point for exploring Chinese character instruction in terms of formation structure, pronunciation, and handwriting, and to be used in the classroom or at home in the future.
“It sounds like she is sad”: Introducing a Biosensing Prototype that Transforms Emotions into Real-time Music and Facilitates Social Interaction
This paper introduces a biosensing prototype that transforms emotions into music, helping people recognize and understand their own feelings and actions as well as those of other people. The study presents a series of three experiments with 20 participants in four emotional states: happiness, sadness, anger, and a neutral state. Their real-time emotions were captured through a wearable probe, Audiolize Emotion, which detects users’ EEG signals and composes the data into audio files that are played to the users themselves and to others. Finally, we conducted observations and interviews with participants to explore factors linked with social interaction, users’ perceptions of the music, and their reflections on using audio for self-expression or communication. We found that the Audiolize Emotion prototype triggers communication and self-expression in two ways: by building curiosity and by supporting communication through an extended form of expression. Based on the results, we provide future directions for exploring the field of emotion and communication further and plan to apply this knowledge to additional domains such as VR games and accessibility.
Vibrotactile Wristband for Warning and Guiding in Automated Vehicles
In this paper, we introduce a vibrotactile wristband for warning and guiding the driver based on road conditions in automated vehicles. The wristband receives commands from the host computer in the vehicle via Bluetooth and generates the corresponding vibration patterns with six vibration motors. Three vibration patterns are designed to guide the driver in the right direction during manual driving, and eight vibration patterns are designed to warn the driver about problems that the driving support system cannot solve during automated driving. Based on tactile illusions, we convert graphical markers into vibration patterns to reduce the driver’s memory burden and improve the accuracy of recognizing the patterns. To evaluate the performance of the vibrotactile wristband, we developed a virtual driving environment in which participants can experience the vibration patterns while driving a virtual vehicle.
Usability of Code Voting Modalities
Internet voting has promising benefits, such as cost reduction, but it also introduces drawbacks: the computer used for voting learns the voter’s choice. Code voting aims to protect the voter’s choice through voting codes that are listed on paper. To cast a vote, voters provide the voting code belonging to their choice. This additional step influences usability. We investigate three modalities for entering voting codes: manual entry, QR codes, and tangibles. The results show that QR codes offer the best usability, while tangibles are perceived as the most novel and fun.
Towards Augmenting IVR Communication with Physiological Sensing Data
Immersive Virtual Reality (IVR) does not afford social cues for communication, such as sweaty palms indicating stress, as users can only see an avatar of their collaborator. Prior work has shown that such cues are important for successful collaboration, which is why we propose to augment IVR communication by (1) capturing physiological sensing data in real time and (2) leveraging the unlimited virtual space to display it. We present the results of a focus group (N=7) and a preliminary study (N=32) that investigate how this data may be visualized in a playful interaction and the effects such visualizations have on collaborators’ performance.
Byte.it: Discreet Teeth Gestures for Mobile Device Interaction
Byte.it is an exploration of the feasibility of using miniaturized, discreet hardware for teeth-clicking as hands-free input for wearable computing. Prior work has been able to identify teeth-clicking of different teeth groups. Byte.it expands on this work by exploring the use of a smaller and more discreetly positioned sensor suite (accelerometer and gyroscope) for detecting four different teeth-clicks for everyday human-computer interaction. Initial results show that an unobtrusive position on the lower mastoid and mandibular condyle can be used to classify teeth-clicking of four different teeth groups with an accuracy of 89%.
Alexa, Can You Help Us Solve This Problem?: How Conversations With Smart Personal Assistant Tutors Increase Task Group Outcomes
Despite a growing body of research about the design and use of Smart Personal Assistants, existing work has mainly focused on their use as task support for individual users in rather simple problem scenarios. Less is known about their ability to improve collaboration among multiple users in more complex problem settings. In our study, we directly compare 21 groups who either use a Smart Personal Assistant tutor or a human tutor when solving a problem task. The results indicate that groups interacting with Smart Personal Assistant tutors show significantly higher task outcomes and higher degrees of collaboration quality compared to groups interacting with human tutors. The results are used to suggest areas for future research in the field of computer-supported collaboration.
"Watch Out!": Semi-Autonomous Vehicles Using Assertive Voices to Grab Distracted Drivers’ Attention
Semi-autonomous vehicles are gradually appearing on our roads and have already been involved in several high-profile accidents. These accidents usually occurred because the driver did not intervene in time when the automated system failed. An important issue for the design of future semi-autonomous vehicles is identifying effective methods for alerting drivers to critical events that require their intervention. To investigate this, we report the results of a lab-based simulator study in which participants had to respond to driving events while also playing an immersive mobile game on a phone. Results show that a more assertive voice alerting the driver to driving events resulted in faster reaction times and was perceived as more urgent than a less assertive voice, regardless of how immersed the driver was in the mobile game. These results suggest that the designers of future semi-autonomous vehicles should use assertive voice commands to alert drivers to critical events that require their intervention.
Bubble: Wearable Assistive Grasping Augmentation Based on Soft Inflatables
We present Bubble, a pneumatically actuated wearable device that enables people with hand disabilities to use their own hands to grasp objects without fully bending their fingers. Bubble offers a novel approach to grasping, where slim, ultra-lightweight silicone actuators are attached to the fingers. When the user wishes to grasp an object, the silicone units inflate pneumatically to fill the available space around the object. The inflatable units are interchangeable, can be independently inflated, and can be positioned anywhere on the fingers in any orientation, thereby enabling a wide variety of grasping gestures including the palmar grasp, pinch, etc. In this paper, we describe the implementation of our current prototype, the fabrication process of the soft inflatable units, as well as our preliminary study to evaluate our system’s grasping capability.
Design and Evaluation of a Texture Rendering Method for Electrostatic Tactile Display
Experiencing a sense of touching displayed objects is a common goal of researchers and users. In this paper, a new texture rendering method for electrostatic tactile displays is proposed, in which the lateral force on the moving finger is calculated and generated via electrostatic attraction, based on recorded data such as acceleration signals and the friction properties of actual materials. The electrostatic attraction force is adjusted according to the exploration speed of the user’s finger. We designed user studies with adjective and similarity ratings on the roughness and stickiness of virtual materials; the results demonstrate that the sense of touching the rendered materials is similar to that of the real materials, indicating that the proposed texture rendering method can be applied to display tactile information on an electrostatic tactile display.
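For context, a common parallel-plate approximation from the electrovibration literature (not necessarily the exact formulation used in this paper) relates the applied voltage to the added attraction force and the resulting lateral friction force felt by the sliding finger:
\[ F_e \approx \frac{\varepsilon_0 \varepsilon_r A V^2}{2 d^2}, \qquad F_{\text{lateral}} = \mu \left( F_n + F_e \right) \]
where V is the applied voltage, A the finger–surface contact area, d the insulator thickness, ε₀ and ε_r the permittivities, μ the friction coefficient, and F_n the finger’s normal force. Modulating V as a function of finger speed and position therefore modulates the lateral force that renders the texture.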
Cognitive Aid: Task Assistance Based On Mental Workload Estimation
In this work, we evaluate the potential of using wearable, non-contact (infrared) thermal sensors to measure mental workload through a user study (N=12). Our results indicate the possibility of estimating mental workload from the temperature changes detected by the prototype as participants perform two task variants with increasing difficulty levels. While the sensor accuracy and the design of the prototype can be further improved, the prototype shows the potential of building AR-based systems with cognitive-aid technology that provide ubiquitous task assistance based on changes in mental workload demands. As a next step, we are integrating our prototype into an existing AR headset (i.e., the Microsoft HoloLens).
An Exploratory Study for Evaluating the Use of Floor Visualisations in Navigation Decisions
Different environmental cues can influence our spatial behaviour when we explore unfamiliar spaces. Research shows that the presence of other people affects our navigation decisions. To investigate the use of this environmental cue as a navigation aid in novel environments, we first explored visualisations that represent the historical presence of people. We carried out an exploratory study (n=12) to examine whether and how people understand and use floor visualisations to make navigational choices. Results suggest that floor visualisations influenced participants’ navigation decisions. Our findings showed that implicit visualisations were difficult to interpret compared to explicit visualisations. Thematic analysis of participants’ interpretations revealed a contextual interpretation of explicit visualisations and a non-contextual interpretation of implicit visualisations. Additionally, thematic analysis revealed that spatial behaviour is influenced by several factors, including self-centredness, environmental features, and the presence of others. These insights will inform the design of history-enriched floor interfaces that direct people in the built environment.
The Challenges of Creating Engaging Content: Results from a Focus Group Study of a Popular News Media Organization
The process of creating content for distribution via social media platforms is not trivial for social media editors: the goal of creating content that is both serious and engaging is challenging, and guidelines or rules are unclear and differ across and between platforms. For creators of serious content, such as news organizations, advertisers, or educational institutions, engagement has a deeper meaning beyond likes, shares, etc.; it is aimed at the audience actually processing the underlying content associated with a social media post. In this research, we report findings from a focus group study that aimed to understand the process and challenges of creating engaging content across three social media platforms in a major news organization. The findings indicate that creating engaging content is effort- and time-consuming, and they highlight the need to support the process of creating engaging content across multiple social media platforms. Our longer-term goal is to develop a system that supports social media editors in creating engaging content, with which they can select engaging passages from news articles and choose the platforms on which to publish the content.
FlexPass: Symbiosis of Seamless User Authentication Schemes in IoT
This paper presents a new user authentication paradigm based on a flexible authentication method, FlexPass. FlexPass relies on a single, user-selected secret that can be reflected in both textual and graphical authentication secrets. Such an approach facilitates adaptability in today’s ubiquitous interaction contexts within the Internet of Things (IoT), in which end-users authenticate multiple times per day through a variety of device types. We present an initial evaluation of the new authentication method based on an in-lab experiment with 32 participants. Analysis of the results reveals that the FlexPass paradigm is memorable and that users like the adaptable perspective of the new approach. The findings are expected to scaffold the design of more user-centric knowledge-based authentication mechanisms in today’s ubiquitous computing realms.
Navigating Uncertainty in the Future of Work: Information-Seeking and Critical Events Among Online Freelancers
Online freelancer marketplaces offer workers the flexibility and control they desire. However, workers also struggle with the uncertainty that accompanies these benefits. In traditional brick-and-mortar workplaces, workers who experience uncertainty during specific phases of their assimilation into a new role or organization engage in information-seeking behaviors. Understanding these phases of heightened uncertainty helps organizations better cater to workers’ informational needs, e.g., through mentorship programs. While understanding the uncertainty that online workers experience as they assimilate into their careers is critical to understanding their needs, such an understanding is currently severely limited. We therefore conducted semi-structured interviews with 29 online freelancers to investigate critical events that contribute to uncertainty early in their online careers. We situate these critical events within the context of organizational assimilation and describe how participants employ diverse information-seeking tactics.
Human-Data Interaction in the Context of Care: Co-designing Family Civic Data Interfaces and Practices
By storing data about citizens for the purposes of service provision, private and public organizations have disempowered the people they serve, shifting the balance of power toward themselves as data holders. Through three co-production engagements involving families receiving “early help” support from their local authority and the support workers involved in supplying this care, we identified existing data usage practices, explored the impact of those practices on the supported families, and co-designed new and improved approaches – both technological and practice-based – that are perceived to offer families fairer treatment and greater influence, and to enable better decision-making. Our findings show that by applying Human-Data Interaction and giving supported families direct access to see and manipulate their own data, both during and outside of the support engagement, the locus of decision-making could be shifted towards the data subject.
Affective Assistants: A Matter of States and Traits
This work presents a model for the development of affective assistants based on the pillars of user states and traits. Traits are defined as long-term qualities like personality, personal experiences, preferences, and demographics, while the user state comprises cognitive load, emotional states, and physiological parameters. We discuss useful input values and the necessary developments for an advancement of affective assistants with the example of an affective in-car voice assistant. With our work we help to shape the vision of our community regarding affective interfaces, not just in the automotive domain but also for other application areas.
Implicit User Calibration for Model-based Gaze-tracking System using Face Detection around Optical Axis of Eye
In recent studies of gaze-tracking systems using 3D model-based methods, the optical axis of the eye is estimated without user calibration. The remaining problem for achieving implicit user calibration is estimating the difference between the optical axis and the visual axis of the eye (the angle kappa). In this paper, we propose an implicit user calibration method that uses face detection around the optical axis of the eye. We assume that the peak of the average of the face-region images indicates the visual axis of the eye in the eye coordinate system. The angle kappa is then estimated as the difference between the optical axis of the eye and this peak. We developed a prototype system with two cameras and two IR-LEDs. The experimental results showed that the proposed method can estimate the angle kappa more accurately than a method that uses Itti’s saliency map instead of face detection.
“I Have a Life”: Teacher Communication & Management Outside the Classroom
Over the past decade, there has been an increase in the use of educational software within classrooms, as well as continuing demands on K-12 teachers that extend beyond in-class activities. Yet we still do not have a deep understanding of current teacher behaviors outside the classroom. Our paper presents insights on how to better design for technology use in this space by reporting on key themes such as communication, privacy, and student technology at home. These findings translate into design implications: increasing transparency around student data, designing first for technology that students have access to in the home (e.g., mobile), and supporting teachers’ need to set personal boundaries within communication tools.
Modeling Error Rates in Spatiotemporal Moving Target Selection
When we try to acquire a moving target, such as hitting a virtual tennis ball in a computer game, we must hit the target in the instant it passes through our hitting range. In other words, we have to acquire the target in the spatial and temporal domains simultaneously. We call this type of task spatiotemporal moving target selection, which we find is common yet understudied in HCI. This paper presents a tentative model for predicting error rates in spatiotemporal moving target selection. Our model integrates two recent models, the Ternary-Gaussian model and the Temporal Pointing model, to explain the influence of spatial and temporal constraints on pointing errors. In a 12-subject pointing experiment with a computer mouse, our model fit the data well (R² = 0.904). We discuss future research directions on this topic and how it could help the design of dynamic user interfaces.
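One simple way to see how spatial and temporal constraints might combine, sketched here purely for intuition and under an independence assumption that need not match the authors' actual integration of the two models, is:
\[ P(\text{error}) \;=\; 1 - \bigl(1 - P_{\text{spatial}}\bigr)\bigl(1 - P_{\text{temporal}}\bigr) \]
where P_spatial is the probability that the selection endpoint falls outside the target’s spatial extent (e.g., from a Gaussian endpoint-distribution model) and P_temporal is the probability that the selection is triggered outside the target’s temporal window.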
Trajectory Prediction Model for Crossing-Based Target Selection
In some cases, crossing-based target selection can achieve lower error rates and higher interaction speed. Most research on target selection has focused on analyzing interaction outcomes, yet trajectories play a much more important role in crossing-based target selection than in other interaction techniques. An accurate model of trajectories could help designers predict interaction outcomes during the selection process rather than only at its end. We propose a trajectory prediction model for crossing-based target selection tasks based on dynamical model theory. Simulation results show that our model performs well in predicting trajectories, endpoints, and hitting times for target-selection motion, with average errors of 17.28% for trajectories, 18.17 pixels for endpoints, and 11.50% for hitting time.
It Sounds Like A Woman: Exploring Gender Stereotypes in South Korean Voice Assistants
For the past few years, voice assistants (VAs) have been widely used around the world. Current voice assistants provide a gendered voice to sound more natural and lifelike, but most use a female voice as the default setting. Our study explored how gender stereotypes of women are reflected in voice assistants with female voices by analyzing five South Korean VAs. We collected 1,602 responses from the VAs and conducted a thematic analysis to examine patterns of gender stereotyping. We categorized three distinct characteristics: (1) bodily display, (2) subordinate attitude, and (3) sexualization. We suggest that these stereotypical traits could create a power dynamic between users and female agents.
Is Seeing Believing?: The Effect of Morphological Congruent Visual Feedback on Mediated Touch Experience
Mediated social touch (MST) provides physical contact over a distance for geographically separated individuals. Despite advances in actuator technologies, it remains difficult to recreate the feel and sensation of natural human touch. Combining touch with morphologically congruent visual feedback may overcome limitations related to the low fidelity of current-day tactile displays. Being able to both feel and see the touch act being initiated on an input device could enhance the perceived realism of the touches. In two studies, we test the effects of such visual feedback on self-reported naturalness of touch, social presence, and emotional experiences.
Personal Safety App Effectiveness
We present the results of our study of people’s responses to unsafe scenarios with personal safety apps. Several such apps have been developed, offering features such as a location-sharing panic button. However, there is little research into how people might respond in different personal safety situations, and how such apps might contribute to their response. We performed a lab study with 30 participants and used semi-structured interviews to gather responses to a set of three increasingly risky scenarios, both before and after the installation of a personal safety app. From our results, participants stated that they would use mobile phones and personal safety apps most often to support “collective” responses, with calls to others for assistance. Further, while collective responses were often combined with “avoidance” or “protective” responses, when using a personal safety app, collective responses were less often combined with other reaction types. Overall, our results suggest some potential benefit from personal safety apps, though more study is required.
Perception of Emotion in Body Expressions from Gaze Behavior
Developing affectively aware technologies is a growing industry. To build them effectively, it is important to understand the features involved in discriminating between emotions. While many technologies focus on facial expressions, studies have highlighted the influence of body expressions over other modalities for perceiving some emotions. Eye tracking studies have evaluated the combination of face and body to investigate the influence of each modality; however, few, if any, have investigated the perception of emotion from body expressions alone. This exploratory study aimed to evaluate the discriminative importance of dynamic body features for decoding emotion. Eye tracking was used to monitor participants' gaze behavior while they viewed clips of non-acted body movements to which they associated an emotion. Preliminary results indicate that the two regions attended to most often and longest were the torso and the arms. Further analysis is ongoing; however, the initial results independently confirm prior studies that did not use eye tracking.
Public Speaking Anxiety in a Real Classroom: Towards Developing a Reflection System
Public speaking is recognized as an important skill for success in learning and education. However, the mere thought of public speaking elicits anxiety in many people. This anxiety may manifest in a student’s nonverbal behaviors and physiological responses which can negatively affect both performance and evaluation. While public speaking training systems have employed a variety of speaker cues to automatically evaluate and score public speaking performance, many are built on data collected in a lab setting. However, it is difficult to achieve the same level of anxiety in these environments. We posit that students would benefit from a system that provides the ability to reflect on and practice public speaking presentation skills. This preliminary study explores public speaking anxiety from physiological responses and nonverbal behaviors of English language learners in-situ as a first step toward the design and development of a public speaking practice and reflection system.
(Over)Trust in Automated Driving: The Sleeping Pill of Tomorrow?
Both overtrust in technology and drowsy driving are safety-critical issues. Monitoring a system is a tedious task, and overtrust in technology might also influence drivers' vigilance, which in turn could multiply the negative impact of both issues. The aim of this study was to investigate whether trust in automation affects drowsiness. Thirty participants in two age groups completed a 45-minute ride in a level-2 vehicle on a real test track. Trust was assessed before and after the ride with a subjective trust scale, and drowsiness was captured during the experiment using the Karolinska Sleepiness Scale. Results show that even a short initial exposure to the system significantly increases trust in automated driving. Drivers who trust the automated vehicle more show stronger signs of drowsiness, which may negatively affect their monitoring behavior. Drowsiness detection is important for automated vehicles, and the behavior of drowsy drivers might help to infer trust in an unobtrusive way.
Infrastructure vs. Community: Co-spaces Confront Digital Nomads’ Paradoxical Needs
Co-working and co-living companies are on the rise globally, and increasing participation in the gig economy has extended the range of users of community-based spaces (co-spaces) and given rise to a variety of community models for supporting them. In this paper, we focus specifically on the needs of digital nomads in co-spaces who struggle to pursue their personal and professional freedom. In doing so, we raise awareness of existing tensions that currently hinder the social engagement of these individuals in co-space settings.
Cognitive Learning: How to Become William Shakespeare
Writing is a fundamental task in our daily life. Existing writing improvement tools mostly focus on low-level grammar error correction rather than enhancing users' writing styles at the cognitive level. In this work, we present a computational approach that gives learners a fast but effective learning experience with the help of automatic style transfer, visual stylometry analytics, machine teaching, and practice. Our system fuses vividly visualized style features and principles with informative examples, which together shape and drive a personalized cognitive learning experience. We demonstrate the effectiveness of our system in a scenario of learning from William Shakespeare.
The Helpless Soft Robot – Stimulating Human Collaboration through Robotic Movement
Soft robots have a set of unique traits, such as excelling at grasping fragile objects, which stem from the highly compliant materials used to produce them. However, very little research has so far focused on the interplay of the different interaction partners in human-soft-robot collaboration. In this paper, we present the results of our investigation of the influence of two movement patterns on the willingness of random passersby to assist a soft robot in completing a task.
Should an Agent Be Ignoring It?: A Study of Verbal Abuse Types and Conversational Agents’ Response Styles
Verbal abuse is a hostile form of communication intended to harm the other person. With a plethora of AI solutions around, the target may be a conversational agent. In this study, involving 3 verbal abuse types (Insult, Threat, Swearing) and 3 response styles (Avoidance, Empathy, Counterattacking), we examine whether a conversational agent's response style under varying abuse types influences the emotions found to mitigate people's aggressive behaviors. Sixty-four participants, each assigned to one of the abuse type conditions, interacted with the three conversational agents in turn and reported their feelings of guilt, anger, and shame after each session. Our results show that, regardless of the abuse type, the agent's response style has a significant effect on user emotions: participants were less angry and more guilty with the empathetic agent than with the other two agents. Our findings have direct implications for the design of conversational agents.
A Gaze-Based Exploratory Study on the Information Seeking Behavior of Developers on Stack Overflow
Software developers use Stack Overflow on a daily basis to search for solutions to problems they encounter during bug fixing and feature enhancement. Prior work has mined Stack Overflow data, for example to predict unanswered questions or to study how and why people post. However, no work exists on how developers actually use, or more importantly, read the information presented to them on Stack Overflow. To better understand this behavior, we conduct an eye tracking study on how developers seek information on Stack Overflow while tasked with creating human-readable summaries of methods and classes in large Java projects. Eye gaze data is collected on both the source code elements and the Stack Overflow document elements at a fine, token-level granularity using iTrace, our eye tracking infrastructure. We found that developers look at the text of posts more often than the title. Code snippets were the second most-viewed element, while tags and votes were rarely looked at. When switching between Stack Overflow and the Eclipse Integrated Development Environment (IDE), developers often looked at method signatures and then switched to code and text elements on Stack Overflow. Such heuristics provide insight for automated code summarization tools as they decide what to weight more heavily while generating summaries.
Supporting Data Workers To Perform Exploratory Programming
Data science is an open-ended task in which exploratory programming is a common practice. Data workers often need faster and easier ways to explore alternative approaches to obtain insights from data, which frequently compromises code quality. To understand how well current IDEs support this exploratory workflow, we conducted an observational study with 19 data workers. In this paper, we present two significant findings from our analysis that highlight issues faced by data workers: (a) code hoarding and (b) excessive task switching and code cloning. To mitigate these issues, we provide design recommendations based on existing work, and propose to augment IDEs with an interactive visual plugin. This plugin parses source code to identify and visualize high-level task details. Data workers can use the resulting visualization to better understand and navigate the source code. As a realization of this idea, we present AddIn, an add-in for RStudio that identifies and visualizes the hypotheses that a data worker is testing for statistical significance through her source code.
Where in the Cloud is my Data?: Location and Brand Effects on Trust in Cloud Services
We all hold stereotypes about different locations across the world. Do these stereotypes affect our attitudes toward cloud services when we are told the location of the servers storing our data? And, does it matter if the cloud services are provided by a well-known brand? To answer these questions, a 2 X 11 experiment was conducted to examine the effects of location and brand cues on users’ reaction to cloud services. Brand authority of the hosting company had a positive effect. More importantly, location of the cloud servers also affected outcomes, in that users tended to prefer some locations (e.g., US, Europe, Oceania, China) over others (e.g., Russia) for storing their cloud data. These findings have theoretical implications as well as design suggestions.
Acceptance of Self-Driving Cars: Does Their Posthuman Ability Make Them More Eerie or More Desirable?
The arrival of self-driving cars and smart technologies is fraught with controversy, as users hesitate to cede control to machines for vital tasks. While advances in engineering have made such autonomous technology a reality, considerable design work is needed to motivate their mass adoption. What are the key predictors of people’s acceptance of self-driving cars? Is it the ease of use or coolness aspect? Is it the degree of perceived control for users? We decided to find out with a survey (N = 404) assessing acceptance of self-driving cars, and discovered that the strongest predictor is “posthuman ability,” suggesting that individuals are much more accepting of technology that can clearly outclass human abilities.
Configuring Personal Data for a Quantified-Self Archive
An audience exists for personal information, including quantified-self data, beyond an individual’s social network and social communities. In the era of big data, the research and policy arenas are two areas where up-to-date assemblages of personal information have market value. In this ongoing study, we examine the long-term value of small data [2], acknowledging that there is also a societal need and an audience for rich, personalized collections of digital self-tracking records. Using qualitative research methods, we interviewed 18 people to explore the nature of self-tracking data that exists as a byproduct of daily life, and their sense of why and how their data could be archived for posterity. In the process, the intellectual and design challenges of a digital quantified-self archive are explored.
Towards a Typology of Self-Tracking Gaps
This paper introduces an emerging typology of the ‘absences’ that confound the study of self-tracking. A review of the literature, and the ongoing work of the authors on the long-term value of self-tracking data, is used as a resource to develop descriptions of levels and types of ‘gaps’ that emerge as part of the activities, behaviors, technologies, and data practices of self-tracking. Such gaps are shown to be both common and insightful, highlighting the economic, social, behavioral, and psychological layers that undergird self-tracking.
Information-theoretic Sensorimotor Foundations of Fitts’ Law
We propose a novel, biologically plausible cost/fitness function for sensorimotor control, formalized with the information-theoretic principle of empowerment, a task-independent universal utility. Empowerment captures uncertainty of different kinds in the perception-action loop (e.g., noise, delays) in a single quantity. We present the formalism in a Fitts' law-type goal-directed arm movement task and suggest that empowerment is one potential underlying determinant of movement trajectory planning in the presence of signal-dependent sensorimotor noise. Simulation results demonstrate the temporal relation between empowerment and various plausible control strategies for this specific task.
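For readers unfamiliar with the formalism, empowerment is commonly defined as the channel capacity of the agent's actuation-sensing loop. The equation below states this standard definition from the empowerment literature as background; the paper's task-specific formulation may differ.

```latex
% Empowerment as the channel capacity of the perception-action loop
% (standard definition from the literature; given as background only).
% A^n_t : a sequence of n actions starting at time t
% S_{t+n}: the resulting sensor state; s_t: the current state
\mathfrak{E}(s_t) \;=\; \max_{p(a^n_t)} \; I\!\left(A^n_t \,;\, S_{t+n} \mid s_t\right)
```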
Discovering Users for Technical Innovations through Systematic Matchmaking
Every year, Human-Computer Interaction (HCI) researchers create new technical innovations. Unfortunately, the User-Centered Design (UCD) processes used by most designers in HCI do not help much when the challenge is to find the best users for these innovations. We augmented the matchmaking design method, making it more systematic in considering potential users by using a list of 399 occupation groups and by incorporating the customer discovery interviews of the Lean Startup. We then assessed our new design method by searching for users who might benefit from two different technical innovations: ViBand and PaperID. We found that matchmaking with the list of occupation groups helped surface users we likely would not have considered otherwise. In addition, the customer discovery interviews helped generate better applications and additional target users for the innovations. This paper documents our process, the design method, and the insights we gained from using it.
Using Reinforced Implementation Intentions to Support Habit Formation
Despite promising results, the psychological approach of implementation intentions remains underused in ‘in-the-wild’ habit formation apps. The majority of existing apps focus on self-tracking and reminders, but these can hinder the development of habits. This study proposes a new mechanism to support habit formation using reinforced implementation intentions. Our findings suggest that adding reinforcement is indeed useful for maintaining the level of compliance, but not necessarily for automaticity. We also discuss how the use of reinforcement can be improved in the future.
Modeling Drone Pointing Movement with Fitts’ Law
Although numerous studies have focused on interfaces for maneuvering drones, a method for evaluating these interfaces has not been established. In this study, a pointing experiment was carried out with a drone. The results indicate that target distance and target width affect the movement time and error rate while maneuvering, which is consistent with the results of previous pointing studies. Fitts' law was not a good fit (R² = 0.672), while a two-part model fit the data well (R² = 0.993). Based on these results, we propose future experimental work that could contribute to improving drone interfaces.
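For reference, the equations below give the Shannon formulation of Fitts' law and one common two-part (Welford-style) formulation. The abstract does not state which two-part variant was fitted, so the second equation is an assumption offered for context.

```latex
% Shannon formulation of Fitts' law: movement time MT as a function of
% target distance D and width W, with empirically fitted constants a, b.
MT = a + b \,\log_2\!\left(\frac{D}{W} + 1\right)

% A common two-part (Welford-style) model, which weights distance and
% width separately; the abstract does not specify the variant used.
MT = a + b_1 \,\log_2 D + b_2 \,\log_2\!\frac{1}{W}
```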
Using Screenshots to Predict Task Switching on Smartphones
Mobile phone use is pervasive, yet little is known about task switching across digital platforms and applications. We propose an unobtrusive experience sampling method that observes how individuals use their smartphones by taking screenshots every 5 seconds while the device is on. The purpose of this paper is to incorporate psychological processes into feature extraction and to use the resulting features to predict task-switching behavior on smartphones. Features are extracted from the sequence of screenshots, gauging visual stimulation, cognitive load, velocity and accumulation, sentiment, and time-related factors. Task-switching labels were manually tagged for 87,182 screenshots from 60 subjects. Using a random forest, we correctly infer a user's task-switching behavior from the unstructured screenshot data with up to 77% accuracy, demonstrating that screenshot features are a viable basis for predicting task switching.
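A minimal sketch of the classification setup described above, assuming the screenshot-derived features have already been extracted into a table; the file name, feature columns, and hyperparameters are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of predicting task switching from screenshot-derived
# features with a random forest. All names (file, columns, parameters)
# are illustrative assumptions, not the authors' actual pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical table: one row per screenshot, numeric feature columns
# plus a manually tagged binary label "switched".
df = pd.read_csv("screenshot_features.csv")
feature_cols = ["visual_stimulation", "cognitive_load", "velocity",
                "accumulation", "sentiment", "hour_of_day"]
X, y = df[feature_cols], df["switched"]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```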
Understanding Social Costs in Online Question Asking
Various social media sites and online communities provide new channels for people in need to ask questions and seek help. However, individuals may still encounter psychological barriers that deter them from soliciting assistance, more formally described as “social costs”. For example, people seeking help may worry about burdening others or about the obligation to reciprocate. To understand what could reduce social costs, we conducted a study in the context of question answering (QA) and investigated three factors inspired by the literature: anonymity (posting a question anonymously), recommendation (having the system handle question routing), and ephemerality (allowing questions to be visible only for a short period). We built a QA platform that supports these three features and conducted a randomized within-subject experiment to test their effects on the social costs of posting questions. Results suggest that anonymity, recommendation, and ephemerality each reduce social costs, which provides design implications for future community building.
fNIRS and Neurocinematics
In the overlap between Human-Computer Interaction (HCI) and cinematics sits an interest in physiological responses to experiences. Focusing particularly on brain data, Neurocinematics has emerged as a research field using Brain-Computer Interface (BCI) sensors. Where previous work found inter-subject correlations (ISC) between brain measurements of people watching movies under constrained conditions using functional magnetic resonance imaging (fMRI), we seek to examine similar responses in more naturalistic settings using functional near-infrared spectroscopy (fNIRS). fNIRS has been shown to be highly suitable for HCI studies, being more portable than fMRI and more tolerant of natural movement than electroencephalography (EEG). Early results show significant ISC, which suggests considerable potential for using fNIRS in Neurocinematics.
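To make the notion of inter-subject correlation concrete, the sketch below averages pairwise Pearson correlations between subjects' time series for a single channel. The array shapes, synthetic data, and any preprocessing are assumptions, not the authors' fNIRS analysis.

```python
# Minimal sketch of inter-subject correlation (ISC): average pairwise
# Pearson correlation between subjects' time series for one channel.
# The array shape and preprocessing are assumptions, not the authors'
# actual fNIRS analysis pipeline.
import numpy as np
from itertools import combinations

def mean_isc(signals: np.ndarray) -> float:
    """signals: array of shape (n_subjects, n_timepoints)."""
    pairs = combinations(range(signals.shape[0]), 2)
    corrs = [np.corrcoef(signals[i], signals[j])[0, 1] for i, j in pairs]
    return float(np.mean(corrs))

# Example with synthetic data: 5 subjects, 300 time points.
rng = np.random.default_rng(0)
shared = rng.standard_normal(300)                    # common stimulus-driven component
data = shared + 0.5 * rng.standard_normal((5, 300))  # plus subject-specific noise
print(f"Mean ISC: {mean_isc(data):.2f}")
```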
Tell Me What You Know: GDPR Implications on Designing Transparency and Accountability for News Recommender Systems
The GDPR has a significant impact on the way users interact with technologies, especially the everyday platforms used to personalize news and related forms of information. This paper presents the initial results from a study whose primary objective is to empirically test those platforms’ level of compliance with the so-called ‘right to explanation’. Four research topics considered as gaps in existing legal and HCI scholarship originated from the project’s initial phase, namely (1) GDPR compliance through user-centered design; (2) the inclusion of values in the system; (3) design considerations regarding interaction strategies, algorithmic experience, transparency, and explanations; and (4) technical challenges. The second phase is currently ongoing and allows us to make some observations regarding the registration process and the privacy policies of three categories of news actors: first-party content providers, news aggregators and social media platforms.
“Woe is me”: Examining Older Adults’ Perceptions of Privacy
We conducted a study of n = 20 older adults to better understand their mental models of what the term ‘privacy’ means to them in both digital and non-digital contexts. Participants were asked to diagrammatically represent this information and describe their drawings in a semi-structured interview setting. Preliminary coding analysis revealed participants' frustrations with available methods for addressing privacy violations. While some asserted that there are both good and bad uses of private data, others avoided technology as a whole out of privacy fears or ambivalence toward using web-based banking and social media services. Some participants described fighting back against privacy attacks, while others felt resigned altogether. Our study provides initial steps toward illuminating the privacy perceptions of older adults and can inform training and tailored design for this important demographic.
Security Patterns for Webdesign: a Hierarchical Structure Approach
In today’s age, a wide range of individuals create their own web presence. Thanks to modern tools, creating a website is easier than ever. In order to make sure that this increased accessibility does not come at the cost of decreased security, the respective web design knowledge should become more accessible as well. We created 16 security patterns for web design based on expert knowledge. We present the solution hierarchy of these patterns and how they might be applied by non-expert users.
Towards Better Security Decisions: Applying Prospect Theory to Cybersecurity
Average users are often poor at making cybersecurity decisions, leaving them vulnerable to attackers. Quite a few tools have been devised to help, but they do not balance security and usability well. To address this problem, this paper explores the application of prospect theory to security recommendations. We conducted online surveys (n=61) and a between-subjects experiment (n=106) with six conditions. In the experiment, we provided different security recommendations about two-factor authentication (2FA) to participants in different conditions and recorded their decisions about enabling it. Results show that participants in the “Disadvantage” condition were the most willing to adopt 2FA. The findings indicate that highlighting disadvantages can be useful for persuading users toward better security decisions.
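As background on prospect theory, the standard Tversky and Kahneman (1992) value and probability-weighting functions are shown below. The abstract does not state which components informed the recommendations, so treat these as context rather than the authors' formulation.

```latex
% Standard prospect-theory value function: outcomes x are valued relative
% to a reference point, with loss-aversion parameter \lambda > 1 and
% curvature parameters \alpha, \beta (background only; not the paper's model).
v(x) =
\begin{cases}
  x^{\alpha}              & x \ge 0 \\
  -\lambda\,(-x)^{\beta}  & x < 0
\end{cases}

% Probability weighting function: small probabilities are overweighted.
w(p) = \frac{p^{\gamma}}{\left(p^{\gamma} + (1-p)^{\gamma}\right)^{1/\gamma}}
```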
UltraSonic Watch: Seamless Two-Factor Authentication through Ultrasound
Two-factor authentication (2FA) provides an extra layer of security, as it requires two pieces of evidence to be provided to an authentication mechanism before granting access to a user. However, many people avoid 2FA, partly because it requires performing extra actions. In this late-breaking work, we present UltraSonic Watch, which provides seamless 2FA through ultrasound. We report a small-scale within-subjects study (N=30) that investigates the performance of UltraSonic Watch and the participants' experience (in terms of perception, preference, and willingness to adopt). The results are promising: ultrasound can be used to provide an efficient 2FA mechanism, transparent to users, who are positive about adopting such an approach to increase the security of the authentication process.
VRIA – A Framework for Immersive Analytics on the Web
We report on the design, implementation, and evaluation of VRIA, a framework for immersive analytics on the web.
Online Privacy in Public Places: How do Location, Terms and Conditions and VPN Influence Disclosure?
Do we disclose more information online when we access Wi-Fi from home compared to the University, an Airbnb rental, or a coffee shop? Does it matter if we are shown terms and conditions (T&C) before getting online? Will signing into a virtual private network (VPN) affect our disclosure? We conducted an experiment (N = 276) to find out. Our results suggest that while a VPN promotes disclosure of personal information and unethical behaviors on an Airbnb network, the provision of T&C inhibits this disclosure. Conversely, on a University network, provision of terms and conditions encourages disclosure of unethical behavior, but the presence of a VPN cue inhibits it. Further, a user's belief in the publicness heuristic (public networks are risky) dictates how much users reveal in various locations, based on their perceptions of the relative security of accessing Wi-Fi from those locations.
Exploring Parallel Coordinates Plots in Virtual Reality
Parallel Coordinates Plots (PCP) are a widely used approach to interactively visualize and analyze multidimensional scientific data in a 2D environment. In this paper, we explore the use of Parallel Coordinates in an immersive Virtual Reality (VR) 3D visualization environment as a means to support decision-making in engineering design processes. We evaluate the potential of VR PCP in a formative qualitative study with seven participants. In a task involving 54 points with 29 dimensions per point, we found that participants were able to detect patterns in the dataset comparable to those identified in a previously published study in which two expert users used traditional 2D PCP, which serves as the gold standard for this dataset. The dataset describes the Pareto front for a three-objective aerodynamic design optimization study in turbomachinery.
An Observational Investigation of Reverse Engineers’ Process and Mental Models
Reverse engineering is a complex task essential to several software security jobs like vulnerability discovery and malware analysis. While traditional program comprehension tasks (e.g., program maintenance or debugging) have been thoroughly studied, reverse engineering diverges from these tasks as reverse engineers do not have access to developers, source code, comments, or internal documentation. Further, reverse engineers often have to overcome countermeasures employed by the developer to make the task harder (e.g., symbol stripping, packing, obfuscation). Significant research effort has gone into providing program analysis tools to support reverse engineers. However, little work has been done to understand the way they think and analyze programs, potentially leading to the lack of adoption of these tools among practitioners. This paper reports on a first step toward better understanding the reverse engineer’s process and mental models and provides directions for improving program analysis tools to better fit their users. We present the initial results of a semi-structured, observational interview study of reverse engineers (n=15). Each interview investigated the questions they asked while probing the program, how they answered these questions, and the decisions made throughout. Our initial observations suggest that reverse engineers rely on a variety of reference points in both the program text and structure as well as its dynamic behavior to build hypotheses about the program’s function and identify points of interest for future exploration. In most cases, our reverse engineers used a mix of static and dynamic analysis—mostly manually—to verify these hypotheses. Throughout, they rely on intuition built up over past experience. From these observations, we provide recommendations for user interface and program analysis improvements to support the reverse engineer.
Exploring the Impact of Network Impairments on Remote Collaborative Augmented Reality Applications
Our research explores the impact of network impairments on remote augmented reality (AR) collaborative tasks, and possible strategies to improve user experience in these scenarios. Using a simple AR task, under a controlled network environment, our preliminary user study highlights the impact of network outages on user workload and experience, and how user roles and learning styles play a role in this regard.
Paired Conversational Agents for Easy-to-Understand Instruction
Conversational agents such as those hosted by smart speakers have become popular over the last few years. Although users can accomplish tasks as if they were asking a person, they still have problems using conversational agents effectively. To address this problem, some proposals explain how to phrase requests to the agent using visual information such as instruction manuals and displays. However, such instructions create problems such as occupying the hands and eyes. The purpose of this study is to enhance request entry by giving usage instructions in an easy-to-understand manner without using visual information. Our proposal uses a pair of conversational agents with different voice types: a main agent and a sub-agent. Experiments show that agent pairing yields instructions that are easier to understand than using the main agent alone, and that usage instructions are easier to understand when the sub-agent reads specific usage examples aloud.
Wizard of Oz Prototyping for Machine Learning Experiences
Machine learning is being adopted in a wide range of products and services. Despite its adoption, design and research processes for machine learning experiences have yet to be cemented in the user experience community. Prototyping machine learning experiences is noted to be particularly challenging. This paper suggests Wizard of Oz prototyping to help designers incorporate human-centered design processes into the development of machine learning experiences. This paper also surfaces a set of topics to consider in evaluating Wizard of Oz machine learning prototypes.
Gender Effects on Collaborative Online Brainstorming Teamwork
It is common for individuals with diverse demographic backgrounds to collaborate through computer-mediated communication (CMC) technologies. Groups with internal diversity are typically considered to be advantageous to group performance due to the presence of different perspectives and the potential to stimulate new ideas. However, intergroup conflicts can also occur in diverse groups, especially for groups with imbalanced composition. Previous studies have pointed out that minority members often suffer from unequal participation and performance pressure, which may further decrease group outcome. Since CMC tools facilitate online collaboration, it is necessary to understand how group composition interacts with the affordances of CMC tools on group collaboration. In this study, we tested three gender compositions (female-majority, equal-gender-composition, male-majority) with two communication contexts (video-text, text-only) and found that both gender composition and communication medium influenced group collaboration. Design implications for online collaboration are provided based on our findings.
Virtual Observation of Virtual Reality Simulations
Unlike conventional desktop simulations, which have constrained interaction, immersive Virtual Reality (VR) allows users to freely move and interact with objects. In this paper we discuss a work-in-progress system that ‘virtually’ records participants' movements and actions within a simulation. The system recovers and rebuilds recorded data on request, accurately replaying individual participants' motions and actions in the simulation. Observers can review this reconstruction using an unrestricted virtual camera and, if necessary, observe changes from the recorded input devices. Each participant's skeleton structure was reconstructed using tracked input devices. We conclude that our system offers a detailed recreation of high-level knowledge and visual information about participant actions during simulations.
On How Users Edit Computer-Generated Visual Stories
A significant body of research in Artificial Intelligence (AI) has focused on generating stories automatically, either based on prior story plots or input images. However, the literature has little to say about how users would receive and use these stories. Given the quality of stories generated by modern AI algorithms, users will almost inevitably have to edit these stories before putting them to real use. In this paper, we present the first analysis of how human users edit machine-generated stories. We obtained 962 short stories generated by one of the state-of-the-art visual storytelling models and, for each story, recruited five crowd workers from Amazon Mechanical Turk to edit it. Our analysis of these edits shows that, on average, users (i) slightly shortened machine-generated stories, (ii) increased their lexical diversity, and (iii) often replaced nouns and their determiners/articles with pronouns. Our study provides a better understanding of how users receive and edit machine-generated stories, informing future research on more usable and helpful story generation systems.
Proxemo or How to Evaluate User Experience for People with Dementia
Most user experience (UX) evaluation tools require users to self-reflect and to communicate their thoughts (e.g. thinking aloud, retrospective interviews, questionnaires). In the context of designing for people with dementia, however, conditions like aphasia and general cognitive decline restrict the applicability of these methods. In this paper, we report on the iterative design of Proxemo, a smartwatch app for the documentation of observed emotions in people with dementia. Evaluations of Proxemo in dementia care facilities showed that observers considered Proxemo easy to use and preferred it over note-taking on paper. The agreement between different coders was substantial (k = .71). We conclude that Proxemo is a promising tool for UX evaluations in the dementia context – and possibly beyond, but further research on the analysis of its generated data is required.
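The reported inter-coder agreement is consistent with Cohen's kappa, shown below for reference; the abstract reports only the value "k = .71", so the exact statistic is an assumption.

```latex
% Cohen's kappa, the usual chance-corrected inter-coder agreement statistic
% (assumed here; the abstract reports only the value k = .71).
% p_o: observed proportion of agreement, p_e: agreement expected by chance.
\kappa = \frac{p_o - p_e}{1 - p_e}
```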
Creating Manageable Persona Sets from Large User Populations
Creating personas from actual online user information is an advantage of the data-driven persona approach. However, modern online systems often provide big data from millions of users who display vastly different behaviors, resulting in possibly thousands of personas representing the entire user population. We present a technique for reducing the number of personas to a smaller set that efficiently represents the complete user population while being more manageable for the end users of personas. We first isolate the key user behaviors and demographic attributes, creating thin personas, and then apply an algorithmic cost function to collapse the set to the minimum needed to represent the whole population. We evaluate our approach on 26 million user records of a major international airline, isolating 1,593 personas. Applying our approach, we collapse this number to 493, a 69% decrease in the number of personas. Our findings have implications for organizations that have a large user population and want to employ personas.
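The abstract does not disclose the cost function used to collapse the persona set, so the sketch below illustrates the general idea with a hypothetical greedy procedure: repeatedly merge the two most similar thin personas until a further merge would cost more than a chosen threshold.

```python
# Illustrative sketch of collapsing a large set of "thin personas" into a
# smaller representative set. The cost function and merge rule here are
# hypothetical; the paper's actual algorithm is not described in the abstract.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def collapse_personas(personas: np.ndarray, max_merge_cost: float) -> list[list[int]]:
    """personas: (n, d) matrix of behavioral/demographic attributes.
    Returns groups of original persona indices; each group becomes one persona."""
    groups = [[i] for i in range(len(personas))]
    centroids = personas.astype(float).copy()
    while len(groups) > 1:
        dists = squareform(pdist(centroids))        # pairwise distances between centroids
        np.fill_diagonal(dists, np.inf)
        i, j = np.unravel_index(np.argmin(dists), dists.shape)
        i, j = int(min(i, j)), int(max(i, j))
        if dists[i, j] > max_merge_cost:            # stop when merging costs too much
            break
        groups[i] += groups[j]                      # merge the two closest groups
        centroids[i] = personas[groups[i]].mean(axis=0)
        del groups[j]
        centroids = np.delete(centroids, j, axis=0)
    return groups
```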
Personas Changing Over Time: Analyzing Variations of Data-Driven Personas During a Two-Year Period
One of the critiques of personas is that the underlying data they are based on may go stale, requiring further rounds of data collection. However, we could find no empirical evidence for this criticism. In this research, we collect monthly demographic data over a two-year period for a large online content publisher and generate fifteen personas each month following an identical algorithmic approach. We then compare the sets of personas month-over-month, year-over-year, and over the whole two-year period. Findings show an average 18.7% change in personas monthly, a 23.3% change yearly, and a 47% change over the entire period. The findings support the critique that personas do change over time and highlight that changes in the underlying data can occur within a relatively short period. The implication is that organizations using personas should employ ongoing data collection to detect possible persona changes.
Exploring Effects of Conversational Fillers on User Perception of Conversational Agents
Through technological advancements in various areas of our lives, Conversational Agents have progressed in their human-likeness. In the field of HCI, however, the use of conversational fillers (e.g., “um,” “uh”) by Conversational Agents has not been fully explored in an experimental setting. Using a 2 x 2 experimental design, we observed the effects of fillers on user perceptions of the intelligence, human-likeness, and likability of Conversational Agents. From the results of 26 participants, we conclude that 1) Conversational Agents that use fillers are perceived as less intelligent and less likable in task-oriented conversations, and 2) fillers did not produce any statistically significant change in perceived human-likeness. However, further examination showed that users reported filler-speaking agents as more entertaining in social-oriented conversations. With these findings, we discuss design implications for voice-based Conversational Agents.
The Effectiveness of Nudging in Commercial Settings and Impact on User Trust
Persuasive technologies and nudging are increasingly used to shape user behaviors in applications ranging from health and the environment to business. A thorough understanding of the effectiveness of nudges across different contexts and whether they affect user perception of a system is still lacking. We report the results of a controlled, quantitative study with 20 participants which focused on testing the effectiveness of three different nudges in an e-commerce environment and whether their use has an impact on participants’ trust. We found that products nudged via an anchoring effect were more frequently “bought” by participants, and that while participants deemed a store version implementing nudges and one which did not to be equally trustworthy, they perceived the former as technically inferior. Overall we found the effects of nudging to be less dominant than reported in previous studies.
Tell Me More: Understanding User Interaction of Smart Speaker News Powered by Conversational Search
In this study, we apply “conversational search” to the design of smart speaker news services so that users can actively get richer information while listening to the news. We designed a research prototype called “Anchor,” in which a smart speaker news assistant provides users with news about specific keywords and responds to users' questions. We recruited 21 participants and conducted a user study in which they consumed news with Anchor, followed by post hoc interviews. The qualitative analysis revealed the following: (1) people preferred interactive news to news briefings; (2) people found it useful to get answers to their questions by talking with the assistant; (3) although users were allowed to ask questions at any time, they often hesitated, as they did not want to miss the flow of the news; and (4) they had difficulty recalling the questions they had not asked. Based on these findings, we discuss implications for news design in a voice-only user interface.
Cross-study Reliability of the Open Card Sorting Method
Information architecture forms the foundation of users' navigation experience. Open card sorting is a widely used method for creating information architectures based on users' groupings of content. However, little is known about the method's cross-study reliability: does it produce consistent content groupings for similar-profile participants involved in different card sort studies? This paper presents an empirical evaluation of the method's cross-study reliability. Six card sorts involving 140 participants were conducted: three open sorts for a travel website and three for an e-shop. Results showed that participants provided highly similar card sorting data for the same content, and a rather high agreement of the produced navigation schemes was also found. These findings provide support for the cross-study reliability of the open card sorting method.
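One simple way to quantify the cross-study agreement examined here is to check, for every pair of cards, whether two studies' groupings place the pair together. The sketch below implements that pairwise measure; the metric and data layout are assumptions, and the paper's actual analysis may use different measures.

```python
# Minimal sketch of comparing two open card sorts: for every pair of cards,
# check whether both sorts agree on placing them in the same group, and
# report the proportion of agreeing pairs. Metric and data format are assumptions.
from itertools import combinations

def pairwise_agreement(sort_a: dict[str, str], sort_b: dict[str, str]) -> float:
    """Each sort maps a card name to the label of the group it was placed in."""
    cards = sorted(set(sort_a) & set(sort_b))
    pairs = list(combinations(cards, 2))
    agree = sum(
        (sort_a[x] == sort_a[y]) == (sort_b[x] == sort_b[y]) for x, y in pairs
    )
    return agree / len(pairs)

# Toy example with an e-shop's content items (hypothetical labels).
study1 = {"shipping": "help", "returns": "help", "laptops": "products"}
study2 = {"shipping": "support", "returns": "support", "laptops": "catalog"}
print(pairwise_agreement(study1, study2))  # 1.0: identical groupings up to labels
```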
Synaesthetic-Translation Tool: Synaesthesia as an Interactive Material for Ideation
While the subject of synaesthesia has inspired various practitioners and has been utilized as a design material in different formats, research has not so far presented a way to apply this captivating phenomenon as a source of design material in HCI. The purpose of this paper is to explore the translative property of synaesthesia and introduce a tangible way to use this intangible phenomenon as an interactive design material in HCI and design. This paper shares a card-based tool that enables practitioners to use the translative property of synaesthesia for ideation. It further introduces a potential area where this tool may be utilized for exploring user experiences. This work has implications for the CHI community as it attempts to share a practical way of using the intangible property of synaesthesia to explore potential user experiences.
How Performing an Activity Makes Meaning
The analysis of tasks and workflows is a longstanding tradition in Human-Computer Interaction (HCI). In many cases, it provides a crucial basis for the usable design of interactive systems. However, established tools almost exclusively focus on task content and structure, thereby ignoring the more “experiential” aspects of task performance. To fill this gap, we combined Hierarchical Task Analysis (HTA) with the analysis of subjective accounts of meaning. Our explorative study (N=4) suggests that objective descriptions resulting from HTA and subjective experience of one and the same activity differ. People tend to subsume experientially unimportant sequences or even ignore these within their subjective experience. Furthermore, people are able to name experientially important sequences and to relate these to feelings and thoughts (i.e., meaning). In the future, more refined versions of our approach may support practitioners with the design of meaningful interaction and activities.
Effect of Personality Traits on UX Evaluation Metrics: A Study on Usability Issues, Valence-Arousal and Skin Conductance
Personality affects the way someone feels or acts. This paper examines the effect of personality traits, as operationalized by the Big-five questionnaire, on the number, type and severity of identified usability issues, physiological signals (skin conductance), and subjective emotional ratings (valence-arousal). Twenty-four users interacted with a web service and then participated in a retrospective thinking aloud session. Results revealed that the number of usability issues is significantly affected by the Openness trait. Emotional Stability significantly affects the type of reported usability issues. Problem severity is not affected by any trait. Valence ratings are significantly affected by Conscientiousness, whereas Agreeableness, Emotional Stability and Openness significantly affect arousal ratings. Finally, Openness has a significant effect on the number of detected peaks in user’s skin conductance.
Creating Positive Experiences with Digital Companions
Over the last decade, advances in machine learning have multiplied the possibilities for applications of artificial intelligence. One such application is digital companions that assist their users in tasks and activities. In this study, we used a Wizard-of-Oz prototype of a companion that supports workshop planning to evaluate whether digital companions can be designed to create possibilities for positive experiences in work contexts, and whether they are perceived as such. We find that after interacting with a companion designed for positive experience, compared to a neutral companion with the same functionality, the work is perceived as more positive and more natural, and the presented content as more positive.
Minimalistic Explanations: Capturing the Essence of Decisions
The use of complex machine learning models can make systems opaque to users. Machine learning research proposes the use of post-hoc explanations, but it is unclear whether they give users insight into otherwise uninterpretable models. One minimalistic way of explaining image classifications by a deep neural network is to show only the areas that were decisive for the assignment of a label. In a pilot study, 20 participants looked at 14 such explanations generated either by a human or by the LIME algorithm. For explanations of correct decisions, they identified the explained object with significantly higher accuracy (75.64% vs. 18.52%). We argue that this shows explanations can be very minimalistic while retaining the essence of a decision, although the range of decision-making contexts that can be conveyed in this manner is limited. Finally, we found that explanations are unique to the explainer, and human-generated explanations were assigned 79% higher trust ratings. As a starting point for further studies, this work shares our first insights into quality criteria for post-hoc explanations.
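As an illustration of the "show only the decisive areas" idea, and not the LIME procedure used in the study, the sketch below scores each image patch by how much occluding it reduces the classifier's confidence and keeps only the top-scoring patches.

```python
# Illustrative occlusion-style sketch of a minimalistic explanation: keep only
# the image regions whose removal most reduces the classifier's confidence.
# This is NOT the LIME procedure used in the study; classify(), patch size,
# and the number of kept regions are assumptions for illustration.
import numpy as np

def minimal_explanation(image: np.ndarray, classify, label: int,
                        patch: int = 16, keep: int = 5) -> np.ndarray:
    """Return a masked copy of `image` showing only the `keep` most decisive patches.
    `classify(img)` is assumed to return class probabilities for one image."""
    base = classify(image)[label]
    h, w = image.shape[:2]
    scores = []
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0       # hide one patch
            scores.append((base - classify(occluded)[label], y, x))
    masked = np.zeros_like(image)
    for _, y, x in sorted(scores, reverse=True)[:keep]:  # keep most decisive patches
        masked[y:y + patch, x:x + patch] = image[y:y + patch, x:x + patch]
    return masked
```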
Studying User Experience of a Hybrid Location Sensing System
The HCI community has extensively studied location-based systems that applied various location sensing technologies. However, there is a lack of user experience (UX) studies of systems where hybrid location sensing approaches are provided. In this work, we designed, implemented and studied a hybrid location sharing system that offers automatic location sensing through Bluetooth Low Energy (BLE) beacons and participatory location sharing using GPS. Our findings provide design implications to future location-based systems that apply such a hybrid location sensing design.
Effects of Influence on User Trust in Predictive Decision Making
This paper introduces fact-checking into Machine Learning (ML) explanation by presenting training data points as facts to users in order to boost user trust. We aim to investigate which training data points are influential, and how they affect user trust, in order to enhance ML explanation. We tackle this question by allowing users to check the training data points that have higher and lower influence on a prediction. A user study found that presenting influence significantly increases user trust in predictions, but only for training data points with higher influence values under the high model performance condition, where users can justify their actions with more similar facts.
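The abstract does not define how influence is computed; a common formalization is the influence function of Koh and Liang (2017), shown below as background. The study's influence measure may differ.

```latex
% Influence of a training point z on the loss at a test point z_test
% (Koh & Liang, 2017); H is the Hessian of the training loss at the
% fitted parameters \hat{\theta}. Background only; the study's measure may differ.
\mathcal{I}(z, z_{\text{test}})
  = -\,\nabla_{\theta} L(z_{\text{test}}, \hat{\theta})^{\top}
     H_{\hat{\theta}}^{-1}\,
     \nabla_{\theta} L(z, \hat{\theta})
```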
Do We Need Natural Language?: Exploring Restricted Language Interfaces for Complex Domains
Natural language interfaces (NLIs) that aim to understand arbitrary language are not only difficult to engineer; they can also create unrealistic expectations of the capabilities of the system, resulting in user confusion and disappointment. We use an interactive language learning game in a 3D blocks world to examine whether limiting a user’s communication to a small set of artificial utterances is an acceptable alternative to the much harder task of accepting unrestricted language. We find that such a restricted language interface provides same or better performance on this task while improving user experience indices. This suggests that some NLIs can restrict user languages without sacrificing user experience and highlights the importance of conveying NLI limitations to users.