- Paper Proceedings
- Extended Abstract Proceedings
The full proceedings are available in the ACM Digital Library.
Click a paper title to access the full text in the ACM Digital Library. Papers are free to access for a one-year period, starting from the beginning of the CHI 2019 conference.
Using scientific discoveries to inform design practice is an important, but difficult, objective in HCI. In this paper, we provide an overview of Translational Science in HCI by triangulating literature related to the research-practice gap with interview data from many parties engaged (or not) in translating HCI knowledge. We propose a model for Translational Science in HCI based on the concept of a continuum to describe how knowledge progresses (or stalls) through multiple steps and translations until it can influence design practice. The model offers a conceptual framework that can be used by researchers and practitioners to visualize and describe the progression of HCI knowledge through a sequence of translations. Additionally, the model may facilitate a precise identification of translational barriers, which allows devising more effective strategies to increase the use of scientific findings in design practice.
South Asia faces one of the largest gender gaps online globally, and online safety is one of the main barriers to gender-equitable Internet access [GSMA, 2015]. To better understand the gendered risks and coping practices online in South Asia, we present a qualitative study of the online abuse experiences and coping practices of 199 people who identified as women and 6 NGO staff from India, Pakistan, and Bangladesh, using a feminist analysis. We found that a majority of our participants regularly contended with online abuse, experiencing three major abuse types: cyberstalking, impersonation, and personal content leakages. Consequences of abuse included emotional harm, reputation damage, and physical and sexual violence. Participants coped through informal channels rather than through technological protections or law enforcement. Altogether, our findings point to opportunities for designs, policies, and algorithms to improve women’s safety online in South Asia.
Advances in artificial intelligence (AI) frame opportunities and challenges for user interface design. Principles for human-AI interaction have been discussed in the human-computer interaction community for over two decades, but more study and innovation are needed in light of advances in AI and the growing uses of AI technologies in human-facing applications. We propose 18 generally applicable design guidelines for human-AI interaction. These guidelines are validated through multiple rounds of evaluation including a user study with 49 design practitioners who tested the guidelines against 20 popular AI-infused products. The results verify the relevance of the guidelines over a spectrum of interaction scenarios and reveal gaps in our knowledge, highlighting opportunities for further research. Based on the evaluations, we believe the set of design guidelines can serve as a resource to practitioners working on the design of applications and features that harness AI technologies, and to researchers interested in the further development of human-AI interaction design principles.
Machine learning (ML) is increasingly being used in image retrieval systems for medical decision making. One application of ML is to retrieve visually similar medical images from past patients (e.g. tissue from biopsies) to reference when making a medical decision with a new patient. However, no algorithm can perfectly capture an expert’s ideal notion of similarity for every case: an image that is algorithmically determined to be similar may not be medically relevant to a doctor’s specific diagnostic needs. In this paper, we identified the needs of pathologists when searching for similar images retrieved using a deep learning algorithm, and developed tools that empower users to cope with the search algorithm on-the-fly, communicating what types of similarity are most important at different moments in time. In two evaluations with pathologists, we found that these tools increased the diagnostic utility of images found and increased user trust in the algorithm. The tools were preferred over a traditional interface, without a loss in diagnostic accuracy. We also observed that users adopted new strategies when using refinement tools, re-purposing them to test and understand the underlying algorithm and to disambiguate ML errors from their own errors. Taken together, these findings inform future human-ML collaborative systems for expert decision-making.
This paper presents the GIFT smartphone app, an artist-led Research through Design project benefitting from a three-day in-the-wild deployment. The app takes as its premise the generative potential of combining the contexts of gifting and museum visits. Visitors explore the museum, searching for objects that would most appeal to the gift-receiver they have in mind, then photographing those objects and adding audio messages for their receivers describing the motivation for their choices. This paper charts the designers’ key aim of creating a new frame of mind using voice, and the most striking findings discovered during in-the-wild deployment in a museum — ‘seeing with new eyes’ and fostering personal connections. We discuss empathy, motivation, and bottom-up personalisation in the productive space revealed by this combination of contexts. We suggest that this work reveals opportunities for designers of gifting services as well as those working in cultural heritage.
We make theoretical and methodological contributions to the CHI community by introducing comparisons between contemporary Critical Heritage research and some forms of experimental design practice. We begin by identifying three key approaches in contemporary heritage research (Critical Heritage, Plural Heritages, and Future Heritage) and introduce each in turn, exploring their significance for thinking about design, knowledge, and diversity. We then discuss our efforts to integrate ideas from Critical Heritage and design through the adoption of known Research through Design techniques in a research project in Istanbul, Turkey, describing the design of our study and how it was productive of sensory and speculative reflection on the past. Finally, we reflect on the usefulness of such methods in developing new interactive technologies in heritage contexts and go on to propose a series of recommendations for a future Critical Heritage Design practice.
Connect-to-Connected Worlds: Piloting a Mobile, Data-Driven Reflection Tool for an Open-Ended Simulation at a Museum
Immersive open-ended museum exhibits promote ludic engagement and can be a powerful draw for visitors, but these qualities may also make learning more challenging. We describe our efforts to help visitors engage more deeply with an interactive exhibit’s content by giving them access to visualizations of data skimmed from their use of the exhibit. We report on the motivations and challenges in designing this reflective tool, which positions visitors as a “human in the loop” to understand and manage their engagement with the exhibit. We used an iterative design process and qualitative methods to explore whether and how visitors could (1) access and (2) comprehend the data visualizations, (3) reflect on their prior engagement with the exhibit, (4) plan their future engagement with the exhibit, and (5) act on their plans. We further discuss the essential design challenges and the opportunities made possible for visitors through data-driven reflection tools.
Anchored Audio Sampling: A Seamless Method for Exploring Children’s Thoughts During Deployment Studies
Many traditional HCI methods, such as surveys and interviews, are of limited value when working with preschoolers. In this paper, we present anchored audio sampling (AAS), a remote data collection technique for extracting qualitative audio samples during field deployments with young children. AAS offers a developmentally sensitive way of understanding how children make sense of technology and situates their use in the larger context of daily life. AAS is defined by an anchor event, around which audio is collected. A sliding window surrounding this anchor captures both antecedent and ensuing recording, providing the researcher insight into the activities that led up to the event of interest as well as those that followed. We present themes from three deployments that leverage this technique. Based on our experiences using AAS, we have also developed a reusable open-source library for embedding AAS into any Android application.
To Asymmetry and Beyond!: Improving Social Connectedness by Increasing Designed Interdependence in Cooperative Play
Social play can have numerous health benefits but research has shown that not all multiplayer games are effective at promoting social engagement. Asymmetric cooperative games have shown promise in this regard but the design and dynamics of this unique style of play is not yet well understood. To address this, we present the results of two player experience studies using our custom prototype game Beam Me ‘Round, Scotty! 2: the first comparing symmetric cooperative play (e.g., where players have the same interface, goals, mechanics, etc.) to asymmetric cooperative play (e.g., where players have differing roles, abilities, interfaces, etc.) and the second comparing the effect of increasing degrees of interdependence between play partners. Our results not only indicate that asymmetric cooperative games may enhance players’ perceptions of connectedness, social engagement, immersion, and comfort with a game’s controls, but also demonstrate how to further improve these outcomes via deliberate mechanical design changes, such as changes in cooperative action timing and direction of dependence.
DesignABILITY: Framework for the Design of Accessible Interactive Tools to Support Teaching to Children with Disabilities
Developing educational tools aimed at children with disabilities is a challenging process for designers and developers because existing methodologies or frameworks do not provide any pedagogical information and/or do not take into account the particular needs of users with some type of impairment. In this study, we propose a framework for the design of tools to support teaching to children with disabilities. The framework provides the necessary stages for the development of tools (hardware-based or software-based) and must be adapted for a specific disability and educational goal. For this study, the framework was adapted to support literacy teaching and contributes to the design of educational/interactive technology for deaf people while making them part of the design process and taking into account their particular needs. The experts’ evaluation of the framework shows that it is well structured and may be adapted for other types of disabilities.
Transcalibur: A Weight Shifting Virtual Reality Controller for 2D Shape Rendering based on Computational Perception Model
Humans can estimate the shape of a wielded object through the illusory feeling of the mass properties of the object obtained using their hands. Even though the shape of hand-held objects influences immersion and realism in virtual reality (VR), it is difficult to design VR controllers for rendering desired shapes according to the perceptions derived from the illusory effects of mass properties and shape perception. We propose Transcalibur, which is a hand-held VR controller that can render a 2D shape by changing its mass properties on a 2D planar area. We built a computational perception model using a data-driven approach from the collected data pairs of mass properties and perceived shapes. This enables Transcalibur to easily and effectively provide convincing shape perception based on complex illusory effects. Our user study showed that the system succeeded in providing the perception of various desired shapes in a virtual environment.
LightBee is a novel “hologrammatic” telepresence system featuring a self-levitating light field display. It consists of a drone that flies a projection of a remote user’s head through 3D space. The movements of the drone are controlled by the remote user’s head movements, offering unique support for non-verbal cues, especially physical proxemics. The light field display is created by a retro-reflective sheet that is mounted on the cylindrical quadcopter. 45 smart projectors, one per 1.3 degrees, are mounted in a ring, each projecting a video stream rendered from a unique perspective onto the retroreflector. This creates a light field that naturally provides motion parallax and stereoscopy without requiring any headset or stereo glasses. LightBee allows multiple local users to experience their own unique and correct perspective of the remote user’s head. The system is currently one-directional: two small cameras mounted on the drone allow the remote user to observe the local scene.
Complex virtual reality (VR) tasks, like 3D solid modelling, are challenging with standard input controllers. We propose exploiting the affordances and input capabilities when using a 3D-tracked multi-touch tablet in an immersive VR environment. Observations gained during semi-structured interviews with general users, and those experienced with 3D software, are used to define a set of design dimensions and guidelines. These are used to develop a vocabulary of interaction techniques to demonstrate how a tablet’s precise touch input capability, physical shape, metaphorical associations, and natural compatibility with barehand mid-air input can be used in VR. For example, transforming objects with touch input, “cutting” objects by using the tablet as a physical “knife”, navigating in 3D by using the tablet as a viewport, and triggering commands by interleaving bare-hand input around the tablet. Key aspects of the vocabulary are evaluated with users, with results validating the approach.
We propose RotoSwype, a technique for word-gesture typing using the orientation of a ring worn on the index finger. RotoSwype enables one-handed text-input without encumbering the hand with a device, a desirable quality in many scenarios, including virtual or augmented reality. The method is evaluated using two arm positions: with the hand raised up with the palm parallel to the ground; and with the hand resting at the side with the palm facing the body. A five-day study finds both hand positions achieved speeds of at least 14 words-per-minute (WPM) with uncorrected error rates near 1%, outperforming previous comparable techniques.
BeamBand is a wrist-worn system that uses ultrasonic beamforming for hand gesture sensing. Using an array of small transducers arranged on the wrist, we can ensemble acoustic wavefronts to project acoustic energy at specified angles and focal lengths. This allows us to interrogate the surface geometry of the hand with inaudible sound in a raster-scan-like manner, from multiple viewpoints. We use the resulting, characteristic reflections to recognize hand pose at 8 FPS. In our user study, we found that BeamBand supports a six-class hand gesture set at 94.6% accuracy. Even across sessions, when the sensor is removed and reworn later, accuracy remains high: 89.4%. We describe our software and hardware, and future avenues for integration into devices such as smartwatches and VR controllers.
People with visual impairments often have to rely on the assistance of sighted guides in airports, which prevents them from having an independent travel experience. In order to learn about their perspectives on current airport accessibility, we conducted two focus groups that discussed their needs and experiences in depth, as well as the potential role of assistive technologies. We found that independent navigation is a main challenge and severely impacts their overall experience. As a result, we equipped an airport with a Bluetooth Low Energy (BLE) beacon-based navigation system and performed a real-world study where users navigated routes relevant for their travel experience. We found that, despite the challenging environment, participants were able to complete their itinerary independently, with few or no navigation errors and reasonable completion times. This study presents the first systematic evaluation establishing BLE technology as a strong approach to increasing the independence of visually impaired people in airports.
Effects of Moderation and Opinion Heterogeneity on Attitude towards the Online Deliberation Experience
Online deliberation offers a way for citizens to collectively discuss an issue and provide input for policymakers. The overall experience of online deliberation can be affected by multiple factors. We investigated the effects of moderation and opinion heterogeneity on the perceived deliberation experience by running the first online deliberation experiment in Singapore. Our study took place over three months in three phases. In phase 1, our 2,006 participants answered a survey that we used to create groups of differing opinion heterogeneity. During the second phase, 510 participants discussed the population issue on the online platform we developed. We gathered data on their online deliberation experience during phase 3. We found that higher levels of moderation negatively impact the deliberation experience in terms of perceived procedural fairness, validity claim, and policy legitimacy, and that high opinion heterogeneity is important in order to get a fair assessment of the deliberation experience.
Recent years have seen interest in device tracking and localization using acoustic signals. State-of-the-art acoustic motion tracking systems however do not achieve millimeter accuracy and require large separation between microphones and speakers, and as a result, do not meet the requirements for many VR/AR applications. Further, tracking multiple concurrent acoustic transmissions from VR devices today requires sacrificing accuracy or frame rate. We present MilliSonic, a novel system that pushes the limits of acoustic-based motion tracking. Our core contribution is a novel localization algorithm that can provably achieve sub-millimeter 1D tracking accuracy in the presence of multipath, while using only a single beacon with a small 4-microphone array. Further, MilliSonic enables concurrent tracking of up to four smartphones without reducing frame rate or accuracy. Our evaluation shows that MilliSonic achieves 0.7mm median 1D accuracy and a 2.6mm median 3D accuracy for smartphones, which is 5x more accurate than state-of-the-art systems. MilliSonic enables two previously infeasible interaction applications: a) 3D tracking of VR headsets using the smartphone as a beacon and b) fine-grained 3D tracking for the Google Cardboard VR system using a small microphone array.
Microtasks enable people with limited time and context to contribute to a larger task. In this paper we explore casual microtasking, where microtasks are embedded into other primary activities so that they are available to be completed when convenient. We present a casual microtasking experience that inserts writing microtasks from an existing microwriting tool into the user’s Facebook feed. From a two-week deployment of the system with nine people, we observe that casual microtasking enabled participants to get things done during their breaks, and that they tended to do so only after first engaging with Facebook’s social content. Participants were most likely to complete the writing microtasks during periods of the day associated with low focus, and would occasionally use them as a springboard to open the original document in Word. These findings suggest casual microtasking can help people leverage spare micromoments to achieve meaningful micro-goals, and even encourage them to return to work.
While there is widespread recognition of the need to provide people with vision impairments (PVI) equitable access to cultural institutions such as art galleries, this is not easy. We present the results of a collaboration with a regional art gallery who wished to open their collection to PVIs in the local community. We describe a novel model that provides three different ways of accessing the gallery, depending upon visual acuity and mobility: virtual tours, self-guided tours and guided tours. As far as possible the model supports autonomous exploration by PVIs. It was informed by a value sensitive design exploration of the values and value conflicts of the primary stakeholders.
Co-Design Beyond Words: ‘Moments of Interaction’ with Minimally-Verbal Children on the Autism Spectrum
Existing co-design methods support verbal children on the autism spectrum in the design process, while their minimally-verbal peers are overlooked. We describe Co-Design Beyond Words (CDBW), an approach which merges existing co-design methods with practice-based methods from Speech and Language Therapy which are child-led and interests-based. These emphasise the rich detail that can be conveyed in the moment, through recognising occurrences of, for example, Joint Attention, Turn Taking and Imitation. We worked in an autism-specific primary school over 20 weeks with ten children, aged 5 to 8. We co-designed a playful prototype, the TangiBall, using the three iterative phases of CDBW; the Foundation Phase (preparation for interaction), the Interaction Phase (designing-and-reflecting in the moment) and the Reflection Phase (reflection-on-action). We contribute a novel co-design approach and present moments of interaction, the micro instances in design in which minimally-verbal children on the spectrum can convey meaning beyond words, through their actions, interactions, and attentional foci. These moments of interaction provide design insight, shape design direction, and reveal unique strengths, interests, and abilities.
We report a laboratory study (N=53) in which participants browsed their own Facebook news feeds for 10-15 minutes, choosing exactly when to quit, and later rated the overall emotional utility of the episode before attempting to recall threads. Finally, the emotional utility of each encountered thread was rated while looking over a recording of the interaction. We report that Facebook browsing was, overall, an emotionally positive experience; that recall of threads exhibited classic primacy and recency serial order effects; that recalled threads were both more positive and more valenced (less neutral) on average, than forgotten threads; and that overall emotional valence judgments were predicted, statistically, by the peak and end thread judgments. We find no evidence that local quit decisions were driven by the emotional utility of threads. In the light of these findings, we discuss the suggestion that emotional utility might partly explain the attractiveness of reading the news feed, and that an emotional memory bias might further increase the attractiveness of the newsfeed in prospect.
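The finding that overall valence judgments were predicted by the peak and end thread judgments follows the classic peak-end rule. As a generic illustration of that rule (a hypothetical sketch, not the paper's statistical model; the function name and averaging scheme are assumptions):

```python
def peak_end_estimate(ratings):
    """Approximate overall affect for an episode as the mean of its
    most intense (peak) and final (end) momentary ratings.
    Ratings are signed: negative = unpleasant, positive = pleasant."""
    peak = max(ratings, key=abs)  # most intense rating, by magnitude
    end = ratings[-1]             # last rating in the episode
    return (peak + end) / 2

# An episode of five thread ratings: the peak (3) and end (1) dominate.
print(peak_end_estimate([1, -2, 3, 0, 1]))  # → 2.0
```

Under this rule, the duration of the episode and the average of the intermediate ratings contribute little, which is consistent with the study's observation that recalled (peak-like) threads shape the overall judgment.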
The Breaking Hand: Skills, Care, and Sufferings of the Hands of an Electronic Waste Worker in Bangladesh
While repair work has recently been getting increasing attention in HCI, recycling practices have remained relatively understudied, especially in the context of the Global South. To this end, building on our eight-month-long ethnography, this paper reports the electronic waste (‘e-waste’, henceforth) recycling practices among the e-waste recycler (‘bhangari’) communities in Dhaka, Bangladesh. In doing so, this paper offers the work of the bhangaris through an articulation of their hands and their uses. Drawing from a rich body of scholarly work in the social sciences, we define and contextualize three characteristics of the hand of a bhangari: knowledge, care, and skills and collaboration. Our study also highlights the pains and sufferings involved in this profession. By explaining bhangari work through the hand, we also discuss its implications for design, and its connection to HCI’s broader interest in sustainability.
Interacting with a smartphone using touch input and speech output is challenging for visually impaired people in mobile and public scenarios, where only one hand may be available for input (e.g., while holding a cane) and using the loudspeaker for speech output is constrained by environmental noise, privacy, and social concerns. To address these issues, we propose EarTouch, a one-handed interaction technique that allows users to interact with a smartphone by performing gestures on the touchscreen with the ear. Users hold the phone to their ears and listen to speech output from the ear speaker privately. We report how the technique was designed, implemented, and evaluated through a series of studies. Results show that EarTouch is easy, efficient, fun, and socially acceptable to use.
In countries where languages with non-Latin characters are prevalent, people use a keyboard with two language modes namely, the native language and English, and often experience mode errors. To diagnose the mode error problem, we conducted a field study and observed that 78% of the mode errors occurred immediately after application switching. We implemented four methods (Auto-switch, Preview, Smart-toggle, and Preview & Smart-toggle) based on three strategies to deal with the mode error problem and conducted field studies to verify their effectiveness. In the studies considering Korean-English dual input, Auto-switch was ineffective. On the contrary, Preview significantly reduced the mode errors from 75.1% to 41.3%, and Smart-toggle saved typing cost for recovering from mode errors. In Preview & Smart-toggle, Preview reduced mode errors and Smart-toggle handled 86.2% of the mode errors that slipped past Preview. These results suggest that Preview & Smart-toggle is a promising method for preventing mode errors for the Korean-English dual-input environment.
Public sharing is integral to online platforms. This includes the popular multimedia messaging application Snapchat, on which public sharing is relatively new and unexplored in prior research. In mobile-first applications, sharing contexts are dynamic. However, it is unclear how context impacts users’ sharing decisions. As platforms increasingly rely on user-generated content, it is important to also broadly understand user motivations and considerations in public sharing. We explored these aspects of content sharing through a survey of 1,515 Snapchat users. Our results indicate that users primarily have intrinsic motivations for publicly sharing Snaps, such as to share an experience with the world, but also have considerations related to audience and sensitivity of content. Additionally, we found that Snaps shared publicly were contextually different from those privately shared. Our findings suggest that content sharing systems can be designed to support sharing motivations, yet also be sensitive to private contexts.
Cluster Touch: Improving Touch Accuracy on Smartphones for People with Motor and Situational Impairments
We present Cluster Touch, a combined user-independent and user-specific touch offset model that improves the accuracy of touch input on smartphones for people with motor impairments, and for people experiencing situational impairments while walking. Cluster Touch combines touch examples from multiple users to create a shared user-independent touch model, which is then updated with touch examples provided by an individual user to make it user-specific. Owing to this combination, Cluster Touch allows people to quickly improve the accuracy of their smartphones by providing only 20 touch examples. In a user study with 12 people with motor impairments and 12 people without motor impairments, but who were walking, Cluster Touch improved touch accuracy by 14.65% for the former group and 6.81% for the latter group over the native touch sensor. Furthermore, in an offline analysis of existing mobile interfaces, Cluster Touch improved touch accuracy by 8.21% and 4.84% over the native touch sensor for the two user groups, respectively.
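The general idea of blending a shared, user-independent offset model with a handful of user-specific calibration touches can be sketched as follows (an illustrative toy model under assumed names and a simple weighted blend, not the authors' Cluster Touch algorithm):

```python
class TouchOffsetModel:
    """Toy combined offset model: a population-level correction vector
    blended with a correction learned from one user's examples.
    Each calibration example pairs the point the user intended to hit
    (intended) with the point the touch sensor reported (sensed)."""

    def __init__(self, shared_offset, user_weight=0.5):
        self.shared = shared_offset    # (dx, dy) learned from many users
        self.user = None               # (dx, dy) from this user's examples
        self.user_weight = user_weight # how much to trust the user's data

    def fit_user(self, examples):
        # Mean correction vector over the user's calibration touches
        # (Cluster Touch needs only about 20 such examples).
        dxs = [ix - sx for (ix, iy), (sx, sy) in examples]
        dys = [iy - sy for (ix, iy), (sx, sy) in examples]
        self.user = (sum(dxs) / len(dxs), sum(dys) / len(dys))

    def correct(self, sensed):
        # Fall back to the shared model until user data is available.
        if self.user is None:
            off = self.shared
        else:
            w = self.user_weight
            off = ((1 - w) * self.shared[0] + w * self.user[0],
                   (1 - w) * self.shared[1] + w * self.user[1])
        return (sensed[0] + off[0], sensed[1] + off[1])

model = TouchOffsetModel(shared_offset=(1.0, 0.0))
model.fit_user([((10.0, 10.0), (8.0, 10.0))])  # user tends to land 2px left
print(model.correct((8.0, 10.0)))  # → (9.5, 10.0)
```

The blend means a new user benefits from the crowd-sourced model immediately, while their own examples progressively personalize the correction.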
While prior research has revealed the promising impact of concept mapping on learning, few studies have comprehensively modeled different cognitive behaviors during concept mapping. In addition, existing concept mapping tools lack effective feedback to support better learning behaviors. This work presents MindDot, a concept map-based learning environment that facilitates the cognitive process of comparing and integrating related concepts via two forms of support: hyperlink support and an expert template. Study results suggested that both types of support had a positive impact on the development of comparative strategies and that hyperlink support enhanced learning. We further evaluated the cognitive learning progress at a fine-grained level with two forms of visualizations. We then extracted several behavioral patterns that provided insights into the cognitive progress of learning. Lastly, we derive design recommendations that we hope will inspire future intelligent tutoring systems that automatically evaluate students’ learning behaviors and support them in developing effective learning behaviors.
Conventional hearing aids frame hearing impairment almost exclusively as a problem. In the present paper, we took an alternative approach by focusing on positive future possibilities of ‘divergent hearing’. To this end, we developed a method to speculate simultaneously about not-yet-experienced positive meanings and not-yet-existing technology. First, we gathered already existing activities in which divergent hearing was experienced as an advantage rather than as a burden. These activities were then condensed into ‘Prompts of Positive Possibilities’ (PPP), such as ‘Creating a shelter to feel safe in’. In performative sessions, participants were given these PPPs and ‘Open Probes’ to enact novel everyday activities. This led to 26 possible meanings and corresponding devices, such as ‘Being able to listen back into the past with a rewinder’. The paper provides valuable insights into the interests and expectations of people with divergent hearing as well as a methodological contribution to possibility-driven design.
Failure is a common artefact of challenging experiences, a fact of life for interactive systems but also a resource for aesthetic and improvisational performance. We present a study of how three professional pianists performed an interactive piano composition that included playing hidden codes within the music so as to control their path through the piece and trigger system actions. We reveal how apparent failures to play the codes occurred for diverse reasons, including mistakes in their playing, limitations of the system, but also deliberate failures as a way of controlling the system, and how these failures provoked aesthetic and improvised responses from the performers. We propose that creative and performative interfaces should be designed to enable aesthetic failures, and introduce a taxonomy that compares human approaches to failure with the capabilities of systems, revealing new creative design strategies of gaming, taming, riding and serving the system.
The Channel Matters: Self-disclosure, Reciprocity and Social Support in Online Cancer Support Groups
People with health concerns go to online health support groups to obtain help and advice. To do so, they frequently disclose personal details, many times in public. Although research in non-health settings suggests that people self-disclose less in public than in private, this pattern may not apply to health support groups where people want to get relevant help. Our work examines how the use of private and public channels influences members’ self-disclosure in an online cancer support group, and how channels moderate the influence of self-disclosure on reciprocity and receiving support. By automatically measuring people’s self-disclosure at scale, we found that members of cancer support groups revealed more negative self-disclosure in the public channels compared to the private channels. Although one’s self-disclosure leads others to self-disclose and to provide support, these effects were generally stronger in the private channel. These channel effects probably occur because the public channels are the primary venue for support exchange, while the private channels are mainly used for follow-up conversations. We discuss theoretical and practical implications of our work.
Playful technology has the potential to support physical activity (PA) among wheelchair users, but little is known about design considerations for this audience, who experience significant access barriers. In this paper, we leverage the Integrated Behavioural Model (IBM) to understand wheelchair users’ perspectives on PA, technology, and play. First, we present findings from an interview study with eight physically active wheelchair users. Second, we build on the interviews in a survey that received 44 responses from a broader group of wheelchair users. Results show that the anticipation of positive experiences was the strongest predictor of engagement with PA, and that accessibility concerns act as barriers both in terms of PA participation and technology use. We present four design goals – emphasizing enjoyment, involving others, building knowledge and enabling flexibility – to make our findings actionable for researchers and designers wishing to create accessible playful technology to support PA.
Vulnerability is a common experience in everyday life and is frequently perceived as a flaw to be excised in technology design. Yet, research indicates it is an essential aspect of wholehearted living among others. In this paper, we present the design and deployment of ‘True Colors’, a novel wearable device intended to support social interaction in a live action roleplay game (LARP) setting. We describe the Research-through-Design process that helped us to discover and articulate the possibility space of vulnerability in the design of social wearables, as support for producing a sense of social empowerment and connection among wearers within the LARP. We draw conclusions that may be of value to others designing wearables and related technologies aimed at supporting co-located social interaction in games/play.
Investigating Slowness as a Frame to Design Longer-Term Experiences with Personal Data: A Field Study of Olly
We describe the design and deployment of Olly, a domestic music player that enables people to re-experience digital music they listened to in the past. Olly uses its owner’s Last.FM listening history metadata archive to occasionally select a song from their past, but offers no user control over what is selected or when. We deployed Olly in 3 homes for 15 months to explore how its slow pace might support experiences of reflection and reminiscence. Findings revealed that Olly became highly integrated in participants’ lives, with sustained engagement over time. Participants drew on Olly to reflect on past life experiences, and reactions indicated an increase in the perceived value of their Last.FM archive. Olly also provoked reflections on the temporalities of personal data and technology. Findings are interpreted to present opportunities for future HCI research and practice.
Notions of what counts as a contribution to HCI continue to be contested as our field expands to accommodate perspectives from the arts and humanities. This paper aims to advance the position of the arts and further contribute to these debates by actively exploring what a “non-contribution” would look like in HCI. We do this by taking inspiration from Fluxus, a collective of artists in the 1950s and 1960s who actively challenged and reworked practices of fine arts institutions by producing radically accessible, ephemeral, and modest works of “art-amusement.” We use Fluxus to develop three analogous forms of “HCI-amusements,” each of which sheds light on dominant practices and values within HCI by refusing to fit into its logics.
About 18% of children in industrialized countries suffer from anxiety. We designed a mobile neurofeedback app, called Mind-Full, based on existing design guidelines. Our goal was for young children in lower socio-economic status schools to improve their ability to self-regulate anxiety by using Mind-Full. In this paper we report on quantitative outcomes from a sixteen-week field evaluation with 20 young children (aged 5 to 8). Our methodological contribution includes using a control group, validated measures of anxiety and stress, and assessing transfer and maintenance. Results from teacher and parent behavioral surveys indicated gains in children’s ability to self-regulate anxiety at school and home, and a decrease in anxious behaviors at home; cortisol tests showed variable improvement in physiological stress levels. We contribute to HCI for mental health with evidence that it is viable to use a mobile app in lower socio-economic status schools to improve children’s mental health.
Thermoplastic and Fused Deposition Modeling (FDM) based 4D printing are rapidly expanding to allow for space- and material-saving 2D printed sheets morphing into 3D shapes when heated. However, to our knowledge, all the known examples are either origami-based models with obvious folding hinges, or beam-based models with holes on the morphing surfaces. Morphing continuous double-curvature surfaces remains a challenge, both in terms of a tailored toolpath-planning strategy and a computational model that simulates it. Additionally, neither approach takes surface texture as a design parameter in its computational pipeline. To extend the design space of FDM-based 4D printing, in Geodesy, we focus on the morphing of continuous double-curvature surfaces or surface textures. We suggest a unique tool path – printing thermoplastics along 2D closed geodesic paths to form a surface with a raised continuous double-curvature tile when exposed to heat. The design space is further extended to more complex geometries composed of a network of rising tiles (i.e., surface textures). Both the design components and the computational pipeline are explained in the paper, followed by several printed geometric examples.
Collaboration is built on trust, and establishing trust with a creative Artificial Intelligence is difficult when the decision process or internal state driving its behaviour isn’t exposed. When human musicians improvise together, a number of extra-musical cues are used to augment musical communication and expose mental or emotional states which affect musical decisions and the effectiveness of the collaboration. We developed a collaborative improvising AI drummer that communicates its confidence through an emoticon-based visualisation. The AI was trained on musical performance data, as well as real-time skin conductance, of musicians improvising with professional drummers, exposing both musical and extra-musical cues to inform its generative process. Uni- and bi-directional extra-musical communication with real and false values were tested by experienced improvising musicians. Each condition was evaluated using the FSS-2 questionnaire, as a proxy for musical engagement. The results show a positive correlation between extra-musical communication of machine internal state and human musical engagement.
Collocated, face-to-face teamwork remains a pervasive mode of working, which is hard to replicate online. Team members’ embodied, multimodal interaction with each other and artefacts has been studied by researchers, but due to its complexity, has remained opaque to automated analysis. However, the ready availability of sensors makes it increasingly affordable to instrument work spaces to study teamwork and groupwork. The possibility of visualising key aspects of a collaboration has huge potential for both academic and professional learning, but a frontline challenge is the enrichment of quantitative data streams with the qualitative insights needed to make sense of them. In response, we introduce the concept of collaboration translucence, an approach to make visible selected features of group activity. This is grounded both theoretically (in the physical, epistemic, social and affective dimensions of group activity), and contextually (using domain-specific concepts). We illustrate the approach from the automated analysis of healthcare simulations to train nurses, generating four visual proxies that fuse multimodal data into higher order patterns.
This paper investigates personalized voice characters for in-car speech interfaces. In particular, we report on how we designed different personalities for voice assistants and compared them in a real world driving study. Voice assistants have become important for a wide range of use cases, yet current interfaces are using the same style of auditory response in every situation, despite varying user needs and personalities. To close this gap, we designed four assistant personalities (Friend, Admirer, Aunt, and Butler) and compared them to a baseline (Default) in a between-subject study in real traffic conditions. Our results show higher likability and trust for assistants that correctly match the user’s personality while we observed lower likability, trust, satisfaction, and usefulness for incorrectly matched personalities, each in comparison with the Default character. We discuss design aspects for voice assistants in different automotive use cases.
Toward Algorithmic Accountability in Public Services: A Qualitative Study of Affected Community Perspectives on Algorithmic Decision-making in Child Welfare Services
Algorithmic decision-making systems are increasingly being adopted by government public service agencies. Researchers, policy experts, and civil rights groups have all voiced concerns that such systems are being deployed without adequate consideration of potential harms, disparate impacts, and public accountability practices. Yet little is known about the concerns of those most likely to be affected by these systems. We report on workshops conducted to learn about the concerns of affected communities in the context of child welfare services. The workshops involved 83 study participants including families involved in the child welfare system, employees of child welfare agencies, and service providers. Our findings indicate that general distrust in the existing system contributes significantly to low comfort in algorithmic decision-making. We identify strategies for improving comfort through greater transparency and improved communication strategies. We discuss the implications of our study for accountable algorithm design for child welfare applications.
During sensemaking, people annotate insights: underlining sentences in a document or circling regions on a map. They jot down their hypotheses: drawing correlation lines on scatterplots or creating personal legends to track patterns. We present ActiveInk, a system enabling people to seamlessly transition between exploring data and externalizing their thoughts using pen and touch. ActiveInk enables the natural use of pen for active reading behaviors, while supporting analytic actions by activating any of these ink strokes. Through a qualitative study with eight participants, we contribute observations of active reading behaviors during data exploration and design principles to support sensemaking.
Evaluating the Effect of Feedback from Different Computer Vision Processing Stages: A Comparative Lab Study
Computer vision and pattern recognition are increasingly being employed by smartphone and tablet applications targeted at lay-users. An open design challenge is to make such systems intelligible without requiring users to become technical experts. This paper reports a lab study examining the role of visual feedback. Our findings indicate that the stage of processing from which feedback is derived plays an important role in users’ ability to develop coherent and correct understandings of a system’s operation. Participants in our study showed a tendency to misunderstand the meaning being conveyed by the feedback, relating it to processing outcomes and higher level concepts, when in reality the feedback represented low level features. Drawing on the experimental results and the qualitative data collected, we discuss the challenges of designing interactions around pattern matching algorithms.
What makes a city meaningful to its residents? What attracts people to live in a city and to care for it? Today, we might see such questions as concerns for HCI, given the emerging agendas of smart and connected cities, IoT, and ubiquitous computing: city residents’ perceptions of and attitudes towards smart city technologies will play a role in technology acceptance. Theories of “placemaking” from humanist geography and urban planning address themselves to such concerns, and they have been taken up in HCI and urban informatics research. These theories offer ideas for developing community attachment, heightening the legibility of the city, and intensifying lived experiences in the city. We add to this body of research with an analysis of several initiatives of City Yeast, a community-based design collective in Taiwan that proposes the metaphor of fermentation as an approach to placemaking. We unpack how this approach shapes their design practice and link its implications to urban informatics research in HCI. We suggest that smart cities can also be pursued by leveraging the knowledge of city residents and helping to facilitate their participation in acts of perceiving, envisioning, and improving their local communities, including but not limited to smart and connected technologies.
Through a design-led inquiry focused on smart home security cameras, this research develops three key concepts for research and design pertaining to new and emerging digital consumer technologies. Digital leakage names the propensity for digital information to be shared, stolen, and misused in ways unbeknownst or even harmful to those to whom the data pertains or belongs. Hole-and-corner applications are those functions connected to users’ data, devices, and interactions yet concealed from or downplayed to them, often because they are non-beneficial or harmful to them. Foot-in-the-door devices are product and services with functional offerings and affordances that work to normalize and integrate a technology, thus laying groundwork for future adoption of features that might have earlier been rejected as unacceptable or unnecessary. Developed and illustrated through a set of design studies and explorations, this paper shows how these concepts may be used analytically to investigate issues such as privacy and security, anticipatorily to speculate about the future of technology development and use, and generatively to synthesize design concepts and solutions.
Deaf and Hard-of-hearing Individuals’ Preferences for Wearable and Mobile Sound Awareness Technologies
To investigate preferences for mobile and wearable sound awareness systems, we conducted an online survey with 201 DHH participants. The survey explores how demographic factors affect perceptions of sound awareness technologies, gauges interest in specific sounds and sound characteristics, solicits reactions to three design scenarios (smartphone, smartwatch, head-mounted display) and two output modalities (visual, haptic), and probes issues related to social context of use. While most participants were highly interested in being aware of sounds, this interest was modulated by communication preference–that is, for sign or oral communication or both. Almost all participants wanted both visual and haptic feedback and 75% preferred to have that feedback on separate devices (e.g., haptic on smartwatch, visual on head-mounted display). Other findings related to sound type, full captions vs. keywords, sound filtering, notification styles, and social context provide direct guidance for the design of future mobile and wearable sound awareness systems.
The Impact of User Characteristics and Preferences on Performance with an Unfamiliar Voice User Interface
Voice User Interfaces (VUIs) are increasing in popularity. However, their invisible nature with no or limited visuals makes it difficult for users to interact with unfamiliar VUIs. We analyze the impact of user characteristics and preferences on how users interact with a VUI-based calendar, DiscoverCal. While recent VUI studies analyze user behavior through self-reported data, we extend this research by analyzing both VUI usage data and self-reported data to observe correlations between both data types. Results from our user study (n=50) led to four key findings: 1) programming experience did not have a wide-spread impact on performance metrics while 2) assimilation bias did, 3) participants with more technical confidence exhibited a trial-and-error approach, and 4) desiring more guidance from our VUI correlated with performance metrics that indicate cautious users.
Difficulties in accessing, isolating, and iterating on the components and connections of a printed circuit board (PCB) create unique challenges in PCB debugging. Manual probing methods are slow and error prone, and even dedicated PCB testing equipment remains limited by its inability to modify the circuit during testing. We present Pinpoint, a tool that facilitates in-circuit PCB debugging through techniques such as programmatically probing signals, dynamically disconnecting components and subcircuits to test in isolation, and splicing in new elements to explore potential modifications. Pinpoint automatically instruments a PCB design and generates designs for a physical jig board that interfaces the user’s PCB to our custom testing hardware and to software tools. We evaluate Pinpoint’s ability to facilitate the debugging of various PCB issues by instrumenting and testing different classes of boards, as well as by characterizing its technical limitations and by soliciting feedback through a guided exploration with PCB designers.
The contours of user experience (UX) design practice have been shaped by a diverse array of practitioners and disciplines, resulting in a diffuse and decentralized body of UX-specific disciplinary knowledge. The rapidly shifting space that UX knowledge occupies, in conjunction with a long-existing research-practice gap, presents unique challenges and opportunities to UX educators and aspiring UX designers. In this paper, we analyzed a corpus of question and answer communication on UX Stack Exchange using a practice-led approach, identifying and documenting practitioners’ conceptions of UX knowledge over a nine year period. Specifically, we used natural language processing techniques and qualitative content analysis to identify a disciplinary vocabulary invoked by UX designers in this online community, as well as conceptual trajectories spanning over nine years which could shed light on the evolution of UX practice. We further describe the implications of our findings for HCI research and UX education.
It is often assumed that visual cues, which highlight specific parts of a visualization to guide the audience’s attention, facilitate visualization storytelling and presentation. This assumption has not been systematically studied. We present an in-lab experiment and a Mechanical Turk study to examine the effects of integral and separable visual cues on the recall and comprehension of visualizations that are accompanied by audio narration. Eye-tracking data in the in-lab experiment confirm that cues helped the viewers focus on relevant parts of the visualization faster. We found that in general, visual cues did not have a significant effect on learning outcomes, but for specific cue techniques (e.g., glow) or specific chart types (e.g., heatmap), cues significantly improved comprehension. Based on these results, we discuss how presenters might select visual cues depending on the role of the cues and the visualization type.
Mobile self-reports are a popular technique to collect participant labelled data in the wild. While literature has focused on increasing participant compliance to self-report questionnaires, relatively little work has assessed response accuracy. In this paper, we investigate how participant context can affect response accuracy and help identify strategies to improve the accuracy of mobile self-report data. In a 3-week study we collect over 2,500 questionnaires containing both verifiable and non-verifiable questions. We find that response accuracy is higher for questionnaires that arrive when the phone is not in ongoing or very recent use. Furthermore, our results show that long completion times are an indicator of a lower accuracy. Using contextual mechanisms readily available on smartphones, we are able to explain up to 13% of the variance in participant accuracy. We offer actionable recommendations to assist researchers in their future deployments of mobile self-report studies.
We present an assistive suitcase system, BBeep, for supporting blind people when walking through crowded environments. BBeep uses pre-emptive sound notifications to help clear a path by alerting both the user and nearby pedestrians about the potential risk of collision. BBeep triggers notifications by tracking pedestrians, predicting their future position in real-time, and provides sound notifications only when it anticipates a future collision. We investigate how different types and timings of sound affect nearby pedestrian behavior. In our experiments, we found that sound emission timing has a significant impact on nearby pedestrian trajectories when compared to different sound types. Based on these findings, we performed a real-world user study at an international airport, where blind participants navigated with the suitcase in crowded areas. We observed that the proposed system significantly reduces the number of imminent collisions.
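The collision-anticipation logic that BBeep's abstract describes can be sketched as follows. This is an illustrative simplification, not the authors' implementation: pedestrian motion is extrapolated linearly from tracked position and velocity, and a notification fires only when the predicted path comes within an assumed danger radius of the user within a short look-ahead window (the radius, horizon, and time step values here are placeholders).

```python
# Hypothetical sketch of anticipatory collision detection in the style of
# BBeep (not the paper's actual code): extrapolate a pedestrian's position
# linearly and check whether it enters a danger radius around the user.

def predict_position(pos, vel, t):
    """Linearly extrapolate a pedestrian's 2D position t seconds ahead."""
    return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

def anticipates_collision(user_pos, ped_pos, ped_vel,
                          danger_radius=1.0, horizon=3.0, step=0.1):
    """Return True if the pedestrian is predicted to come within
    danger_radius metres of the (stationary) user within the horizon."""
    t = 0.0
    while t <= horizon:
        px, py = predict_position(ped_pos, ped_vel, t)
        dist = ((px - user_pos[0]) ** 2 + (py - user_pos[1]) ** 2) ** 0.5
        if dist <= danger_radius:
            return True  # future collision anticipated: emit sound now
        t += step
    return False
```

A real system would replace the constant-velocity model with the paper's learned trajectory prediction and tie the notification timing to the sound-emission findings of the study.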
In recent years, research has revealed gender biases in numerous software products. But although some researchers have found ways to improve gender participation in specific software projects, general methods focus mainly on detecting gender biases — not fixing them. To help fill this gap, we investigated whether the GenderMag bias detection method can lead directly to designs with fewer gender biases. In our 3-step investigation, two HCI researchers analyzed an industrial software product using GenderMag; we derived design changes to the product using the biases they found; and ran an empirical study of participants using the original product versus the new version. The results showed that using the method in this way did improve the software’s inclusiveness: women succeeded more often in the new version than in the original; men’s success rates improved too; and the gender gap entirely disappeared.
How to Work in the Car of the Future?: A Neuroergonomical Study Assessing Concentration, Performance and Workload Based on Subjective, Behavioral and Neurophysiological Insights
Autonomous driving provides new opportunities for the use of time during a car ride. One such important scenario is working. We conducted a neuroergonomical study to compare three configurations of a car interior (based on lighting, visual stimulation, sound) regarding their potential to support productive work. We assessed participants’ concentration, performance and workload with subjective, behavioral and EEG measures while they carried out two different concentration tasks during simulated autonomous driving. Our results show that a configuration with a large-area, bright light with high blue components, and reduced visual and auditory stimuli, promotes performance, quality, efficiency, increased concentration and lower cognitive workload. Increased visual and auditory stimulation paired with linear, darker light with very few blue components resulted in lower performance, reduced subjective concentration, and higher cognitive workload, but did not differ from a normal car configuration. Our multi-method approach thus reveals possible car interior configurations for an ideal workspace.
Many status-quo interfaces for tablets with pen + touch input capabilities force users to reach for device-centric UI widgets at fixed locations, rather than sensing and adapting to the user-centric posture. To address this problem, we propose sensing techniques that transition between various nuances of mobile and stationary use via postural awareness. These postural nuances include shifting hand grips, varying screen angle and orientation, planting the palm while writing or sketching, and detecting what direction the hands approach from. To achieve this, our system combines three sensing modalities: 1) raw capacitance touchscreen images, 2) inertial motion, and 3) electric field sensors around the screen bezel for grasp and hand proximity detection. We show how these sensors enable posture-aware pen+touch techniques that adapt interaction and morph user interface elements to suit fine-grained contexts of body-, arm-, hand-, and grip-centric frames of reference.
Virtual Reality (VR) experiences are often limited by the design of standard controllers. This work aims to liberate a VR developer from these limitations in the physical realm to provide an expressive match to the limitless possibilities in the virtual realm. VirtualBricks is a LEGO-based toolkit that enables construction of a variety of physical-manipulation enabled controllers for VR, by offering a set of feature bricks that emulate as well as extend the capabilities of default controllers. Based on the LEGO platform, the toolkit provides a modular, scalable solution for enabling passive haptics in VR. We demonstrate the versatility of our designs through a rich set of applications, including re-implementations of artifacts from recent research. We share a VR integration package for the Unity VR IDE and the CAD models for the feature bricks, enabling easy deployment of VirtualBricks within the community.
We present CATS, a digital painting system that synthesizes textures from live video in real-time, short-cutting the typical brush- and texture-gathering workflow. Through the use of boundary-aware texture synthesis, CATS produces strokes that are non-repeating and blend smoothly with each other. This allows CATS to produce paintings that would be difficult to create with traditional art supplies or existing software. We evaluated the effectiveness of CATS by asking artists to integrate the tool into their creative practice for two weeks; their paintings and feedback demonstrate that CATS is an expressive tool which can be used to create richly textured paintings.
Full-coverage displays can place visual content anywhere on the interior surfaces of a room (e.g., a weather display near the coat stand). In these settings, digital artefacts can be located behind the user and out of their field of view – meaning that it can be difficult to notify the user when these artefacts need attention. Although much research has been carried out on notification, little is known about how best to direct people to the necessary location in room environments. We designed five diverse attention-guiding techniques for full-coverage display rooms, and evaluated them in a study where participants completed search tasks guided by the different techniques. Our study provides new results about notification in full-coverage displays: we showed benefits of persistent visualisations that could be followed all the way to the target and that indicate distance-to-target. Our findings provide useful information for improving the usability of interactive full-coverage environments.
We compared four audio-based radar metaphors for providing directional stimuli to users of AR headsets. The metaphors are clock face, compass, white noise, and scale. Each metaphor, or method, signals the movement of a virtual arm in a radar sweep. In a user study, statistically significant differences were observed for accuracy and response time. Beat-based methods (clock face, compass) elicited responses biased to the left of the stimulus location, and non-beat-based methods (white noise, scale) produced responses biased to the right of the stimulus location. The beat methods were more accurate than the non-beat methods. However, the non-beat methods elicited quicker responses. We also discuss how response accuracy varies along the radar sweep between methods. These observations contribute design insights for non-verbal, non-visual directional prompting.
As design thinking shifted away from conventional methods with the rapid adoption of computer-aided design and fabrication technologies, architects have been seeking ways to initiate a comprehensive dialogue between the virtual and the material realms. Current methodologies do not offer embodied workflows that utilize the feedback obtained through a subsequent transition process between physical and digital design. Therefore, narrowing the separation between these two platforms remains a research problem. This literature review examines the divide between physical and digital design, testing and manufacturing techniques in the morphological process of architectural form. We first review the digital transformation in the architectural design discourse. We then introduce a variety of methods that integrate digital and physical workflows, and suggest an alternative approach. Our work reveals a need for empirical research focused on integrated approaches that create intuitively embodied experiences for architectural designers.
Breastfeeding is not only a public health issue, but also a matter of economic and social justice. This paper presents an iteration of a participatory design process to create spaces for re-imagining products, services, systems, and policies that support breastfeeding in the United States. Our work contributes to a growing literature around making hackathons more inclusive and accessible, designing participatory processes that center marginalized voices, and incorporating systems- and relationship-based approaches to problem solving. By presenting an honest assessment of the successes and shortcomings of the first iteration of a hackathon, we explain how we re-structured the second “Make the Breast Pump Not Suck” hackathon in service of equity and systems design. Key to our re-imagining of conventional innovation structures is a focus on experience design, where joy and play serve as key strategies to help people and institutions build relationships across lines of difference. We conclude with a discussion of design principles applicable not only to designers of events, but to social movement researchers and HCI scholars trying to address oppression through the design of technologies and socio-technical systems.
Project Sidewalk: A Web-based Crowdsourcing Tool for Collecting Sidewalk Accessibility Data At Scale
We introduce Project Sidewalk, a new web-based tool that enables online crowdworkers to remotely label pedestrian-related accessibility problems by virtually walking through city streets in Google Street View. To train, engage, and sustain users, we apply basic game design principles such as interactive onboarding, mission-based tasks, and progress dashboards. In an 18-month deployment study, 797 online users contributed 205,385 labels and audited 2,941 miles of Washington DC streets. We compare behavioral and labeling quality differences between paid crowdworkers and volunteers, investigate the effects of label type, label severity, and majority vote on accuracy, and analyze common labeling errors. To complement these findings, we report on an interview study with three key stakeholder groups (N=14) soliciting reactions to our tool and methods. Our findings demonstrate the potential of virtually auditing urban accessibility and highlight tradeoffs between scalability and quality compared to traditional approaches.
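The majority-vote accuracy analysis mentioned above can be illustrated with a small sketch. This is our own toy aggregation, not Project Sidewalk's code; the label strings and the minimum-vote threshold are made up for the example. A location's label is accepted only when a strict majority of its labelers agree.

```python
# Illustrative majority-vote aggregation over crowdsourced labels
# (hypothetical sketch, not Project Sidewalk's implementation).
from collections import Counter

def majority_vote(labels, min_votes=3):
    """Return the winning label if it holds a strict majority among at
    least min_votes labels; otherwise None (location stays unresolved)."""
    if len(labels) < min_votes:
        return None
    label, votes = Counter(labels).most_common(1)[0]
    return label if votes * 2 > len(labels) else None
```

Raising `min_votes` trades coverage for quality, which mirrors the scalability-versus-quality tradeoff the study reports.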
The sport data tracking systems available today are based on specialized hardware (high-definition cameras, speed radars, RFID) to detect and track targets on the field. While effective, implementing and maintaining these systems pose a number of challenges, including high cost and the need for close human monitoring. On the other hand, the sports analytics community has been exploring human computation and crowdsourcing in order to produce tracking data that is trustworthy, cheaper, and more accessible. However, state-of-the-art methods require a large number of users to perform the annotation, or place too much burden on a single user. We propose HistoryTracker, a methodology that facilitates the creation of tracking data for baseball games by warm-starting the annotation process using a vast collection of historical data. We show that HistoryTracker helps users to produce tracking data in a fast and reliable way.
Clinical psychology literature indicates that reframing irrational thoughts can help bring positive cognitive change to those suffering from mental distress. Through data from an online mental health forum, we study how these cognitive processes play out in peer-to-peer conversations. Acknowledging the complexity of measuring cognitive change, we first provide an operational definition of a “moment of change” based on sentiment change in online conversations. Using this definition, we propose a predictive model that can identify whether a conversation thread or a post is associated with a moment of cognitive change. Consistent with psychological literature, we find that markers of language associated with sentiment and affect are the most predictive. Further, cultural differences play an important role: predictive models trained on one country generalize poorly to others. To understand how a moment of change happens, we build a model that explicitly tracks topic and associated sentiment in a forum thread.
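The operational definition above is based on sentiment change across a thread. As a purely illustrative sketch (the threshold and per-post sentiment scores are hypothetical assumptions, not the authors' actual definition), flagging candidate moments of positive change might look like:

```python
def moments_of_change(sentiments, threshold=0.5):
    """Flag post indices where sentiment rises by at least `threshold`
    relative to the previous post in the thread. Both the scores in
    [-1, 1] and the threshold are hypothetical."""
    return [i for i in range(1, len(sentiments))
            if sentiments[i] - sentiments[i - 1] >= threshold]
```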
We present explorable multiverse analysis reports, a new approach to statistical reporting where readers of research papers can explore alternative analysis options by interacting with the paper itself. This approach draws from two recent ideas: i) multiverse analysis, a philosophy of statistical reporting where paper authors report the outcomes of many different statistical analyses in order to show how fragile or robust their findings are; and ii) explorable explanations, narratives that can be read as normal explanations but where the reader can also become active by dynamically changing some elements of the explanation. Based on five examples and a design space analysis, we show how combining those two ideas can complement existing reporting approaches and constitute a step towards more transparent research papers.
Certain video games show promise as tools for training spatial skills, one of the strongest predictors of future success in STEM. However, little is known about the gaming preferences of those who would benefit the most from such interventions: low spatial skill students. To provide guidance on how to design training games for this population, we conducted a survey of 350 participants from three populations: online college-age, students from a low SES high school, and students from a high SES high school. Participants took a timed test of spatial skills and then answered questions about their demographics, gameplay habits, preferences, and motivations. The only predictors of spatial skill were gender and population: female participants from online and low SES high school populations had the lowest spatial skill. In light of these findings, we provide design recommendations for game-based spatial skill interventions targeting low spatial skill students.
Trust facilitates cooperation and supports positive outcomes in social groups, including member satisfaction, information sharing, and task performance. Extensive prior research has examined individuals’ general propensity to trust, as well as the factors that contribute to their trust in specific groups. Here, we build on past work to present a comprehensive framework for predicting trust in groups. By surveying 6,383 Facebook Groups users about their trust attitudes and examining aggregated behavioral and demographic data for these individuals, we show that (1) an individual’s propensity to trust is associated with how they trust their groups, (2) smaller, closed, older, more exclusive, or more homogeneous groups are trusted more, and (3) a group’s overall friendship-network structure and an individual’s position within that structure can also predict trust. Last, we demonstrate how group trust predicts outcomes at both individual and group level such as the formation of new friendship ties.
Concept-Driven Visual Analytics: an Exploratory Study of Model- and Hypothesis-Based Reasoning with Visualizations
Visualization tools facilitate exploratory data analysis, but fall short at supporting hypothesis-based reasoning. We conducted an exploratory study to investigate how visualizations might support a concept-driven analysis style, where users can optionally share their hypotheses and conceptual models in natural language, and receive customized plots depicting the fit of their models to the data. We report on how participants leveraged these unique affordances for visual analysis. We found that a majority of participants articulated meaningful models and predictions, utilizing them as entry points to sensemaking. We contribute an abstract typology representing the types of models participants held and externalized as data expectations. Our findings suggest ways for rearchitecting visual analytics tools to better support hypothesis- and model-based reasoning, in addition to their traditional role in exploratory analysis. We discuss the design implications and reflect on the potential benefits and challenges involved.
Pictorial System Usability Scale (P-SUS): Developing an Instrument for Measuring Perceived Usability
We have developed a pictorial multi-item scale, called P-SUS (Pictorial System Usability Scale), which aims to measure the perceived usability of mobile devices. The scale is based on the established verbal usability questionnaire SUS (System Usability Scale). A user-centred design process was employed to develop and refine its 10 pictorial items. The scale was tested in a first validation study (N=60) using student participants. Psychometric properties (convergent validity, criterion-related validity, sensitivity, and reliability), as well as the motivation to fill in the scale, were assessed. The results indicated satisfactory convergent validity for about two-thirds of the items. Furthermore, strong correlations were obtained for the sum scores between verbal and pictorial SUS, and the pictorial scale was perceived as more motivating than the verbal questionnaire. The P-SUS represents a first attempt to provide a pictorial usability scale for the evaluation of (mobile) devices.
Designing Second-Screening Experiences for Social Co-Selection and Critical Co-Viewing of Reality TV
Public commentary related to reality TV can be overwhelmed by thoughtless reactions and negative sentiments, which often problematically reinforce the cultural stereotyping typically employed in such media. We describe the design, and month-long evaluation, of a mobile “second-screening” application, Screenr, which uses co-voting and live textual tagging to encourage more critical co-viewing in these contexts. Our findings highlight how Screenr supported interrogation of the production qualities and claims of shows, promoted critical discourse around the motivations of programmes, and engaged participants in reflecting on their own assumptions and views. We situate our results within the context of existing second-screening co-viewing work, discuss implications for such technologies to support critical engagement with socio-political media, and provide design implications for future digital technologies in this domain.
Recent hand-held controllers have explored a variety of haptic feedback sensations for users in virtual reality by producing both kinesthetic and cutaneous feedback from virtual objects. These controllers are grounded to the user’s hand and can only manipulate objects through arm and wrist motions, rather than with the dexterity of the fingers as in real life. In this paper, we present TORC, a rigid haptic controller that renders virtual object characteristics and behaviors such as texture and compliance. Users hold and squeeze TORC using their thumb and two fingers and interact with virtual objects by sliding their thumb on TORC’s trackpad. During the interaction, vibrotactile motors deliver sensations to each finger that represent the haptic feel of squeezing, shearing, or turning an object. Our evaluation showed that using TORC, participants could manipulate virtual objects more precisely (e.g., position and rotate objects in 3D) than when using a conventional VR controller.
HCI4D researchers and practitioners have leveraged voice forums to enable people with literacy, socioeconomic, and connectivity barriers to access, report, and share information. Although voice forums have received impassioned usage from low-income, low-literate, rural, tribal, and disabled communities in diverse HCI4D contexts, the participation of women in these services is almost non-existent. In this paper, we investigate the reasons for the low participation of women in social media voice forums by examining the use of Sangeet Swara in India and Baang in Pakistan by marginalized women and men. Our mixed-methods approach, spanning content analysis of audio posts, quantitative analysis of interactions between users, and qualitative interviews with users, indicates gender inequity due to deep-rooted patriarchal values. We found that women on these forums faced systemic discrimination and encountered abusive content, flirts, threats, and harassment. We discuss design recommendations to create social media voice forums that foster gender equity in the use of these services.
Laughing is Scary, but Farting is Cute: A Conceptual Model of Children’s Perspectives of Creepy Technologies
In HCI, adult concerns about technologies for children have been studied extensively. However, less is known about what children themselves find concerning in everyday technologies. We examine children’s technology-related fears by probing their use of the colloquial term “creepy.” To understand children’s perceptions of “creepy technologies,” we conducted four participatory design sessions with children (ages 7 – 11) to design and evaluate creepy technologies, followed by interviews with the same children. We found that children’s fear reactions emphasized physical harm and threats to their relationships (particularly with attachment figures). The creepy signals from technology the children described include: deception, lack of control, mimicry, ominous physical appearance, and unpredictability. Children acknowledged that trusted adults would mediate the relationship between creepy technology signals and fear responses. Our work contributes a close examination of what children mean when they say a technology is “creepy.” By treating these concerns as principal design considerations, developers can build systems that are more transparent about the risks they produce and more sensitive to the fears they may unintentionally raise.
While educational technologies such as MOOCs have helped scale content-based learning, scaling situated learning is still challenging. The time it takes to define a real-world project and to mentor learners is often prohibitive, especially given the limited contributions that novices are able to make. This paper introduces micro-role hierarchies, a form of coordination that integrates workflows and hierarchies to help short-term novices predictably contribute to complex projects. Individuals contribute through micro-roles, small experiential assignments taking roughly 2 hours. These micro-roles support execution of the desired work process, but also sequence into learning pathways, resulting in a learning dynamic similar to moving up an organizational hierarchy. We demonstrate micro-role hierarchies through Causeway, a platform for learning web development while building websites for nonprofits. We carry out a proof-of-concept study in which learners built static websites for refugee resettlement agencies in 2-hour-long roles.
Tapping is an immensely important gesture in mobile touchscreen interfaces, yet people still frequently must learn which elements are tappable through trial and error. Predicting human behavior for this everyday gesture can help mobile app designers understand an important aspect of the usability of their apps without having to run a user study. In this paper, we present an approach for modeling tappability of mobile interfaces at scale. We conducted large-scale data collection of interface tappability over a rich set of mobile apps using crowdsourcing and computationally investigated a variety of signifiers that people use to distinguish tappable versus not-tappable elements. Based on the dataset, we developed and trained a deep neural network that predicts how likely a user will perceive an interface element as tappable versus not tappable. Using the trained tappability model, we developed TapShoe, a tool that automatically diagnoses mismatches between the tappability of each element as perceived by a human user (predicted by our model) and the intended or actual tappable state of the element specified by the developer or designer. Our model achieved reasonable accuracy (mean precision 90.2%, recall 87.0%) in matching human perception when identifying tappable UI elements. The tappability model and TapShoe were well received in an informal evaluation with 7 professional interaction designers.
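The reported precision and recall are standard metrics for binary classification. A minimal sketch of how they are computed for tappability predictions (the variable names are illustrative, not the paper's code):

```python
def precision_recall(predicted, actual):
    """Precision and recall for binary predictions, where True means
    an element is predicted (or actually is) tappable.
    `predicted` and `actual` are parallel lists of booleans."""
    tp = sum(p and a for p, a in zip(predicted, actual))        # true positives
    fp = sum(p and not a for p, a in zip(predicted, actual))    # false positives
    fn = sum(a and not p for p, a in zip(predicted, actual))    # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```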
Navigating source code, an activity common in software development, is time consuming and in need of improvement. We present CodeGazer, a prototype for source code navigation using eye gaze for common navigation functions. These include actions such as “Go to Definition” and “Find All Usages” of an identifier, navigating to files and methods, moving back and forth between visited points in code, and scrolling. We present user study results showing that many users liked and even preferred the gaze-based navigation, in particular the “Go to Definition” function. Gaze-based navigation also held up well in completion time compared to traditional methods. We discuss how eye gaze can be integrated into traditional mouse & keyboard applications in order to make “look up” tasks more natural.
HCI scholarship is increasingly concerned with the ethical impact of socio-technical systems. Current theoretically driven approaches that engage with ethics generally prescribe only abstract means by which designers might consider values in the design process. However, there is little guidance on methods that promote value discovery, which might lead to more specific examples of relevant values in specific design contexts. In this paper, we elaborate a method for value discovery, identifying how values impact the designer’s decision making. We demonstrate the use of this method, called Ethicography, in describing value discovery and use throughout the design process. We present analysis of design activity by user experience (UX) design students in two lab protocol conditions, describing specific human values that designers considered for each task, and visualizing the interplay of these values. We identify opportunities for further research, using the Ethicography method to illustrate value discovery and translation into design solutions.
Understanding users’ perceptions of suspected computer-security problems can help us tailor technology to better protect users. To this end, we conducted a field study of users’ perceptions using 189,272 problem descriptions sent to the customer-support desk of a large anti-virus vendor from 2015 to 2018. Using qualitative methods, we analyzed 650 problem descriptions to study the security issues users faced and the symptoms that led users to their own diagnoses. Subsequently, we investigated to what extent and for what types of issues user diagnoses matched those of experts. We found, for example, that users and experts were likely to agree for most issues, but not for attacks (e.g., malware infections), for which they agreed only in 44% of the cases. Our findings inform several user-security improvements, including how to automate interactions with users to resolve issues and to better communicate issues to users.
Many personal informatics systems allow people to collect and manage personal data and reflect more deeply on themselves. However, these tools rarely offer ways to customize how the data is visualized. In this work, we investigate the question of how to enable people to determine the representation of their data. We analyzed the Dear Data project to gain insights into the design elements of personal visualizations. We developed DataSelfie, a novel system that allows individuals to gather personal data and design custom visuals to represent the collected data. We conducted a user study to evaluate the usability of the system as well as its potential for individual and collaborative sensemaking of the data.
Virtual reality (VR) headsets allow wearers to escape their physical surroundings, immersing themselves in a virtual world. Although escape may not be realistic or acceptable in many everyday situations, air travel is one context where early adoption of VR could be very attractive. While travelling, passengers are seated in restricted spaces for long durations, reliant on limited seat-back displays or mobile devices. This paper explores the social acceptability and usability of VR for in-flight entertainment. In an initial survey, we captured respondents’ attitudes towards the social acceptability of VR headsets during air travel. Based on the survey results, we developed a VR in-flight entertainment prototype and evaluated this in a focus group study. Our results discuss methods for improving the acceptability of VR in-flight, including using mixed reality to help users transition between virtual and physical environments and supporting interruption from other co-located people.
In video production, inserting B-roll is a widely used technique to enrich the story and make a video more engaging. However, determining the right content and positions of B-roll and actually inserting it within the main footage can be challenging, and novice producers often struggle to get both timing and content right. We present B-Script, a system that supports B-roll video editing via interactive transcripts. B-Script has a built-in recommendation system trained on expert-annotated data, recommending B-roll position and content to users. To evaluate the system, we conducted a within-subject user study with 110 participants, and compared three interface variations: a timeline-based editor, a transcript-based editor, and a transcript-based editor with recommendations. Users found it easier and were faster to insert B-roll using the transcript-based interface, and they created more engaging videos when recommendations were provided.
Dynamic elements of the drawing process (e.g., order of composition, speed, length, and pressure of strokes) are considered important because they can reveal the technique, process, and emotions of the artist. To explore how sensing, visualizing, and sharing these aspects of the creative process might shape art making and art viewing experiences, we designed a research probe which unobtrusively tracks and visualizes the movement and pressure of the artist’s pencil on an easel. Using our probe, we conducted studies with artists and art viewers, which reveal digital and physical representations of the creative process as a means of reflecting on a multitude of factors about the finished artwork, including technique, style, and the emotions of the artists. We conclude by discussing future directions for HCI systems that sense and visualize aspects of the creative process in digitally-mediated arts, as well as the social considerations of sharing and curating intimate process information.
What Can Gestures Tell?: Detecting Motor Impairment in Early Parkinson’s from Common Touch Gestural Interactions
Parkinson’s disease (PD) is a chronic neurological disorder causing progressive disability that severely affects patients’ quality of life. Although early interventions can provide significant benefits, PD diagnosis is often delayed due to both the mildness of early signs and the high requirements imposed by traditional screening and diagnosis methods. In this paper, we explore the feasibility and accuracy of detecting motor impairment in early PD via sensing and analyzing users’ common touch gestural interactions on smartphones. We investigate four types of common gestures, including flick, drag, pinch, and handwriting gestures, and propose a set of features to capture PD motor signs. Through a 102-subject (35 early PD subjects and 67 age-matched controls) study, our approach achieved an AUC of 0.95 and 0.89/0.88 sensitivity/specificity in discriminating early PD subjects from healthy controls. Our work constitutes an important step towards unobtrusive, implicit, and convenient early PD detection from routine smartphone interactions.
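The reported AUC is the standard area under the ROC curve; by the Mann–Whitney interpretation, it equals the probability that a randomly chosen PD subject receives a higher classifier risk score than a randomly chosen control, with ties counting half. A minimal sketch of that computation (the scores below are hypothetical, not the paper's data):

```python
def auc(scores_pd, scores_control):
    """AUC via the Mann-Whitney formulation: the fraction of
    (PD, control) score pairs in which the PD subject outscores
    the control, counting ties as half a win."""
    wins = 0.0
    for sp in scores_pd:
        for sc in scores_control:
            if sp > sc:
                wins += 1.0
            elif sp == sc:
                wins += 0.5
    return wins / (len(scores_pd) * len(scores_control))
```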
There is a growing body of literature in HCI examining the intersection between policymaking and technology research. However, what it means to engage in policymaking in our field, or the ways in which evidence from HCI studies is translated into policy, is not well understood. We report on interviews with 11 participants working at the intersection of technology research and policymaking. Analysis of this data highlights how evidence is understood and made sense of in policymaking processes, what forms of evidence are privileged over others, and the work that researchers engage in to meaningfully communicate their work to policymaking audiences. We discuss how our findings pose challenges for certain traditions of research in HCI, yet also open up new policy opportunities for those engaging in more speculative research practices. We conclude by discussing three ways forward that the HCI community can explore to increase engagement with policymaking contexts.
Returning citizens (formerly incarcerated individuals) face great challenges finding employment, and these are exacerbated by the need for digital literacy in modern job search. Through 23 semi-structured interviews and a pilot digital literacy course with returning citizens in the Greater Detroit area, we explore tactics and needs with respect to job search and digital technology. Returning citizens exhibit great diversity, but overall, we find our participants to have striking gaps in digital literacy upon release, even as they are quickly introduced to smartphones by friends and family. They tend to have employable skills and ability to use offline social networks to find opportunities, but have little understanding of formal job search processes, online or offline. They mostly mirror mainstream use of mobile technology, but they have various reasons to avoid social media. These and other findings lead to recommendations for digital literacy programs for returning citizens.
Comparing Data from Chatbot and Web Surveys: Effects of Platform and Conversational Style on Survey Response Quality
This study aims to explore the feasibility of a text-based virtual agent as a new survey method to overcome the web survey’s common response quality problems, which are caused by respondents’ inattention. To this end, we conducted a 2 (platform: web vs. chatbot) × 2 (conversational style: formal vs. casual) experiment. We used satisficing theory to compare the responses’ data quality. We found that the participants in the chatbot survey, as compared to those in the web survey, were more likely to produce differentiated responses and were less likely to satisfice; the chatbot survey thus resulted in higher-quality data. Moreover, when a casual conversational style was used, the participants were less likely to satisfice, although such effects were only found in the chatbot condition. These results imply that conversational interactivity occurs when a chat interface is accompanied by messages with an effective tone. Based on an analysis of the qualitative responses, we also showed that a chatbot could perform part of a human interviewer’s role by applying effective communication strategies.
In this study, we prototype and examine a system that allows a user to manipulate a 3D virtual object with multiple fingers without wearing any device. An autostereoscopic display produces a 3D image and a depth sensor measures the movement of the fingers. When a user touches a virtual object, haptic feedback is provided by ultrasound phased arrays. By estimating the cross section of the finger in contact with the virtual object and by creating a force pattern around it, it is possible for the user to recognize the position of the surface relative to the finger. To evaluate our system, we conducted two experiments to show that the proposed feedback method is effective in recognizing the object surface and thereby enables the user to grasp the object quickly without seeing it.
Today’s spectrum of playful fitness solutions features systems that are clearly game-first or fitness-first in design; hardly any sufficiently incorporate both areas. Consequently, existing applications and evaluations often lack focus on attractiveness and effectiveness, which should be addressed on the levels of body, controller, and game scenario following a holistic design approach. To contribute to this topic and as a proof-of-concept, we designed the ExerCube, an adaptive fitness game setup. We evaluated participants’ multi-sensory and bodily experiences with a non-adaptive and an adaptive ExerCube version and compared them with personal training to reveal insights to inform the next iteration of the ExerCube. Regarding flow, enjoyment, and motivation, the ExerCube is on par with personal training. Results further reveal differences in perception of exertion, types and quality of movement, social factors, feedback, and audio experiences. Finally, we derive considerations for future research and development directions in holistic fitness game setups.
Tough Times at Transitional Homeless Shelters: Considering the Impact of Financial Insecurity on Digital Security and Privacy
Addressing digital security and privacy issues can be particularly difficult for users who face challenging circumstances. We performed semi-structured interviews with residents and staff at 4 transitional homeless shelters in the U.S. San Francisco Bay Area (n=15 residents, 3 staff) to explore their digital security and privacy challenges. Based on these interviews, we outline four tough times themes — challenges experienced by our financially insecure participants that impacted their digital security and privacy — which included: (1) limited financial resources, (2) limited access to reliable devices and Internet, (3) untrusted relationships, and (4) ongoing stress. We provide examples of how each theme impacts digital security and privacy practices and needs. We then use these themes to provide a framework outlining opportunities for technology creators to better support users facing security and privacy challenges related to financial insecurity.
How does the presence of an audience influence the social interaction with a conversational system in a physical space? To answer this question, we analyzed data from an art exhibit where visitors interacted in natural language with three chatbots representing characters from a book. We performed two studies to explore the influence of audiences. In Study 1, we did fieldwork cross-analyzing the reported perception of the social interaction, the audience conditions (visitor is alone, visitor is observed by acquaintances and/or strangers), and control variables such as the visitor’s familiarity with the book and gender. In Study 2, we analyzed over 5,000 conversation logs and video recordings, identifying dialogue patterns and how they correlated with the audience conditions. Some significant effects were found, suggesting that conversational systems in physical spaces should be designed based on whether other people observe the user or not.
Unobtrusively Enhancing Reflection-in-Action of Teachers through Spatially Distributed Ambient Information
Reflecting on their performance during classroom-teaching is an important competence for teachers. Such reflection-in-action (RiA) enables them to optimize teaching on the spot. But RiA is also challenging, demanding extra thinking in teachers’ already intensive routines. Little is known on how HCI systems can facilitate teachers’ RiA during classroom-teaching. To fill this gap, we evaluate ClassBeacons, a system that uses spatially distributed lamps to depict teachers’ ongoing performance on how they have divided their time and attention over students in the classroom. Empirical qualitative data from eleven teachers in 22 class periods show that this ambient information facilitated teachers’ RiA without burdening teaching in progress. Based on our theoretical grounding and field evaluation, we contribute empirical knowledge about how an HCI system enhanced teachers’ process of RiA as well as a set of design principles for unobtrusively supporting RiA.
Data scientists are responsible for the analysis decisions they make, but it is hard for them to track the process by which they achieved a result. Even when data scientists keep logs, it is onerous to make sense of the resulting large number of history records full of overlapping variants of code, output, plots, etc. We developed algorithmic and visualization techniques for notebook code environments to help data scientists forage for information in their history. To test these interventions, we conducted a think-aloud evaluation with 15 data scientists, where participants were asked to find specific information from the history of another person’s data science project. The participants succeeded on a median of 80% of the tasks they performed. The quantitative results suggest promising aspects of our design, while qualitative results motivated a number of design improvements. The resulting system, called Verdant, is released as an open-source extension for JupyterLab.
I Don’t Even Have to Bother Them!: Using Social Media to Automate the Authentication Ceremony in Secure Messaging
The privacy guaranteed by secure messaging applications relies on users completing an authentication ceremony to verify they are using the proper encryption keys. We examine the feasibility of social authentication, which partially automates the ceremony using social media accounts. We implemented social authentication in Signal and conducted a within-subject user study with 42 participants to compare this with existing methods. To generalize our results, we conducted a Mechanical Turk survey involving 421 respondents. Our results show that users found social authentication to be convenient and fast. They particularly liked verifying keys asynchronously, and viewing social media profiles naturally coincided with how participants thought of verification. However, some participants reacted negatively to integrating social media with Signal, primarily because they distrust social media services. Overall, automating the authentication ceremony and distributing trust with additional service providers is promising, but this infrastructure needs to be more trusted than social media companies.
The home is filled with a rich diversity of sounds, from mundane beeps and whirs to dog barks and children’s shouts. In this paper, we examine how deaf and hard of hearing (DHH) people think about and relate to sounds in the home, solicit feedback and reactions to initial domestic sound awareness systems, and explore potential concerns. We present findings from two qualitative studies: in Study 1, 12 DHH participants discussed their perceptions of and experiences with sound in the home and provided feedback on initial sound awareness mockups. Informed by Study 1, we designed three tablet-based sound awareness prototypes, which we evaluated with 10 DHH participants using a Wizard-of-Oz approach. Together, our findings suggest a general interest in smarthome-based sound awareness systems, particularly for displaying contextually aware, personalized, and glanceable visualizations, but key concerns arose related to privacy, activity tracking, cognitive overload, and trust.
Humans expect their collaborators to look beyond the explicit interpretation of their words. Implicature is a common form of implicit communication that arises in natural language discourse when an utterance leverages context to imply information beyond what the words literally convey. Whereas computational methods have been proposed for interpreting and using different forms of implicature, its role in human and artificial agent collaboration has not yet been explored in a concrete domain. The results of this paper provide insights into how artificial agents should be structured to facilitate natural and efficient communication of actionable information with humans. We investigated implicature by implementing two strategies for playing Hanabi, a cooperative card game that relies heavily on communication of actionable implicit information to achieve a shared goal. In a user study with 904 completed games and 246 completed surveys, human players randomly paired with an implicature AI are 71% more likely to think their partner is human than players paired with a non-implicature AI. These teams demonstrated game performance similar to other state of the art approaches.
This paper explores the use of conversational speech question and answer systems in the challenging context of public spaces in slums. A major part of this work is a comparison of the source and speed of the given responses; that is, either machine-powered and instant or human-powered and delayed. We examine these dimensions via a two-stage, multi-sited deployment. We report on a pilot deployment that helped refine the system, and a second deployment involving the installation of nine of each type of system within a large Mumbai slum for a 40-day period, resulting in over 12,000 queries. We present the findings from a detailed analysis and comparison of the two question-answer corpora; discuss how these insights might help improve machine-powered smart speakers; and, highlight the potential benefits of multi-sited public speech installations within slum environments.
This paper investigates a hidden dimension of research with real world stakes: research subjects who care — sometimes deeply — about the topic of the research in which they participate. They manifest this care, we show, by managing how they are represented in the research process, by exercising politics in shaping knowledge production, and sometimes in experiencing trauma in the process. We draw on first-hand reflections on participation in diversity research on Wikipedia, transforming participants from objects of study into active negotiators of the research process. We depict how care, vulnerability, harm, and emotions shape ethnographic and qualitative data. We argue that, especially in reflexive cultures, research subjects are active agents with agendas, accountabilities, and political projects of their own. We propose an ethics of care and collaboration to open up new possibilities for knowledge production and socio-technical intervention in HCI.
As service robots are envisioned to provide decision-making support (DMS) in public places, it is becoming essential to design the robot’s manner of offering assistance. For example, robot shop assistants that proactively or reactively give product recommendations may impact customers’ shopping experience. In this paper, we propose an anticipation-autonomy policy framework that models three levels of proactivity (high, medium, and low) of service robots in DMS contexts. We conduct a within-subject experiment with 36 participants to evaluate the effects of a DMS robot’s proactivity on user perceptions and interaction behaviors. Results show that a highly proactive robot is deemed inappropriate even though people can get rich information from it. A robot with medium proactivity helps reduce the decision space while maintaining users’ sense of engagement. The least proactive robot grants users more control but may not realize its full capability. We conclude the paper with design considerations for a service robot’s manner of offering assistance.
Research in Human-Computer Interaction for Development (HCI4D) routinely relies on and engages with the increasing penetration of smartphones and the internet. We examine the mobile, internet, and social media practices of women community health workers, for whom internet access has newly become possible. These workers are uniquely positioned at the intersections of various communities of practice—their familial units, workplaces, networks of health workers, larger communities, and the online world. However, they remain at the margins of each, on account of differences in gender, class, literacies, professional expertise, and more. Our findings unpack the legitimate peripheral participation of these workers, examining how they appropriate smartphones and the internet to move away from the peripheries and fully participate in these communities. We discuss how their activities are motivated by moves towards empowerment, digitization, and improved healthcare provision. We consider how future work might support, leverage, and extend their efforts.
Collaborative work with data is increasingly common and spans a broad range of activities – from creating or analysing data in a team, to sharing it with others, to reusing someone else’s data in a new context. In this paper, we explore collaboration practices around structured data and how they are supported by current technology. We present the results of an interview study with twenty data practitioners, from which we derive four high-level user needs for tool support. We compare them against the capabilities of twenty systems that are commonly associated with data activities, including data publishing software, wikis, web-based collaboration tools, and online community platforms. Our findings suggest that data-centric collaborative work would benefit from: structured documentation of data and its lifecycle; advanced affordances for conversations among collaborators; better change control; and custom data access. The findings help us formalise practices around data teamwork and build a better understanding of people’s motivations and barriers when working with structured data.
Raycasting is the most common target pointing technique in virtual reality environments. However, performance on small and distant targets is impacted by the accuracy of the pointing device and the user’s motor skills. Current pointing facilitation techniques are only applied in the context of the virtual hand, i.e. for targets within reach. We propose enhancements to Raycasting: filtering the ray, and adding a controllable cursor on the ray to select the nearest target. We describe a series of studies for the design of the visual feedforward and filtering technique, as well as a comparative study between different 3D pointing techniques. Our results show that highlighting the nearest target is one of the most efficient visual feedforward techniques. We also show that filtering the ray drastically reduces the error rate. Finally, we show the benefits of RayCursor compared to Raycasting and another technique from the literature.
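The abstract names the core mechanism, selecting the target nearest to the cast ray, without detailing it. As a rough geometric illustration only, a minimal sketch of nearest-target-to-ray selection might look like this (the function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def nearest_target_to_ray(origin, direction, targets):
    """Return the index of the target closest to a ray.

    origin, direction: 3-vectors (direction need not be normalised).
    targets: (N, 3) array of target centres.
    Targets behind the ray origin measure from the origin itself.
    """
    d = direction / np.linalg.norm(direction)
    rel = targets - origin                 # vectors from origin to each target
    t = np.maximum(rel @ d, 0.0)           # projection onto ray, clamped at origin
    closest = origin + t[:, None] * d      # closest point on the ray per target
    dist = np.linalg.norm(targets - closest, axis=1)
    return int(np.argmin(dist))
```

A real implementation would additionally filter the ray's jitter (e.g., with a low-pass filter on controller orientation) before this selection step, as the abstract suggests.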
With upcoming breakthroughs in free-form display technologies, new user interface design challenges have emerged. Here, we investigate a question that has been widely explored on traditional GUIs but remains unexplored on non-rectangular interfaces: what are users’ visual search strategies when information is not presented in a traditional rectangular layout? To this end, we present two complementary studies investigating eye movements in different visual search tasks. Our results unveil which areas are seen first according to different visual structures. In doing so, we address, for UI designers of non-rectangular displays, the question of where to place relevant content.
Digital augmentation of print media can provide contextually relevant audio, visual, or haptic content to supplement the static text and images. The design of such augmentation – its medium, quantity, frequency, content, and access technique – can have a significant impact on the reading experience. In the worst case, such as where children are learning to read, the print medium can become a proxy for accessing digital content only, and the textual content is avoided. In this work, we examine how augmented content can change the reader’s behaviour with a comic book. We first report on the usage of a commercially available augmented comic for children, providing evidence that a third of all readers converted to simply viewing the digital media when printed content was duplicated. Second, we explore the design space for digital content augmentation in print media. Third, we report a user study with 136 children that examined the impact of both content length and presentation in a digitally-augmented comic book. From this, we derive a series of design guidelines to assist designers and editors in the development of digitally-augmented print media.
Sketches and real-world user interface examples are frequently used in multiple stages of the user interface design process. Unfortunately, finding relevant user interface examples, especially in large-scale datasets, is a highly challenging task because user interfaces have aesthetic and functional properties that are only indirectly reflected by their corresponding pixel data and meta-data. This paper introduces Swire, a sketch-based neural-network-driven technique for retrieving user interfaces. We collect the first large-scale user interface sketch dataset from the development of Swire that researchers can use to develop new sketch-based data-driven design interfaces and applications. Swire achieves high performance for querying user interfaces: for a known validation task it retrieves the most relevant example within the top-10 results for over 60% of queries. With this technique, designers can, for the first time, accurately retrieve relevant user interface examples with free-form sketches natural to their design workflows. We demonstrate several novel applications driven by Swire that could greatly augment the user interface design process.
Comics are an entertaining and familiar medium for presenting compelling stories about data. However, existing visualization authoring tools do not leverage this expressive medium. In this paper, we seek to incorporate elements of comics into the construction of data-driven stories about dynamic networks. We contribute DataToon, a flexible data comic storyboarding tool that blends analysis and presentation with pen and touch interactions. A storyteller can use DataToon to rapidly generate visualization panels, annotate them, and position them within a canvas to produce a visually compelling narrative. In a user study, participants quickly learned to use DataToon for producing data comics.
Children under 11 are often regarded as too young to comprehend the implications of online privacy. Perhaps as a result, little research has focused on younger kids’ risk recognition and coping. Such knowledge is, however, critical for designing efficient safeguarding mechanisms for this age group. Through 12 focus group studies with 29 children aged 6-10 from UK schools, we examined how children described privacy risks related to their use of tablet computers and what information they used to identify threats. We found that children could identify and articulate certain privacy risks well, such as information oversharing or revealing real identities online; however, they had less awareness of other risks, such as online tracking or game promotions. Our findings offer promising directions for supporting children’s awareness of cyber risks and their ability to protect themselves online.
Self-tracking can help people understand their medical condition and the factors that influence their symptoms. However, it is unclear how tracking technologies should be tailored to help people cope with the progression of a degenerative disease. To understand how smartphone apps and other tracking technologies can support people in coping with an incurable illness, we interviewed both people with Parkinson’s Disease (n=17) and care partners (n=6) who help people with Parkinson’s manage their lives. We describe how symptom trackers can help people identify and solve problems to improve their quality of life, the role symptom trackers can play in helping people combat their own tendencies towards avoidance and denial, and the complex role of care partners in defining and tracking ambiguous symptoms. Our findings yield insights that can guide the design of tracking technologies to help people with Parkinson’s Disease accept and plan for their condition.
Phishing attacks are a major problem, as evidenced by the DNC hackings during the 2016 US presidential election, in which staff were tricked into sharing passwords by fake Google security emails, granting access to confidential information. Vulnerabilities such as these are due in part to insufficient and tiresome user training in cybersecurity. Ideally, we would have more engaging training methods that teach cybersecurity in an active and entertaining way. To address this need, we introduce the game What.Hack, which not only teaches phishing concepts but also simulates actual phishing attacks in a role-playing game to encourage the player to practice defending themselves. Our user study shows that our game design is more engaging and effective in improving performance than a standard form of training and a competing training game design (which does not simulate phishing attempts through role-playing).
Existing methods for researching and designing to support relationships between parents and their adult children tend to lead to designs that respect the differences between them. We conducted 14 Position Exchange Workshops with parents and their adult children, where the child had left home in recent years, aiming to explicate and confront their positions in creative and supportive ways. We designed three co-design methods (Card Sort for Me & You, Would I Lie to You?, and A Magic Machine for You) to support participants in exploring, understanding, empathizing with, and designing for each other. The findings show that the methods helped participants understand, renegotiate, and reimagine their current positions. We discuss how positions can help consider both perspectives in the design process. This paper contributes (1) an account of how the notion of positions enables generating understandings of the relationship, and (2) a set of methods influenced by position exchange, empathy, and playful engagement that help explore human relationships.
Every person is unique, with individual behavioural characteristics: how one moves, coordinates, and uses their body. In this paper we investigate body motion as a behavioural biometric for virtual reality. In particular, we look into which behaviour is suitable for identifying a user. This is valuable in situations where multiple people use a virtual reality environment in parallel, for example in the context of authentication or to adapt the VR environment to users’ preferences. We present a user study (N=22) where people performed controlled VR tasks (pointing, grabbing, walking, typing) while we monitored their head, hand, and eye motion data over two sessions. These body segments can be arbitrarily combined into body relations, and we found that these movements and their combinations lead to characteristic behavioural patterns. We present an extensive analysis of which motions and relations are useful for identifying users in which tasks, using classification methods. Our findings are beneficial for researchers and practitioners alike who aim to build novel adaptive and secure user interfaces in virtual reality.
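The abstract says users are identified from motion data with standard classification methods, without naming a specific classifier. As an illustration of the simplest such pipeline, here is a nearest-centroid identifier over per-user feature templates (all names and the feature layout are assumptions, not details from the paper):

```python
import numpy as np

def enroll(sessions):
    """Build one template per user: the mean feature vector over that
    user's enrolment samples (e.g., statistics of head/hand/eye motion).

    sessions: dict mapping user id -> list of feature vectors.
    """
    return {user: np.asarray(samples).mean(axis=0)
            for user, samples in sessions.items()}

def identify(templates, query):
    """Nearest-centroid identification: return the user whose template
    lies closest (Euclidean distance) to the query feature vector."""
    query = np.asarray(query)
    return min(templates, key=lambda u: np.linalg.norm(templates[u] - query))
```

A study like this one would more likely use stronger classifiers (e.g., random forests or SVMs) and careful cross-session evaluation; the sketch only makes the template-matching idea concrete.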
Current virtual reality applications do not support people who have low vision, i.e., vision loss that falls short of complete blindness but is not correctable by glasses. We present SeeingVR, a set of 14 tools that enhance a VR application for people with low vision by providing visual and audio augmentations. A user can select, adjust, and combine different tools based on their preferences. Nine of our tools modify an existing VR application post hoc via a plugin without developer effort. The rest require simple inputs from developers using a Unity toolkit we created that allows integrating all 14 of our low vision support tools during development. Our evaluation with 11 participants with low vision showed that SeeingVR enabled users to better enjoy VR and complete tasks more quickly and accurately. Developers also found our Unity toolkit easy and convenient to use.
New technologies emerge into an increasingly complex everyday life. How can we engage users further into material practices that explore ideas and notions of these new things? This paper proposes a set of qualities for short, intense, workshop-like experiences, created to generate strong individual commitments, and expose underlying personal desires as drivers for ideas. By making use of open-ended making to engage participants in the imagination of new things, we aim to allow a broad range of knowledge to materialise, focused on the making of work that is about technology, rather than of technology.
Thermporal: An Easy-To-Deploy Temporal Thermographic Sensor System to Support Residential Energy Audits
Underperforming, degraded, and missing insulation in US residential buildings is common. Detecting these issues, however, can be difficult. Using thermal cameras during energy audits can aid in locating potential insulation issues, but prior work indicates it is challenging to determine their severity using thermal imagery alone. In this work, we present an easy-to-deploy, temporal thermographic sensor system designed to support residential energy audits through quantitative analysis of building envelope performance. We then offer an evaluation of the system through two studies: (i) a one-week, in-home field study in five homes and (ii) a semi-structured interview study with five professional energy auditors. Our results show our system helps raise awareness, improves homeowners’ ability to gauge the severity of issues, and provides opportunities for new interactions between homeowners, building data, and professional auditors.
Emerging technologies—such as the voice-enabled internet—present many opportunities and challenges for HCI research and society as a whole. Advocating for better, healthier implementations of these technologies will require us to communicate abstract values, such as trust, to an audience that ranges from the general public to technologists and even policymakers. In this paper, we show how a combination of film-making and product design can help to illustrate these abstract values. Working as part of a wider international advocacy campaign, Our Friends Electric focuses on the voice-enabled internet, translating abstract notions of Internet Health into comprehensible digital futures for the relationship between our voice and the internet. We conclude with a call for designers of physical things to be more involved with the development of trust, privacy and security in this powerful emerging technological landscape.
Informed by considerations from medicine and wellness research, experience design, investigations of new and emerging technologies, and sociopolitical critique, HCI researchers have demonstrated that women’s health is a complex and rich topic. Turning these research outputs into productive interventions, however, is difficult. We argue that design is well positioned to address such a challenge thanks to its methodological traditions of problem setting and framing situated in synthetic (rather than analytic) knowledge production. In this paper, we focus on designing for experiences of menopause. Building on our prior empirical work on menopause and our commitment to pursue design informed by women’s lived experience, we iteratively generated dozens of design frames and accompanying design crits. We document the unfolding of our design reasoning, showing how good-seeming insights nonetheless often lead to bad designs, while working progressively towards stronger insights and design constructs. The latter we offer as a contribution to researchers and practitioners who work at the intersections of women’s health and design.
Previous research on games for people with visual impairment (PVI) has focused on co-designing or evaluating specific games – mostly under controlled conditions. In this research, we follow a game-agnostic, “in-the-wild” approach, investigating the habits, opinions and concerns of PVI regarding digital games. To explore these issues, we conducted an online survey and follow-up interviews with gamers with VI (GVI). Dominant themes from our analysis include the particular appeal of digital games to GVI, the importance of social trajectories and histories of gameplay, the need to balance complexity and accessibility in both games targeted to PVI and mainstream games, opinions about the state of the gaming industry, and accessibility concerns around new and emerging technologies such as VR and AR. Our study gives voice to an underrepresented group in the gaming community. Understanding the practices, experiences and motivations of GVI provides a valuable foundation for informing development of more inclusive games.
How humans use computers has evolved from human-machine interfaces to human-human computer mediated communication. Whilst the field of animal-computer interaction has roots in HCI, technology developed in this area currently only supports animal-computer communication. This design fiction paper presents animal-animal connected interfaces, using dogs as an instance. Through a co-design workshop, we created six proposals. The designs focused on what a dog internet could look like and how interactions might be presented. Analysis of the narratives and conceived designs indicated that participants’ concerns focused around asymmetries within the interaction. This resulted in the use of objects seen as familiar to dogs. This was conjoined with interest in how to initiate and end interactions, which was often achieved through notification systems. This paper builds upon HCI methods for unconventional users, and applies a design fiction approach to uncover key questions towards the creation of animal-to-animal interfaces.
Human-Computer Interaction has developed great interest in the Maker Movement. Previous work has explored it from various perspectives, focusing either on its potential or on its issues. As these are, however, only fragmented portrayals, this paper takes a broader perspective and interconnects some of the fragments. We conducted a qualitative study in the context of two Maker Faires to gain a better understanding of the complex dynamics that makers operate in. We captured the voices of different stakeholders and explored how their respective agendas relate to each other. The findings illustrate how the event is co-created at the nexus of different technological, social, and economic interests while leaving space for diverse practices. The paper contributes a first focused analysis of the Maker Faire, probes it as a site for research, and discusses how holistic perspectives on the Maker Movement could create new research opportunities.
“My blood sugar is higher on the weekends”: Finding a Role for Context and Context-Awareness in the Design of Health Self-Management Technology
Tools for self-care of chronic conditions often do not fit the contexts in which self-care happens because the influence of context on self-care practices is unclear. We conducted a diary study with 15 adolescents with Type 1 Diabetes and their caregivers to understand how context affects self-care. We observed different contextual settings, which we call contextual frames, in which diabetes self-management varied depending on certain factors – physical activity, food, emotional state, insulin, people, and attitudes. The relative prevalence of these factors across contextual frames impacts self-care, necessitating different types of support. We show that contextual frames, as phenomenological abstractions of context, can help designers of context-aware systems systematically explore and model the relation of context with behavior and with technology supporting behavior. Lastly, considering contextual frames as sensitizing concepts, we provide design directions for using context in technology design.
Lassoing objects is a basic function in illustration software and presentation tools. Yet, for many common object arrangements, lassoing is time-consuming to perform and requires precise pen operation. In this work, we studied lassoing movements in a grid of icon-like objects. We propose a quantitative model to predict the time to lasso such objects depending on the margins between icons, their sizes, and the layout, all of which affect the number of stopping and crossing movements. Results of two experiments showed that our models predict fully and partially constrained movements with high accuracy. We also analyzed the speed profiles and pen stroke trajectories and identified deeper insights into user behaviors, such as that an unconstrained area can induce higher movement speeds even in preceding path segments.
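The abstract names the model's ingredients (stopping and crossing movements, whose counts depend on margins, sizes, and layout) but not its equation. One plausible shape, summing a Fitts-style term per stopping segment and a crossing-law term per crossing segment, could be sketched as follows (the coefficients and the exact functional form are assumptions, not the paper's fitted model):

```python
import math

# Hypothetical coefficients; the paper would fit its own constants empirically.
A_STOP, B_STOP = 0.1, 0.2      # Fitts-style stopping movement
A_CROSS, B_CROSS = 0.05, 0.1   # goal-crossing movement

def segment_time(distance, width, crossing):
    """Predicted time for one path segment of a lasso stroke.

    Stopping segments follow a Fitts-style law over an index of
    difficulty; crossing segments use separate coefficients, reflecting
    that crossing a boundary is easier than stopping precisely.
    """
    index = math.log2(distance / width + 1)
    a, b = (A_CROSS, B_CROSS) if crossing else (A_STOP, B_STOP)
    return a + b * index

def lasso_time(segments):
    """Total predicted lasso time: sum per-segment predictions over a
    sequence of (distance, width, is_crossing) tuples."""
    return sum(segment_time(d, w, c) for d, w, c in segments)
```

Under this shape, narrower margins raise each segment's index of difficulty, and a layout that forces more stopping segments adds the larger stopping coefficients, matching the abstract's qualitative claims.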
Mid-air tactile stimulation using ultrasonics has been used in a variety of human-computer interfaces, in the form of prototypes as well as products. When generating these tactile patterns with mid-air ultrasonic tactile displays, the common approach has been to sample the patterns using the hardware update rate to its full extent. In the current study, we show that the hardware update rate can impact perception, but unexpectedly we find that higher update rates do not improve pattern perception. In a first user study, we highlight the effect of update rate on the perceived strength of a pattern, especially for patterns rendered at slow rates of less than 10 Hz. In a second user study, we identify how the optimal update rate evolves with variations in pattern size. Our main results show that update rate should be treated as an additional parameter of tactile patterns. We also discuss how the relationships we defined in the current study can be implemented in design tools so that designers are shielded from this additional complexity.
Work addressing the negative impacts of domestic violence on victim-survivors and service providers has slowly been contributing to the HCI discourse. However, work discussing the necessary, pre-emptive steps for researchers to enter these spaces sensitively and considerately remains largely opaque. Heavily politicised specialisms that are imbued with conflicting values and practices, such as domestic violence service delivery, can be especially difficult to navigate. In this paper, we report on a mixed-methods study consisting of interviews, a design dialogue, and an ideation workshop with domestic violence service providers to explore the potential of an online service directory to support their work. Through this three-stage research process, we were able to characterise this unique service delivery landscape and identify tensions in services’ access, understandings of technologies, and working practices. Drawing from our findings, we discuss opportunities for researchers to work with and sustain complex information ecologies in sensitive settings.
Evaluating the Impact of Pseudo-Colour and Coordinate System on the Detection of Medication-induced ECG Changes
The electrocardiogram (ECG), a graphical representation of the heart’s electrical activity, is used for detecting cardiac pathologies. Certain medications can produce a complication known as ‘long QT syndrome’, shown on the ECG as an increased gap between two parts of the waveform. Self-monitoring for this could be lifesaving, as the syndrome can result in sudden death, but detecting it on the ECG is difficult. Here we evaluate whether using pseudo-colour to highlight wave length and changing the coordinate system can support lay people in identifying increases in the QT interval. The results show that introducing colour significantly improves accuracy, and that whilst it is easier to detect a difference without colour with Cartesian coordinates, the greatest accuracy is achieved when Polar coordinates are combined with colour. The results show that applying simple visualisation techniques has the potential to improve ECG interpretation accuracy, and support people in monitoring their own ECG.
Opioid use disorder (OUD) poses substantial risks to personal well-being and public health. In online communities, users support those seeking recovery, in part by promoting clinically grounded treatments. However, some communities also promote clinically unverified OUD treatments, such as unregulated and untested drugs. Little research exists on which alternative treatments people use, whether these treatments are effective for recovery, or if they cause negative side effects. We provide the first large-scale social media study of clinically unverified, alternative treatments in OUD recovery on Reddit, partnering with an addiction research scientist. We adopt transfer learning across 63 subreddits to precisely identify posts related to opioid recovery. Then, we quantitatively discover potential alternative treatments and contextualize their effectiveness. Our work benefits health research and practice by identifying undiscovered recovery strategies. We also discuss the impacts to online communities dealing with stigmatized behavior and research ethics.
Electroactive polymers (EAP) are a promising material for shape changing interfaces, soft robotics and other novel design explorations. However, the uptake of EAP prototyping in design, art and architecture has been slow due to limited commercial availability, challenging high voltage electronics and lack of simple fabrication techniques. This paper introduces DIY tools for building and activating EAP prototypes, together with design methods for making novel shape-changing surfaces and structures, outside of material science labs. We present iterations of our methods and tools, their use and evaluation in participatory workshops and public installations and how they affect the design outcomes. We discuss unique aesthetic and interactive experiences enabled by the organic and subtle movement of semi-transparent EAP membranes. Finally, we summarise the potential of design tools and methods to facilitate increased exploration of interactive EAP prototypes and outline future steps.
With the rise of big data, there has been an increasing need for practitioners in this space and an increasing opportunity for researchers to understand their workflows and design new tools to improve them. Data science is often described as data-driven, comprising unambiguous data and proceeding through regularized steps of analysis. However, this view focuses more on abstract processes, pipelines, and workflows, and less on how data science workers engage with the data. In this paper, we build on the work of other CSCW and HCI researchers in describing the ways that scientists, scholars, engineers, and others work with their data, through analyses of interviews with 21 data science professionals. We set out five approaches to data along a dimension of intervention: data as given, as captured, as curated, as designed, and as created. Data science workers develop an intuitive sense of their data and processes, and actively shape their data. We propose new ways to apply these interventions analytically, to make sense of the complex activities around data practices.
An Autonomy-Perspective on the Design of Assistive Technology Experiences of People with Multiple Sclerosis
In HCI and Assistive Technology design, autonomy is regularly equated with independence. This is a shortcut that leaves out design opportunities by omitting a more nuanced idea of autonomy. To improve our understanding of how people with severe physical disabilities experience autonomy, particularly in the context of Assistive Technologies, we engaged in in-depth fieldwork with 15 people with Multiple Sclerosis who were experienced users of assistive devices. We constructed a grounded theory from a series of interviews, focus groups, and observations, pointing to strategies in which participants sought autonomy either in the short term (managing their daily energy reserve) or in the long term (making future plans). The theory shows how factors like enabling technologies, capital (human, social, and psychological resources), and compatibility with daily practices facilitated a sense of being in control for our participants. Moreover, we show how over-ambitious or bad design (e.g., paternalism) can lead to the opposite result and restrict autonomy.
Visualization recommender systems aim to lower the barrier to exploring basic visualizations by automatically generating results for analysts to search and select, rather than manually specify. Here, we demonstrate a novel machine learning-based approach to visualization recommendation that learns visualization design choices from a large corpus of datasets and associated visualizations. First, we identify five key design choices made by analysts while creating visualizations, such as selecting a visualization type and choosing to encode a column along the X- or Y-axis. We train models to predict these design choices using one million dataset-visualization pairs collected from a popular online visualization platform. Neural networks predict these design choices with high accuracy compared to baseline models. We report and interpret feature importances from one of these baseline models. To evaluate the generalizability and uncertainty of our approach, we benchmark with a crowdsourced test set, and show that the performance of our model is comparable to human performance when predicting consensus visualization type, and exceeds that of other visualization recommender systems.
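The abstract does not reproduce its features or learned models. To make the prediction task concrete, here is a toy rule-based baseline for one of the design choices (visualization type) computed from simple per-column features; the rules, names, and features are illustrative only, not the paper's model:

```python
def column_features(values):
    """Extract simple features from one data column (a list of values).
    Features like these could feed either a rule-based baseline or a
    learned model."""
    numeric = all(isinstance(v, (int, float)) for v in values)
    return {"numeric": numeric, "cardinality": len(set(values))}

def baseline_viz_type(x_col, y_col):
    """Rule-based baseline for the 'visualization type' design choice:
    two numeric columns -> scatter; a categorical x with a numeric y
    -> bar; otherwise fall back to a line chart."""
    fx, fy = column_features(x_col), column_features(y_col)
    if fx["numeric"] and fy["numeric"]:
        return "scatter"
    if not fx["numeric"] and fy["numeric"]:
        return "bar"
    return "line"
```

The paper's contribution is precisely that neural networks trained on a million dataset-visualization pairs outperform hand-written baselines of this kind; the sketch only shows what a baseline prediction over column features looks like.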
Online health communities (OHCs) allow people living with a shared diagnosis or medical condition to connect with peers for social support and advice. OHCs have been well studied in conditions like diabetes and cancer, but less is known about their role in enigmatic diseases with unknown or complex causal mechanisms. In this paper, we study one such condition: Vulvodynia, a chronic pain syndrome of the vulvar region. Through observations of and interviews with members of a vulvodynia Facebook group, we found that while the interaction types are broadly similar to those found in other OHCs, the women spent more time seeking basic information and building individualized management plans. They also encounter significant emotional and interpersonal challenges, which they discuss with each other. We use this study to extend the field’s understanding of OHCs, and to propose implications for the design of self-tracking tools to support sensemaking in enigmatic conditions.
Network data that changes over time can be very useful for studying a wide range of important phenomena, from shifting social network connections to epidemiology. However, such data is challenging to analyze, especially if it has many actors or connections, or if the covered timespan is large with rapidly changing links (e.g., months of activity with changes at second-level resolution). In these analyses one would often like to compare many periods of time to others, without having to look at the full timeline. To support this kind of analysis we designed and implemented a technique and system to visualize this dynamic data. The Dynamic Network Plaid (DNP) is designed for large displays and based on user-generated interactive timeslicing of the dynamic graph attributes and on linked provenance-preserving representations. We present the technique, the interface, and their design and evaluation with a group of public health researchers investigating non-suicidal self-harm picture sharing on Instagram.
Many people struggle to control their use of digital devices. However, our understanding of the design mechanisms that support user self-control remains limited. In this paper, we make two contributions to HCI research in this space: first, we analyse 367 apps and browser extensions from the Google Play, Chrome Web, and Apple App stores to identify common core design features and intervention strategies afforded by current tools for digital self-control. Second, we adapt and apply an integrative dual systems model of self-regulation as a framework for organising and evaluating the design features found. Our analysis aims to help the design of better tools in two ways: (i) by identifying how, through a well-established model of self-regulation, current tools overlap and differ in how they support self-control; and (ii) by using the model to reveal underexplored cognitive mechanisms that could aid the design of new tools.
I’m Sensing in the Rain: Spatial Incongruity in Visual-Tactile Mid-Air Stimulation Can Elicit Ownership in VR Users
Major virtual reality (VR) companies are trying to enhance the sense of immersion in virtual environments by implementing haptic feedback in their systems (e.g., Oculus Touch). It is known that tactile stimulation adds realism to a virtual environment. In addition, when users are not limited by wearing any attachments (e.g., gloves), it is possible to create even more immersive experiences. Mid-air haptic technology provides contactless haptic feedback and offers the potential for creating such immersive VR experiences. However, one of the limitations of mid-air haptics resides in the need for freehand tracking systems (e.g., Leap Motion) to deliver tactile feedback to the user’s hand. These tracking systems are not accurate, limiting designers’ capability of delivering spatially precise tactile stimulation. Here, we investigated an alternative way to convey incongruent visual-tactile stimulation that can be used to create the illusion of a congruent visual-tactile experience, while participants experience the phenomenon of the rubber hand illusion in VR.
Issues of social identity, attitudes towards self-disclosure, and potentially biased approaches to what is considered “typical” or “normal” are critical factors when designing visualizations for personal informatics systems. This is particularly true when working with vulnerable populations like those who self-track to manage serious mental illnesses like bipolar disorder (BD). We worked with individuals diagnosed with BD to 1) better understand sense-making challenges related to the representation and interpretation of personal data and 2) probe the benefits, risks, and limitations of participatory approaches to designing personal data visualizations that better reflect their lived experiences. We describe our co-design process, present a series of emergent visual encoding schemas resulting from these activities, and report on the assessment of these speculative designs by participants. We conclude by summarizing important considerations and implications for designing personal data visualizations for (and with) people who self-track to manage serious mental illness.
Methodological Gaps in Predicting Mental Health States from Social Media: Triangulating Diagnostic Signals
A growing body of research is combining social media data with machine learning to predict mental health states of individuals. An implication of this research lies in informing evidence-based diagnosis and treatment. However, obtaining clinically valid diagnostic information from sensitive patient populations is challenging. Consequently, researchers have operationalized characteristic online behaviors as “proxy diagnostic signals” for building these models. This paper posits a challenge in using these diagnostic signals, purported to support clinical decision-making. Focusing on three commonly used proxy diagnostic signals derived from social media, we find that predictive models built on these data, although they offer strong internal validity, suffer from poor external validity when tested on mental health patients. A deeper dive reveals issues of population and sampling bias, as well as uncertainty in the construct validity inherent in these proxies. We discuss the methodological and clinical implications of these gaps and provide remedial guidelines for future research.
Putting the Value in VR: How to Systematically and Iteratively Develop a Value-Based VR Application with a Complex Target Group
In the development, implementation, and evaluation of eHealth, it is essential to account for stakeholders’ perspectives, opinions, and values: statements that specify what stakeholders want to achieve or improve via a technology. The use of values enables developers to systematically include stakeholders’ perspectives and the context of use in an eHealth development process. However, relatively few papers explain how to use values in technology development. Consequently, in this paper we show how we formulated values during the multi-method, interdisciplinary and iterative development process of a VR application for a complex setting: forensic mental healthcare. We report the main foundations for these values: the outcomes of an online questionnaire with patients, therapists and other stakeholders (n=146) and interviews with patients and therapists (n=18). We show how a multidisciplinary project team used these qualitative results to formulate and adapt values and create lo-fi prototypes of a VR application. We discuss the importance of a systematic development process with multiple formative evaluations for eHealth and reflect on the role of values within this process.
Navigating Ride-Sharing Regulations: How Regulations Changed the ‘Gig’ of Ride-Sharing for Drivers in Taiwan
Ride-sharing platforms have rapidly spread and disrupted ride hailing markets, resulting in conflicts between ride-sharing and taxi drivers. Taxi drivers claim that their counterparts have unfair advantages in terms of lower prices and a more stable customer base, making it difficult to earn a living. Local government entities have dealt with this disruption and conflict in different ways, often looking towards some form of regulation. While there have been discussions about what the regulation should be, there has been less work looking at what impacts regulations have on ride-sharing drivers and their usage of the platforms. In this paper we present our interview study of ride-sharing drivers in Taiwan, who have gone through three distinct phases of regulation. Drivers felt that regulations legitimized their work, while having to navigate consequences related to regulated access to platforms and fundamental changes to the “gig” of ride-sharing.
What Happens After Disclosing Stigmatized Experiences on Identified Social Media: Individual, Dyadic, and Social/Network Outcomes
Disclosing stigmatized experiences or identity facets on identified social media (e.g., Facebook) can be risky and often inhibited, yet beneficial for the discloser. I investigate the outcomes of such disclosures when they do happen on identified social media, as perceived by the individuals who perform them. I draw on interviews with women who have experienced pregnancy loss and are social media users in the U.S. I document outcomes at the social/network, individual, and dyad levels. I highlight the powerful role of connecting with others with a similar experience within networks of known ties, how disclosures lead to relationship changes, how disclosers take on new social roles as mentors and support sources, and how helpful connections following disclosures originate from various kinds of ties via diverse communication channels. I emphasize reciprocal disclosures as an outcome contributing to further outcomes (e.g., destigmatizing pregnancy loss). I provide design implications related to facilitating being a support source and mentor, helpful reciprocal disclosures, and finding similar others within networks of known ties.
Peer feedback is essential for learning in project-based disciplines. However, students often need guidance when acting as either a feedback provider or a feedback receiver, both to benefit from peer feedback and to critique their peers’ work. This paper explores how to more effectively scaffold this exchange such that peers engage more deeply in the feedback process. Within a game design course, we introduced different processes for feedback receivers to write questions to guide peer feedback. Feedback receivers wrote four main types of guiding questions: improve, share, brainstorm, and critique. We found that “improve” questions tended to lead to better feedback (more specific, critical, and actionable) than other question types, but feedback receivers wrote improve questions least often. We offer insights on how best to scaffold the question-writing process to facilitate peer feedback exchange.
Few gender-focused studies of video games explore the gameplay experiences of women of color, and those that do tend to emphasize only negative phenomena (e.g., racial or gender discrimination). In this paper, we conduct an exploratory case study attending to the motivations and gaming practices of Black college women. Questionnaire responses and focus group discussion illuminate the plurality of gameplay experiences for this specific population of Black college women. Sixty-five percent of this population enjoy the ubiquity of mobile games, with casual and puzzle games being the most popular genres. However, academic responsibilities and competing recreational interests inhibit frequent gameplay. Consequently, this population of Black college women represents two types of casual gamers who report positive gameplay experiences, providing insights into creating a more inclusive gaming subculture.
"If you want, I can store the encrypted password": A Password-Storage Field Study with Freelance Developers
In 2017 and 2018, Naiakshina et al. (CCS’17, SOUPS’18) studied in a lab setting whether computer science students need to be told to write code that stores passwords securely. The authors’ results showed that, without explicit prompting, none of the students implemented secure password storage. When asked about this oversight, a common answer was that they would have implemented secure storage – if they were creating code for a company. To shed light on this possible confusion, we conducted a mixed-methods field study with developers. We hired freelance developers online and gave them a similar password storage task followed by a questionnaire to gain additional insights into their work. From our research, we offer two contributions. First, we reveal that, similar to the students, freelancers do not store passwords securely unless prompted, hold misconceptions about secure password storage, and use outdated methods. Second, we discuss the methodological implications of using freelancers and students in developer studies.
Virtual Showdown: An Accessible Virtual Reality Game with Scaffolds for Youth with Visual Impairments
Virtual Reality (VR) is a growing source of entertainment, but people who are visually impaired have not been effectively included. Audio cues are typically motivated as a complement to visuals that makes experiences more immersive, rather than as a primary cue. To address this, we implemented a VR game called Virtual Showdown. We based Virtual Showdown on an accessible real-world game called Showdown, where people use their hearing to locate and hit a ball against an opponent. Further, we developed Verbal and Verbal/Vibration Scaffolds to teach people how to play Virtual Showdown. We assessed the acceptability of Virtual Showdown and compared our scaffolds in an empirical study with 34 youth who are visually impaired. Thirty-three participants wanted to play Virtual Showdown again, and we learned that participants scored higher with the Verbal Scaffold or if they had prior Showdown experience. Our empirical findings inform the design of future accessible VR experiences.
Moderation Practices as Emotional Labor in Sustaining Online Communities: The Case of AAPI Identity Work on Reddit
We examine how and why Asian American and Pacific Islander (AAPI) moderators on Reddit shape the norms of their online communities through the analytic lens of emotional labor. We conducted interviews with 21 moderators who facilitate identity work discourse in AAPI subreddits and present a thematic analysis of their moderation practices. We report on the challenges they face in sustaining moderation, which include burning out from volunteer work, navigating hierarchical structures, and balancing unfulfilled expectations. We then describe strategies that moderators employ to manage emotional labor, which involve distancing themselves from drama, building solidarity from shared struggles, and integrating an ecology of tools for self-organized moderation. We provide recommendations for improving moderation in online communities centered around identity work and discuss implications of emotional labor in the design of Reddit and similar platforms.
We present a novel, multilayer interaction approach that enables state transitions between spatially above-screen and 2D on-screen feedback layers. This approach supports the exploration of haptic features that are hard to simulate using rigid 2D screens. We accomplish this by adding a haptic layer above the screen that can be actuated and interacted with (pressed on) while the user interacts with on-screen content using pen input. The haptic layer provides variable firmness and contour feedback, while its membrane functionality affords additional tactile cues like texture feedback. Through two user studies, we look at how users can use the layer in haptic exploration tasks, showing that users can discriminate well between different firmness levels, and can perceive object contour characteristics. Demonstrated also through an art application, the results show the potential of multilayer feedback to extend on-screen feedback with additional widget, tool and surface properties, and for user guidance.
In UX We Trust: Investigation of Aesthetics and Usability of Driver-Vehicle Interfaces and Their Impact on the Perception of Automated Driving
In the evolution of technical systems, freedom from error and early adoption play a major role in market success and in maintaining competitiveness. In the case of automated driving, we see that faulty systems are put into operation and that users trust these systems, often without any restrictions. Trust and use are often associated with users’ experience of the driver-vehicle interfaces and interior design. In this work, we present the results of our investigations on factors that influence the perception of automated driving. In a simulator study, N=48 participants drove an SAE level 2 vehicle with either a perfect or a faulty driving function. As a secondary activity, participants had to solve tasks on an infotainment system with varying aesthetics and usability (2×2). Results reveal that the interaction of conditions significantly influences trust and UX of the vehicle system. Our conclusion is that all aspects of vehicle design accumulate into the perception of, and trust in, the overall system.
Sharing economy services have become increasingly popular. In addition to various well-known for-profit activities in this space (e.g., ride and apartment sharing), many community groups and non-profit organizations offer collections of shared things (e.g., books, tools) that explicitly aim to benefit local communities. We expect that both non-profit and for-profit approaches will see increased use in the future. To support designers in devising new sharing economy services, we developed the Sharing Economy Design Cards, a design toolkit in the form of a card deck. We present two deployments of the cards: (1) in individual interviews with 16 designers and sharing economy domain experts; and (2) in two workshops with 5 participants each. Our findings show that the use of the cards not only facilitates the creation of future sharing platforms and services in a collaborative setting, but also helps to evaluate existing sharing economy services as an individual activity.
The availability of digital devices operated by voice is expanding rapidly. However, the applications of voice interfaces are still restricted: speaking in public places becomes an annoyance to the surrounding people, secret information should not be uttered aloud, and environmental noise may reduce the accuracy of speech recognition. To address these limitations, we propose a system that detects a user’s unvoiced utterances. From internal information observed by an ultrasonic imaging sensor attached to the underside of the jaw, our system recognizes the utterance content without the user voicing any sound. Our deep neural network model obtains acoustic features from a sequence of ultrasound images. We confirmed that audio signals generated by our system can control existing smart speakers. We also observed that users can adjust their oral movements over time to improve recognition accuracy.
Assessing the Accuracy of Point & Teleport Locomotion with Orientation Indication for Virtual Reality using Curved Trajectories
Room-scale Virtual Reality (VR) systems have arrived in users’ homes where tracked environments are set up in limited physical spaces. As most Virtual Environments (VEs) are larger than the tracked physical space, locomotion techniques are used to navigate in VEs. Currently, in recent VR games, point & teleport is the most popular locomotion technique. However, it only allows users to select the position of the teleportation and not the orientation that the user is facing after the teleport. This results in users having to manually correct their orientation after teleporting, possibly getting entangled in the headset cable. In this paper, we introduce and evaluate three different point & teleport techniques that enable users to specify the target orientation while teleporting. The results show that, although the three teleportation techniques with orientation indication increase the average teleportation time, they lead to a decreased need for correcting the orientation after teleportation.
Introducing interactivity to films has proven a longstanding and difficult challenge due to their narrative-driven, linear and theatre-based nature. Previous research has suggested that Brain-Computer Interfaces (BCI) may be a promising approach but also revealed a tension between being immersed in the film and thinking about control. We report a performance-led and in-the-wild study of a BCI film called The MOMENT covering its design rationale and how it was experienced by the public as controllers, non-controllers and repeat viewers. Our findings suggest that BCI movies should be designed to be credibly controllable, generate personal versions, be watchable as linear films, encourage repeat viewing and fit the medium of cinema. They also reveal how viewers appreciated the sense of editing their own personal cuts, suggesting a new stance on introducing interactivity into lean-back media in which filmmakers release editorial control to users to make their own versions.
The Invisible Potential of Facial Electromyography: A Comparison of EMG and Computer Vision when Distinguishing Posed from Spontaneous Smiles
Positive experiences are a success metric in product and service design. Quantifying smiles is a method of assessing them continuously. Smiles are usually a cue of positive affect, but they can also be fabricated voluntarily. Automatic detection is a promising complement to human perception in identifying the differences between smile types. Computer vision (CV) and distal facial electromyography (EMG) have proven successful in this task. This is the first study to use a wearable EMG device that does not obstruct the face to compare the performance of CV and EMG measurements in distinguishing between posed and spontaneous smiles. The results showed that EMG has the advantage of identifying covert behavior not available through vision. Moreover, CV appears able to identify visible dynamic features that human judges cannot account for. This sheds light on the role of non-observable behavior in distinguishing affect-related smiles from polite positive affect displays.
This paper investigates a simple form of customization: giving users the choice to enable or disable gamification. We present a study (N=77) in the context of image tagging, in which a gamification approach was shown to be effective in previous work. In our case, some participants could enable or disable gamification after they had experienced the task with and without it. Other participants had no choice and did the task with or without game elements. The results indicate that those who are not attracted by the game elements can be motivated to tag more when given this choice. In contrast, those who like the elements are not affected by it. This suggests that systems should provide the option to disable gamification in the absence of more sophisticated tailoring.
“Pretty Close to a Must-Have”: Balancing Usability Desire and Security Concern in Biometric Adoption
We report on a qualitative inquiry among security-expert and non-expert mobile device users about the adoption of biometric authentication, using semi-structured interviews (n=38; 19 experts, 19 non-experts). Security experts more readily adopted biometrics than non-experts but also harbored greater distrust towards its use for sensitive transactions, feared biometric signature compromise, and in some cases distrusted newer facial recognition methods. Both groups harbored misconceptions, such as misunderstanding of the functional role of biometrics in authentication, and were about equally likely to have stopped using biometrics due to usability. Implications include the need for tailored training for security-informed advocates, better design for device sharing and co-registration, and consideration for usability needs in work environments. Refinement of these features would remove perceived obstacles to ubiquitous computing among the growing population of mobile technology users sensitized to security risk.
Online discussion websites, such as Reddit’s r/science forum, have the potential to foster science communication between researchers and the general public. However, little is known about who participates, what is discussed, and whether such websites are successful in achieving meaningful science discussions. To find out, we conducted a mixed-methods study analyzing 11,859 r/science posts and conducting interviews with 18 community members. Our results show that r/science facilitates rich information exchange and that the comments section provides a unique science communication document that guides engagement with scientific research. However, this community-sourced science communication comes largely from a knowledgeable public. We conclude with design suggestions for a number of critical problems that we uncovered: addressing the problem of topic newsworthiness and balancing broader participation and rigor.
Multi-plié: A Linear Foldable and Flattenable Interactive Display to Support Efficiency, Safety and Collaboration
We present the design concept of an accordion-fold interactive display to address the limits of touch-based interaction in airliner cockpits. Based on an analysis of pilot activity, tangible design principles for this design concept are identified. Two resulting functional prototypes are explored during participatory workshops with pilots, using activity scenarios. This exploration validated the design concept by revealing its ability to match pilot responsibilities in terms of safety, efficiency and collaboration. It provides an efficient visual perception of the system for real-time collaborative operations and tangible interaction to strengthen the perception of action and to manage safety through anticipation and awareness. The design work and insights enabled us to further specify our needs regarding flexible screens. They also helped us better characterize the design concept as based on continuity of a developed surface, predictability of aligned folds and pleat-face roles, embodied interactive properties, and flexibility through affordable reconfigurations.
Human trafficking and forced labor are global issues affecting millions of people around the world. This paper describes an initiative that we are currently undertaking to understand the role technology can play in supporting the critical agency of migrant workers in these situations of severe exploitation. Building on five consultations with more than 170 direct and indirect stakeholders in Thailand, the paper presents the co-design, development, and evaluation of Apprise, a mobile app to support the identification of victims of human trafficking, using a Value Sensitive Design approach. It also provides a critical reflection on the use of digital technology in the initial screening of potential victims of human trafficking, to understand in what ways Apprise can support the critical agency of migrant workers in vulnerable situations.
We present JigFab, an integrated end-to-end system that supports casual makers in designing and fabricating constructions with power tools. Starting from a digital version of the construction, JigFab achieves this by generating various types of constraints that configure and physically aid the movement of a power tool. Constraints are generated for every operation and are custom to the workpiece. Constraints are laser cut and assembled together with predefined parts to reduce waste. JigFab’s constraints are used according to an interactive step-by-step manual. JigFab internalizes all the required domain knowledge for designing and building intricate structures, consisting of various types of finger joints, tenon & mortise joints, grooves, and dowels. Building such structures is normally reserved for artisans or automated with advanced CNC machinery.
This paper explores teachers’ expected and perceived gains from classroom participation in design projects. The results indicate that teachers hope the experience will be fun for the children, and that it will increase both the children’s and their own knowledge about technology. Although they consider learning goals important, these do not necessarily have to be communicated to the children, since the teachers find that the children learn several skills anyway. However, early involvement in the definition of learning goals could make participation more beneficial. The teachers also see several gains from participation for themselves, especially related to using a design approach in the classroom. We discuss the implications of these findings and suggest a way to increase the gains for both children and teachers by considering the opportunity to use classroom participation as a way to support teachers’ competence development, thereby fulfilling the promise of mutual learning as advocated in Participatory Design.
Human performance augmentation through technology has been a recurring theme in science and culture, aiming to increase human capabilities and accessibility. We investigate a related concept: virtual performance augmentation (VPA), using VR to give users the illusion of greater capabilities than they actually have. We propose a method for VPA of running and jumping, based on in place movements, and studied its effects in a VR exergame. We found that in place running and jumping in VR can be used to create a somewhat natural experience and can elicit medium to high physical exertion in an immersive and intrinsically motivating manner. We also found that virtually augmenting running and jumping can increase intrinsic motivation, perceived competence and flow, and may also increase motivation for physical activity in general. We discuss implications of VPA for safety and accessibility, with initial evidence suggesting that VPA may help users with physical impairments enjoy the benefits of exergaming.
WhatsApp, as the world’s most popular messaging application, offers significant opportunities for improving the reach and effectiveness of engagement projects. In collaboration with the International Federation of Red Cross and Red Crescent Societies (IFRC) we designed WhatFutures, a collaborative future forecasting engagement for global youth using WhatsApp. WhatFutures was successfully deployed with 487 players across 5 countries (Kenya, Bulgaria, Finland, Australia and Hong Kong) to inform strategic change within the IFRC. Based on our analysis of the activity – including 16,100 messages, 95 multimedia artifacts, and a post-engagement survey – we present a reflection upon the design decisions underpinning WhatFutures and identify how decisions made around group structures, processes and externalization of outputs influenced engagement and data quality. We conclude with the wider implications of our findings for the design of engagements that best utilize the affordances of existing messaging applications.
Volunteer Moderators in Twitch Micro Communities: How They Get Involved, the Roles They Play, and the Emotional Labor They Experience
The ability to engage in real-time text conversations is an important feature on live streaming platforms. The moderation of this text content relies heavily on the work of unpaid volunteers. This study reports on interviews with 20 people who moderate for Twitch micro communities, defined as channels that are built around a single or group of streamers, rather than the broadcast of an event. The study identifies how people become moderators, their different styles of moderating, and the difficulties that come with the job. In addition to the hardships of dealing with negative content, moderators also have complex interpersonal relationships with the streamers and viewers, where the boundaries between emotional labor, physical labor, and fun are intertwined.
Communicating uncertainty has been shown to provide positive effects on user understanding and decision-making. Surprisingly, however, most personal health tracking applications fail to disclose the accuracy of their measurements and predictions. In the case of fertility tracking applications (FTAs), inaccurate predictions have already led to numerous unwanted pregnancies and lawsuits. However, integrating uncertainty into FTAs is challenging: prediction accuracy is hard to understand and communicate, and its effect on users’ trust and behavior is not well understood. We created a prototype for uncertainty visualizations for FTAs and evaluated it in a four-week field study with real users and their own data (N=9). Our results uncover far-reaching effects of communicating uncertainty: for example, users interpreted prediction accuracy as a proxy for their cycle health and as a security indicator for contraception. Displaying predicted and detected fertile phases next to each other helped users to understand uncertainty without negative emotional effects.
We propose a new text layout that facilitates reading comprehension. By sequentially fading out characters sentence-by-sentence from the beginning of each paragraph, we highlight the paragraph structure of the entire text and the relative positions of the sentences. To evaluate the effectiveness of the paragraph-based faded text for reading comprehension, we measure comprehension, eye movements, and recognition for both the proposed method and a conventional standard layout. With the proposed method, rates of correct answers to text comprehension questions improve. Moreover, the proposed method leads to slower reading speeds and better recognition rates for the first sentences of paragraphs, which are displayed in a relatively darker weight. With the paragraph-based faded text, the reader is naturally guided to pay attention to the first sentence of each paragraph, suggesting that this reading style could result in more accurate text comprehension.
While previous work on smartphone (un)locking has revealed real world usage patterns, several aspects still need to be explored. In this paper, we fill one of these knowledge gaps: the interplay between age and smartphone authentication behavior. To do this, we performed a two-month long field study (N = 134). Our results indicate that there are indeed significant differences across age. For instance, younger participants were more likely to use biometric unlocking mechanisms and older participants relied more on auto locks.
Coding for Outdoor Play: a Coding Platform for Children to Invent and Enhance Outdoor Play Experiences
Outdoor play is in decline, and with it the benefits it brings to children’s development. Coding, a typically indoor, screen-based activity, can potentially enrich outdoor play by serving as a rule-making medium. We present a coding platform that controls a programmable hardware device, enabling children to technologically enhance their outdoor play experiences by inventing game ideas, coding them, and playing their games together with their friends. In the evaluation study, 24 children used the system to invent and play outdoor games. Results show children are able to bridge the different domains of coding and outdoor play. They used the system to modify traditional games and invent new ones, enriching their outdoor experience. Children merged computational concepts with physical game elements, integrated physical outdoor properties as variables in their code, and were excited to see their code come to life. We conclude children can use coding to express their ideas by creating technologically enhanced outdoor play experiences.
Expert interaction techniques like hotkeys are efficient, but poorly adopted because they are hard to learn. HotStrokes removes the need for learning arbitrary mappings of commands to hotkeys. A user enters a HotStroke by holding a modifier key, then gesture typing a command name on a laptop trackpad as if on an imaginary virtual keyboard. The gestures are recognized using an adaptation of the SHARK2 algorithm with a new spatial model and a refined method for dynamic suggestions. A controlled experiment shows HotStrokes effectively augments the existing “menu and hotkey” command activation paradigm. Results show the method is efficient by reducing command activation time by 43% compared to linear menus. The method is also easy to learn with a high adoption rate, replacing 91% of linear menu usage. Finally, combining linear menus, hotkeys, and HotStrokes leads to 24% faster command activation overall.
Leaderboards are a workhorse of the gamification literature. While the effect of a leaderboard has been well studied, there is much less evidence of how one’s peer group affects the treatment effect of a leaderboard. Through a pre-registered field experiment involving more than 1,000 users on an online movie recommender website, we expose users to leaderboards, but different sets of users are exposed to different peer groups. Contrary to what a standard behavioral model would predict, we find that a user’s contribution increases when their peers’ scores are more dispersed. We also find that decreasing average peer contributions motivates a user to contribute more. Moreover, these effects are themselves mediated by group size. This sheds new light on existing theories of motivation and demotivation with regard to leaderboards, and also illustrates the potential of using personalized leaderboards to increase contributions.
We have limited understanding of how older adults use smartphones, how their usage differs from younger users, and the causes for those differences. As a result, researchers and developers may miss promising opportunities to support older adults or offer solutions to unimportant problems. To characterize smartphone usage among older adults, we collected iPhone usage data from 84 healthy older adults over three months. We find that older adults use fewer apps, take longer to complete tasks, and send fewer messages. We use cognitive test results from these same older adults to then show that up to 79% of these differences can be explained by cognitive decline, and that we can predict cognitive test performance from smartphone usage with 83% ROCAUC. While older adults differ from younger adults in app usage behavior, the “cognitively young” older adults use smartphones much like their younger counterparts. Our study suggests that to better support all older adults, researchers and developers should consider the full spectrum of cognitive function.
Although voice forums are widely used to enable marginalized communities to produce, consume, and share information, their financial sustainability is a key concern among HCI4D researchers and practitioners. We present ReCall, a crowdsourcing marketplace accessible via phone calls where low-income rural residents vocally transcribe audio files to gain free airtime to participate in voice forums as well as to earn money. We conducted a series of experimental and usability evaluations with 28 low-income people in rural India to examine the effect of phone types, channel types, and review modes on speech transcription performance. We then deployed ReCall for two weeks to 24 low-income rural residents who placed 5,879 phone calls, completed 29,000 micro tasks to yield transcriptions with 85% accuracy, and earned INR 20,500. Our mixed-methods analysis indicates that each minute of crowd work on ReCall gives users eight minutes of free airtime on another voice forum, and thus illustrates a way to address the financial sustainability of voice forums.
Because smartwatches are constantly worn in direct contact with the skin, the wrist is an ideal location for always-available haptic notifications. With the wrist strap, haptic feedback can be extended to the full space around the wrist to provide more spatial and enriched feedback. With ThermalBracelet, we investigate thermal feedback as a haptic feedback modality around the wrist. We present three studies that led to the development of a smartwatch-integratable thermal bracelet that stimulates six locations around the wrist. First, our initial evaluation reports on the selection of the thermal module configuration. Second, with the selected six-module configuration, we explore its usability in real-world scenarios such as walking and reading. Third, we investigate its capability of providing spatio-temporal feedback while users are engaged in distracting tasks. Finally, we present application scenarios that demonstrate its usability.
HaptiVec: Presenting Haptic Feedback Vectors in Handheld Controllers using Embedded Tactile Pin Arrays
HaptiVec is a new haptic feedback paradigm for handheld controllers which allows users to feel directional haptic pressure vectors on their fingers and hands while interacting with virtual environments. We embed a 3 by 5 tactile pin array (with an average pin spacing of 25 mm) into the handles of two custom VR-type controllers. By presenting directional pressure vectors in eight cardinal directions (N, NE, E, SE, S, SW, W, NW) to users without prior training, they were able to distinguish the correct direction with an accuracy of at least 79%. We illustrate two applications where our device enhances virtual experiences over traditional vibrotactile feedback. In the first application, through the classic first-person shooter Doom, we demonstrate that users can receive directional pressure feedback corresponding to the direction of incident enemy projectiles. In the second application, we demonstrate how our controller can create a more immersive experience by allowing the user to feel their virtual climate by randomizing the directional vectors and presenting the user with “haptic rain” which adapts with the intensity of the rainfall.
Visual blends are an advanced graphic design technique to draw attention to a message. They combine two objects in a way that is novel and useful in conveying a message symbolically. This paper presents VisiBlends, a flexible workflow for creating visual blends that follows the iterative design process. We introduce a design pattern for blending symbols based on principles of human visual object recognition. Our workflow decomposes the process into both computational techniques and human microtasks. It allows users to collaboratively generate visual blends with steps involving brainstorming, synthesis, and iteration. An evaluation of the workflow shows that decentralized groups can generate blends in independent microtasks, co-located groups can collaboratively make visual blends for their own messages, and VisiBlends improves novices’ ability to make visual blends.
Evaluating the Combination of Visual Communication Cues for HMD-based Mixed Reality Remote Collaboration
Many researchers have studied various visual communication cues (e.g. pointer, sketching, and hand gesture) in Mixed Reality remote collaboration systems for real-world tasks. However, the effect of combining them has not been well explored. We studied the effect of these cues in four combinations: hand only, hand + pointer, hand + sketch, and hand + pointer + sketch, with three problem tasks: Lego, Tangram, and Origami. The study results showed that participants completed the task significantly faster and reported significantly higher usability when the sketch cue was added to the hand gesture cue, but not when the pointer cue was added. Participants also preferred the combinations including hand and sketch cues over the other combinations. However, using additional cues (pointer or sketch) increased the perceived mental effort and did not improve the feeling of co-presence. We discuss the implications of these results and future research directions.
The correct execution of exercises, such as squats and dead-lifts, is essential to prevent various bodily injuries. Existing solutions either rely on expensive motion tracking systems or on multiple Inertial Measurement Units (IMUs) that require extensive set-up and individual calibration. This paper introduces a proof of concept, GymSoles, an insole prototype that provides feedback on the Centre of Pressure (CoP) at the feet to assist users with maintaining the correct body posture while performing squats and dead-lifts. GymSoles was evaluated with 13 users in three conditions: 1) no feedback, 2) vibrotactile feedback, and 3) visual feedback. The study showed that solely providing feedback on the current CoP results in a significantly improved body posture.
Affine Transformations (ATs) often escape an intuitive approach due to their high complexity. Therefore, we developed GEtiT, which directly encodes ATs in its game mechanics and scales the knowledge’s level of abstraction. This results in an intuitive application as well as audiovisual presentation of ATs, and hence in effective knowledge learning. We also developed a specific Virtual Reality (VR) version to explore the effects of immersive VR on the learning outcomes. This paper presents our approach of directly encoding abstract knowledge in game mechanics, the conceptual design of GEtiT, and its technical implementation. Both versions are compared with regard to their usability in a user study. The results show that both GEtiT versions induce a high degree of flow and elicit good intuitive use. They validate the effectiveness of the design and the resulting knowledge application requirements. Participants favored GEtiT VR, suggesting a potentially higher learning quality when using VR.
Mid-air 3D sketching has mainly been explored in Virtual Reality (VR) and typically requires special hardware for motion capture and immersive, stereoscopic displays. Recently developed motion tracking algorithms allow real-time tracking of mobile devices and have enabled a few mobile applications for 3D sketching in Augmented Reality (AR). However, these are suitable only for simple drawings, since they do not address the special challenges of mobile AR 3D sketching, including the lack of stereo display, a narrow field of view, and the coupling of 2D input, 3D input, and display. To address these issues, we present Mobi3DSketch, which integrates multiple sources of input with tools, mainly different versions of 3D snapping and planar/curved surface proxies. Our multimodal interface supports both absolute and relative drawing, allowing easy creation of 3D concept designs in situ. The effectiveness and expressiveness of Mobi3DSketch are demonstrated via a pilot study.
Prototyping electronic circuits is an increasingly popular activity, supported by researchers, who develop toolkits to improve the design, debugging, and fabrication of electronics. Although past work mainly dealt with circuit topology, in this paper we propose a system for determining or tuning the values of the circuit components. Based on the results of a formative study with seventeen makers, we designed VirtualComponent, a mixed-reality tool that allows users to digitally place electronic components on a real breadboard, tune their values in software, and see these changes applied to the physical circuit in real-time. VirtualComponent is composed of a set of plug-and-play modules containing banks of components, and a custom breadboard managing the connections and components’ values. Through demonstrations and the results of an informal study with twelve makers, we show that VirtualComponent is easy to use and allows users to test components’ value configurations with little effort.
HCI scholars have become increasingly interested in describing the complex nature of UX practice. In parallel, HCI and STS scholars have sought to describe the ethical and value-laden relationship between designers and design outcomes. However, little research describes the ethical engagement of UX practitioners as a form of design complexity, including the multiple mediating factors that impact ethical awareness and decision-making. In this paper, we use a practice-led approach to describe ethical complexity, presenting three varied cases of UX practitioners based on in situ observations and interviews. In each case, we describe salient factors relating to ethical mediation, including organizational practices, self-driven ethical principles, and unique characteristics of specific projects the practitioner is engaged in. Using the concept of mediation from activity theory, we provide a rich account of practitioners’ ethical decision making. We propose future work on ethical awareness and design education based on the concept of ethical mediation.
Asking pairwise comparison questions is common. Yet, we often find ourselves comparing apples and oranges — the two entities of interest are not readily comparable. To understand how technologies can extend our capabilities to conduct pairwise comparisons during data analysis, we analyzed pairwise comparison questions collected from crowd workers and propose a taxonomy of pairwise comparisons. We demonstrate how the taxonomy can be adopted by incorporating pairwise comparison capabilities into Duo, a spreadsheet application that supports comparing two groups of records in a data table. Duo decomposes a pairwise comparison question into rules and showcases sloppy rules, a query technique for specifying pairwise comparisons. We conducted a user study comparing sloppy rules and natural language. The findings suggest that for easier pairwise comparison tasks, the two techniques are comparable in efficiency and preference and that for more difficult pairwise comparison tasks, sloppy rules allow faster specification and are more preferable.
“Everyone Has Some Personal Stuff”: Designing to Support Digital Privacy with Shared Mobile Phone Use in Bangladesh
People in South Asia frequently share a single device among multiple individuals, resulting in digital privacy challenges. This paper explores a design concept that aims to mitigate some of these challenges through a ‘tiered’ privacy model. Using this model, a person creates a ‘shared’ account that contains data they are willing to share and that is assigned a password that will be shared. Simultaneously, they create a separate ‘secret’ account that contains data they prefer to keep secret and that uses a password they do not share with anyone. When a friend or family member asks to check their device, the user can tell them the password for their shared account, with their private data secure in the secret account that the other person is unaware of. We explore the benefits and trade-offs of our design through a three-week deployment with 21 participants in Bangladesh, presenting findings that show how our work aids digital privacy while also exposing the challenges that remain.
This paper presents a study of mobile phone use by people settling in a new land to access state provided digital services. It shows that digital literacy and access to technology are not the only resources and capabilities needed to successfully access digital services and do not guarantee a straightforward resettlement process. Using creative engagement methods, the research involved 132 “newcomers” seeking to settle in Sweden. Ribot and Peluso’s theory of access (2003) was employed to examine the complex web of access experienced by our participants. We uncover that when communities are dealing with high levels of precarity, their primary concerns are related to accessing the benefits of a service, rather than controlling access. Broadening the HCI framework, the paper concludes that a sociotechnical model of access needs to connect access control and access benefit to facilitate the design of an effective digital service.
While the use of ad blockers prevents negative impacts of advertising on user experience, it poses a serious threat to the business model of commercial web services and freely available content on the web. As an alternative, we investigate the user enjoyment and the advertising effectiveness of playfully deactivating online ads. We created eight game concepts, performed a pre-study assessing the users’ perception of them (N=50) and implemented three well-perceived ones. In a lab study (N=72), we found that these game concepts are more enjoyable than deactivating ads without game elements. Additionally, one game concept was even preferred over using an ad blocker. Notably, playfully deactivating ads was shown to have a positive impact on users’ brand and product memory, enhancing the advertising effectiveness. Thus, our results indicate that playfully deactivating ads is a promising way of bridging the gap between user enjoyment and effective advertising.
Dealing with fear of falling is a challenge in sport climbing. Virtual reality (VR) research suggests that using physical and reality-based interaction increases presence in VR. In this paper, we present a study that investigates the influence of physical props on presence, stress, and anxiety in a VR climbing environment involving whole-body movement. To help climbers overcome fear of falling, we compared three different conditions: climbing in reality at 10 m height, physical climbing in VR (with props attached to the climbing wall), and virtual climbing in VR using game controllers. From subjective reports and biosignals, our results show that climbing with props in VR increases the anxiety and sense of realism in VR for sport climbing. This suggests that VR in combination with physical props is an effective simulation setup to induce the sense of height.
With recent interest in shape-changing interfaces, material-driven design, wearable technologies, and soft robotics, digital fabrication of soft actuatable material is increasingly in demand. Much of this research focuses on elastomers or non-stretchy air bladders. Computationally-controlled machine knitting offers an alternative fabrication technology which can rapidly produce soft textile objects that have a very different character: breathable, lightweight, and pleasant to the touch. These machines are well established and optimized for the mass production of garments, but compared to other digital fabrication techniques such as CNC machining or 3D printing, they have received much less attention as general purpose fabrication devices. In this work, we explore new ways to employ machine knitting for the creation of actuated soft objects. We describe the basic operation of this type of machine, then show new techniques for knitting tendon-based actuation into objects. We explore a series of design strategies for integrating tendons with shaping and anisotropic texture design. Finally, we investigate different knit material properties, including considerations for motor control and sensing.
This paper investigates how to sketch NLP-powered user experiences. Sketching is a cornerstone of design innovation. When sketching, designers rapidly experiment with a number of abstract ideas using simple, tangible instruments such as drawings and paper prototypes. Sketching NLP-powered experiences, however, presents challenges: how can abstract language interaction be visualized? How can a broad range of technically feasible intelligent functionalities be ideated? As a first step towards understanding these challenges, we present a first-person account of our sketching process when designing intelligent writing assistance. We detail the challenges we encountered and emergent solutions, such as a new format of wireframe for sketching language interactions and a new wizard-of-oz-based NLP rapid prototyping method. Drawing on these findings, we discuss the importance of abstraction in sketching and other implications.
Engagement with Mental Health Screening on Mobile Devices: Results from an Antenatal Feasibility Study
Perinatal depression (PND) affects up to 15% of women within the United Kingdom and has a lasting impact on a woman’s quality of life, birth outcomes and her child’s development. Suicide is the leading cause of maternal mortality. However, it is estimated that at least 50% of PND cases go undiagnosed. This paper presents the results of the first feasibility study to examine the potential of mobile devices to engage women in antenatal mental health screening. Using a mobile application, 254 women attending 14 National Health Service midwifery clinics provided 2,280 momentary and retrospective reports of their wellbeing over a 9-month period. Women spoke positively of the experience, installing and engaging with this technology regardless of age, education, wellbeing, number of children, marital or employment status, or past diagnosis of depression. 39 women reported a risk of depression, self-harm or suicide; two-thirds of whom were not identified by screening in-clinic.
Recent work established that it is possible for human artists to encode information into hand-drawn markers, but it is difficult to do when simultaneously maintaining aesthetic quality. We present two methods for relieving the mental burden associated with encoding, while allowing an artist to draw as freely as possible. A ‘Helper Overlay’ guides the artist with real-time feedback indicating where visual features should be added or removed, and an ‘Autocomplete Tool’ directly adds necessary features to the drawing for the artist to touch up. Both methods are enabled by a two-part algorithm that uses a tree-search for finding ‘major’ changes and a dynamic programming method for finding the minimum number of ‘minor’ changes. A 24-person study demonstrates that a majority of participants prefer both tools over previous methods of manual encoding, with the Helper Overlay being the more popular of the two.
Visualizations have a potentially enormous influence on how data are used to make decisions across all areas of human endeavor. However, it is not clear how this power connects to ethical duties: what obligations do we have when it comes to visualizations and visual analytics systems, beyond our duties as scientists and engineers? Drawing on historical and contemporary examples, I address the moral components of the design and use of visualizations, identify some ongoing areas of visualization research with ethical dilemmas, and propose a set of additional moral obligations that we have as designers, builders, and researchers of visualizations.
Sliders are one of the most fundamental components used in touchscreen user interfaces (UIs). When entering data using a slider, errors occur due, for example, to limits of visual perception, resulting in inputs that do not match what the user intended. However, it is unclear whether the errors occur uniformly across the full range of the slider or whether there are systematic offsets. We conducted a study to assess the errors occurring when entering values with horizontal and vertical sliders as well as two common visual styles. Our results reveal significant effects of slider orientation and style on the precision of the entered values. Furthermore, we identify systematic offsets that depend on the visual style and the target value. As the errors are partially systematic, they can be compensated for to improve users’ precision. Our findings provide UI designers with data to optimize user experiences in the wide variety of application areas where slider-based touchscreen input is used.
Today’s smartphone notification systems are incapable of determining whether a notification has been successfully perceived without explicit interaction from the user. If the system incorrectly assumes that a notification has not been perceived, it may repeat it redundantly, disrupting the user and others (e.g., phone ringing). Or, if it incorrectly assumes that a notification was perceived, and therefore fails to repeat it, the notification will be missed altogether (e.g., text message). Results from a laboratory study confirm, for the first time, that both vibrotactile and auditory smartphone notifications induce skin conductance responses (SCR), that the induced responses differ from that of arbitrary stimuli, and that they could be employed to predict perception of smartphone notifications after their presentation using wearable sensors.
Exploring the Opportunities for Technologies to Enhance Quality of Life with People who have Experienced Vision Loss
Research predicts that 196 million people will be diagnosed with Age-Related Macular Degeneration (AMD) by 2020. People who experience AMD and other vision loss face barriers that affect their Quality of Life (QoL). People experience only modest improvement from technologies (e.g., screen readers, CCTV), tools (e.g., magnifying glasses, tactile buttons), and human help (e.g., friends, blindness organizations). Further, there are issues in accessing these resources based on one’s place of residence. To explore these challenges and determine design implications to support people who have experienced vision loss (PVL), we conducted a qualitative semi-structured interview study exploring QoL with 10 PVL. We uncovered themes of supporting creative work, recognizing the impact of living in a non-urban setting on QoL, and increasing efficiency at accomplishing tasks. We motivate the inclusion of PVL in the design process because they learned skills while sighted and are now low vision or blind.
Visualization of ranked lists is a common occurrence, but many in-the-wild solutions fly in the face of vision science and visualization wisdom. For example, treemaps and bubble charts are commonly used for this purpose, despite the fact that the data is not hierarchical and that length is easier to perceive than area. Furthermore, several new visual representations have recently been suggested in this area, including wrapped bars, packed bars, piled bars, and Zvinca plots. To quantify the differences and trade-offs for these ranked-list visualizations, we here report on a crowdsourced graphical perception study involving six such visual representations, including the ubiquitous scrolled barchart, in three tasks: ranking (assessing a single item), comparison (two items), and average (assessing global distribution). Results show that wrapped bars may be the best choice for visualizing ranked lists, and that treemaps are surprisingly accurate despite the use of area rather than length to represent value.
Designers are often discouraged from creating data visualizations that omit or distort information, because they can easily be misleading. However, the same representations that could be used to deceive can provide benefits when chosen to appropriately align with user tasks. We present an interaction technique, Perceptual Glimpses, which allows for the transparent presentation of so-called ‘deceptive’ views of information that are made temporary using quasimodes. When presented using Perceptual Glimpses, message-level exaggeration caused by a truncated axis on a bar chart was reduced under some conditions, but users require guidance to avoid errors, and view presentation order may affect trust. When Perceptual Glimpses was extended to display a range of views that might otherwise be deceptive or difficult to understand if shown out of context, users were able to understand and leverage these transformations to perform a range of low-level tasks. Design recommendations and examples suggest extensions of the technique.
You `Might’ Be Affected: An Empirical Analysis of Readability and Usability Issues in Data Breach Notifications
Data breaches place affected individuals at significant risk of identity theft. Yet, prior studies have shown that many consumers do not take protective actions after receiving a data breach notification from a company. We analyzed 161 data breach notifications sent to consumers with respect to their readability, structure, risk communication, and presentation of potential actions. We find that notifications are long and require advanced reading skills. Many companies downplay or obscure the likelihood of the receiver being affected by the breach and associated risks. Moreover, potential actions and offered compensations are frequently described in lengthy paragraphs instead of clearly listed. Little information is provided regarding an action’s urgency and effectiveness; little guidance is provided on which actions to prioritize. Based on our findings, we provide recommendations for designing more usable and informative data breach notifications that could help consumers better mitigate the consequences of being affected by a data breach.
Relatively few studies of accessibility and transportation for people with vision impairments have investigated forms of transportation besides public transportation and walking. To develop a more nuanced understanding of this context, we turn to ridesharing, an increasingly used mode of transportation. We interviewed 16 visually-impaired individuals about their active use of ridesharing services like Uber and Lyft. Our findings show that, while people with vision impairments value independence, ridesharing involves building trust across a complex network of stakeholders and technologies. This data is used to start a discussion on how other systems can facilitate trust for people with vision impairments by considering the role of conversation, affordances of system incentives, and increased agency.
We present an empirical comparison of eleven bare hand, mid-air mode-switching techniques suitable for virtual reality in two experiments. The first evaluates seven techniques spanning dominant and non-dominant hand actions. Techniques represent common classes of actions selected by a methodical examination of 56 examples of prior art. The standard “subtraction method” protocol is adapted for 3D interfaces, with two baseline selection methods, bare hand pinch and device controller button. A second experiment with four techniques explores more subtle dominant-hand techniques and the effect of using a dominant hand device for selection. Results provide guidance to practitioners when choosing bare hand, mid-air mode-switching techniques, and for researchers when designing new mode-switching methods in VR.
Students with visual impairments struggle to learn various concepts in the academic curriculum because diagrams, images, and other visuals are not accessible to them. To address this, researchers have designed interactive 3D printed models (I3Ms) that provide audio descriptions when a user touches components of a model. In prior work, I3Ms were designed on an ad hoc basis, and it is currently unknown what general guidelines produce effective I3M designs. To address this gap, we conducted two studies with Teachers of the Visually Impaired (TVIs). First, we led two design workshops with 35 TVIs, who modified sample models and added interactive elements to them. Second, we worked with three TVIs to design three I3Ms in an iterative instructional design process. At the end of this process, the TVIs used the I3Ms we designed to teach their students. We conclude that I3Ms should (1) have effective tactile features (e.g., distinctive patterns between components), (2) contain both auditory and visual content (e.g., explanatory animations), and (3) consider pedagogical methods (e.g., overview before details).
Home is a person’s castle, a private and protected space. Internet-connected devices such as locks, cameras, and speakers might make a home “smarter” but also raise privacy issues because these devices may constantly and inconspicuously collect, infer or even share information about people in the home. To explore user-centered privacy designs for smart homes, we conducted a co-design study in which we worked closely with diverse groups of participants in creating new designs. This study helps fill a gap in the literature between studies of users’ privacy concerns and privacy tools designed solely by experts. Our participants’ privacy designs often relied on simple strategies, such as data localization, disconnection from the Internet, and a private mode. From these designs, we identified six key design factors: data transparency and control, security, safety, usability and user experience, system intelligence, and system modality. We discuss how these factors can guide design for smart home privacy.
Understanding Perceptions of Problematic Facebook Use: When People Experience Negative Life Impact and a Lack of Control
While many people use social network sites to connect with friends and family, some feel that their use is problematic, seriously affecting their sleep, work, or life. Pairing a survey of 20,000 Facebook users measuring perceptions of problematic use with behavioral and demographic data, we examined Facebook activities associated with problematic use as well as the kinds of people most likely to experience it. People who feel their use is problematic are more likely to be younger, male, and going through a major life event such as a breakup. They spend more time on the platform, particularly at night, and spend proportionally more time looking at profiles and less time browsing their News Feeds. They also message their friends more frequently. While they are more likely to respond to notifications, they are also more likely to deactivate their accounts, perhaps in an effort to better manage their time. Further, they are more likely to have seen content about social media or phone addiction. Notably, people reporting problematic use rate the site as more valuable to them, highlighting the complex relationship between technology use and well-being. A better understanding of problematic Facebook use can inform the design of context-appropriate and supportive tools to help people gain more control.
Split keyboards are widely used on hand-held touchscreen devices (e.g., tablets). However, typing on a split keyboard often requires eye movement and attention switching between the two halves of the keyboard, which slows users down and increases fatigue. We explore peripheral typing, a superior typing mode in which a user focuses her visual attention on the output text and keeps the split keyboard in peripheral vision. Our investigation showed that peripheral typing reduced attention switching, enhanced user experience and increased overall performance (27 WPM, 28% faster) over the typical eyes-on typing mode. This typing mode can be well supported by accounting for typing behavior in statistical decoding. Based on our study results, we designed GlanceType, a text entry system that supports both peripheral and eyes-on typing modes for realistic typing scenarios. Our evaluation showed that peripheral typing not only coexisted well with the existing eyes-on typing, but also substantially improved text entry performance. Overall, peripheral typing is a promising typing mode, and supporting it would significantly improve text entry performance on a split keyboard.
Remote Collaboration using Virtual Reality (VR) and Augmented Reality (AR) has recently become a popular way for people from different places to work together. Local workers can collaborate with remote helpers by sharing 360-degree live video or a 3D virtual reconstruction of their surroundings. However, each of these techniques has benefits and drawbacks. In this paper we explore mixing 360 video and 3D reconstruction together for remote collaboration, preserving the benefits of both systems while reducing the drawbacks of each. We developed a hybrid prototype and conducted a user study to compare the benefits and problems of using 360 or 3D alone, to clarify the need for mixing the two, and to evaluate the prototype system. We found participants performed significantly better on collaborative search tasks in 360 and felt higher social presence, yet 3D also showed potential to complement it. Participant feedback collected after trying our hybrid system provided directions for improvement.
To make evidence-based recommendations to decision-makers, researchers conducting systematic reviews and meta-analyses must navigate a garden of forking paths: a series of analytical decision-points, each of which has the potential to influence findings. To identify challenges and opportunities related to designing systems to help researchers manage uncertainty around which of multiple analyses is best, we interviewed 11 professional researchers who conduct research synthesis to inform decision-making within three organizations. We conducted a qualitative analysis identifying 480 analytical decisions made by researchers throughout the scientific process. We present descriptions of current practices in applied research synthesis and corresponding design challenges: making it more feasible for researchers to try and compare analyses, shifting researchers’ attention from rationales for decisions to impacts on results, and supporting communication techniques that acknowledge decision-makers’ aversions to uncertainty. We identify opportunities to design systems which help researchers explore, reason about, and communicate uncertainty in decision-making about possible analyses in research synthesis.
When a user needs to reposition the cursor during text editing, this is often done using the mouse. For experienced typists especially, the switch between keyboard and mouse can slow down the keyboard editing workflow considerably. To address this we propose ReType, a new gaze-assisted positioning technique combining keyboard with gaze input based on a new ‘patching’ metaphor. ReType allows users to perform some common editing operations while keeping their hands on the keyboard. We present the result of two studies. A free-use study indicated that ReType enhances the user experience of text editing. ReType was liked by many participants, regardless of their typing skills. A comparative user study showed that ReType is able to match or even beat the speed of mouse-based interaction for small text edits. We conclude that the gaze-augmented user interface can make common interactions more fluent, especially for professional keyboard users.
Desktop Electrospinning: A Single Extruder 3D Printer for Producing Rigid Plastic and Electrospun Textiles
We present a new type of 3D printer that combines rigid plastic printing with melt electrospinning, a technique that uses electrostatic forces to create thin fibers from a molten polymer. Our printer enables custom-shaped textile sheets (similar in feel to wool felt) to be produced alongside rigid plastic using a single material (i.e., PLA) in a single process. We contribute open-source firmware, hardware specifications, and printing parameters to achieve melt electrospinning. Our approach offers new opportunities for fabricating interactive objects and sensors that blend the flexibility, absorbency and softness of the produced electrospun textiles with the structure and rigidity of hard plastic for actuation, sensing, and tactile experiences.
With the recent advancement in computer vision, Artificial Intelligence (AI), and mobile technologies, it has become technically feasible for computerized Face Reading Technologies (FRTs) to learn about one’s health in everyday settings. However, how to design FRT-based applications for everyday health practices remains unexplored. This paper presents a design study with a technology probe called Faced, a mobile health checkup application based on the facial diagnosis method from Traditional Chinese Medicine (TCM). A field trial of Faced with 10 participants suggests potential usage modes and highlights a number of critical design issues in the use of FRTs for everyday health, including adaptability, practicality, sensitivity, and trustworthiness. We end by discussing design implications to address the unique challenges of fully integrating FRTs into everyday health practices.
Spatial layout is a key component in graphic design. While people who are blind or visually impaired (BVI) can use screen readers or magnifiers to access digital content, these tools fail to fully communicate the content’s graphic design information. Through semi-structured interviews and contextual inquiries, we identify the lack of this information and feedback as major challenges in understanding and editing layouts. Guided by these insights and a co-design process with a blind hobbyist web developer, we developed an interactive, multimodal authoring tool that lets blind people understand spatial relationships between elements and modify layout templates. Our tool automatically generates tactile print-outs of a web page’s layout, which users overlay on top of a tablet that runs our self-voicing digital design tool. We conclude with design considerations grounded in user feedback for improving the accessibility of spatially encoded information and developing tools for BVI authors.
The limitations of stereo display systems affect depth perception, e.g., due to the vergence-accommodation conflict or diplopia. We performed three studies to understand how stereo display deficiencies impact 3D pointing for targets in front of a screen and close to the user, i.e., in peripersonal space. Our first two experiments compare movements with and without a change in visual depth for virtual and physical targets, respectively. Results indicate that selecting targets along the depth axis is slower and has less throughput for virtual targets, while physical pointing demonstrates the opposite result. We then propose a new 3D extension for Fitts’ law that models the effect of stereo display deficiencies. Next, our third experiment verifies the model and measures more broadly how the change in visual depth between targets affects pointing performance in peripersonal space, confirming significant effects on time and throughput. Finally, we discuss implications for 3D user interface design.
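For context, the conventional Shannon formulation of Fitts’ law, which 3D extensions such as the one proposed in this paper typically build on, predicts movement time MT from target distance D and target width W. This baseline formula is standard background, not the paper’s new model:

```latex
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

Here a and b are empirically fitted constants, and the logarithmic term is the index of difficulty in bits; throughput is commonly computed as the index of difficulty divided by movement time.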
Studies have shown certain game tasks such as targeting to be negatively and significantly affected by latencies as low as 41ms. Therefore it is important to understand the relationship between local latency – delays between an input action and resulting change in the display – and common gaming tasks such as targeting and tracking. In addition, games now use a variety of input devices, including touchscreens, mice, tablets and controllers. These devices provide very different combinations of direct/indirect input, absolute/relative movement, and position/rate control, and are likely to be affected by latency in different ways. We performed a study evaluating and comparing the effects of latency across four devices (touchscreen, mouse, controller and drawing tablet) on targeting and interception tasks. We analyze both throughput and path characteristics, identify differences between devices, and provide design considerations for game designers.
What began as a quest for artificial general intelligence branched into several pursuits, including intelligent assistants developed by tech companies and task-oriented chatbots that deliver more information or services in specific domains. Progress quickened with the spread of low-latency networking, then accelerated dramatically a few years ago. In 2016, task-focused chatbots became a centerpiece of machine intelligence, promising interfaces that are more engaging than robotic answering systems and that can accommodate our increasingly phone-based information needs. Hundreds of thousands were built. Creating successful non-trivial chatbots proved more difficult than anticipated. Some developers now design for human-chatbot (humbot) teams, with people handling difficult queries. This paper describes the conversational agent space, difficulties in meeting user expectations, potential new design approaches, uses of human-bot hybrids, and implications for the ultimate goal of creating software with general intelligence.
Haptic feedback is used in cars to reduce visual inattention. While tactile feedback like vibration can be influenced by the car’s movement, thermal and cutaneous push feedback should be independent of such interference. This paper presents two driving simulator studies investigating novel tactile feedback on the steering wheel for navigation. First, devices on one side of the steering wheel were warmed, indicating the turning direction, while those on the other side were cooled. This thermal feedback was compared to audio. The thermal navigation led to 94.2% correct recognitions of warnings 200 m before the turn and to 91.7% correct turns. Speech had perfect recognition for both. In the second experiment, only the destination side was indicated thermally, and this design was compared to cutaneous push feedback. The simplified thermal feedback design did not increase recognition, but cutaneous push feedback had high recognition rates (100% for 200 m warnings, 98% for turns).
Standard controllers for virtual reality (VR) lack sophisticated means to convey a realistic, kinesthetic impression of size, resistance or inertia. We present the concept and implementation of Drag:on, an ungrounded shape-changing VR controller that provides dynamic passive haptic feedback based on drag, i.e. air resistance, and weight shift. Drag:on leverages the airflow occurring at the controller during interaction. By dynamically adjusting its surface area, the controller changes the drag and rotational inertia felt by the user. In a user study, we found that Drag:on can provide distinguishable levels of haptic feedback. Our prototype increases the haptic realism in VR compared to standard controllers and when rotated or swung improves the perception of virtual resistance. By this, Drag:on provides haptic feedback suitable for rendering different virtual mechanical resistances, virtual gas streams, and virtual objects differing in scale, material and fill state.
Smartphones are used predominantly one-handed, using the thumb for input. Many smartphones, however, have grown beyond 5″. Users cannot tap everywhere on these screens without destabilizing their grip. ForceRay (FR) lets users aim at an out-of-reach target by applying a force touch at a comfortable thumb location, casting a virtual ray towards the target. Varying pressure moves a cursor along the ray. When reaching the target, quickly lifting the thumb selects it. In a first study, FR was 195 ms slower and had a 3% higher selection error than the best existing technique, BezelCursor (BC), but FR caused significantly less device movement than all other techniques, letting users maintain a steady grip and removing their concerns about device drops. A second study showed that an hour of training speeds up both BC and FR, and that both are equally fast for targets at the screen border.
Designers strive to make their mobile apps stand out in a competitive market by creating a distinctive brand personality. However, it is unclear whether users can form a consistent impression of brand personality by looking at a few user interface (UI) screenshots in the app store, and if this process can be modeled computationally. To bridge this gap, we first collect crowd assessment on brand personalities depicted by the UIs of 318 applications, and statistically confirm that users can reach substantial agreement. To further model how users process mobile UI visually, we compute UI descriptors including Color, Organization, and Texture at both element and page levels. We feed these descriptors to a computational model, achieving a high accuracy of predicting perceived brand personality (MSE = 0.035 and R^2 = 0.78). This work could benefit designers by highlighting contributing visual factors to brand personality creation and providing quick, low-cost design feedback.
During the last decade, people have started to experiment with insertable technology such as RFID or NFC chips, using them, e.g., for identification. However, little is known about how people actually interact with and adopt insertables. We conducted a video analysis of 122 YouTube videos to gain insight into interaction with insertables, and implemented an online survey to complement the data from the video analysis. Our findings show that there are many opportunities for interaction with insertables, both for task-oriented and creative purposes. However, there are also multiple challenges and obstacles, as well as side effects and health concerns. We conclude that the current infrastructure is not yet ready to support the use of insertables, and we discuss the implications of this.
We examine the articulation characteristics of stroke-gestures produced by people with upper body motor impairments on touchscreens as well as the accuracy rates of popular classification techniques, such as the $-family, to recognize those gestures. Our results on a dataset of 9,681 gestures collected from 70 participants reveal that stroke-gestures produced by people with motor impairments are recognized less accurately than the same gesture types produced by people without impairments, yet still accurately enough (93.0%) for practical purposes; are similar in terms of geometrical criteria to the gestures produced by people without impairments; but take considerably more time to produce (3.4s vs. 1.7s) and exhibit lower consistency (-49.7%). We outline a research roadmap for accessible gesture input on touchscreens for users with upper body motor impairments, and we make our large gesture dataset publicly available to the community.
When creating digital artefacts, it is important to ensure that the product being made is accessible to as much of the population as is possible. Many guidelines and supporting tools exist to assist reaching this goal. However, little is known about developers’ understanding of accessible practice and the methods that are used to implement this. We present findings from an accessibility design workshop that was carried out with a mixture of 197 developers and digital technology students. We discuss perceptions of accessibility, techniques that are used when designing accessible products, and what areas of accessibility development participants believed were important. We show that there are gaps in the knowledge needed to develop accessible products despite the effort to promote accessible design. Our participants are themselves aware of where these gaps are and have suggested a number of areas where tools, techniques and guidance would improve their practice.
Touchstone2 offers a direct-manipulation interface for generating and examining trade-offs in experiment designs. Based on interviews with experienced researchers, we developed an interactive environment for manipulating experiment design parameters, revealing patterns in trial tables, and estimating and comparing statistical power. We also developed TSL, a declarative language that precisely represents experiment designs. In two studies, experienced HCI researchers successfully used Touchstone2 to evaluate design trade-offs and calculate how many participants are required for particular effect sizes. We discuss Touchstone2’s benefits and limitations, as well as directions for future research.
Socio-technical Dynamics: Cooperation of Emergent and Established Organisations in Crises and Disasters
The increasing ubiquity of information and communication technology influences crisis and disaster management. New media enable citizens to rapidly self-organise in emergent groups, but theoretical framing of their interactions with established organisations is lacking. To address this, we conduct a thematic analysis of qualitative data from the European migration crisis of 2015, drawing on context-rich material from both emergent groups and established organisations. To represent our findings, we introduce the notion of socio-technical dynamics and derive implications for computer-supported cooperative work in crises and disasters. These insights contribute to the efficient involvement of emergent groups in established systems.
In a CHI paper from 10 years ago, entitled “Accounting for Diversity in Subjective Judgments”, an interesting dichotomy was reported between, on the one hand, the increased use of idiosyncratic constructs when judging the user experience of diverse products and, on the other hand, the statistical methods available to analyze such data. The paper more specifically proposed a method to extract diverse perspectives (called views) from experimental data. The current paper provides three improvements to this existing method by: 1) showing that a little-known approach for clustering attributes, called VARCLUS, can be applied and extended to provide a more optimal algorithm, 2) showing how the VARCLUS method can be applied to perform both within- and across-subject analysis, and 3) providing access to the VARCLUS method by incorporating it in ILLMO, a user-friendly and freely available program for interactive statistics.
ElasticVR: Providing Multilevel Continuously-Changing Resistive Force and Instant Impact Using Elasticity for VR
Resistive force (e.g., due to object elasticity) and impact (e.g., due to recoil) are common effects in our daily life. Resistive force continuously changes with users’ movements, while impact occurs instantly when an event triggers it. Such feedback is still not realistically provided by current VR haptic methods. In this paper, a wearable device, ElasticVR, which consists of an elastic band, servo motors and mechanical brakes, is proposed to provide continuously-changing resistive force and instantly-occurring impact upon the user’s hand to enhance VR realism. By changing two physical properties of the elastic band, length and extension distance, ElasticVR provides multilevel resistive force with no delay and impact with little delay, respectively, for realistic and versatile VR applications. A force perception study was performed to observe users’ ability to distinguish levels of resistive force and impact, and the prototype was built based on its results. A VR experience study further showed that the resistive force and impact from ElasticVR both outperform those from current approaches in realism. Applications using ElasticVR are also demonstrated.
Personality is an established domain of research in psychology, and individual differences in various traits are linked to a variety of real-life outcomes and behaviours. Personality detection is an intricate task that typically requires humans to fill out lengthy questionnaires assessing specific personality traits. The outcomes of this, however, may be unreliable or biased if the respondents do not fully understand or are not willing to honestly answer the questions. To this end, we propose a framework for objective personality detection that leverages humans’ physiological responses to external stimuli. We exemplify and evaluate the framework in a case study, where we expose subjects to affective image and video stimuli, and capture their physiological responses using a commercial-grade eye-tracking sensor. These responses are then processed and fed into a classifier capable of accurately predicting a range of personality traits. Our work yields notably high predictive accuracy, suggesting the applicability of the proposed framework for robust personality detection.
Chronic Fatigue Syndrome (CFS) is a debilitating medical condition that is characterized by a range of physical, cognitive and social impairments. This paper investigates CFS patients’ perspectives on the potential for technological support for self-management of their symptoms. We report findings from three studies in which people living with CFS 1) prioritized symptoms that they would like technologies to address, 2) articulated their current approaches to self-management alongside challenges they face, and 3) reflected on their experiences with three commercial smartphone apps related to symptom management. We contribute an understanding of the specific needs of the ME/CFS population and the ways in which they currently engage in self-management using technology. The paper ends by describing five high-level design recommendations for ME/CFS self-management technologies.
HCI has become increasingly interested in the use of technology during difficult life experiences. Yet despite considerable popularity, little is known about how and why people engage with games in times of personal difficulty. Based on a qualitative analysis of an online survey (N=95), our findings indicate that games offered players much needed respite from stress, supported them in dealing with their feelings, facilitated social connections, stimulated personal change and growth, and provided a lifeline in times of existential doubt. However, despite an emphasis on gaming as being able to support coping in ways other activities did not, participants also referred to games as unproductive and as an obstacle to living well. We discuss these findings in relation to both coping process and outcome, while considering tensions around the potential benefits and perceived value of gaming.
We introduce the dissimilarity-consensus method, a new approach to computing objective measures of consensus between users’ gesture preferences to support data analysis in end-user gesture elicitation studies. Our method models and quantifies the relationship between users’ consensus over gesture articulation and numerical measures of gesture dissimilarity, e.g., Dynamic Time Warping or Hausdorff distances, by employing growth curves and logistic functions. We exemplify our method on 1,312 whole-body gestures elicited from 30 children, ages 3 to 6 years, and we report the first empirical results in the literature on the consensus between whole-body gestures produced by children this young. We provide C# and R software implementations of our method and make our gesture dataset publicly available.
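As a concrete illustration of one of the gesture dissimilarity measures named above, the following is a minimal, generic Dynamic Time Warping (DTW) implementation for 2D gesture point sequences. This is a textbook sketch under our own naming, not the authors’ implementation, and real gesture data would typically be resampled and normalized before comparison.

```python
# Generic DTW dissimilarity between two 2D gesture point sequences.
# Illustrative sketch only; not the paper's code.
from math import hypot

def dtw_distance(a, b):
    """Return the DTW dissimilarity between point sequences a and b,
    where each sequence is a non-empty list of (x, y) tuples."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two aligned points
            d = hypot(a[i-1][0] - b[j-1][0], a[i-1][1] - b[j-1][1])
            cost[i][j] = d + min(cost[i-1][j],    # skip a point in a
                                 cost[i][j-1],    # skip a point in b
                                 cost[i-1][j-1])  # match both points
    return cost[n][m]
```

Identical sequences yield a distance of 0, and DTW tolerates differences in articulation speed: a sequence aligned against a version of itself with repeated points still scores 0, which is why DTW-style measures suit comparing gestures articulated at different paces.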
The view of quality in human-computer interaction continuously develops, having in past decades included consistency, transparency, usability, and positive emotions. Recently, meaning is receiving increased interest in the user experience literature and in industry, referring to the end, purpose or significance of interaction with computers. However, the notion of meaning remains elusive and a bewildering number of senses are in use. We present a framework of meaning in interaction, based on a synthesis of psychological meaning research. The framework outlines five distinct senses of the experience of meaning: connectedness, purpose, coherence, resonance, and significance. We illustrate the usefulness of the framework by analyzing a selection of recent papers at the CHI conference and by raising a series of open research questions about the interplay of meaning, user experience, reflection, and well-being.
Sustainabot – Exploring the Use of Everyday Foodstuffs as Output and Input for and with Emergent Users
Mainstream digital interactions are spread over a plethora of devices and form-factors, from mobiles to laptops; printouts to large screens. For emergent users, however, such abundance of choice is rarely accessible or affordable. In particular, viewing mobile content on a larger screen, or printing out copies, is often not available. In this paper we present Sustainabot – a small robot printer that uses everyday materials to print shapes and patterns from mobile phones. Sustainabot was proposed and developed by and with emergent users through a series of co-creation workshops. We begin by discussing this process, then detail the open-source mobile printer prototype. We carried out two evaluations of Sustainabot, the first focused on printing with materials in situ, and the second on understandability of its output. We present these results, and discuss opportunities and challenges for similar developments. We conclude by highlighting where and how similar devices could be used in future.
Through focus groups (n=61) and surveys (n=2,083) of parents and teens, we investigated how parents and their teen children experience their own and each other’s phone use in the context of parent-teen relationships. Both expressed a lack of agency in their own and each other’s phone use, feeling overly reliant on their own phone and displaced by the other’s phone. In a classic example of the fundamental attribution error, each party placed primary blame on the other, and rationalized their own behavior with legitimizing excuses. We present a conceptual model showing how parents’ and teens’ relationships to their phones and perceptions of each other’s phone use are inextricably linked, and how, together, they contribute to parent-teen tensions and disconnections. We use the model to consider how the phone might play a less highly charged role in family life and contribute to positive connections between parents and their teen children.
On the Shoulder of the Giant: A Multi-Scale Mixed Reality Collaboration with 360 Video Sharing and Tangible Interaction
We propose a multi-scale Mixed Reality (MR) collaboration between the Giant, a local Augmented Reality user, and the Miniature, a remote Virtual Reality user, in Giant-Miniature Collaboration (GMC). The Miniature is immersed in a 360-video shared by the Giant, who can physically manipulate the Miniature through a tangible interface, a 360-camera combined with a 6 DOF tracker. We implemented a prototype system as a proof of concept and conducted a user study (n=24) comprising four parts comparing: A) two types of virtual representations, B) three levels of Miniature control, C) three levels of 360-video view dependencies, and D) four 360-camera placement positions on the Giant. The results show users prefer a shoulder-mounted camera view, while a view frustum with a complementary avatar is a good visualization for the Miniature virtual representation. From the results, we give design recommendations and demonstrate an example Giant-Miniature Interaction.
“I feel it is my responsibility to stream”: Streaming and Engaging with Intangible Cultural Heritage through Livestreaming
Globalization has led to the destruction of many cultural practices, expressions, and knowledge found within local communities. These practices, defined by UNESCO as Intangible Cultural Heritage (ICH), have been identified, promoted, and safeguarded by nations, academia, organizations and local communities to varying degrees. Despite such efforts, many practices are still in danger of being lost or forgotten forever. With the increased popularity of livestreaming in China, some streamers have begun to use livestreaming to showcase and promote ICH activities. To better understand the practices, opportunities, and challenges inherent in sharing and safeguarding ICH through livestreaming, we interviewed 10 streamers and 8 viewers from China. Through our qualitative investigation, we found that ICH streamers had altruistic motivations and engaged with viewers using multiple modalities beyond livestreams. We also found that livestreaming encouraged real-time interaction and sociality, while non-live curated videos attracted attention from a broader audience and assisted in the archiving of knowledge.
AILA: Attentive Interactive Labeling Assistant for Document Classification through Attention-Based Deep Neural Networks
Document labeling is a critical step in building various machine learning applications. However, this step can be time-consuming and arduous, requiring a significant amount of human effort. To support an efficient document labeling environment, we present a system called Attentive Interactive Labeling Assistant (AILA). At its core, AILA uses the Interactive Attention Module (IAM), a novel module that visually highlights words in a document that labelers may pay attention to when labeling it. IAM utilizes attention-based Deep Neural Networks, which not only predict which words to highlight but also enable labelers to indicate words that should be assigned a high attention weight while labeling, improving the future quality of word prediction. We evaluated labeling efficiency and accuracy by comparing the conditions with and without IAM in our study. The results showed that participants' labeling efficiency was significantly higher with IAM than without, while the two conditions maintained roughly the same labeling accuracy.
Current advances in machine translation increase the need for translators to switch from traditional translation to post-editing (PE) of machine-translated text, a process that saves time and improves quality. This affects the design of translation interfaces, as the task changes from mainly generating text to correcting errors within otherwise helpful translation proposals. Our results of an elicitation study with professional translators indicate that a combination of pen, touch, and speech could well support common PE tasks and received high subjective ratings from our participants. Therefore, we argue that future translation environment research should focus more strongly on these modalities in addition to mouse- and keyboard-based approaches. On the other hand, eye tracking and gesture modalities seem less important. An additional interview regarding interface design revealed that most translators would also see value in automatically receiving additional resources when a high cognitive load is detected during PE.
In this paper, we report three user experiments that investigate the extent to which the perception of a bar in a bar chart changes based on the height of its neighboring bars. We hypothesized that the perception of the very same bar might differ, for instance, when it is surrounded by the highest vs. the lowest bars. Our results show that such neighborhood effects exist: a target bar surrounded by high neighboring bars is perceived to be lower than the same bar surrounded by low neighbors. Yet, the size of this neighborhood effect is small compared to other data-inherent effects: judgment accuracy largely depends on the target bar's rank, the number of data items, and other characteristics of the dataset. Based on these findings, we discuss design implications for perceptually optimizing bar charts.
Talking about Chat at Work in the Global South: An Ethnographic Study of Chat Use in India and Kenya
In this paper, we examine how two chat apps fit into the communication ecosystem of six large distributed enterprises in India and Kenya. From the perspective of management, these chat apps promised to foster greater communication and awareness between workers in the field, and between fieldworkers and the enterprises' administration and management centres. Each organisation had multiple different types of chat groups, characterised by the types of content and interaction patterns they mediate, and the different organisational functions they fulfil. Examining the interplay between chat and existing local practices for coordination, collaboration and knowledge-sharing, we discuss how chat manifests in the distributed workplace and how it fits — or otherwise — alongside the rhythms of both local and remote work. We contribute to understandings of chat apps for workplace communication and provide insights for shaping their ongoing development.
"What’s Happening at that Hip?": Evaluating an On-body Projection based Augmented Reality System for Physiotherapy Classroom
We present two studies to discuss the design, usability analysis, and educational outcomes resulting from our system, Augmented Body, in the physiotherapy classroom. We build on prior user-centric design work that investigates existing teaching methods and discusses opportunities for intervention. We present the design and implementation of a hybrid system for physiotherapy education combining on-body projection-based virtual anatomy with pen-based tablets to create real-time annotations. We conducted a usability evaluation of this system, comparing it with projection-only and traditional teaching conditions. Finally, we focus on a comparative study to evaluate learning outcomes among students in actual classroom settings. Our studies showed increased usage of visual representation techniques in students' note-taking behavior and statistically significant improvement in some learning aspects. We discuss challenges for designing augmented reality systems for education, including minimizing attention split, addressing text-entry issues, and supporting digital annotations on a moving physical body.
This paper proposes methods of optimising alphabet encoding for skin reading in order to avoid perception errors. First, a user study with 16 participants using two body locations serves to identify issues in recognition of both individual letters and words. To avoid such issues, a two-step optimisation method of the symbol encoding is proposed and validated in a second user study with eight participants using the optimised encoding with a seven vibromotor wearable layout on the back of the hand. The results show significant improvements in the recognition accuracy of letters (97%) and words (97%) when compared to the non-optimised encoding.
Bring the Outside In: Providing Accessible Experiences Through VR for People with Dementia in Locked Psychiatric Hospitals
Many people with dementia (PWD) residing in long-term care may face barriers in accessing experiences beyond their physical premises; this may be due to location, mobility constraints, legal mental health act restrictions, or offence-related restrictions. In recent years, there has been research interest in designing non-pharmacological interventions aiming to improve the Quality of Life (QoL) for PWD within long-term care. We explored the use of Virtual Reality (VR) as a tool to provide 360°-video based experiences for individuals with moderate to severe dementia residing in a locked psychiatric hospital. We discuss in depth the appeal of using VR for PWD, and the observed impact of such interaction. We also present the design opportunities, pitfalls, and recommendations for future deployment in healthcare services. This paper demonstrates the potential of VR as a virtual alternative to experiences that may be difficult to reach for PWD residing within locked settings.
While we usually have no trouble with orientation, our sense of direction frequently fails in the absence of a frame of reference. Open-water swimmers raise their heads to look for a reference point, since disorientation might result in exhaustion or even drowning. In this paper, we report on Clairbuoyance – a system that provides feedback about the swimmer’s orientation through lights mounted on swimming goggles. We conducted an experiment with two versions of Clairbuoyance: Discrete signals relative to a chosen direction, and continuous signals providing a sense of absolute direction. Participants swam to a series of targets. Proficient swimmers preferred the discrete mode; novice users the continuous one. We determined that both versions of Clairbuoyance enabled reaching the target faster than without the help of the system, although the discrete mode increased error. Based on the results, we contribute insights for designing directional guidance feedback for swimmers.
Unremarkable AI: Fitting Intelligent Decision Support into Critical, Clinical Decision-Making Processes
Clinical decision support tools (DST) promise improved healthcare outcomes by offering data-driven insights. While effective in lab settings, almost all DSTs have failed in practice. Empirical research diagnosed poor contextual fit as the cause. This paper describes the design and field evaluation of a radically new form of DST. It automatically generates slides for clinicians' decision meetings with subtly embedded machine prognostics. This design took inspiration from the notion of Unremarkable Computing: by augmenting users' routines, technology can be significant to users yet remain unobtrusive. Our field evaluation suggests clinicians are more likely to encounter and embrace such a DST. Drawing on their responses, we discuss the importance and intricacies of finding the right level of unremarkableness in DST design, and share lessons learned in prototyping critical AI systems as a situated experience.
AI-Mediated Communication: How the Perception that Profile Text was Written by AI Affects Trustworthiness
We are entering an era of AI-Mediated Communication (AI-MC) where interpersonal communication is not only mediated by technology, but is optimized, augmented, or generated by artificial intelligence. Our study takes a first look at the potential impact of AI-MC on online self-presentation. In three experiments we test whether people find Airbnb hosts less trustworthy if they believe their profiles have been written by AI. We observe a new phenomenon that we term the Replicant Effect: only when participants thought they saw a mixed set of AI- and human-written profiles did they mistrust hosts whose profiles were labeled as, or suspected to be, written by AI. Our findings have implications for the design of systems that involve AI technologies in online self-presentation and chart a direction for future work that may upend or augment key aspects of Computer-Mediated Communication theory.
We describe a method for rapid prototyping of haptic interfaces for touch devices. A sheet-like touch interface is constructed from magnetic rubber sheets and conductive materials. The magnetic sheet is thin, and the capacitive sensor of the touch device can still detect the user’s finger above the sheet because of the rubber’s dielectric nature. Furthermore, tactile feedback can be customized with ease by using our magnetizing toolkit to change the magnetic patterns. Using the magnetizing toolkit, we investigated the appropriate size and thickness of haptic interfaces and demonstrated several interfaces such as buttons, sliders, switches, and dials. Our method is an easy and convenient way to customize the size, shape, and haptic feedback of a wide variety of interfaces.
We have recently seen the emergence of new platforms that aim to provide remotely located entrepreneurs and startup companies with support analogous to that found within traditional incubation or acceleration spaces. This paper offers an understanding of these ‘virtual hubs’, and the inherently socio-technical interactions that occur between their members. Our study analyzes a sample of existing virtual hubs in two stages. First, we contribute broader insight into the current landscape of virtual hubs by documenting and categorizing 25 hubs regarding their form, support offered and a selection of further qualities. Second, we contribute detailed insight into the operation and experience of such hubs, from an analysis of 10 semi-structured interviews with organizers and participants of virtual hubs. We conclude by analyzing our findings in terms of relational aspects of non-virtual hubs from the literature and remediation theory, and propose opportunities for advancing the design of such platforms.
E-commerce sites have an incentive to encourage impulse buying, even when not in the consumer’s best interest. This study investigates what features e-commerce sites use to encourage impulse buying and what tools consumers desire to curb their online spending. We present two studies: (1) a systematic content analysis of 200 top e-commerce websites in the U.S. and (2) a survey of online impulse buyers (N=151). From Study 1, we find that e-commerce sites contain multiple features that encourage impulsive buying, including those that lower perceived risks, leverage social influence, and enhance perceived proximity to the product. Conversely, from Study 2 we find that online impulse buyers want tools that (a) encourage deliberation and avoidance, (b) enforce spending limits and postponement, (c) increase checkout effort, (d) make costs more salient, and (e) reduce product desire. These findings inform the design of “friction” technologies that help users make more deliberative consumer choices.
We investigate how families repair communication breakdowns with digital home assistants. We recruited 10 diverse families to use an Amazon Echo Dot in their homes for four weeks. All families had at least one child between four and 17 years old. Each family participated in pre- and post-deployment interviews. Their interactions with the Echo Dot (Alexa) were audio recorded throughout the study. We analyzed 59 communication breakdown interactions between family members and Alexa, framing our analysis with concepts from HCI and speech-language pathology. Our findings indicate that family members collaborate using discourse scaffolding (supportive communication guidance) and a variety of speech and language modifications in their attempts to repair communication breakdowns with Alexa. Alexa's responses also influence the repair strategies that families use. Designers can relieve the communication repair burden that primarily rests with families by increasing digital home assistants' abilities to collaborate with users to repair communication breakdowns.
Many of the guidelines that inform how designers create data visualizations originate in studies that unintentionally exclude populations that are most likely to be among the ‘data poor’. In this paper, we explore which factors may drive attention and trust in rural populations with diverse economic and educational backgrounds – a segment that is largely underrepresented in the data visualization literature. In 42 semi-structured interviews in rural Pennsylvania (USA), we find that a complex set of factors intermix to inform attitudes and perceptions about data visualization – including educational background, political affiliation, and personal experience. The data and materials for this research can be found at https://osf.io/uxwts/
HCI and Affective Health: Taking stock of a decade of studies and charting future research directions
In the last decade, the number of articles on HCI and health has increased dramatically. We extracted 139 papers on depression, anxiety and bipolar health issues from 10 years of SIGCHI conference proceedings. 72 of these were published in the last two years. A systematic analysis of this growing body of literature revealed that most innovation happens in automated diagnosis and self-tracking, although there are innovative ideas in tangible interfaces. We noted an overemphasis on data production without consideration of how it leads to fruitful interventions. Moreover, we see a need to promote ethical practices for the involvement of people living with affective disorders. Finally, although only 16 studies evaluate technologies in a clinical context, several forms of support and intervention illustrate how rich insights are gained from evaluations with real patients. Our findings highlight potential for growth in the design space of affective health technologies.
Tables, desks, and counters are often nearby, motivating their use as interactive surfaces. However, they are typically cluttered. As an alternative, we explore touch input along the ‘edge’ of table-like surfaces. The performance of tapping, crossing, and dragging is tested along the two ridges and front face of a table edge. Results show top ridge movement time is comparable to the top face when tapping or dragging. When crossing, both ridges are at least 11% faster than the top face. Effective width analysis is used to model performance and provide recommended target sizes. Based on observed user behaviour, variations of top and bottom ridge crossing are explored in a second study, and design recommendations with example applications are provided.
Examining and Enhancing the Illusory Touch Perception in Virtual Reality Using Non-Invasive Brain Stimulation
Virtual reality (VR) can be immersive to such a degree that users sometimes report feeling tactile sensations based on visualization of the touch, without any actual physical contact. This effect is not only interesting for studies of human perception, but can also be leveraged to improve the quality of VR by evoking tactile sensations without the use of specialized equipment. The aim of this paper is to study brain processing of the illusory touch and its enhancement for purposes of exploitation in VR scene design. To amplify the illusory touch, transcranial direct current stimulation (tDCS) was used. Participants attended two sessions with blinded stimulation and interacted with a virtual ball using tracked hands in VR. The effects were studied using electroencephalography (EEG), which allowed us to examine stimulation-induced changes in the brain's processing of the illusory touch, as well as to identify its neural correlates. Results confirm enhanced processing of the illusory touch after stimulation, and some of these changes were correlated with subjective ratings of its magnitude.
Participant engagement in online studies is key to collecting reliable data, yet achieving it remains an often discussed challenge in the research community. One factor that might impact engagement is the formality of language used to communicate with participants throughout the study. Prior work has found that language formality can convey social cues and power hierarchies, affecting people’s responses and actions. We explore how formality influences engagement, measured by attention, dropout, time spent on the study and participant performance, in an online study with 369 participants on Mechanical Turk (paid) and LabintheWild (volunteer). Formal language improves participant attention compared to using casual language in both paid and volunteer conditions, but does not affect dropout, time spent, or participant performance. We suggest using more formal language in studies containing complex tasks where fully reading instructions is especially important. We also highlight trade-offs that different recruitment incentives provide in online experimentation.
Experiencing materials in virtual reality (VR) is enhanced by combining visual and haptic feedback. While VR easily allows changes to visual appearances, modifying haptic impressions remains challenging. Existing passive haptic techniques require access to a large set of tangible proxies. To reduce the number of physical representations, we look towards fabrication to create more versatile counterparts. In a user study, 3D-printed hairs with length varying in steps of 2.5 mm were used to influence the feeling of roughness and hardness. By overlaying fabricated hair with visual textures, the resolution of the user’s haptic perception increased. As changing haptic sensations are able to elicit perceptual switches, our approach can extend a limited set of textures to a much broader set of material impressions. Our results give insights into the effectiveness of 3D-printed hair for enhancing texture perception in VR.
The design space of social drones, where autonomous flyers operate in close proximity to human users or bystanders, is distinct from use cases involving a remote human operator and/or an uninhabited environment; and warrants foregrounding human-centered design concerns. Recently, research on social drones has followed a trend of rapid growth. This paper consolidates the current state of the art in human-centered design knowledge about social drones through a review of relevant studies, scaffolded by a descriptive framework of design knowledge creation. Our analysis identified three high-level themes that sketch out knowledge clusters in the literature, and twelve design concerns which unpack how various dimensions of drone aesthetics and behavior relate to pertinent human responses. These results have the potential to inform and expedite future research and practice, by supporting readers in defining and situating their future contributions. The materials and results of our analysis are also published in an open online repository that intends to serve as a living hub for a community of researchers and designers working with social drones.
This paper presents a qualitative study of the recent integration of a UK-based, digital-first mobile banking app – Monzo – with the web automation service IFTTT (If This Then That). Through analysis of 113 unique IFTTT ‘recipes’ shared by Monzo users on public community forums, we illustrate the potentially diverse functions of these recipes, and how they are achieved through different kinds of automation. Beyond achieving more convenient and efficient financial management, we note many playful and expressive applications of conditionality and automation that far extend traditional functions of banking applications and infrastructure. We use these findings to map opportunities, challenges and areas of future research in the development of ‘programmable money’ and related financial technologies. Specifically, we present design implications for the extension of native digital banking applications; novel uses of banking data; the applicability of blockchains and smart contracts; and future forms of financial autonomy.
Designing novel interfaces is challenging. Designers typically rely on experience or subjective judgment in the absence of analytical or objective means for selecting interface parameters. We demonstrate Bayesian optimization as an efficient tool for objective interface feature refinement. Specifically, we show that crowdsourcing paired with Bayesian optimization can rapidly and effectively assist interface design across diverse deployment environments. Experiment 1 evaluates the approach on a familiar 2D interface design problem: a map search and review use case. Adding a degree of complexity, Experiment 2 extends Experiment 1 by switching the deployment environment to mobile-based virtual reality. The approach is then demonstrated as a case study for a fundamentally new and unfamiliar interaction design problem: web-based augmented reality. Finally, we show how the model generated as an outcome of the refinement process can be used for user simulation and queried to deliver various design insights.
This paper compares the effectiveness of data comics and infographics for data-driven storytelling. While infographics are widely used, comics are increasingly popular for explaining complex and scientific concepts. However, empirical evidence comparing the effectiveness and engagement of infographics, comics and illustrated texts is still lacking. We report on the results of two complementary studies, one in a controlled setting and one in the wild. Our results suggest participants largely prefer data comics in terms of enjoyment, focus, and overall engagement and that comics improve understanding and recall of information in the stories. Our findings help to understand the respective roles of the investigated formats as well as inform the design of more effective data comics and infographics.
Text-based conversational systems, also referred to as chatbots, have grown widely popular. Current natural language understanding technologies are not yet ready to tackle the complexities in conversational interactions. Breakdowns are common, leading to negative user experiences. Guided by communication theories, we explore user preferences for eight repair strategies, including ones that are common in commercially-deployed chatbots (e.g., confirmation, providing options), as well as novel strategies that explain characteristics of the underlying machine learning algorithms. We conducted a scenario-based study to compare repair strategies with Mechanical Turk workers (N=203). We found that providing options and explanations were generally favored, as they manifest initiative from the chatbot and are actionable to recover from breakdowns. Through detailed analysis of participants’ responses, we provide a nuanced understanding on the strengths and weaknesses of each repair strategy.
End-user elicitation studies are a popular design method. Currently, such studies are usually confined to a lab, limiting the number and diversity of participants, and therefore the representativeness of their results. Furthermore, the quality of the results from such studies generally lacks any formal means of evaluation. In this paper, we address some of the limitations of elicitation studies through the creation of the Crowdlicit system along with the introduction of end-user identification studies, which are the reverse of elicitation studies. Crowdlicit is a new web-based system that enables researchers to conduct online and in-lab elicitation and identification studies. We used Crowdlicit to run a crowd-powered elicitation study based on Morris’s “Web on the Wall” study (2012) with 78 participants, arriving at a set of symbols that included six new symbols different from Morris’s. We evaluated the effectiveness of 49 symbols (43 from Morris and six from Crowdlicit) by conducting a crowd-powered identification study. We show that the Crowdlicit elicitation study resulted in a set of symbols that was significantly more identifiable than Morris’s.
Quantum computing is a fundamentally different way of performing computation than classical computing. Many problems that are considered hard for classical computers may have efficient solutions using quantum computers. Recently, technology companies including IBM, Microsoft, and Google have invested in developing both quantum computing hardware and software to explore the potential of quantum computing. Because of the radical shift in computing paradigms that quantum represents, we see an opportunity to study the unique needs people have when interacting with quantum systems, what we call Quantum HCI (QHCI). Based on interviews with experts in quantum computing, we identify four areas in which HCI researchers can contribute to the field of quantum computing. These areas include understanding current and future quantum users, tools for programming and debugging quantum algorithms, visualizations of quantum states, and educational materials to train the first generation of “quantum native” programmers.
Code puzzles are an increasingly popular way to introduce youth to programming. Yet our knowledge about how to maximize learning from puzzles is incomplete. We conducted a data collection study and trained a model that predicts cognitive load, the mental effort necessary to complete a task, on a future puzzle. Controlling cognitive load can lead to more effective learning. Our model suggests that it is possible to predict cognitive load on future problems; the model could correctly distinguish the more difficult puzzle within a pair 71%-79% of the time. Further, studying the model itself provides new insights into the sources of puzzle difficulty, the factors that contribute to cognitive load, and their inter-relationships. Finally, the ability to predict cognitive load on a future puzzle is an important step towards the creation of adaptive code puzzle systems.
Modern-day voice-activated virtual assistants allow users to share and ask for information that could be considered personal through different input modalities and devices. Using Google Assistant, this study examined if differences in modality (i.e., voice vs. text) and device (i.e., smartphone vs. smart home device) affect user perceptions when users attempt to retrieve sensitive health information from voice assistants. Major findings from this study suggest that voice (vs. text) interaction significantly enhanced perceived social presence of the voice assistant, but only when users solicited less sensitive health-related information. Furthermore, when individuals reported less privacy concern, voice (vs. text) interaction elicited positive attitudes toward the voice assistant via increased social presence, but only in the low (vs. high) information sensitivity condition. Contrary to modality, the device difference did not exert any significant impact on attitudes toward the voice assistant, regardless of the sensitivity level of the health information being asked or the level of individuals' privacy concerns.
Influencers in Multiplayer Online Shooters: Evidence of Social Contagion in Playtime and Social Play
In a wide range of social networks, people's behavior is influenced by social contagion: we do what our network does. Networks often feature particularly influential individuals, commonly called "influencers." Existing work suggests that in-game social networks in online games are similar to real-life social networks in many respects. However, we do not know whether there are in-game equivalents to influencers. We therefore applied standard social network features used to identify influencers to the online multiplayer shooter Tom Clancy's The Division. Results show that network-feature-defined influencers indeed had an outsized impact on the playtime and social play of players joining their in-game network.
This paper studies the use of Digital Financial Services (DFS) as a pathway to women's financial inclusion in deeply patriarchal, resource-constrained communities. Through a qualitative, empirical study we map the financial life cycles of 20 women micro-entrepreneurs in different cities in Pakistan and the challenges they face. We explore how technology is currently influencing these women's businesses and personal lives and reveal how mobile money is not tuned to the problems they face and their financial needs. We present alternate design directions for meeting the technological and financial needs of these women, circumnavigating the patriarchal structures that constrain them.
Despite claims of Mobile TV’s mainstream arrival in 2010, it took until 2017 for watching professionally-produced television content on mobile phones to truly become a mass-market phenomenon in America, with half of all TV content expected to be watched on mobile phones by 2020. But what professionally produced content are people watching on their phones and when are they watching it? Are there any clusters of behavior that emerge in the broader population when it comes to watching TV on the phone? We set out to answer these questions through two surveys deployed to representative samples of online Americans. We discuss our findings on the mass-market arrival of mobile TV viewing and differences from how the HCI community has previously envisioned mobile video. We conclude with implications for the design of future mobile TV systems.
In calls for privacy by design (PBD), regulators and privacy scholars have investigated the richness of the concept of "privacy." In contrast, "design" in HCI comprises rich and complex concepts and practices, but has received much less attention in the PBD context. Conducting a literature review of HCI publications discussing privacy and design, this paper articulates a set of dimensions along which design relates to privacy, including: the purpose of design, which actors do design work in these settings, and the envisioned beneficiaries of design work. We suggest new roles for HCI and design in PBD research and practice: utilizing values- and critically-oriented design approaches to foreground social values and help define privacy problem spaces. We argue such approaches, in addition to current "design to solve privacy problems" efforts, are essential to the full realization of PBD, while noting the politics involved when choosing design to address privacy.
Practitioners Teaching Data Science in Industry and Academia: Expectations, Workflows, and Challenges
Data science has been growing in prominence across both academia and industry, but there is still little formal consensus about how to teach it. Many people who currently teach data science are practitioners such as computational researchers in academia or data scientists in industry. To understand how these practitioner-instructors pass their knowledge on to novices and how that contrasts with teaching more traditional forms of programming, we interviewed 20 data scientists who teach in settings ranging from small-group workshops to large online courses. We found that: 1) they must empathize with a diverse array of student backgrounds and expectations, 2) they teach technical workflows that integrate authentic practices surrounding code, data, and communication, 3) they face challenges in balancing authenticity with abstraction in software setup, in finding and curating pedagogically relevant datasets, and in acclimating students to uncertainty in data analysis. These findings can point the way toward better tools for data science education and help bring data literacy to more people around the world.
In-car intelligent assistants offer the opportunity to help drivers productively use previously unclaimed time during their commute. However, engaging in secondary tasks can reduce attention on driving and thus may affect road safety. Any interface used while driving, even if speech-based, cannot consider non-driving tasks in isolation of driving—alerts for safer driving and timing of the non-driving tasks are crucial to maintaining safety. In this work, we explore experiences with a speech-based assistant that attempts to help drivers safely complete complex productivity tasks. Via a controlled simulator study, we look at how level of support and road context alerts from the assistant influence a driver’s ability to drive safely while writing a document or creating slides via speech. Our results suggest ways to support speech-based productivity interactions and how speech-based road context alerts may influence driver behavior.
Despite historical precedence and modern prevalence, mental illness and associated disorders are frequently aligned with notions of deviance and, by association, abnormality. The view that mental illness deviates from an implicit social norm permeates the CHI community, impacting how scholars approach research in this space. In this paper, we challenge community and societal norms aligning mental illness with deviance. We combine semi-structured interviews with digital ethnography of public Instagram accounts to examine how Instagram users express mental illness. Drawing on small stories research, we find that individuals situate mental illness within their everyday lives and negotiate their tellings of experience due to the influence of various social control structures. We discuss implications for incorporating ‘the everyday’ into the design of technological solutions for marginalized communities and the ways in which researchers and designers may inadvertently perpetuate and instantiate stigma related to mental illness.
In this paper we unpack empirical data from two domains within the Blockchain information infrastructure: the cryptocurrency trading domain and the energy domain. Through these accounts we introduce the relational concepts of Blockchain assemblages and whiteboxing. Blockchain assemblages comprise configurations of digital and analogue artefacts that are entangled with imaginaries about the current and future state of the Blockchain information infrastructure. Rather than being a black box, Blockchain assemblages alternate between being dynamic and stable entities. We propose whiteboxing as the sociomaterial process that drives Blockchain assemblages in their dynamic state to be (re)configured, while related artefacts and imaginaries are simultaneously transformed, creating dynamic representations. Whiteboxing is triggered during disconfirming events, when representations are discovered to be problematic. Complementing existing historical accounts of technologies in the making, this paper proposes whiteboxing as an analytical concept that allows us to unpack how contemporary technologies are created through entrepreneurial activities.
Risk vs. Restriction: The Tension between Providing a Sense of Normalcy and Keeping Foster Teens Safe Online
Foster youth are particularly vulnerable to offline risks; yet, little is known about their online risk experiences or how foster parents mediate technology use in the home. We conducted 29 interviews with foster parents of 42 teens (ages 13-17) who were part of the child welfare system. Foster parents faced significant challenges relating to technology mediation in the home. Based on parental accounts, over half of the foster teens encountered high-risk situations that involved interacting with unsafe people online, resulting in rape, sex trafficking, and/or psychological harm. Overall, foster parents were at a loss for how to balance online safety with technology access in a way that engendered positive relationships with their foster teens. Instead, parents often resorted to outright restriction. Our research highlights the importance of considering the unique needs of foster families and designing technologies to address the challenges faced by this vulnerable population of teens and parents.
Adoption of commercial smart home devices is rapidly increasing, allowing in-situ research in people's homes. As these technologies are deployed in shared spaces, we seek to understand interactions among multiple people and devices in a smart home. We conducted a mixed-methods study with 18 participants (primarily people who drive smart device adoption in their homes) living in multi-user smart homes, combining semi-structured interviews and experience sampling. Our findings surface tensions and cooperation among users in several phases of smart device use: device selection and installation, ordinary use, when the smart home does not work as expected, and over longer-term use. We observe an outsized role of the person who installs devices in terms of selecting, controlling, and fixing them; negotiations between parents and children; and minimally voiced privacy concerns among co-occupants, possibly due to participant sampling. We make design recommendations for supporting long-term smart homes and non-expert household members.
Various stakeholders in the sports domain rely on the analysis and presentation of sports data to derive insights. In particular, sportswriters construct game stories using statistical information; fans share their viewpoints based on the real-time stats while watching the game. In this paper, we explore how these stakeholders construct data-driven sports stories. We began by observing a sportswriter, then analyzed published sports stories, and characterized 1500 fan comments about particular sporting events. We found that their story needs were similar in some respects while quite different in others. Based on the findings, we implemented two exploratory prototypes: GameViews-Writers for sportswriters to quickly extract key game information and GameViews-Fans to support a real-time data-driven game-viewing experience for fans. We report insights from two user studies conducted with four professional sportswriters and eight sports fans, respectively. We discuss the results of these studies and present several avenues for future work.
Data analysts use computational notebooks to write code for analyzing and visualizing data. Notebooks help analysts iteratively write analysis code by letting them interleave code with output, and selectively execute cells. However, as analysis progresses, analysts leave behind old code and outputs, and overwrite important code, producing cluttered and inconsistent notebooks. This paper introduces code gathering tools, extensions to computational notebooks that help analysts find, clean, recover, and compare versions of code in cluttered, inconsistent notebooks. The tools archive all versions of code outputs, allowing analysts to review these versions and recover the subsets of code that produced them. These subsets can serve as succinct summaries of analysis activity or starting points for new analyses. In a qualitative usability study, 12 professional analysts found the tools useful for cleaning notebooks and writing analysis code, and discovered new ways to use them, like generating personal documentation and lightweight versioning.
Participatory design (PD) with heterogeneous groups poses particular challenges, requiring spaces in which different agendas or visions can be negotiated. In this paper we report on our PD work with two groups of neurodiverse children to design technologies that support co-located, social play. The heterogeneity in the groups in terms of abilities, conceptions of play, motivations to be involved and individual preferences has challenged us to think of the design process and its outcomes as spaces for continuous negotiation. Drawing on the notion of agonistic PD, we sought not to necessarily reconcile all views, but to foster constructive disagreement as a resource for and possible outcome of design. Using our project work as a case study, we report on controversies, big and small, and how they manifested themselves in the processes and outcomes. Reflecting on our experiences, we discuss possible implications for the notion of democratising technology innovation.
In the near future, emergency services within Canada will be supporting new technologies for 9-1-1 call centres and firefighters to learn about an emergency situation. One such technology is drones. To understand the benefits and challenges of using drones within emergency response, we conducted a study with citizens who have called 9-1-1 and firefighters who respond to a range of everyday emergencies. Our results show that drones offer numerous benefits to both firefighters and 9-1-1 callers, including context awareness and social support for callers, who feel reassured that help is on the way. Privacy was largely not an issue, though safety issues arose, especially for complex uses of drones such as indoor flying. Our results point to opportunities for designing drone systems that help people develop a sense of trust in emergency response drones, and mitigate privacy and safety concerns with more complex drone systems.
Touchscreens are the predominant medium for interactions with digital services; however, their current fixed form factor narrows the scope for rich physical interactions by limiting interaction possibilities to a single, planar surface. In this paper we introduce PickCells, a fully reconfigurable device concept composed of cells that breaks the mould of rigid screens and explores a modular system affording rich sets of tangible interactions and novel across-device relationships. Through a series of co-design activities — involving HCI experts and potential end-users of such systems — we synthesised a design space aimed at inspiring future research, giving researchers and designers a framework in which to explore modular screen interactions. The design space we propose unifies existing work on modular touch surfaces under a general framework and broadens horizons by opening up unexplored spaces providing new interaction possibilities. In this paper, we present the PickCells concept, a design space of modular touch surfaces, and propose a toolkit for quick scenario prototyping.
Active Edge is a feature of Google Pixel 2 smartphone devices that creates a force-sensitive interaction surface along their sides, allowing users to perform gestures by holding and squeezing their device. Supported by strain gauge elements adhered to the inner sidewalls of the device chassis, these gestures can be more natural and ergonomic than on-screen (touch) counterparts. Developing these interactions is an integration of several components: (1) an insight and understanding of the user experiences that benefit from squeeze gestures; (2) hardware with the sensitivity and reliability to sense a user’s squeeze in any operating environment; (3) a gesture design that discriminates intentional squeezes from innocuous handling; and (4) an interaction design to promote a discoverable and satisfying user experience. This paper describes the design and evaluation of Active Edge in these areas as part of the product’s development and engineering.
People eat every day, and biting is one of the most fundamental and natural actions that they perform on a daily basis. Existing work has explored tooth click location and jaw movement as input techniques; however, clenching has the potential to add control to this input channel. We propose clench interaction, which leverages clenching as an actively controlled physiological signal that can facilitate interactions. We conducted a user study to investigate users' ability to control their clench force. We found that users can easily discriminate three force levels, and that they can quickly confirm actions by unclenching (quick release). We developed a design space for clench interaction based on the results and investigated the usability of the clench interface. Participants preferred the clench over baselines and indicated a willingness to use clench-based interactions. This novel technique can provide an additional input method in cases where users' eyes or hands are busy, augment immersive experiences such as virtual/augmented reality, and assist individuals with disabilities.
Interferi is an on-body gesture sensing technique using acoustic interferometry. We use ultrasonic transducers resting on the skin to create acoustic interference patterns inside the wearer's body, which interact with anatomical features in complex, yet characteristic ways. We focus on two areas of the body with great expressive power: the hands and face. For each, we built and tested a series of worn sensor configurations, which we used to identify useful transducer arrangements and machine learning features. We created final prototypes for the hand and face, which our study results show can support eleven- and nine-class gesture sets at 93.4% and 89.0% accuracy, respectively. We also evaluated our system in four continuous tracking tasks, including smile intensity and weight estimation, which never exceed 9.5% error. We believe these results show great promise and illuminate an interesting sensing technique for HCI applications.
Given the focus of professional graduate ICT programs on technical and managerial skills, pedagogical engagement with external organizations tends to be transactional and artifact-centered. This inhibits the students’ ability to understand social, technical and ethical issues in context, or to develop affective relationships with users and other stakeholders. To address this, we designed a service learning course that partnered students with non-profit organizations to help with their technology challenges. The service project was deliberately left open-ended to force students (and partners) to tackle important questions around project scoping and impact. By drawing parallels to soil care practices, we explore how “care time” emerged in this context, and how the incorporation of ambiguity galvanized students, community, and faculty to make time to navigate it. This led to non-tangible yet vital outcomes such as overcoming social limitations, building symbiotic relationships, and enacting acts of care necessary for more ethical orchestration of technology.
This paper tracks a debate that occurred, first, within the field of Ubiquitous Computing but quickly spread to CHI and beyond, in which design scholars argued that seamlessness had long been an implicit and privileged design virtue, often at the expense of seamfulness. Seamless design emphasizes clarity, simplicity, ease of use, and consistency to facilitate technological interaction. Seamful design emphasizes configurability, user appropriation, and revelation of complexity, ambiguity or inconsistency. Here we review these literatures together and argue that, rather than rival approaches, seamful and seamless design are complements, each emphasizing different aspects of downstream user agency. Ultimately, we situate this debate within the larger, perennial discussion about the strategic revelation and concealment of human and technological operations, and therein the role of design.
We address a relatively under-explored aspect of human-computer interaction: people’s abilities to understand the relationship between a machine learning model’s stated performance on held-out data and its expected performance post deployment. We conduct large-scale, randomized human-subject experiments to examine whether laypeople’s trust in a model, measured in terms of both the frequency with which they revise their predictions to match those of the model and their self-reported levels of trust in the model, varies depending on the model’s stated accuracy on held-out data and on its observed accuracy in practice. We find that people’s trust in a model is affected by both its stated accuracy and its observed accuracy, and that the effect of stated accuracy can change depending on the observed accuracy. Our work relates to recent research on interpretable machine learning, but moves beyond the typical focus on model internals, exploring a different component of the machine learning pipeline.
Gentrification, the spatial expression of economic inequality, is fundamentally a matter of social justice. Yet, even as work outside of HCI has begun to discuss how computing can enable or challenge gentrification, HCI's growing social justice agenda has not engaged with this issue. This omission creates an opportunity for HCI to develop a research and design agenda at the intersection of computing, social justice, and gentrification. We begin this work by outlining existing scholarship describing how the consumption side dynamics of gentrification are mediated by contemporary socio-technical systems. Subsequently, we build on the social justice framework introduced by Dombrowski, Harmon, and Fox to discuss how HCI may resist or counter such forces. We offer six modes of research that HCI scholars can pursue to engage gentrification.
Prior work has shown that embodiment can benefit virtual agents, such as increasing rapport and conveying non-verbal information. However, it is unclear if users prefer an embodied to a speech-only agent for augmented reality (AR) headsets that are designed to assist users in completing real-world tasks. We conducted a study to examine users' perceptions and behaviors when interacting with virtual agents in AR. We asked 24 adults to wear the Microsoft HoloLens and find objects in a hidden object game while interacting with an agent that would offer assistance. We presented participants with four different agents: voice-only, non-human, full-size embodied, and a miniature embodied agent. Overall, users preferred the miniature embodied agent due to the novelty of its size and reduced uncanniness compared to the larger agent. From our results, we draw conclusions about how agent representation matters and derive guidelines on designing agents for AR headsets.
Tasks on crowdsourcing platforms such as Amazon Mechanical Turk often request workers’ personal information, raising privacy risks that may be exacerbated by requester-worker power dynamics. We interviewed 14 workers to understand how they navigate these risks. We found that Turkers’ decisions to provide personal information during tasks were based on evaluations of the pay rate, the requester, the purpose, and the perceived sensitivity of the request. Participants also engaged in multiple privacy-protective behaviors, such as abandoning tasks or providing inaccurate data, though there were costs associated with these behaviors, such as wasted time and risk of rejection. Finally, their privacy concerns and practices evolved as they learned about both the platform and worker-designed tools and forums. These findings deepen our understanding of both privacy decision-making and invisible labor in paid crowdsourcing, and emphasize a general need to understand how privacy stances change over time.
Printed Circuit Board (PCB) design tools are critical in helping users build non-trivial electronics devices. While recent work recognizes deficiencies with current tools and explores novel methods, little has been done to understand modern designers and their needs. To gain better insight into their practices, we interviewed fifteen electronics designers from a variety of backgrounds. Our open-ended, semi-structured interviews examine both overarching design flows and details of individual steps. One major finding was that most creative engineering work happens during system architecture, yet current tools operate at lower abstraction levels and create significant tedious work for designers. From that insight, we conceptualize abstractions and primitives for higher-level tools and elicit feedback from our participants on clickthrough mockups of design flows through an example project. We close with our observations on opportunities for improving board design tools and discuss the generalizability of our findings beyond the electronics domain.
Virtual Reality painting is a form of 3D painting done in a Virtual Reality (VR) space. Being a relatively new kind of art form, there is growing interest within the creative practices community in learning it. Currently, most users learn from community-posted 2D videos on the internet, which are screencast recordings of the painting process by an instructor. While such an approach may suffice for teaching 2D software tools, these videos by themselves fail to deliver crucial details required by the user to understand actions in a VR space. We conduct a formative study to identify challenges faced by users in learning to VR-paint using such video-based tutorials. Informed by the results of this study, we develop a VR-embedded tutorial system that supplements video tutorials with 3D and contextual aids directly in the user's VR environment. An exploratory evaluation showed users were positive about the system and were able to use it to recreate painting tasks in VR.
In HCI, the honeypot effect describes a form of audience engagement in which a person’s interaction with a technology stimulates passers-by to observe, approach and engage in an interaction themselves. In this paper we explore the potential for honeypot effects to arise in the use of mobile augmented reality (AR) applications in urban spaces. We present an observational study of Santa’s Lil Helper, a mobile AR game that created a Christmas-themed treasure hunt in a metropolitan area. Our study supports a consideration of three factors that may impede the honeypot effect: the presence of people in relation to the game and its interactive components; the visibility of gameplay in urban space; and the extent to which the game permits a shared experience. We consider how these factors can inform the design of future AR experiences that are capable of stimulating honeypot effects in public space.
Information infrastructures have become integral components of policy debates related to climate change and sustainability. To better understand this relationship, we studied the tools used to forecast and respond to sea-level rise in the San Francisco Bay Area, where active debates on how best to prepare for this issue are underway and will have important consequences for the future of the region. Drawing on 18 months of qualitative research, we argue that competing visions of the problem are intimately intertwined with different elements of information infrastructure and beliefs about the role of data in policymaking. Current infrastructure in the region, far from being a neutral actor in these debates, exhibits an infrastructural bias, privileging some approaches over others. We identify some of the tactics that community organizations deploy to subvert the claims of sea-level rise experts and advance their own perspective, which prioritizes considerations of justice over technical expertise.
Environmental concerns have driven an interest in sustainable smart cities, through the monitoring and optimisation of networked infrastructures. At the same time, there are concerns about who these interventions and services are for, and who benefits. HCI researchers and designers interested in civic life have started to call for the democratisation of urban space through resistance and political action to challenge state and corporate claims. This paper contributes to an emerging body of work that seeks to involve citizens in the design of sustainable smart cities, particularly in the context of marginalised and culturally diverse urban communities. We present a study involving co-designing Internet of Things with urban agricultural communities and discuss three ways in which design can participate in the right to the sustainable smart city through designing for the commons, care, and biocultural diversity.
Play is the work of children, but access to play is not equal from child to child. Having access to a place to play is a challenge for marginalized children, such as children with disabilities. For autistic children, playing with other children in the physical world may be uncomfortable or even painful. Yet, having practice in the social skills play provides is essential for childhood development. In this ethnographic work, I explore how one community uses the sense of place and the digital embodied experience in a virtual world specifically to give autistic children access to play with their peers. The contribution of this work is twofold. First, I demonstrate how various physical and virtual spaces work together to make play possible. Second, I demonstrate that these spaces, though some of them are digital, are no more or less "real" than the physical spaces making up a schoolyard or playground.
Vulnerabilities persist despite existing software security initiatives and best practices. This paper focuses on the human factors of software security, including human behaviour and motivation. We conducted an online survey to explore the interplay between developers and software security processes, e.g., we looked into how developers influence and are influenced by these processes. Our data included responses from 123 software developers currently employed in North America who work on various types of software applications. Whereas developers are often held responsible for security vulnerabilities, our analysis shows that the real issues frequently stem from a lack of organizational or process support to handle security throughout development tasks. Our participants are self-motivated towards software security, and the majority did not dismiss it but identified obstacles to achieving secure code. Our work highlights the need to look beyond the individual, and take a holistic approach to investigate organizational issues influencing software security.
Email is an essential tool for communication and social interaction. It also functions as a broadcast medium connecting businesses with their customers, as an authentication mechanism, and as a vector for scams and security threats. These uses are enabled by the fact that the only barrier to reaching someone by email is knowing his or her email address. This feature has given rise to the spam email industry but also has another side effect that is becoming increasingly common: misdirected email, or legitimate emails that are intended for somebody else but are sent to the wrong recipient. In this paper we present findings from an interview study and survey focusing on the characteristics of misdirected email messages, possible reasons why they happen, and how people manage these messages when they receive them. Misdirected email arises as a result of signifiers (usernames) that people select for social and self-representation purposes but that machines also use for addressing. Because there is no mechanism for dealing with misdirected emails in a systematic way, individual recipients must choose whether to take action and how much effort to put forth to prevent potential negative consequences for themselves and others.
Wall displays support people in interacting with large information spaces in two ways: On the one hand, the physical space in front of such displays enables them to navigate information spaces physically. On the other hand, the visual overview of the information space on the display may promote the formation of spatial memory; from studies of desktop computers we know this can boost performance. However, it remains unclear how the benefits of locomotion and overviews relate and whether one is more important than the other. We study this question through a wall display adaptation of the classic Data Mountain system to separate the effects of locomotion and visual overview. Our findings suggest that overview improves recall and that the combination of overview and locomotion outperforms all other combinations of factors.
Annotating rich audio data is an essential aspect of training and evaluating machine listening systems. We approach this task in the context of temporally-complex urban soundscapes, which require multiple labels to identify overlapping sound sources. Typically this work is crowdsourced, and previous studies have shown that workers can quickly label audio with binary annotation for single classes. However, this approach can be difficult to scale when multiple passes with different focus classes are required to annotate data with multiple labels. In citizen science, where tasks are often image-based, annotation efforts typically label multiple classes simultaneously in a single pass. This paper describes our data collection on the Zooniverse citizen science platform, comparing the efficiencies of different audio annotation strategies. We compared multiple-pass binary annotation, single-pass multi-label annotation, and a hybrid approach: hierarchical multi-pass multi-label annotation. We discuss our findings, which support using multi-label annotation, with reference to volunteer citizen scientists’ motivations.
PDF documents often contain rich data tables that offer opportunities for dynamic reuse in new interactive applications. We describe a pipeline for extracting, analyzing, and parsing PDF tables based on existing machine learning and rule-based techniques. Implementing and deploying this pipeline on a corpus of 447 documents with 1,171 tables results in only 11 tables that are correctly extracted and parsed. To improve the results of automatic table analysis, we first present a taxonomy of errors that arise in the analysis pipeline and discuss the implications of cascading errors on the user experience. We then contribute a system with two sets of lightweight interaction techniques (gesture and toolbar), for viewing and repairing extraction errors in PDF tables on mobile devices. In an evaluation with 17 users involving both a phone and a tablet, participants effectively repaired common errors in 10 tables, with an average time of about 2 minutes per table.
Despite the increasing number of smart textile design practitioners, the commonly available methods and tools have not progressed at the same pace. Most smart textile interaction designs today rely on detecting changes in resistance, and the tools and sensors for this are generally limited to DC-voltage-divider-based sensors and multimeters. Furthermore, the textiles and materials used in smart textile design can exhibit behaviour that makes it difficult to identify even simple interactions by those means. For instance, steel-based textiles exhibit intrinsic semiconductive properties that are difficult to identify with current methods. In this paper, we show an alternative way to measure interaction with smart textiles. By relying on a visualisation known as Lissajous figures and on frequency-based signals, we can detect even subtle and varied forms of interaction with smart textiles. We also show an approach to measuring frequency-based signals and present an Arduino-based system called Teksig to support this type of textile practice.
The emergence of the 3D pen brings 3D modeling from screen-based computer-aided design (CAD) systems and 3D printing to direct, rapid crafting by 3D doodling. However, 3D doodling remains challenging: it requires craft skills to rapidly express an idea, which is critical in creative making. We explore a new process of 3D modeling that combines a 3D pen with a 3D printer. Our pilot study shows that users need support that reduces the number of non-creative tasks so they can explore a wider range of design strategies. With the opportunity to invent a new 3D modeling process incorporating both a pen and a printer, we propose techniques and a system that let users print while doodling so they can focus on creative exploration. Our user study shows that users can create diverse 3D models using a pen and printer. We discuss the roles of the human and the fabrication machine in the future of fabrication.
Creative writing, from poetry to journalism, is at the crux of human ingenuity and social interaction. Existing creative writing support tools produce entire passages or fully formed sentences, but these approaches fail to adapt to the writer's own ideas and intentions. Instead, we propose building tools that generate ideas coherent with the writer's context and encourage writers to produce divergent outcomes. To explore this, we focus on supporting metaphor creation. We present Metaphoria, an interactive system that generates metaphorical connections based on an input word from the writer. Our studies show that Metaphoria provides more coherent suggestions than existing systems, and supports the expression of writers' unique intentions. We discuss the complex issue of ownership in human-machine collaboration and how to build adaptive creativity support tools in other domains.
Complex activities often require people to work across multiple software applications. However, people frequently lack valuable knowledge about at least one application, especially as software changes and new software emerges. Existing help systems either lack contextual knowledge or are tightly-knit into a single application. We introduce an application-independent approach for contextually presenting video learning resources and demonstrate it through the RePlay system. RePlay uses accessibility APIs to gather context about the user’s activity. It leverages an existing search engine to present relevant videos and highlights key segments within them using video captions. We report on a week-long field study (n=7) and a lab study (n=24) showing that contextual assistance helps people spend less time away from their task than web video search and replaces current video navigation strategies. Our findings highlight challenges with representing and using context across applications.
This paper examines the promise of empathy, the name commonly given to the initial phase of the human-centered design process in which designers seek to understand their intended users in order to inform technology development. By analyzing popular empathy activities aimed at understanding people with disabilities, we examine the ways empathy works to both powerfully and problematically align designers with the values of people who may use their products. Drawing on disability studies and feminist theorizing, we describe how acts of empathy building may further distance people with disabilities from the processes designers intend to draw them into. We end by reimagining empathy as guided by the lived experiences of people with disabilities who are traditionally positioned as those to be empathized with.
Music-streaming platforms offer users a large amount of content for consumption. Finding the right music can be challenging and users often need to search through extensive catalogs provided by these platforms. Prior research has focused on general-domain web search, which is designed to meet a broad range of user goals. Here, we study search in the domain of music, seeking to understand how and why people use search and how they evaluate their search experiences on a music-streaming platform. Over two studies, we conducted semi-structured interviews with 27 participants, asking about their search habits and preferences, and observing their behavior while searching for music. Analysis revealed participants evaluated their search experiences along two dimensions: success and effort. Importantly, how participants perceived success and effort differed by their mindset, or the way they assessed the results of their query. We conclude with recommendations to improve the user experience of music search.
Filling out printed forms (e.g., checks) independently is currently impossible for blind people, since they cannot pinpoint the locations of the form fields, and quite often, they cannot even figure out what fields (e.g., name) are present in the form. Hence, they always depend on sighted people to write on their behalf, and help them affix their signatures. Extant assistive technologies have exclusively focused on reading, with no support for writing. In this paper, we introduce WiYG, a Write-it-Yourself guide that directs a blind user to the different form fields, so that she can independently fill out these fields without seeking assistance from a sighted person. Specifically, WiYG uses a pocket-sized custom 3D printed smartphone attachment, and well-established computer vision algorithms to dynamically generate audio instructions that guide the user to the different form fields. A user study with 13 blind participants showed that with WiYG, users could correctly fill out the form fields at the right locations with an accuracy as high as 89.5%.
Social, Cultural and Systematic Frustrations Motivating the Formation of a DIY Hearing Loss Hacking Community
Research on attitudes to assistive technology (AT) has shown both the positive and negative impact of these technologies on quality of life. Building on this research, we examine the sociocultural and technological frustrations with hearing loss (HL) technologies that motivate personal approaches to solving these issues. Drawing on meet-up observations and contextual interview data, we detail participants’ experiences of and attitudes towards hearing AT that influences hacking hearing loss. Hearing AT is misunderstood as a solution to the impairment, influencing one-to-one interactions, cultural norms, and systematic frustrations. Participants’ exasperation with the slow development of top-down solutions has led some members to design and develop their own personalised solutions. Beyond capturing a segment of the growing DIY health and wellbeing phenomenon, our findings extend beyond implications for design to present recommendations for the hearing loss industry, policy makers, and importantly, for researchers engaging with grassroots DIY health movements.
My Naturewatch Camera is an inexpensive wildlife camera that we designed for people to make themselves as a way of promoting engagement with nature and digital making. We aligned its development to the interests of the BBC’s Natural History Unit as part of an orchestrated engagement strategy also involving our project website and outreach to social media. Since June 2018, when the BBC featured the camera on one of their Springwatch 2018 broadcasts, over 1000 My Naturewatch Cameras have been constructed using instructions and software from our project website and commercially available components, without direct contact with our studio. In this paper, we describe the project and outcomes with a focus on its success in promoting engagement with nature, engagement with digital making, and the effectiveness of this strategy for sharing research products outside traditional commercial channels.
Social media sites are where life happens for many of today’s young people, so it is important to teach them to use these sites safely and effectively. Many youth receive classroom education on digital literacy topics, but have few chances to build actual skills. Social Media TestDrive, an interactive social media simulation, fills a gap in digital literacy education by combining experiential learning in a realistic and safe social media environment with educator-facilitated classroom lessons. The tool was piloted with 12 educators and over 200 students, and formative evaluation data suggest that TestDrive achieved high levels of engagement with both groups. Students reported the modules enhanced their understanding of digital citizenship issues, and educators noted that students were engaging in meaningful classroom conversations. Finally, we discuss the importance of involving multiple stakeholder groups (e.g., researchers, youth, educators, curriculum developers) in designing educational technology.
Investigating the Impact of a Real-time, Multimodal Student Engagement Analytics Technology in Authentic Classrooms
We developed a real-time, multimodal Student Engagement Analytics Technology so that teachers can provide just-in-time personalized support to students who risk disengagement. To investigate the impact of the technology, we ran an exploratory semester-long study with a teacher in two classrooms. We used a multi-method approach consisting of a quasi-experimental design to evaluate the impact of the technology and a case study design to understand the environmental and social factors surrounding the classroom setting. The results show that the technology had a significant impact on the teacher’s classroom practices (i.e., increased scaffolding to the students) and student engagement (i.e., less boredom). These results suggest that the technology has the potential to support teachers’ role of being a coach in technology-mediated learning environments.
Sorting items by user rating is a fundamental interaction pattern of the modern Web, used to rank products (Amazon), posts (Reddit), businesses (Yelp), movies (YouTube), and more. To implement this pattern, designers must take in a distribution of ratings for each item and define a sensible total ordering over them. This is a challenging problem, since each distribution is drawn from a distinct sample population, rendering the most straightforward method of sorting — comparing averages — unreliable when the samples are small or of different sizes. Several statistical orderings for binary ratings have been proposed in the literature (e.g., based on the Wilson score, or Laplace smoothing), each attempting to account for the uncertainty introduced by sampling. In this paper, we study this uncertainty through the lens of human perception, and ask “How do people sort by ratings?” In an online study, we collected 48,000 item-ranking pairs from 4,000 crowd workers along with 4,800 rationales, and analyzed the results to understand how users make decisions when comparing rated items. Our results shed light on the cognitive models users employ to choose between rating distributions, which sorts of comparisons are most contentious, and how the presentation of rating information affects users’ preferences.
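As a concrete illustration of the statistical orderings mentioned above, here is a minimal sketch of the Wilson score lower bound and Laplace smoothing for binary ratings. The function names and the example ratings are illustrative, not taken from the study:

```python
import math

def wilson_lower_bound(positive, n, z=1.96):
    """Lower bound of the Wilson score interval for a binary rating;
    z = 1.96 corresponds to a 95% confidence level."""
    if n == 0:
        return 0.0
    phat = positive / n
    centre = phat + z * z / (2 * n)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)
    return (centre - margin) / (1 + z * z / n)

def laplace_smoothed(positive, n):
    """Laplace ("add one") smoothing pulls small samples toward 0.5."""
    return (positive + 1) / (n + 2)

# A single positive review has a perfect average (1.0), but both
# orderings rank the well-sampled item above it, accounting for
# the uncertainty of the tiny sample.
items = {"one_perfect_review": (1, 1), "ninety_of_hundred": (90, 100)}
ranked = sorted(items, key=lambda k: wilson_lower_bound(*items[k]), reverse=True)
print(ranked)  # ['ninety_of_hundred', 'one_perfect_review']
```

Sorting by raw average would reverse this order, which is exactly the unreliability with small or unequal samples that these orderings try to correct.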
The Smartphone as a Pacifier and its Consequences: Young adults’ smartphone usage in moments of solitude and correlations to self-reflection
The smartphone plays a dominant role in everyday life. Among young adults, the average daily usage time is almost four hours. The present study [N=399] examines the specific psychological role of smartphone usage during alone time (e.g. in the subway, waiting, in bed). Particularly, we explore its role in coping with negative emotions in the sense of an “attachment object”, providing comfort like a pacifier for infants. Results underlined the pacifying role of smartphone usage to cope with negative emotions in moments of alone time. Moreover, particular personality dispositions (e.g., high need to belong, high proneness to boredom) were associated with more extensive self-reported smartphone usage, an association mediated by the perception of the smartphone as an attachment object. Finally, smartphone usage was negatively correlated to self-insight, possibly substituting intense inner debates or self-realizations during alone time. Implications for HCI research and practice are discussed.
Elementary school educators increasingly use digital technologies to teach students, manage classrooms, and complete everyday tasks. Prior work has considered the educational and pedagogical implications of technology use, but little research has examined how educators consider privacy and security in relation to classroom technology use. To better understand what privacy and security mean to elementary school educators, we conducted nine focus groups with 25 educators across three metropolitan regions in the northeast U.S. Our findings suggest that technology use is an integral part of elementary school classrooms, that educators consider digital privacy and security through the lens of curricular and classroom management goals, and that lessons to teach children about digital privacy and security are rare. Using Bronfenbrenner’s ecological systems theory, we identify design opportunities to help educators integrate privacy and security into decisions about digital technology use and to help children learn about digital privacy and security.
Critical approaches to smart technologies have emerged in HCI that question the conditions necessary for smart technologies to benefit people. Smart services rely on a relationship of trust and a sense of security between people and technology, which requires a more expansive definition of security. Using established design methods, we worked with two residents’ groups to critically explore and rethink smart services in the home and city. From our data analysis, we derive insights about perceptions and understandings of trust, privacy and security of smart devices, and identify how technological security needs to work in concert with social and relational forms of security for smart services to be effective. We conclude with an orientation for HCI that focuses on designing services for and with smart people and things.
Voice recording is a challenging task with many pitfalls due to sub-par recording environments, mistakes in recording setup, microphone quality, etc. Newcomers to voice recording often have difficulty recording their voice, leading to recordings with low sound quality. Many amateur recordings of poor quality have two key problems: too much reverberation (echo), and too much background noise (e.g. fans, electronics, street noise). We present VoiceAssist, a system that helps inexperienced users produce high quality recordings by providing real-time visual feedback on audio quality. We integrate modern audio quality measures into an interactive human-machine feedback loop, so that the audio quality can be maximized at capture-time. We demonstrate the utility of this feedback for improving the recording quality with a user study. When presented with visual feedback about recording quality, users produced recordings that were strongly preferred by third-party listeners, when compared to recordings made without this feedback.
HTTPS and TLS are the backbone of Internet security; however, setting up web servers to run these protocols is a notoriously difficult process. In this paper, we perform two usability studies with live participants on the deployment of HTTPS in a real-world setting. Study 1 is a within-subjects comparison between traditional HTTPS configuration (purchasing a certificate and installing it on a server) and Let’s Encrypt, which automates much of the process. Study 2 is a between-subjects study looking at the same two systems, examining why users encounter usability issues. Overall, we confirm past results that HTTPS is difficult to deploy, and we find some evidence that suggests Let’s Encrypt is an easier, more efficient method for deploying HTTPS.
In this paper, we explore an alternative form of volunteer work, PledgeWork, where individuals, rather than working directly for a charity, make indirect donations by completing tasks provided by a third-party task provider. PledgeWork poses novel research questions on issues of user acceptance of online volunteerism, on quality and quantity of work performed as a volunteer, and on the benefits low-barrier volunteerism might provide to charities. To evaluate these questions, we conduct a mixed methods study that compares the quality and quantity of work between volunteer workers and paid workers and user attitudes toward PledgeWork, including perceived benefits and drawbacks. We find that PledgeWork can improve the quality of simple tasks and that the vast majority of our participants expressed interest in using our PledgeWork platform to contribute to a charity. Our interviews also reveal current problems with volunteering and online donations, thus highlighting additional strengths of PledgeWork.
Many smartphone users engage in compulsive and habitual phone checking they find frustrating, yet our understanding of how this phenomenon is experienced is limited. We conducted a semi-structured interview, a think-aloud phone-use demonstration, and a sketching exercise with 39 smartphone users (ages 14-64) to probe their experiences with compulsive phone checking. Their insights revealed a small taxonomy of common triggers that lead up to instances of compulsive phone use and a second set that end compulsive phone use sessions. Though participants expressed frustration with their lack of self-control, they also reported that the activities they engage in during these sessions can be meaningful, which they defined as transcending the current instance of use. Participants said they periodically reflect on their compulsive use and delete apps that drive compulsive checking without providing sufficient meaning. We use these findings to create a descriptive model of the cycle of compulsive checking, and we call on designers to craft experiences that meet users’ definition of meaningfulness rather than creating lock-out mechanisms to help them police their own use.
Wearable activity trackers can encourage physical activity (PA), a behavior critical for preventing obesity and reducing the risks of chronic diseases. However, prior work has rarely explored how these tools can leverage family support or help people think about strategies for being active, two factors necessary for achieving regular PA. In this 2-month qualitative study, we investigated PA tracking practices amongst 14 families living in low-income neighborhoods, where obesity is prevalent. We characterize how social discussions of PA data rarely extended beyond the early stages of experiential learning, limiting the utility of PA trackers. Caregivers and children rarely analyzed their experiences to derive insights about what their PA data meant for their wellbeing. Those who engaged in these higher-order learning processes were often influenced by parenting beliefs shaped by personal health experiences. We contribute recommendations for how technology can more effectively support family experiential learning using PA tracking data.
Consumer-fabrication technologies potentially improve the effectiveness and adoption of assistive technology (AT) by engaging AT users in AT creation. However, little is known about the role of clinicians in this revolution. We investigate clinical AT fabrication by working as expert fabricators for clinicians over a four-month period. We observed and co-designed AT with four occupational therapists at two clinics: a free clinic for uninsured clients, and a Veterans Affairs hospital. We find that existing fabrication processes, particularly with respect to rapid prototyping, do not align with clinical practice and its do-no-harm ethos. We recommend software solutions that would integrate into client care by amplifying clinicians’ expertise, revealing appropriate fabrication opportunities, and supporting adaptable fabrication.
Conveying uncertainty in information artifacts is difficult; the challenge only grows as the demand for mass communication through multiple channels expands. In particular, as natural hazards increase with changing global conditions, including hurricanes which threaten coastal areas, we need better means of communicating uncertainty around risks that empower people to make good decisions. We examine how people share and respond to a range of visual representations of risk from authoritative sources during hurricane events. Because these images are now shared widely on social media platforms, Twitter provides the means to study them on a large scale as close to in vivo as possible. Using mixed methods, this study analyzes diffusion of and reactions to forecast and other risk imagery during the highly damaging 2017 Atlantic hurricane season to describe the collective response to visual representations of risk.
The home has been a major focus of the HCI community for over two decades. Despite this body of research, nascent works have argued that HCI’s characterization of ‘the home’ remains narrow and requires more diverse accounts of domestic configurations. Our work contributes to this area through a four-month ethnography of three collective homes in Vancouver, Canada. Collective homes represent an alternative housing model that offers agency to individual members and the collective group by sharing values, resources, labour, space and memory. Our paper offers two contributions. First, we offer an in-depth design ethnography of three collective homes, attending to the values, ownership models, practices, and everyday interactions observed in the ongoing making of these domestic settings. Second, we interpret and synthesize our findings to provide new opportunities for expanding the way we conceptualize and design for ‘the home’ in HCI.
Recent sustainable HCI research has advocated “working with nature” as a potentially efficacious alternative to human efforts to control it; yet it is less clear how to do so. We contribute to the theoretical aspect of this research by presenting an ethnographic study on alternative farming practices, in which the farm is not so much a system as an assemblage characterized by multiple systems or rationalities that are always evolving and changing. In these assemblages, relationships among species alternate between mutually beneficial in one moment (or season) and harmful in the next. If HCI is to participate in and to support working with nature, we believe that it will have to situate itself within such assemblages and temporalities. In this work, we look into nontraditional users (e.g., nonhumans) and emerging forms of uses (e.g., interactions between humans and other species) to help open a design space for technological interventions. We offer three ethnographic accounts in which farmers, and ourselves as researchers, learn to notice, respond to, and engage in symbiotic encounters with companion species and the living soil itself.
Although modern classrooms are increasingly moving towards digital immersion and personalized learning, we have few insights into K-12 teachers’ current practices, motivations, and barriers in setting up their digital classroom ecosystems. We interviewed 20 teachers on their process of discovering and integrating a vast range of productivity software and educational platforms in their classrooms, with a particular focus on how they personalize the UI and content of these tools (e.g., with plugins, templates, or option menus). We found that teachers largely depended on their own experimentation and professional circles to find, personalize, and troubleshoot software tools to support student needs or their own preferences. Teachers were often hesitant to attempt more advanced personalizations due to concerns over student confusion and increased troubleshooting load. We derive several design implications for HCI to better support teachers in sharing their personalized setups and helping their students benefit from digital immersion.
Healthy Lies: The Effects of Misrepresenting Player Health Data on Experience, Behavior, and Performance
Game designers use a variety of techniques that mislead players with the goal of shaping the play experience. For example, designers may manipulate data displays of player health, showing players have less health than they actually do, to induce tension. While such manipulations are commonly used, players make decisions based on in-game data displays, raising the question of how misrepresentations impact behavior and performance, and whether this might have unintended consequences. To provide a better understanding of how data misrepresentation impacts play, we compare two versions of a game: one that displays health accurately and one that misrepresents health. Our results suggest that even subtle manipulations to data displays can have a measurable effect on behavior and performance, and these changes can help explain differences in experience. We show that data misrepresentations need to be designed carefully to avoid unintended effects. Our work provides new directions for research into the design of misrepresentation in games.
Pseudo-Haptic Weight: Changing the Perceived Weight of Virtual Objects By Manipulating Control-Display Ratio
In virtual reality, the lack of kinesthetic feedback often prevents users from experiencing the weight of virtual objects. Control-to-display (C/D) ratio manipulation has been proposed as a method to induce weight perception without kinesthetic feedback. Based on the fact that lighter (heavier) objects are easier (harder) to move, this method induces an illusory perception of weight by manipulating the rendered position of users’ hands, increasing or decreasing their displayed movements. In a series of experiments we demonstrate that C/D ratio manipulation induces a genuine perception of weight while preserving ownership over the virtual hand. This means that such a manipulation can be easily introduced in current VR experiences without disrupting the sense of presence. We discuss these findings in terms of estimation of the physical work needed to lift an object. Our findings provide the first quantification of the range of C/D ratios that can be used to simulate weight in virtual reality.
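As a rough illustration of the C/D ratio manipulation described above, the virtual hand can be rendered at a position whose displacement from the grab point is the real displacement scaled by the ratio. The function and the ratio values below are hypothetical sketches, not the parameters used in the paper:

```python
def displayed_position(hand_start, hand_now, cd_ratio):
    """Scale the user's real hand displacement by the control-to-display
    ratio. A ratio < 1 shrinks the displayed movement, so the lifted
    object feels harder to move, i.e. heavier; a ratio > 1 makes it feel
    lighter. Positions are simple (x, y, z) tuples for illustration."""
    return tuple(s + cd_ratio * (p - s) for s, p in zip(hand_start, hand_now))

# The real hand moves 10 cm upward; with a ratio of 0.7 the virtual
# hand is rendered as having moved only 7 cm, suggesting a heavy object.
start, now = (0.0, 0.0, 0.0), (0.0, 0.1, 0.0)
print(tuple(round(c, 3) for c in displayed_position(start, now, 0.7)))
```

A real implementation would apply this per frame to the tracked controller pose while an object is held, and reset the offset on release.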
EnhancedTouchX, a bracelet-type interpersonal body area network device, not only detects but also quantifies interpersonal hand-to-hand touch interactions. Without any wired connection, it can identify the direction and gestures of a touch. The developed device can connect to an external device via Bluetooth Low Energy for monitoring and logging where, when, how long, who, and how the touch interactions occurred. Augmenting daily touch interactions with such contextual information could enable a variety of applications that facilitate social interactions. Our experiment, conducted with several pairs of participants, demonstrates that the devices can identify the direction of a touch (from the one initiating the touch (active touch) to the one being touched (passive touch)) with 95% accuracy. In addition, the devices are also capable of identifying four types of touch gestures with 85% accuracy using a simple threshold classifier.
We introduce the IoT Un-Kit Experience, a co-design approach that engages people in exploring, designing and generating personally meaningful IoT applications and that also serves as a means to explore IoT kit design through in-home workshops. An Un-Kit is a seemingly incomplete set of sensors, actuators and media elements with a decontextualized appearance: unfinished state, undefined purpose and unboxed form. The approach emphasises users contemplating and experiencing the IoT elements in their familiar space through detailed and layered conversation with researchers, rather than focusing on connecting up the kit itself, so their ideas are not constrained by the kit or their competence with it. We illustrate the approach through in-home workshops with older adults, envisioned users of IoT who have had limited voice in its conception. The Un-Kit approach supported participants in leading the process and imagining new, artfully integrated designs with personally legible interactions and aesthetic qualities that fit their desires. We offer insights for a more situated and responsive approach to the design of the IoT and its constituent kits.
Augmented reality (AR) games have been growing in popularity in recent years. However, current AR games offer limited opportunities for a synchronous multiplayer experience. This paper introduces a model for designing AR experiences in which players inhabit a shared, real-time augmented environment and can engage in synchronous and collaborative interactions with other players. We explored the development of this model through the creation of Brick, a two-player mobile AR game at the room scale. We refined Brick over multiple rounds of iteration, and we used our playtests to investigate a range of issues involved in designing shared-world AR games. Our findings suggest that there are five major categories of interactions in a shared-world AR system: single-player, intrapersonal, multiplayer, interpersonal, and environmental. We believe that this model can support the development of collaborative AR games and new forms of social gameplay.
The recent proliferation of fabrication and making activities has introduced a large number of users to a variety of tools and equipment. Monitored, reactive and adaptive fabrication spaces are needed to provide personalized information, feedback and assistance to users. This paper explores the sensorization of making and fabrication activities, where the environment, tools, and users were considered to be separate entities that could be instrumented for data collection. From this exploration, we present the design of a modular system that can capture data from the varied sensors and infer contextual information. Using this system, we collected data from fourteen participants with varying levels of expertise as they performed seven representative making tasks. From the collected data, we predict which activities are being performed, which users are performing the activities, and what expertise the users have. We present several use cases of this contextual information for future interactive fabrication spaces.
Scaptics and Highlight-Planes: Immersive Interaction Techniques for Finding Occluded Features in 3D Scatterplots
Three-dimensional scatterplots suffer from well-known perception and usability problems. In particular, overplotting and occlusion, mainly due to density and noise, prevent users from properly perceiving the data. Thanks to accurate head and hand tracking, immersive Virtual Reality (VR) setups provide new ways to interact and navigate with 3D scatterplots. VR also supports additional sensory modalities such as haptic feedback. Inspired by methods commonly used in Scientific Visualisation to visually explore volumes, we propose two techniques that leverage the immersive aspects of VR: first, a density-based haptic vibration technique (Scaptics) which provides feedback through the controller; and second, an adaptation of a cutting plane for 3D scatterplots (Highlight-Plane). We evaluated both techniques in a controlled study with two tasks involving density (finding high- and low-density areas). Overall, Scaptics was the most time-efficient and accurate technique; however, in some conditions it was outperformed by Highlight-Plane.
Sensing interfaces relying on head or facial gestures provide effective solutions for hands-free scenarios. Most of these interfaces utilize sensors attached to the face or placed inside the mouth, making them either obtrusive or limited in input bandwidth. In this paper, we propose ChewIt — a novel intraoral input interface. ChewIt resembles an edible object that allows users to perform various hands-free input operations, both simply and discreetly. Our design is informed by a series of studies investigating the implications of shape, size, and location on comfort, discreetness, maneuverability, and obstructiveness. Additionally, we evaluated potential gestures that users could use to interact with such an intraoral interface.
Although many challenges of managing computer files have been identified in past studies — and many alternative prototypes made — the scale and structure of personal file collections remain relatively unknown. We studied 348 such collections, and found they are typically considerably larger in scale (30-190 thousand files) and structure (folder trees twice as tall and many times wider) than previously thought, which suggests files and folders are used now more than ever despite advances in Web storage, desktop search, and tagging. Data along many measures within and across collections were log-normally distributed, indicating that personal collections resemble imbalanced, group-made collections and confirming the intuition that personal information management behaviour varies greatly. Directions for the generation of test collections and other future research are discussed.
The Effects of Interruption Timings on Autonomous Height-Adjustable Desks that Respond to Task Changes
Actuated furniture, such as electric adjustable sit-stand desks, helps users vary their posture and contributes to comfort and health. However, studies found that users rarely initiate height changes. Therefore, in this paper, we look into furniture that adjusts itself to the user’s needs. A situated interview study indicated task-changing as an opportune moment for automatic height adjustment. We then performed a Wizard of Oz study to find the best timing for changing desk height to minimize interruption and discomfort. The results are in line with prior work on task interruption in graphical user interfaces and show that the table should change height during a task change. However, results also indicate that until users build trust in the system, they prefer actuation after a task change to experience the impact of the adjustment. Based on the results, we discuss design guidelines for interactive desks with agency.
IoT appliances are gaining consumer traction, from smart thermostats to smart speakers. These devices generally have limited user interfaces, most often small buttons and touchscreens, or rely on voice control. Further, these devices know little about their surroundings, unaware of objects, people and activities happening around them. Consequently, interactions with these “smart” devices can be cumbersome and limited. We describe SurfaceSight, an approach that enriches IoT experiences with rich touch and object sensing, offering a complementary input channel and increased contextual awareness. For sensing, we incorporate LIDAR into the base of IoT devices, providing an expansive, ad hoc plane of sensing just above the surface on which devices rest. We can recognize and track a wide array of objects, including finger input and hand gestures. We can also track people and estimate which way they are facing. We evaluate the accuracy of these new capabilities and illustrate how they can be used to power novel and contextually-aware interactive experiences.
Productivity behavior change systems help us reduce our time on unproductive activities. However, is that time actually saved, or is it just redirected to other unproductive activities? We report an experiment using HabitLab, a behavior change browser extension and phone application, that manipulated the frequency of interventions on a focal goal and measured the effects on time spent on other applications and platforms. We find that, when intervention frequency increases on the focal goal, time spent on other applications is held constant or even reduced. Likewise, we find that time is not redistributed across platforms from browser to mobile phone or vice versa. These results suggest that any conservation of procrastination effect is minimal, and that behavior change designers may target individual productivity goals without causing substantial negative second-order effects.
In working to rescue victims of human trafficking, law enforcement officers face a host of challenges. Working in complex, layered organizational structures, they face challenges of collaboration and communication. Online information is central to every phase of a human-trafficking investigation. With terabytes of available data such as sex work ads, policing is increasingly a big-data research problem. In this study, we interview sixteen law enforcement officers working to rescue victims of human trafficking to try to understand their computational needs. We highlight three major areas where future work in human-computer interaction can help. First, combating human trafficking requires advances in information visualization of large, complex, geospatial data, as victims are frequently forcibly moved across jurisdictions. Second, the need for unified information databases raises critical research issues of usable security and privacy. Finally, the archaic nature of information systems available to law enforcement raises policy issues regarding resource allocation for software development.
Vocal shortcuts, short spoken phrases to control interfaces, have the potential to reduce the cognitive and physical costs of interaction. They may benefit expert users of creative applications (e.g., designers, illustrators) by helping them maintain creative focus. To aid the design of vocal shortcuts and gather use cases and design guidelines for speech interaction, we interviewed ten creative experts. Based on our findings, we built VoiceCuts, a prototype implementation of vocal shortcuts in the context of an existing creative application. In contrast to other speech interfaces, VoiceCuts targets experts’ unique needs by handling short and partial commands and leverages the document model and application context to disambiguate user utterances. We report on the viability and limitations of our approach based on feedback from creative experts.
Rumors are an enduring form of communication across socio-cultural landscapes globally. Counter to their typical negative association, rumors play a nuanced role, helping people collectively deal with problems through constructing a representation of an uncertain situation. Drawing on unstructured interviews and participant observation from a technology goods marketplace in Bangalore, India, we study the circulation of rumors related to the government’s recent policy of demonetization and entry of online marketplaces and digital wallets, all of which disrupted existing market practices. These rumors emerge as attempts at sensemaking when a community is faced with ambiguity. Through highlighting the relationship of institutional trust with rumors, the paper argues that the study of rumors can help us identify the concerns of a community in the face of differential power relations. Further, rumors are a form of social bonding which help communities make sense of their place in society and shape existing practices.
We examine nursing documentation on a newly implemented electronic flowsheet in medical resuscitations to identify the temporal patterns of documentation and how the recorded information supported time-critical teamwork. To determine when the information was documented, we compared timestamps from 58 flowsheet logs to those of verbal communications derived from video review. We also drew on observations of 95 resuscitations to understand the behaviors of nurse documenters. We found that only 8% of the verbal reports were documented in near real-time (within one minute of the verbal report), while 42% of reports were not documented in the electronic flowsheet. In addition, 38% were documented early (before the verbal report) and 12% were documented with a delay, ranging from one to 58 minutes after the report. Our study showed that the electronic flowsheet design posed many challenges for real-time documentation, leading to paper-based workarounds and the use of free-text fields on the flowsheet to visualize and keep track of time, and to communicate temporal information to the team. These findings suggest that documenters shape the temporal rhythms of not only their own work but also the rhythms of the electronic record and medical process. We discuss the implications of these rhythms for EHR redesign to support real-time documentation in high-risk, safety-critical settings.
Personas are valuable tools to help designers get to know their users and adopt their perspectives. Yet people are complex, and multiple identities have to be considered in their interplay to account for a comprehensive representation; otherwise, personas might be superficial and prone to activate stereotypes. Therefore, the way users’ identities are presented in a limited set of personas is crucial to account for diversity and highlight facets which otherwise would go unnoticed. In this paper, we introduce an approach to the development of personas informed by social identity theory. The effectiveness of this approach is investigated in a qualitative study in the context of the design process for an e-learning platform for women in tech. The results suggest that considering multiple identities in the construction of personas adds value when designing technologies.
Camera manipulation confounds the use of object recognition applications by blind people. This is exacerbated when photos from this population are also used to train models, as with teachable machines, where out-of-frame or partially included objects against cluttered backgrounds degrade performance. Leveraging prior evidence on the ability of blind people to coordinate hand movements using proprioception, we propose a deep learning system that jointly models hand segmentation and object localization for object classification. We investigate the utility of hands as a natural interface for including and indicating the object of interest in the camera frame. We confirm the potential of this approach by analyzing existing datasets from people with visual impairments for object recognition. With a new publicly available egocentric dataset and an extensive error analysis, we provide insights into this approach in the context of teachable recognizers.
Online Collectible Card Games (OCCGs) are enormously popular worldwide. Previous studies found that the social aspects of physical CCGs are crucial for player engagement. However, we know little about the different types of sociability that OCCGs afford, nor to what extent they influence players’ social experiences. This mixed-method online survey study focuses on a representative OCCG, Hearthstone, to 1) identify and define social design features and examine the extent to which players’ use of these features predicts their sense of community; 2) investigate participants’ attitudes towards and experiences with the game community. The results show that players rarely use social features, and these features contribute differently to predicting players’ sense of community. We also found emergent toxic behaviors, afforded by the social features. Findings can inform best practices and principles in the design of OCCGs, and contribute to our understanding of players’ perceptions of OCCG communities.
Capturing fine-grained hand activity could make computational experiences more powerful and contextually aware. Indeed, philosopher Immanuel Kant argued, “the hand is the visible part of the brain.” However, most prior work has focused on detecting whole-body activities, such as walking, running and bicycling. In this work, we explore the feasibility of sensing hand activities from commodity smartwatches, which are the most practical vehicle for achieving this vision. Our investigations started with a 50 participant, in-the-wild study, which captured hand activity labels over nearly 1000 worn hours. We then studied this data to scope our research goals and inform our technical approach. We conclude with a second, in-lab study that evaluates our classification stack, demonstrating 95.2% accuracy across 25 hand activities. Our work highlights an underutilized, yet highly complementary contextual channel that could unlock a wide range of promising applications.
The HCI community has worked to expand and improve our consideration of the societal implications of our work and our corresponding responsibilities. Despite this increased engagement, HCI continues to lack an explicitly articulated politic, which we argue re-inscribes and amplifies systemic oppression. In this paper, we set out an explicit political vision of an HCI grounded in emancipatory autonomy – an anarchist HCI, aimed at dismantling all oppressive systems by mandating suspicion of and a reckoning with imbalanced distributions of power. We outline some of the principles and accountability mechanisms that constitute an anarchist HCI. We offer a potential framework for radically reorienting the field towards creating prefigurative counterpower – systems and spaces that exemplify the world we wish to see – as we go about building the revolution in increments.
Beyond “One-Size-Fits-All”: Understanding the Diversity in How Software Newcomers Discover and Make Use of Help Resources
For most modern feature-rich software, considerable external help and learning resources are available on the web (e.g., documentation, tutorials, videos, Q&A forums). But, how do users new to an application discover and make use of such resources? We conducted in-lab and diary studies with 26 software newcomers from a variety of different backgrounds who were all using Fusion 360, a 3D modeling application, for the first time. Our results illustrate newcomers’ diverse needs, perceptions, and help-seeking behaviors. We found a number of distinctions in how technical and non-technical users approached help-seeking, including: when and how they initiated the help-seeking process, their struggles in recognizing relevant help, the degree to which they made coordinated use of the application and different resources, and in how they perceived the utility of different help formats. We discuss implications for moving beyond “one-size-fits-all” help resources towards more structured, personalized, and curated help and learning materials.
Many working professionals commute via public transit, yet they have limited tools for learning about their urban neighborhoods and fellow commuters. We designed a location-based game called City Explorer to investigate how transit commuters capture, share, and view community information that is specifically tied to locations. Through a four-week field study, we found that participants valued the increased awareness of their personal travel routines that they gained through City Explorer. When viewing community information, they preferred information that was factual rather than opinion-based and was presented at the start and end of their commutes. Participants found less value in connecting with other transit riders because transit rides were often seen as opportunities to disengage from others. We discuss how location-based technologies can be designed to display factual community information before, during, and at the end of transit commutes.
Dynamics of Visual Attention in Multiparty Collaborative Problem Solving using Multidimensional Recurrence Quantification Analysis
Multiparty collaborative problem solving – an increasingly important context in the 21st century workforce – suffers from a degradation of social and behavioral signals when attempted remotely, resulting in suboptimal outcomes. We investigate teams’ multidimensional patterns of visual attention during a collaborative problem-solving task with an eye for leveraging insights to improve collaborative interfaces. Fifty-seven novices (forming 19 triads) engaged in a challenging programming task (Minecraft Hour of Code) using videoconferencing software with screen sharing. To discover patterns of individual-level gaze-UI coupling (coordination of a teammate’s attention with respect to changes in the user interface) and team-level gaze-UI regularity (dynamics of teams’ collective attention in context with changes in the user interface), we applied cross- and multidimensional recurrence quantification analyses, respectively. Individuals’ eye gaze was significantly coupled with the ongoing screen activity, whereas teams displayed significant patterns of gaze regularity, suggesting repetitive patterns in teams’ attention. These measures predicted expert-coded collaborative processes of constructing shared knowledge and negotiation and coordination (but not maintaining team function) and correlated with task score (r = .425). They also predicted individually assessed subjective perceptions of team performance and the collaboration process, but not individuals’ learning or teams’ task scores. We discuss implications of our findings for the design of intelligent collaborative interfaces.
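The core quantity behind the cross-recurrence analysis the abstract applies to gaze-UI coupling can be illustrated with a minimal sketch. This is not the authors' implementation: the function, series names, and radius value are hypothetical, and real CRQA operates on embedded, multidimensional series rather than the 1-D toy shown here.

```python
# Illustrative sketch of a 1-D cross-recurrence rate: the fraction of
# time-point pairs at which two series fall within a fixed radius of
# each other. Names and the radius are assumptions for illustration.

def cross_recurrence_rate(series_a, series_b, radius):
    """Fraction of (i, j) pairs where |a_i - b_j| <= radius."""
    hits = sum(
        1
        for a in series_a
        for b in series_b
        if abs(a - b) <= radius
    )
    return hits / (len(series_a) * len(series_b))

# Toy example: a "gaze" series that loosely tracks a "UI activity" series.
gaze = [0.1, 0.5, 0.9, 0.5, 0.1]
ui = [0.0, 0.4, 1.0, 0.6, 0.0]
print(round(cross_recurrence_rate(gaze, ui, radius=0.15), 2))  # 0.36
```

A higher rate for gaze-vs-UI than for shuffled baselines would indicate the kind of coupling the study reports.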
Older adults are rapidly increasing their use of online services such as banking, social media, and email – services that come with subtle and serious security and privacy risks. Older adults with mild cognitive impairment (MCI) are particularly vulnerable to these risks because MCI can reduce their ability to recognize scams such as email phishing, follow recommended password guidelines, and consider the implications of sharing personal information. Older adults with MCI often cope with their impairments with the help of caregivers, including partners, children, and professional health personnel, when using and managing online services. Yet, this too carries security and privacy risks: sharing personal information with caregivers can create issues of agency, autonomy, and even risk embarrassment and information leakage; caregivers also do not always act in their charges’ best interest. Through a series of interviews conducted in the US, we identify a spectrum of safeguarding strategies used and consider them through the lens of ‘upside and downside risk’ where there are tradeoffs between reduced privacy and maintaining older adults’ autonomy and access to online services.
Participants in online communities often enact different roles when participating in their communities. For example, some in cancer support communities specialize in providing disease-related information or socializing new members. This work clusters the behavioral patterns of users of a cancer support community into specific functional roles. Based on a series of quantitative and qualitative evaluations, this research identified eleven roles that members occupy, such as welcomer and story sharer. We investigated role dynamics, including how roles change over members’ lifecycles, and how roles predict long-term participation in the community. We found that members frequently change roles over their history, from ones that seek resources to ones offering help, while the distribution of roles is stable over the community’s history. Adopting certain roles early on predicts members’ continued participation in the community. Our methodology will be useful for facilitating better use of members’ skills and interests in support of community-building efforts.
Woven smart textiles are useful in creating flexible electronics because they integrate circuitry into the structure of the fabric itself. However, there do not yet exist tools that support the specific needs of smart textiles weavers. This paper describes the process and development of AdaCAD, an application for composing smart textile weave drafts. By augmenting traditional weaving drafts, AdaCAD allows weavers to design woven structures and circuitry in tandem and offers specific support for common smart textiles techniques. We describe these techniques and how our tool supports them, alongside feedback from smart textiles weavers. We conclude with a reflection on smart textiles practice more broadly and suggest that the metaphor of coproduction can be fruitful in creating effective tools and envisioning future applications in this space.
Visualizations are emerging as a means of spreading digital misinformation. Prior work has shown that visualization interpretation can be manipulated through slanted titles that favor only one side of the visual story, yet people still think the visualization is impartial. In this work, we study whether such effects continue to exist when titles and visualizations exhibit greater degrees of misalignment: titles whose message differs from the visually cued message in the visualization, and titles whose message contradicts the visualization. We found that although titles with a contradictory slant triggered more people to identify bias compared to titles with a miscued slant, visualizations were persistently perceived as impartial by the majority. Further, people’s recall of the visualization’s message more frequently aligned with the titles than the visualization. Based on these results, we discuss the potential of leveraging textual components to detect and combat visual-based misinformation with text-based slants.
Today’s virtual reality (VR) systems offer chaperone rendering techniques that prevent the user from colliding with physical objects. Without a detailed geometric model of the physical world, these techniques offer limited possibility for more advanced compositing between the real world and the virtual. We explore this using a realtime 3D reconstruction of the real world that can be combined with a virtual environment. RealityCheck allows users to freely move, manipulate, observe, and communicate with people and objects situated in their physical space without losing the sense of immersion or presence inside their virtual world. We demonstrate RealityCheck with seven existing VR titles, and describe compositing approaches that address the potential conflicts when rendering the real world and a virtual environment together. A study with frequent VR users demonstrates the affordances provided by our system and how it can be used to enhance current VR experiences.
A key challenge for virtual reality level designers is striking a balance between maintaining the immersiveness of VR and providing users with on-screen aids after designing a virtual experience. These aids are often necessary for wayfinding in virtual environments with complex paths. We introduce a novel adaptive aid that maintains the effectiveness of traditional aids, while equipping designers and users with control over how often help is displayed. Our adaptive aid uses gaze patterns to predict a user’s need for navigation aid in VR and displays mini-maps or arrows accordingly. Using a dataset of gaze angle sequences of users navigating a VR environment and markers of when users requested aid, we trained an LSTM to classify users’ gaze sequences as needing navigation help and display an aid. We validated the efficacy of the adaptive aid for wayfinding compared to other commonly-used wayfinding aids.
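The data-preparation step implied by this abstract — slicing a gaze-angle stream into fixed-length sequences and labeling each by whether an aid request fell inside it — can be sketched as follows. The window length, names, and label scheme are assumptions for illustration; the LSTM classifier itself is deliberately omitted.

```python
# Hypothetical sketch of turning a gaze-angle stream plus aid-request
# markers into labeled sequences for an LSTM. The window size and all
# names are illustrative, not the authors' actual parameters.

WINDOW = 5  # samples per training sequence (assumed)

def label_gaze_windows(gaze_angles, aid_request_indices):
    """Return (window, label) pairs: label is 1 if any aid request
    falls inside the window's sample range, else 0."""
    requests = set(aid_request_indices)
    examples = []
    for start in range(0, len(gaze_angles) - WINDOW + 1, WINDOW):
        window = gaze_angles[start:start + WINDOW]
        label = int(any(i in requests for i in range(start, start + WINDOW)))
        examples.append((window, label))
    return examples

angles = [3.0, 2.5, 10.2, 40.1, 38.0, 1.2, 0.8, 1.1, 0.9, 1.0]
examples = label_gaze_windows(angles, aid_request_indices=[3])
print([label for _, label in examples])  # [1, 0]
```

Each (window, label) pair could then be fed to a sequence classifier such as an LSTM, as the study describes.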
Older adults are increasingly vulnerable to cybersecurity attacks and scams. Yet we know relatively little about their understanding of cybersecurity, their information-seeking behaviours, and their trusted sources of information and advice in this domain. We conducted 22 semi-structured interviews with community-dwelling older adults in order to explore their cybersecurity information-seeking behaviours. Following a thematic analysis of these interviews, we developed a cybersecurity information access framework that highlights shortcomings in older adults’ choice of information resources. Specifically, we find that older users prioritise social resources based on availability, rather than cybersecurity expertise, and that they avoid using the Internet for cybersecurity information searches despite using it for other domains. Finally, we discuss the design of cybersecurity information dissemination strategies for older users, incorporating favoured sources such as TV adverts and radio programming.
We examined the integration of VR into informal and less-structured learning environments in Atlanta (USA) and Mumbai (India) through a process of co-design, co-creation, and co-learning with students and teachers where students learned to use VR to engage with their economic, social, and cultural realities. Using qualitative methods, we engaged students and teachers at both sites in VR content creation activities; through these activities, we attempt to uncover a deeper understanding of the challenges and opportunities of introducing low-cost mobile VR for content generation, consumption, and sharing in underserved learning contexts. We also motivate future work that looks at integrating VR in new contexts, using flexible methods, across borders. The larger vision of our research is to advance us towards greater accessibility and inclusivity of VR across diverse learning environments.
We report on the findings of a co-speculative design inquiry that investigates alternative visions of the Internet of Things (IoT) for the home. We worked with 16 people living in non-stereotypical homes to develop situated and personal concepts attuned to their home. As a prompt for co-speculation and discussion, we created handmade booklets where we took turns overlaying sketched design concepts on top of photos taken with participants in their homes. Our findings reveal new avenues for the design of IoT systems such as: acknowledging porous boundaries of the home, exposing neighborly relations, exploring diverse timescales, revisiting agency, and embracing imaginary and potential uses. We invite human-computer interaction and design researchers to use these avenues as starting points to broaden current assumptions embedded in design and research practices for domestic technologies. We conclude by highlighting the value of examining divergent perspectives and surfacing the unseen.
Designing new technologies to support the lived experience of dementia is of increasing interest within HCI. While there is guidance on qualitative research methods to use in areas such as dementia, there is a need for more appropriate ways to conduct research with younger demographics. In Younger Onset Dementia (YOD), the circumstances and experiences are markedly different from dementia in the later stage of life, requiring a different approach. This paper presents insights into the methods and approaches used in fieldwork with five people living with YOD, where they engaged as co-researchers in a co-directed inquiry into their lived experiences. Through this, we make a number of methodological contributions to HCI and Participatory Action Research (PAR) for research in the YOD setting. This includes productive approaches that are sensitive, respectful and empowering to the participants. It also extends current approaches to using probes in HCI and dementia research.
"I was really, really nervous posting it": Communicating about Invisible Chronic Illnesses across Social Media Platforms
People with invisible chronic illnesses (ICIs) can use social media to seek both informational and emotional support, but these individuals also face social and health-related challenges in posting about their often-stigmatized conditions online. To understand how they evaluate different platforms for disclosure, we interviewed 19 people with ICIs who post about their illnesses on general social media platforms such as Facebook, Instagram, and Twitter. We present a cross-platform analysis of how platforms varied in their suitability to achieve participants’ goals, as well as the challenges posed by each platform. We also found that as participants’ ICIs progressed, their goals, challenges, and social media use similarly evolved over time. Our findings highlight how people with ICIs select platforms from a broader ecology of social media and suggest a general need to understand shifts in social media use for populations with chronic but changing health concerns.
The Effect of Field-of-View Restriction on Sex Bias in VR Sickness and Spatial Navigation Performance
Recent studies show that women are more susceptible to visually-induced VR sickness, which might explain the low adoption rate of VR technology among women. Reducing field-of-view (FOV) during locomotion is already a widely used strategy to reduce VR sickness as it blocks peripheral optical flow perception and mitigates visual/vestibular conflict. Prior studies show that men are more adept at 3D spatial navigation than women, though this sex bias can be minimized by providing women with a larger FOV. Our study provides insight into the relationship between sex and FOV restriction with respect to VR sickness and spatial navigation performance which seem to conflict. We find the use of an FOV restrictor to be effective in mitigating VR sickness in both sexes while we did not find a negative effect of FOV restriction on spatial navigation performance.
Multi-touch gestures can be very difficult to program correctly because they require that developers build high-level abstractions from low-level touch events. In this paper, we introduce programming primitives that enable programmers to implement multi-touch gestures in a more understandable way by helping them build these abstractions. Our design of these primitives was guided by a formative study, in which we observed developers’ natural implementations of custom gestures. Touch groups provide summaries of multiple fingers rather than requiring that programmers track them manually. Cross events allow programmers to summarize the movement of one or a group of fingers. We implemented these two primitives in two environments: a declarative programming system and in a standard imperative programming language. We found that these primitives are capable of defining nuanced multi-touch gestures, which we illustrate through a series of examples. Further, in two user evaluations of these programming primitives, we found that multi-touch behaviors implemented in these programming primitives are more understandable than those implemented with standard touch events.
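The two primitives the abstract names can be illustrated with a toy sketch. The API below is invented for illustration and is not the paper's implementation: a "touch group" summarizes several fingers as one centroid, and a "cross event" fires when that centroid moves across a line, sparing the programmer per-finger bookkeeping.

```python
# Toy illustration of the touch-group and cross-event primitives.
# All names and signatures here are hypothetical, not the paper's API.

def touch_group_centroid(touches):
    """Summarize a set of (x, y) finger positions as their centroid."""
    xs = [x for x, _ in touches]
    ys = [y for _, y in touches]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def crossed(prev_touches, cur_touches, line_y):
    """Cross event: did the group's centroid move across y = line_y?"""
    _, prev_y = touch_group_centroid(prev_touches)
    _, cur_y = touch_group_centroid(cur_touches)
    return (prev_y - line_y) * (cur_y - line_y) < 0

# Two fingers swipe downward together across the line y = 100.
before = [(10, 80), (30, 90)]   # centroid y = 85
after = [(12, 110), (32, 120)]  # centroid y = 115
print(crossed(before, after, line_y=100))  # True
```

With raw touch events, the same check would require tracking each finger's identifier and position history by hand, which is the kind of low-level bookkeeping the paper's primitives abstract away.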
While Human-Computer Interaction (HCI) research on health and well-being is increasingly becoming more aware and inclusive of its social and political dimensions, spiritual practices are still largely overlooked there. For a large number of people around the world, especially in the global south, witchcraft, sorcery, and other occult practices are the primary means of achieving health, wealth, satisfaction, and happiness. Building on an eight-month long ethnography in six villages in Jessore, Bangladesh, this paper explores the knowledge, materials, and politics involved in the local witchcraft practices there. By drawing from a rich body of anthropological work on witchcraft, this paper discusses how those findings contribute to the broader issues in HCI around morality, modernity, and postcolonial computing. This paper concludes by recommending ways for smooth integration of traditional occult practices with HCI through design and policy. We argue for occult practices as an under-appreciated site for HCI to learn how to combat ideological hegemony.
Advances in conversational AI have the potential to enable more engaging and effective ways to teach factual knowledge. To investigate this hypothesis, we created QuizBot, a dialogue-based agent that helps students learn factual knowledge in science, safety, and English vocabulary. We evaluated QuizBot with 76 students through two within-subject studies against a flashcard app, the traditional medium for learning factual knowledge. Though both systems used the same algorithm for sequencing materials, QuizBot led to students recognizing (and recalling) over 20% more correct answers than when students used the flashcard app. Although a conversational agent is more time-consuming to practice with, in a second study students, of their own volition, spent 2.6x more time learning with QuizBot than with flashcards and reported strongly preferring it for casual learning. Our results in this second study showed QuizBot yielded improved learning gains over flashcards on recall. These results suggest that educational chatbot systems may have beneficial use, particularly for learning outside of traditional settings.
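The abstract does not specify the shared sequencing algorithm, but a common choice for factual-knowledge practice is a Leitner-style scheduler, where correctly answered items are promoted to boxes reviewed less often. The sketch below is a hedged illustration of that general idea, not QuizBot’s implementation.

```python
# Hedged sketch of a Leitner-style sequencing scheme (illustrative only;
# the paper does not state which algorithm QuizBot and the flashcard app
# shared). Correct answers promote an item to a higher box; wrong answers
# demote it back to box 0, which is drilled first.

from collections import deque

class LeitnerScheduler:
    def __init__(self, items, n_boxes=3):
        self.boxes = [deque(items)] + [deque() for _ in range(n_boxes - 1)]

    def next_item(self):
        # Always drill the lowest non-empty box (least-known items first).
        for box in self.boxes:
            if box:
                return box[0]
        return None

    def answer(self, correct):
        # Move the current item: up one box if correct, back to box 0 if not.
        for i, box in enumerate(self.boxes):
            if box:
                item = box.popleft()
                dest = min(i + 1, len(self.boxes) - 1) if correct else 0
                self.boxes[dest].append(item)
                return

sched = LeitnerScheduler(['osmosis', 'inertia'])
sched.answer(correct=True)   # 'osmosis' is promoted to box 1
print(sched.next_item())     # 'inertia' (still in box 0, so drilled next)
```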
This paper presents co-design fiction as an approach to engaging users in imagining, envisioning and speculating not just on future technology but on future life through co-created fictional works. Design fiction in research is often created or written by researchers; there is relatively little critical discussion of how to co-create design fictions with end-users, and of the concomitant opportunities and challenges this poses. To fill this gap in knowledge, we conducted co-design fiction workshops with nine older creative writers, utilising prompts to inspire discussion and engage their imaginative writing about the trend towards tracking and monitoring older people. Their stories revealed futures that were neither dystopian nor utopian but full of social and moral dilemmas, expressing not just a wish to “maintain their independence” but a palpable desire for adventure and nuanced senses of how they wish to take control. We discuss inherent tensions in the control of the co-design fiction process; balancing the author’s need for freedom and creativity with the researcher’s desire to guide the process toward the design investigation at hand.
Beyond The Force: Using Quadcopters to Appropriate Objects and the Environment for Haptics in Virtual Reality
Quadcopters have been used as hovering encountered-type haptic devices in virtual reality. We suggest that quadcopters can facilitate rich haptic interactions beyond force feedback by appropriating physical objects and the environment. We present HoverHaptics, an autonomous safe-to-touch quadcopter and its integration with a virtual shopping experience. HoverHaptics highlights three affordances of quadcopters that enable these rich haptic interactions: (1) dynamic positioning of passive haptics, (2) texture mapping, and (3) animating passive props. We identify inherent challenges of hovering encountered-type haptic devices, such as their limited speed, inadequate control accuracy, and safety concerns. We then detail our approach for tackling these challenges, including the use of display techniques, visuo-haptic illusions, and collision avoidance. We conclude by describing a preliminary study (n = 9) to better understand the subjective user experience when interacting with a quadcopter in virtual reality using these techniques.
Virtual Reality (VR) is gaining increasing importance in science, education, and entertainment. A fundamental characteristic of VR is creating presence, the experience of ‘being’ or ‘acting’ in another place while physically situated elsewhere. Measuring presence is vital for VR research and development. It is typically assessed repeatedly through questionnaires completed after leaving a VR scene; requiring participants to leave and re-enter the VR costs time and can cause disorientation. In this paper, we investigate the effect of completing presence questionnaires directly in VR. Thirty-six participants experienced two immersion levels and completed three standardized presence questionnaires either in the real world or in VR. We found no effect on the questionnaires’ mean scores; however, we found that the variance of those measures significantly depends on the realism of the virtual scene and on whether the subjects had left the VR. The results indicate that, besides shortening a study and reducing disorientation, completing questionnaires in VR does not change the measured presence and can yield more consistent measures.
Understanding social perception is important for designing mobile devices that are socially acceptable. Previous work not only investigated the social acceptability of mobile devices and interaction techniques but also provided tools to measure social acceptance. However, we lack a robust model that explains the underlying factors that make devices socially acceptable. In this paper, we consider mobile devices as social objects and investigate whether the stereotype content model (SCM) can be applied to those devices. Through a study that assesses combinations of mobile devices and group stereotypes, we show that mobile devices have a systematic effect on the stereotypes’ warmth and competence. Supported by a second study, which presented mobile devices without a specific stereotypical user, our results suggest that mobile devices are perceived stereotypically by themselves. Our combined results highlight mobile devices as social objects and the importance of considering stereotypes when assessing the social acceptance of mobile devices.
Within-subjects experiments are prone to asymmetric transfer, which confounds results interpretation. While HCI researchers routinely test asymmetric transfer in objective data, doing so for subjective data is rare. Yet literature suggests that anchoring effects should make subjective measures particularly susceptible to asymmetric transfer. We report on four analyses of NASA-TLX data from four previously published HCI papers, with four main findings. First, asymmetric transfer is common, occurring in 42% of tests analysed. Second, the data conforms to predictions of anchoring effects. Third, the magnitude of the anchor’s effect correlates with the magnitude of the difference between the interface ratings — that is, the anchor’s ‘pull’ correlates with the anchoring stimulus. Fourth, several of the previously published findings are changed when data are reanalysed using between-subjects treatment. We urge caution when analysing within-subjects subjective measures and recommend that researchers test for and report the occurrence of asymmetric transfer.
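A common way to check for asymmetric transfer (a minimal illustration, not the authors’ analysis) is to test whether the A−B difference in a subjective measure depends on which condition participants experienced first:

```python
# Minimal sketch: group per-participant A-B differences by presentation
# order (AB vs BA). If the measured difference depends strongly on order,
# the within-subjects comparison is confounded by asymmetric transfer.

def per_participant_diffs(scores, order):
    """scores: participant -> {'A': x, 'B': y}; order: participant -> 'AB'/'BA'."""
    diffs = {'AB': [], 'BA': []}
    for p, s in scores.items():
        diffs[order[p]].append(s['A'] - s['B'])
    return diffs

def mean(xs):
    return sum(xs) / len(xs)

# Toy NASA-TLX-like ratings: experiencing A first anchors later ratings of B.
scores = {
    1: {'A': 60, 'B': 40}, 2: {'A': 65, 'B': 45},   # order AB
    3: {'A': 50, 'B': 50}, 4: {'A': 55, 'B': 52},   # order BA
}
order = {1: 'AB', 2: 'AB', 3: 'BA', 4: 'BA'}

d = per_participant_diffs(scores, order)
print(mean(d['AB']))  # 20.0
print(mean(d['BA']))  # 1.5
# A large gap between the order groups (here 20.0 vs 1.5) signals that the
# A-B effect is confounded by order; a real analysis would test this gap
# statistically (e.g., an order-by-condition interaction).
```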
Playtesting is a key component of the game development process, aimed at improving the quality of games through the collection of gameplay data and the identification of design issues. Visualization techniques are currently employed to help integrate quantitative and qualitative data. However, two challenges remain: determining the level of detail to present to developers based on their needs, and communicating the collected data effectively so that informed design changes can be made. In this paper, we first propose an aggregated visualization technique that makes use of clustering, territory tessellation, and trajectory aggregation to simultaneously display mixed playtesting data. Second, to assess the usefulness of our technique, we evaluate it through interviews with professional game developers and compare it to a non-aggregated visualization. The results of this study also provide an important contribution towards identifying areas of improvement in the portrayal of gameplay data.
We introduce three lightweight interactive camera control techniques for 3D terrain maps on touch devices based on a look-from metaphor (Discrete Look-From-At, Continuous Look-From-Forwards, and Continuous Look-From-Towards). These techniques complement traditional touch screen pan, zoom, rotate, and pitch controls, allowing viewers to quickly transition between top-down, oblique, and ground-level views. We present the results of a study in which we asked participants to perform elevation comparison and line-of-sight determination tasks using each technique. Our results highlight how look-from techniques can be integrated on top of current direct manipulation navigation approaches by combining several direct manipulation operations into a single look-from operation. Additionally, they show how look-from techniques help viewers complete a variety of common and challenging map-based tasks.
We present ZeRONE, a new indoor drone that does not use rotating blades for propulsion. The proposed device is a helium-blimp drone propelled by the wind generated by the ultrasonic vibration of piezo elements. Compared to normal drones with rotating propellers, the drone is much safer because its only moving parts are the piezo elements, whose surfaces vibrate on the order of micrometers. The drone can float for a few weeks, and its ultrasonic propulsion system is quiet. We implement a prototype of the drone and evaluate its performance and unique characteristics in experiments. Moreover, application scenarios in which ZeRONE coexists with people are also discussed.
Although patient portals (technologies that give patients access to their health information) are recognized as key to increasing patient engagement, we have a limited understanding of how these technologies should be designed to meet the needs of hospitalized patients and caregivers. Through semi-structured interviews with 30 patients and caregivers, we examine how future patient portals can best align with their needs and support engagement in their care. Our findings reveal six needs that existing patient portals do not support: (1) transitioning from home to hospital, (2) adjusting schedules and receiving status updates, (3) understanding and remembering care, (4) asking questions and flagging problems, (5) collaborating with providers and caregivers, and (6) preparing for discharge and at-home care. Based on these findings, we discuss three design implications: highlight patient-centric goals and preferences, provide dynamic information about care events, and design for situationally-impaired users. Our contributions guide future patient portals in engaging hospitalized patients and caregivers as primary stakeholders in their health care.
Can Privacy Be Satisfying?: On Improving Viewer Satisfaction for Privacy-Enhanced Photos Using Aesthetic Transforms
Pervasive photo sharing in online social media platforms can cause unintended privacy violations when elements of an image reveal sensitive information. Prior studies have identified image obfuscation methods (e.g., blurring) to enhance privacy, but many of these methods adversely affect viewers’ satisfaction with the photo, which may cause people to avoid using them. In this paper, we study the novel hypothesis that it may be possible to restore viewers’ satisfaction by ‘boosting’ or enhancing the aesthetics of an obscured image, thereby compensating for the negative effects of a privacy transform. Using a between-subjects online experiment, we studied the effects of three artistic transformations on images that had objects obscured using three popular obfuscation methods validated by prior research. Our findings suggest that using artistic transformations can mitigate some negative effects of obfuscation methods, but more exploration is needed to retain viewer satisfaction.
Hierarchy structures such as file systems are widespread interfaces for item retrieval and selection tasks. Some hierarchies can be modified by end-users, such as application launchers on smartphones or pictures in a file folder. These modifiable hierarchies cannot benefit from an optimization made beforehand as their content, unknown during the design process, is constantly evolving. We hence propose an analytic model which designers can integrate in their system to recommend a range of local structure modifications (e.g., creating new folders) to end-users. Proposing a range of modifications gives flexibility to end-users regarding their own meaningful grouping and labeling choices to follow a recommendation. A first experiment confirms that the recommendations built on our model can lead to modified hierarchies resulting in faster theoretical selection times. A second experiment confirms that the theoretical selection times fit empirical selection times in different hierarchy visual layouts: linear, radial, and grid.
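The paper’s analytic model is not reproduced in the abstract, so the following is only a hedged sketch of how a theoretical selection-time model can drive folder-creation recommendations. It assumes an illustrative per-level cost combining a Hick-Hyman decision term with a linear scan term; the constants `A`, `B`, `C` are invented for the example.

```python
import math

# Hypothetical selection-time model (illustrative constants, not the
# paper's model): per level, the cost combines a Hick-Hyman decision
# term in the number of visible choices and a linear visual-scan term.

A, B, C = 0.2, 0.3, 0.05  # invented constants (seconds)

def level_time(n_choices):
    return A + B * math.log2(n_choices + 1) + C * n_choices

def selection_time(branching_per_level):
    """Theoretical time to reach an item through the given levels."""
    return sum(level_time(n) for n in branching_per_level)

flat = selection_time([16])       # one flat menu of 16 items
nested = selection_time([4, 4])   # two levels of 4 items each

print(round(flat, 3))    # 2.226
print(round(nested, 3))  # 2.193
# A recommender can propose creating folders whenever a restructured
# hierarchy yields a lower theoretical selection time than the flat one.
```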
Expressive robots are useful in many contexts, from industrial to entertainment applications. However, designing expressive robot behaviors requires editing a large number of unintuitive control parameters. We present an interactive, data-driven system that allows editing of these complex parameters in a semantic space. Our system combines a physics-based simulation that captures the robot’s motion capabilities with a crowd-powered framework that extracts relationships between the robot’s motion parameters and the desired semantic behavior. These relationships enable mixed-initiative exploration of possible robot motions. We specifically demonstrate our system in the context of designing emotionally expressive behaviors. A user study found the system useful for developing desirable robot behaviors more quickly than manual parameter editing.
The increasing availability of health data and knowledge about computationally modeling human physiology opens new opportunities for personalized predictions in health. Yet little is known about how individuals interact with and reason about personalized predictions. To explore these questions, we developed a smartphone app, GlucOracle, that uses self-tracking data of individuals with type 2 diabetes to generate personalized forecasts for post-meal blood glucose levels. We pilot-tested GlucOracle with two populations: members of an online diabetes community, knowledgeable about diabetes and technologically savvy; and individuals from a low socio-economic status community, characterized by a high prevalence of diabetes, low literacy, and limited experience with mobile apps. Individuals in both communities engaged with personal glucose forecasts and found them useful for adjusting immediate meal options and planning future meals. However, the study raised new questions as to the appropriate time, form, and focus of forecasts and suggested new research directions for personalized predictions in health.
Exploring Factors that Influence Connected Drivers to (Not) Use or Follow Recommended Optimal Routes
Navigation applications are becoming ubiquitous in our daily travel. Intended to route drivers around congested roads, their guidance rests on the basic assumption that drivers always want the fastest route. However, it is unclear how their recommendations are followed and what factors affect their adoption. We present the results of a semi-structured qualitative study with 17 drivers, mostly from the Philippines and Japan. We recorded their daily commutes and occasional trips, and inquired into their navigation practices, route choices, and on-the-fly decision-making. We found that while drivers choose a recommended route in urgent situations, many still preferred to follow familiar routes. Drivers deviated because a recommendation used unfamiliar roads, lacked local context, seemed unsuitable for driving, or was inconsistent with their actual navigation experiences. Our findings and implications emphasize drivers’ personalization needs, and how the right amount of algorithmic sophistication can encourage behavioral adaptation.
Online shopping, by reducing the need to travel, has become an essential part of life for people with visual impairments. However, HCI research on online shopping for this population has been limited to the analysis of accessibility and usability issues. To develop a broader and better understanding of how visually impaired people shop online, and to design accordingly, we conducted a qualitative study with twenty blind people. Our study highlighted that blind people’s desire to be treated as ordinary significantly shaped their online shopping practices: they paid close attention to the visual appearance of goods even though they themselves could not see it, and took great pains to find and learn which commodities would look appropriate on them. This paper reports how this effort to appear ordinary manifests in online shopping and suggests design implications to support these practices.
We study the effects of haptic augmentation on tapping, path following, and drag & drop tasks on a recent flagship smartphone with refined touch sensing and haptic actuator technologies. Results show that actuated haptic confirmation on tapping targets was subjectively appreciated by some users but did not improve tapping speed or accuracy. For drag & drop, a clear performance improvement was measured when haptic feedback was applied to target boundary crossing, particularly when the targets were small. For path following tasks, virtual haptic feedback improved accuracy at a reduced speed in a sitting condition; stronger results were achieved with a physical haptic mock-up. Overall, we found actuated touchscreen haptic feedback particularly effective when the touched object was visually occluded by the finger. Participants’ subjective experience of haptic feedback in all tasks tended to be more positive than their time or accuracy performance suggests. We compare and discuss these findings with previous results on early generations of devices. The work provides an empirical foundation for product design and future research on touch input and haptic systems.
Email management consumes significant effort from senders and recipients. Some of this work might be automatable. We performed a mixed-methods need-finding study to learn: (i) what sort of automatic email handling users want, and (ii) what kinds of information and computation are needed to support that automation. Our investigation included a design workshop to identify categories of needs, a survey to better understand those categories, and a classification of existing email automation software to determine which needs have been addressed. Our results highlight the need for: a richer data model for rules, more ways to manage attention, leveraging internal and external email context, complex processing such as response aggregation, and affordances for senders. To further investigate our findings, we developed a platform for authoring small scripts over a user’s inbox. Of the automations found in our studies, half are impossible in popular email clients, motivating new design directions.
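The kind of small inbox script the platform enables can be sketched as follows. This is a hypothetical illustration (the paper’s actual scripting API is not shown): a rule is just a predicate plus an action applied over message records, here used for the attention-management need the study surfaced.

```python
# Hypothetical sketch of a small email-automation rule (not the paper's
# platform API): a rule pairs a predicate with an action and is applied
# over a list of message dicts standing in for a user's inbox.

def rule(predicate, action):
    def apply(inbox):
        for msg in inbox:
            if predicate(msg):
                action(msg)
    return apply

# Example need from the study: managing attention by deferring newsletters
# (identified here, as an assumption, by a List-Unsubscribe-style flag).
defer_newsletters = rule(
    predicate=lambda m: m['list_unsubscribe'] and not m['starred'],
    action=lambda m: m.setdefault('labels', []).append('read-later'),
)

inbox = [
    {'from': 'news@example.com', 'list_unsubscribe': True, 'starred': False},
    {'from': 'boss@example.com', 'list_unsubscribe': False, 'starred': False},
]
defer_newsletters(inbox)
print(inbox[0].get('labels'))  # ['read-later']
print(inbox[1].get('labels'))  # None
```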
As mobile apps rapidly permeate all aspects of daily living across all segments of the population, it is crucial to support the evaluation of app usability for specific impaired user groups to improve app accessibility. In this work, we examine the effects of using our augmented virtuality impairment simulation system, Empath-D, to support experienced designer-developers in redesigning a mockup of a commonly used mobile application for cataract-impaired users, comparing this with existing tools that aid designing for accessibility. We show that the use of augmented virtuality for assessing usability supports enhanced identification of usability challenges, finding more defects and doing so more accurately than with existing methods. Through our user interviews, we also show that augmented virtuality impairment simulation supports realistic interaction and evaluation, providing a concrete understanding of the usability challenges that impaired users face, and complements existing guidelines-based approaches meant for general accessibility.
Gesture typing–entering a word by gliding the finger sequentially from letter to letter–has been widely supported on smartphones for sighted users. However, this input paradigm is currently inaccessible to blind users: it is difficult to draw shape gestures on a virtual keyboard without access to key visuals. This paper describes the design of accessible gesture typing, to bring this input paradigm to blind users. To help blind users figure out key locations, the design incorporates the familiar screen-reader-supported touch exploration that narrates the keys as the user drags the finger across the keyboard. The design allows users to seamlessly switch between exploration and gesture typing by simply lifting the finger. During word-shape construction, continuous audio feedback, similar to that of touch exploration, helps the user glide toward the keys constituting the word; exploration mode resumes once the word shape is completed. Distinct earcons distinguish gesture typing mode from touch exploration mode, avoiding unintended mix-ups. A user study with 14 blind people showed a 35% increase in their typing speed, indicative of the promise and potential of gesture typing technology for non-visual text entry.
We describe the iterative design, development and learning process we undertook to produce Gabber, a digital platform that aims to support distributed capture of spoken interviews and discussions, and their qualitative analysis. Our aim is to reduce both expertise and cost barriers associated with existing technologies, making the process more inclusive. Gabber structures distributed audio data capture, facilitates participatory sensemaking, and supports collaborative reuse of audio. We describe our design and development journey across three distinct field trials over a two-year period. Reflecting on the iterative design process, we offer insights into the challenges faced by non-experts throughout their qualitative practices, and provide guidance for researchers designing systems to support engagement in these practices.
Voice User Interfaces in Schools: Co-designing for Inclusion with Visually-Impaired and Sighted Pupils
Voice user interfaces (VUIs) are increasingly popular, particularly in homes. However, little research has investigated their potential in other settings, such as schools. We investigated how VUIs could support inclusive education, particularly for pupils with visual impairments (VIs). We organised focused discussions with educators at a school, with support staff from local authorities and, through bodystorming, with a class of 27 pupils. We then ran a series of co-design workshops with participants with mixed-visual abilities to design an educational VUI application. This provided insights into challenges faced by pupils with VIs in mainstream schools, and opened a space for educators, sighted and visually impaired pupils to reflect on and design for their shared learning experiences through VUIs. We present scenarios, a design space and an example application that show novel ways of using VUIs for inclusive education. We also reflect on co-designing with mixed-visual-ability groups in this space.
This paper reports on a co-speculative interview study with charitable donors to explore the future of programmable, conditional and data-driven donations. Responding to the rapid emergence of blockchain-based and AI-supported financial technologies, we specifically examine the potential of automated, third-party ‘escrows’, where donations are held before they are released or returned based on specified rules and conditions. To explore this, we conducted pilot workshops with 9 participants and an interview study in which 14 further participants were asked about their experiences of donating money and invited to co-speculate on a service for programmable giving. The study elicited how data-driven conditionality and automation could be leveraged to create novel donor experiences, but it also illustrated the inherent tensions and challenges involved in giving programmatically. Reflecting on these findings, our paper contributes implications both for the design of programmable aid platforms and for the design of escrow-based financial services in general.
The emerging class of epidermal devices opens up new opportunities for skin-based sensing, computing, and interaction. Future design of these devices requires an understanding of how skin-worn devices affect natural tactile perception. In this study, we approach this research challenge by proposing a novel classification system for epidermal devices based on flexural rigidity and by testing advanced adhesive materials, including tattoo paper and thin films of poly(dimethylsiloxane) (PDMS). We report on the results of three psychophysical experiments that investigated the effect of epidermal devices of different rigidity on passive and active tactile perception. We analyzed human tactile sensitivity thresholds, two-point discrimination thresholds, and roughness discrimination abilities on three different body locations (fingertip, hand, forearm). Generally, a correlation was found between device rigidity and tactile sensitivity thresholds as well as roughness discrimination ability. Surprisingly, thin epidermal devices based on PDMS with a hundred times the rigidity of commonly used tattoo paper resulted in comparable levels of tactile acuity. The material offers the benefit of increased robustness against wear and the option to re-use the device. Based on our findings, we derive design recommendations for epidermal devices that combine tactile perception with device robustness.
While researchers have studied the benefits and hazards of crowdsourcing for diverse classes of workers, most work has focused on those having high familiarity with both computers and English. We explore whether paid crowdsourcing can be inclusive of individuals in rural India, who are relatively new to digital devices and literate mainly in local languages. We built an Android application to measure the accuracy with which participants can digitize handwritten Marathi/Hindi words. The tasks were based on the real-world need for digitizing handwritten Devanagari script documents. Results from a two-week, mixed-methods study show that participants achieved 96.7% accuracy in digitizing handwritten words on low-end smartphones. A crowdsourcing platform that employs these users performs comparably to a professional transcription firm. Participants showed overwhelming enthusiasm for completing tasks, so much so that we recommend imposing limits to prevent overuse of the application. We discuss the implications of these results for crowdsourcing in low-resource areas.
Stroke is one of the most common causes of long-term disability in the world, significantly reducing quality of life by impairing motor functions and cognitive abilities. Whilst rehabilitation exercises can help in the recovery of motor function, stroke survivors rarely exercise enough, leading to far-from-optimal recovery. In this paper, we investigate how upper-limb stroke rehabilitation can be supported using interactive tangible bimanual devices in the home. We customise the rehabilitation activities based on the individual rehabilitation requirements and motivation of stroke survivors. Through an evaluation with five stroke survivors, we uncovered insights into how tangible stroke rehabilitation systems for the home should be designed. The evaluation revealed the particular importance of a tailorable form factor and of supporting self-awareness and grip exercises in order to increase stroke survivors’ independence in carrying out activities of daily living.
Augmented fabrication is the practice of designing and fabricating an artifact to work with existing objects. Although common both in the wild and as an area for research tools, little is known about how novices approach the task of designing under the constraints of interfacing with real-world objects. In this paper, we report the results of a study of fifteen novice end users in an augmented fabrication design task. We discuss obstacles encountered in four contexts: capturing information about physical objects, transferring information to 3D modeling software, digitally modeling a new object, and evaluating whether the new object will work when fabricated. Based on our findings, we suggest how future tools can better support augmented fabrication in each of these contexts.
Bookly: An Interactive Everyday Artifact Showing the Time of Physically Accumulated Reading Activity
We introduce Bookly, an interactive artifact that physically represents the accumulated time of users’ reading activity through abstract volumetric changes. Bookly accumulates the time of reading-related actions (e.g., picking up and putting down books) and provides a designated space for the book currently being read. The results of our 2-week in-field study with six participants showed that continuous exposure to volumetric changes representing the accumulated time of reading activities helped the users understand their irregular reading patterns. Bookly also motivated the users to improve their reading behavior by gradually making reading part of their schedules. Additionally, the clear distinction of the ongoing book improved its visual affordance and accessibility, making it easier for the users to start reading. Based on these findings, we confirmed the potential of making intangible data physical for self-reflection, supporting changes in behaviors that are difficult to sustain due to weak motivation.
Creative activities allow people to express themselves in rich, nuanced ways. However, being creative does not always come easily. For example, people with speech and language impairments, such as aphasia, face challenges in creative activities that involve language. In this paper, we explore the concept of constrained creativity as a way of addressing this challenge and enabling creative writing. We report an app, MakeWrite, that supports the constrained creation of digital texts through automated redaction. The app was co-designed with and for people with aphasia and was subsequently explored in a workshop with a group of people with aphasia. Participants were not only successful in crafting novel language, but, importantly, self-reported that the app was crucial in enabling them to do so. We reflect on the potential of technology-supported constrained creativity as a means of empowering expression amongst users with diverse needs.
As smartphone use increases dramatically, so do studies about technology overuse. Many different mobile apps for breaking “smartphone addiction” and achieving “digital wellbeing” are available. However, it is still not clear whether and how such solutions work. Which functionality do they have? Are they effective and appreciated? Do they have a relevant impact on users’ behavior? To answer these questions, (i) we reviewed the features of 42 digital wellbeing apps, (ii) we performed a thematic analysis on 1,128 user reviews of such apps, and (iii) we conducted a 3-week-long in-the-wild study of Socialize, an app that includes the most common digital wellbeing features, with 38 participants. We discovered that digital wellbeing apps are appreciated and useful for some specific situations. However, they do not promote the formation of new habits and they are perceived as not restrictive enough, thus not effectively helping users to change their behavior with smartphones.
Autonomous Distributed Energy Systems: Problematising the Invisible through Design, Drama and Deliberation
Technologies such as blockchains, smart contracts and programmable batteries facilitate emerging models of energy distribution, trade and consumption, and generate a considerable number of opportunities for energy markets. However, these developments complicate relationships between stakeholders, disrupting traditional notions of value, control and ownership. Discussing these issues with the public is particularly challenging as energy consumption habits often obscure the competing values and interests that shape stakeholders’ relationships. To make such difficult discussions more approachable and examine the missing relational aspect of autonomous energy systems, we combined the design of speculative hairdryers with performance and deliberation. This integrated method of inquiry makes visible the competing values and interests, eliciting people’s wishes to negotiate these terms. We argue that the complexity of mediated energy distribution and its convoluted stakeholder relationships requires more sophisticated methods of inquiry to engage people in debates concerning distributed energy systems.
End users can program trigger-action rules to personalize the joint behavior of their smart devices and online services. Trigger-action programming is, however, a complex task for non-programmers, and errors made during the composition of rules may lead to unpredictable behaviors and security issues, e.g., a lamp that is continuously flashing or a door that is unexpectedly unlocked. In this paper, we introduce EUDebug, a system that enables end users to debug trigger-action rules. With EUDebug, users compose rules in a web-based application like IFTTT. EUDebug highlights possible problems that the set of all defined rules may generate and allows their step-by-step simulation. Under the hood, a hybrid Semantic Colored Petri Net (SCPN) models, checks, and simulates trigger-action rules and their interactions. An exploratory study with 15 end users shows that EUDebug helps users identify and understand problems in trigger-action rules that are not easily discoverable in existing platforms.
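The flashing-lamp problem described above is essentially a cycle among rules. A much simpler sketch than the paper's Semantic Colored Petri Net (this is a hypothetical simplification, not EUDebug's actual model) treats each rule as an edge from its trigger event to its action event and searches for cycles:

```python
# Hypothetical sketch: detect rule loops by treating each trigger-action
# rule as an edge (trigger -> action) and checking whether any rule's
# action can eventually re-fire its own trigger.

def find_rule_loops(rules):
    """rules: list of (trigger, action) event pairs.
    Returns True if some chain of rules can re-trigger itself."""
    graph = {}
    for trigger, action in rules:
        graph.setdefault(trigger, []).append(action)

    def reachable(start, target, seen):
        # depth-first search from `start`, looking for `target`
        for nxt in graph.get(start, []):
            if nxt == target:
                return True
            if nxt not in seen:
                seen.add(nxt)
                if reachable(nxt, target, seen):
                    return True
        return False

    # a loop exists if a rule's trigger is reachable from its own action
    return any(reachable(action, trigger, {action})
               for trigger, action in rules)

flashing_lamp = [
    ("lamp_off", "lamp_on"),   # when the lamp turns off, turn it on
    ("lamp_on", "lamp_off"),   # when the lamp turns on, turn it off
]
print(find_rule_loops(flashing_lamp))                 # loop detected
print(find_rule_loops([("motion", "lamp_on")]))       # no loop
```

Real systems must additionally model rule semantics (e.g., two distinct triggers that refer to the same physical state), which is what the SCPN approach addresses.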
Creativity Support Tools (CSTs) play a fundamental role in the study of creativity in Human-Computer Interaction (HCI). Even so, there is no consensus definition of the term ‘CST’ in HCI, and in most studies, CSTs have been construed as one-off exploratory prototypes, typically built by the researchers themselves. This makes it difficult to clearly demarcate CST research, but also to compare findings across studies, which impedes advancement in digital creativity as a growing field of research. Based on a literature review of 143 papers from the ACM Digital Library (1999-2018), we contribute a first overview of the key characteristics of CSTs developed by the HCI community. Moreover, we propose a tentative definition of a CST to help strengthen knowledge sharing across CST studies. We end by discussing our study’s implications for future HCI research on CSTs and digital creativity.
The move towards digital payments and mobile money, and away from physical cash and banking services offers users opportunities to change the ways that they can spend, save and manage their money through a variety of personal financial management services. However, set against ordinary, everyday patterns of spending, saving and other forms of financial transaction, it is not clear how users might interact with, understand, or value financial management services that utilise rich data and connected digital content for their personal use. In order to explore how people might engage with such systems, we conducted a study of financial activity, following people’s transactional activity over time, and interviewing them about their practices, understandings, needs, concerns and expectations of current and future financial technologies. Drawing from the everyday activities and practices observed, we identify implications for the design of digitally enabled, personal financial systems.
The nature of work is changing. As labor increasingly trends to casual work in the emerging gig economy, understanding the broader economic context is crucial to effective engagement with a contingent workforce. Crowdsourcing represents an early manifestation of this fluid, laissez-faire, on-demand workforce. This work analyzes the results of four large-scale surveys of US-based Amazon Mechanical Turk workers recorded over a six-year period, providing comparable measures to national statistics. Our results show that despite unemployment far higher than national levels, crowdworkers are seeing positive shifts in employment status and household income. Our most recent surveys indicate a trend away from full-time-equivalent crowdwork, coupled with a reduction in estimated poverty levels to below national figures. These trends are indicative of an increasingly flexible workforce, able to maximize their opportunities in a rapidly changing national labor market, which may have material impacts on existing models of crowdworker behavior.
Recent scandals involving data from participatory research have contributed to broader public concern about online privacy. Such concerns might make people more reluctant to participate in research that asks them to volunteer personal data, compromising many researchers’ data collection. We tested several motivational messages that encouraged participation in a citizen science project. We measured people’s willingness to disclose personal information. While participants were less likely to share sensitive data than neutral data, disclosure behaviour was not affected by attitudes to privacy. Importantly, we found that citizen scientists who were exposed to a motivational message that emphasised ‘learning’ were more likely to share sensitive information than those presented with other types of motivational cues. Our results suggest that priming individuals with motivational messages can increase their willingness to contribute personal data to a project, even if the request pertains to sensitive information.
Voice control is an increasingly common feature of digital games, but the experience of playing with voice control is often hampered by feelings of embarrassment and dissonance. Past research has recognised these tensions, but has not offered a general model of how they arise and how players respond to them. In this study, we use Erving Goffman’s frame analysis, as adapted to the study of games by Conway and Trevillian, to understand the social experience of playing games by voice. Based on 24 interviews with participants who played voice-controlled games in a social setting, we put forward a frame analytic model of gameplay as a social event, along with seven themes that describe how voice interaction enhances or disrupts the player experience. Our results demonstrate the utility of frame analysis for understanding social dissonance in voice interaction gameplay, and point to practical considerations for designers to improve engagement with voice-controlled games.
A Tale of Two Perspectives: A Conceptual Framework of User Expectations and Experiences of Instructional Fitness Apps
We present a conceptual framework grounded in both users’ reviews and HCI theories, residing between practices and theories as a form of intermediate-level knowledge in interaction design. Previous research has examined different forms of intermediary knowledge such as conceptual structures, strong concepts, and bridging concepts. Within HCI, these forms are generic and arise either from theories or from particular instances. In this work, we created and evaluated a conceptual framework for a specific domain (instructional fitness apps). We first extracted the particular instances using users’ online reviews and conceptualised them as an expectations and experiences framework. Second, within the framework, we evaluated the artefact-related constructs using Norman’s design principles. Third, we evaluated beyond the artefact-related constructs using distributed cognition theory. We present an analysis of such intermediate-level knowledge with the aim of informing future designs.
PicMe is a mobile application that provides interactive on-screen guidance to help the user take pictures of a composition that another person requires. Once the requester captures a picture of the desired composition and delivers it to the user (photographer), a 2.5D guidance system, called the virtual frame, guides the user in real-time by showing a three-dimensional composition of the target image (i.e., size and shape). In addition, based on the matching accuracy rate, we provide feedback via a small target image in an inset window, together with edge visualization for fine alignment of detail elements. We implemented PicMe to work fully in mobile environments. We then conducted a preliminary user study to evaluate the effectiveness of PicMe compared to traditional 2D guidance methods. The results show that PicMe helps users reach their target images more accurately and quickly by giving participants more confidence in their tasks.
Power Struggles and Disciplined Designers – A Nexus Analytic Inquiry on Cross-Disciplinary Research and Design
Design is at the heart of Human-Computer Interaction research and practice. In the research community, there has emerged an increasing interest in understanding and conceptualizing our research practice, particularly research entailing design. However, reflective discussion around the associated challenges and practicalities is still limited. Moreover, so far there is limited discussion on the cross-disciplinary nature of our research and design practices: although cross-disciplinarity has been brought up as an ideal and a necessity, its practicalities and complexities remain poorly explored. This study examines a cross-disciplinary research project in which researcher-designers from different disciplines acted as ‘designers’ while holding divergent understandings of design and of who has the authority to do it. The study relies on nexus analysis as a sensitizing device and shows how various discourses, epistemologies and histories shape cross-disciplinary research and design. Critical reflection around our research practice entailing design is called for.
Recent research has advocated for a broader conception of evaluation for Sustainable HCI (SHCI), using interdisciplinary insights and methods. In this paper, we put this into practice to conduct an evaluation of Sustainable Interaction Design (SID) of digital services. We explore how SID can contribute to corporate greenhouse gas (GHG) reduction strategies. We show how a Digital Service Provider (DSP) might incorporate SID into their design process and quantitatively evaluate a specific SID intervention by combining user analytics data with environmental life cycle assessment. We illustrate this by considering YouTube. Replacing user analytics data with aggregate estimates from publicly available sources, we estimate emissions associated with the deployment of YouTube to be approximately 10MtCO2e p.a. We estimate emissions reductions enabled through the use of an SID intervention from prior literature to be approximately 300KtCO2e p.a., and demonstrate that this is significant when considered alongside other emissions reduction interventions used by DSPs.
Despite the availability of software to support Affinity Diagramming (AD), practitioners still largely favor physical sticky notes. Physical notes are easy to set up, can be moved around in space and offer flexibility when clustering unstructured data. However, when working with mixed data sources such as surveys, designers often trade off the physicality of notes for analytical power. We propose Affinity Lens, a mobile augmented reality (AR) application for Data-Assisted Affinity Diagramming (DAAD). Our application provides just-in-time quantitative insights overlaid on physical notes. Affinity Lens uses several different types of AR overlays (called lenses) to help users find specific notes, cluster information, and summarize insights from clusters. Through a formative study of AD users, we developed design principles for data-assisted AD and an initial collection of lenses. Based on our prototype, we find that Affinity Lens supports easy switching between qualitative and quantitative ‘views’ of data, without surrendering the lightweight benefits of existing AD practice.
Managing Multimorbidity: Identifying Design Requirements for a Digital Self-Management Tool to Support Older Adults with Multiple Chronic Conditions
Older adults with multiple chronic conditions (multimorbidity) face complex self-management routines, including symptom monitoring, managing multiple medications, coordinating healthcare visits, communicating with multiple healthcare providers, and processing and managing potentially conflicting advice on conditions. While much research exists on single disease management, little, if any, research has explored technology to support those with multimorbidity, particularly older adults, to self-manage with support from a care network. This paper describes a large qualitative study with 125 participants, including older adults with multimorbidity and those who care for them, across two European countries. Key findings relate to the impact of multimorbidity, the complexities involved in self-management, motivators and barriers to self-management, sources of support, and poor communication as a barrier to care coordination. We present important concepts and design features for a digital health system that aim to address requirements derived from this study.
Interactive mirrors, typically combining semi-transparent mirrors, digital screens and interaction mechanisms have been developed for a variety of application areas. Drawing on existing techniques to create interactive mirror spaces, we investigated their performative qualities through artistic discovery and collaborative prototyping. We document a linked set of design explorations and two public, site-specific experiences that brought together artists, communities, and HCI researchers. We illustrate the abstracted interactive mirror space that practitioners in the performance art, theatre and museum sectors can work with. In turn, we also discuss six performative design strategies concerning the use of physical context, movement and narrative that HCI researchers who wish to deploy interactive mirrors in more mainstream settings need to consider.
Research on product experience has a history in investigating the sensory and emotional qualities of interacting with objects. However, this notion has not been fully expanded to the design space of co-designing smart objects. In this paper, we report on findings from a series of co-design workshops where we used the toolkit Loaded Dice in conjunction with a card set that aimed to support participants in reflecting on the sensory qualities of domestic smart objects. We synthesize and interpret findings from our study to help illustrate how the workshops supported co-designers in creatively ideating concepts for emotionally valuable smart objects that better connect personal inputs with the output of smart objects. Our work contributes a case example of how a co-design approach that emphasizes situated sensory exploration can be effective in enabling co-designers to ideate concepts of idiosyncratic smart objects that closely relate to the characteristics of their domestic living situations.
Single-hand microgestures have been recognized for their potential to support direct and subtle interactions. While pioneering work has investigated sensing techniques and presented first sets of intuitive gestures, we still lack a systematic understanding of the complex relationship between microgestures and various types of grasps. This paper presents results from a user elicitation study of microgestures that are performed while the user is holding an object. We present an analysis of over 2,400 microgestures performed by 20 participants, using six different types of grasp and a total of 12 representative handheld objects of varied geometries and sizes. We expand the existing elicitation method by proposing statistical clustering on the elicited gestures. We contribute detailed results on how grasps and object geometries affect single-hand microgestures, preferred locations, and fingers used. We also present consolidated gesture sets for different grasps and object sizes. From our findings, we derive recommendations for the design of microgestures compatible with a large variety of handheld objects.
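For context, gesture-elicitation studies such as this one commonly quantify consensus per referent with an agreement score; a minimal version of one common formulation sums the squared proportions of each distinct gesture group (the gesture names below are invented for illustration):

```python
# Minimal agreement score for one referent in a gesture-elicitation
# study: sum over each distinct proposed gesture of
# (group size / total proposals) squared. 1.0 means full consensus.
from collections import Counter

def agreement_score(proposals):
    total = len(proposals)
    return sum((n / total) ** 2 for n in Counter(proposals).values())

# e.g. 20 participants propose gestures for "volume up" while holding a mug
proposals = ["thumb_swipe"] * 12 + ["index_tap"] * 6 + ["pinch"] * 2
print(round(agreement_score(proposals), 3))  # 0.46
```

The paper goes further by clustering elicited gestures statistically rather than relying on manual grouping alone.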
We propose autocomplete for the design and development of virtual breadboard circuits using software prototyping tools. With our system, when a user inserts a component into the virtual breadboard, it automatically provides a list of suggested components. These suggestions complete or extend the electronic functionality of the inserted component to save the user’s time and reduce circuit error. To demonstrate the effectiveness of autocomplete, we implemented our system on Fritzing, a popular open-source breadboard circuit prototyping software used by novice makers. Our autocomplete suggestions were based upon schematics from datasheets for standard components, as well as how components are used together in over 4,000 circuit projects from the Fritzing community. We report the results of a controlled study with 16 participants evaluating the effectiveness of autocomplete in the creation of virtual breadboard circuits, and conclude by sharing insights and directions for future research.
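The community-projects half of such suggestions can be approximated with simple co-occurrence counting. The sketch below is a hypothetical simplification (made-up component names, not the Fritzing implementation): it ranks components by how often past projects used them together with the inserted one.

```python
# Hypothetical sketch of co-occurrence-based component suggestions.
from collections import Counter

def build_suggester(projects):
    """projects: list of component lists from past circuits.
    Returns a function mapping a component to co-used components,
    most frequent first."""
    cooc = {}
    for parts in projects:
        for p in set(parts):
            counter = cooc.setdefault(p, Counter())
            for q in set(parts):
                if q != p:
                    counter[q] += 1

    def suggest(component, k=3):
        return [c for c, _ in cooc.get(component, Counter()).most_common(k)]

    return suggest

projects = [
    ["LED", "resistor", "arduino"],
    ["LED", "resistor", "battery"],
    ["arduino", "servo", "battery"],
]
suggest = build_suggester(projects)
print(suggest("LED"))  # "resistor" ranks first (co-occurs twice)
```

A real system would weight these counts against datasheet-derived constraints, as the paper describes.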
Whereas there have been significant improvements in the quality of care provided for people with dementia, limited attention to the importance of enabling them to make positive social contributions within care home contexts can restrict their sense of agency. In this paper we describe the design and deployment of ‘Printer Pals’, a receipt-based print media device that encourages social contribution and agency within a care home environment. The design followed a two-year ethnography, from which the need to highlight participation and support agency for residents within the care home became clear. The residents’ use of Printer Pals mediated participation in a number of different ways, such as engaging with the technology itself, offering shared experiences and participating in co-constructive and meaningful ways, each of which is discussed. We conclude with a series of design considerations to support agentic and caring interactions through inclusive design practices.
The alternative use of travel time is a widely discussed benefit of driverless cars. We therefore conducted 14 co-design sessions to examine how people manage their time, to determine how they perceive the value of time in driverless cars, and to derive design implications. Our findings suggest that driverless mobility will affect people’s use of travel time and their time management in general. Participants repeatedly expressed the desire to complete tasks while traveling in order to save time for activities that are normally neglected in everyday life. Using travel time efficiently requires using car space efficiently. We found that the design concept of tiny houses could serve as a common design pattern to deal with the limited space within cars and support diverse needs.
Curiosity, the intrinsic desire for new information, can enhance learning, memory, and exploration. Therefore, understanding how to elicit curiosity can inform the design of educational technologies. In this work, we investigate how a social peer robot’s verbal expression of curiosity is perceived, whether it can affect the emotional feeling and behavioural expression of curiosity in students, and how it impacts learning. In a between-subjects experiment, 30 participants played LinkIt!, a game we designed for teaching rock classification, with a robot verbally expressing curiosity, curiosity plus rationale, or no curiosity. Results indicate that participants could recognize the robot’s curiosity and that curious robots produced both emotional and behavioural curiosity contagion effects in participants.
Crowdsourced data acquired from tasks that comprise a subjective component (e.g. opinion detection, sentiment analysis) is potentially affected by the inherent bias of crowd workers who contribute to the tasks. This can lead to biased and noisy ground-truth data, propagating the undesirable bias and noise when used in turn to train machine learning models or evaluate systems. In this work, we aim to understand how workers’ own opinions influence their performance on the subjective task of bias detection, analyzing their annotations across different topics. Our findings reveal that workers with strong opinions tend to produce biased annotations, and that even experienced crowd workers fail to distance themselves from their own opinions when annotating. We show that such bias can be mitigated to improve the overall quality of the data collected.
Around-device interaction methods expand the available interaction space for mobile devices; however, there is currently no way to simultaneously track a user’s input and provide haptic feedback at the tracked point away from the device. We present Magnetips, a simple, mobile solution for around-device tracking and mid-air haptic feedback. Magnetips combines magnetic tracking and electromagnetic feedback that works regardless of visual occlusion, through most common materials, and at a size that allows for integration with mobile devices. We demonstrate: (1) high-frequency around-device tracking and haptic feedback; (2) the accuracy and range of our tracking solution which corrects for the effects of geomagnetism, necessary for enabling mobile use; and (3) guidelines for maximising strength of haptic feedback, given a desired tracking frequency. We present technical and usability evaluations of our prototype, and demonstrate four example applications of its use.
Measuring the Influences of Musical Parameters on Cognitive and Behavioral Responses to Audio Notifications Using EEG and Large-scale Online Studies
Prior studies have evaluated various designs for audio notifications. However, calls for more in-depth research on how such notifications work, especially at the level of users’ cognitive states, have gone unanswered; and studies evaluating audio notifications with large numbers of participants in multiple environments have been rare. We conducted an electroencephalography (EEG) study (N=20) and an online study (N=967) to enhance understanding of how three musical parameters – melody (simple, complex), pitch (high, low), and tempo (fast, slow) – influence users’ cognition and behaviors. Eight notifications covering every combination of these parameters were tested. The online study analyzed the effects of user-specific and environmental information on users’ behaviors while they listened to these notifications. The results revealed that tempo and pitch had the main effects on the speed and strength (accuracy) of users’ cognition and behaviors, and that users’ characteristics and environments influenced the effects of these musical parameters.
Design Considerations for Interactive Office Lighting: Interface Characteristics, Shared and Hybrid Control
The inclusion of IoT in office lighting allows people to have personal lighting control at their workplace. To design lighting control interfaces that fit people’s everyday lives, we need a better understanding of how people experience lighting interaction in the real world. Still, lighting control is often explored only in controlled settings. This work presents a qualitative field study concerning the user experience of two control interfaces for a state-of-the-art lighting system of 400+ luminaires in a real-life office. In ten weeks, 43 people interacted 3,937 times. The findings illustrate the effects of using a smartphone for lighting control, how people experience lighting control in shared situations, and issues with automatic system behavior. We define design considerations for interface characteristics, shared control, and hybrid control. The work contributes to making the potential benefits of interactive office lighting a reality.
Will You Accept an Imperfect AI?: Exploring Designs for Adjusting End-user Expectations of AI Systems
AI technologies have been incorporated into many end-user applications. However, expectations of the capabilities of such systems vary among people, and inflated expectations have been identified as negatively affecting the perception and acceptance of such systems. Although the intelligibility of ML algorithms has been well studied, there has been little work on methods for setting appropriate expectations before the initial use of an AI-based system. In this work, we use a Scheduling Assistant – an AI system for automated meeting request detection in free-text email – to study the impact of several methods of expectation setting. We explore two versions of this system with the same 50% level of accuracy of the AI component, but each designed with a different focus on the types of errors to avoid (avoiding False Positives vs. False Negatives). We show that such different focus can lead to vastly different subjective perceptions of accuracy and acceptance. Further, we design expectation adjustment techniques that prepare users for AI imperfections and result in a significant increase in acceptance.
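The False-Positive-versus-False-Negative trade-off the study manipulates is easy to see in a toy example: two detectors with identical accuracy but opposite error profiles yield very different precision and recall (the labels and predictions below are invented, not the paper's data):

```python
# Toy illustration: identical accuracy, very different error profiles.
# Labels: 1 = email contains a meeting request, 0 = it does not.
def metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

y_true  = [1, 1, 1, 1, 0, 0, 0, 0]
cautious = [1, 0, 0, 0, 0, 0, 0, 0]   # avoids False Positives
eager    = [1, 1, 1, 1, 1, 1, 1, 0]   # avoids False Negatives

print(metrics(y_true, cautious))  # (0.625, 1.0, 0.25)
print(metrics(y_true, eager))     # accuracy also 0.625, recall 1.0
```

A cautious detector misses meetings silently, while an eager one interrupts with false alarms; the paper shows these feel very different to users despite equal accuracy.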
Information dissemination using automated phone calls allows reaching low-literate and tech-naive populations. Open challenges include rapid verification of expected knowledge gaps in the community, dissemination of specific information to address these gaps, and follow-up measurement of knowledge retention. We report Sawaal, a voice-based telephone service that uses audio-quizzes to address these challenges. Sawaal allows its open community of users to post and attempt multiple-choice questions and to vote and comment on them. Sawaal spreads virally as users challenge friends to quiz competitions. Administrator-posted questions allow confirming specific knowledge gaps, spreading correct information and measuring knowledge retention via rephrased, repeated questions. In 14 weeks and with no advertisement, Sawaal reached 3,433 users (120,119 calls) in Pakistan, who contributed 13,276 questions that were attempted 455,158 times by 2,027 users. Knowledge retention remained significant for up to two weeks. Surveys revealed that 71% of the mostly low-literate, young, male users were blind.
We propose a novel approach for constraint-based graphical user interface (GUI) layout based on OR-constraints (ORC) in standard soft/hard linear constraint systems. ORC layout unifies grid layout and flow layout, supporting both their features as well as cases where grid and flow layouts individually fail. We describe ORC design patterns that enable designers to safely create flexible layouts that work across different screen sizes and orientations. We also present the ORC Editor, a GUI editor that enables designers to apply ORC in a safe and effective manner, mixing grid, flow and new ORC layout features as appropriate. We demonstrate that our prototype can adapt layouts to screens with different aspect ratios with only a single layout specification, easing the burden of GUI maintenance. Finally, we show that ORC specifications can be modified interactively and solved efficiently at runtime.
High degrees of interaction fidelity (IF) in virtual reality (VR) are said to improve user experience and immersion, but there is also evidence of low IF providing comparable experiences. VR games are now increasingly prevalent, yet we still do not fully understand the trade-off between realism and abstraction in this context. We conducted a lab study comparing high and low IF for object manipulation tasks in a VR game. In a second study, we investigated players’ experiences of IF for whole-body movements in a VR game that allowed players to crawl underneath virtual boulders and “dangle” along monkey bars. Our findings show that high IF is preferred for object manipulation, but for whole-body movements, moderate IF can suffice, as there is a trade-off with usability and social factors. We provide guidelines for the development of VR games based on our results.
Machine Learning services are integrated into various aspects of everyday life. Their underlying processes are typically black-boxed to increase ease-of-use. Consequently, children lack the opportunity to explore such processes and develop essential mental models. We present a gesture recognition research platform, designed to support learning from experience by uncovering Machine Learning building blocks: Data Labeling and Evaluation. Children used the platform to perform physical gestures, iterating between sampling and evaluation. Their understanding was tested in a pre/post experimental design, in three conditions: learning activity uncovering Data Labeling only, Evaluation only, or both. Our findings show that both building blocks are imperative to enhance children’s understanding of basic Machine Learning concepts. Children were able to apply their new knowledge to everyday life context, including personally meaningful applications. We conclude that children’s interaction with uncovered black boxes of Machine Learning contributes to a better understanding of the world around them.
Appearance-based gaze estimation methods that only require an off-the-shelf camera have significantly improved, but they are still not widely used in the human-computer interaction (HCI) community. This is partly because it remains unclear how they perform compared to model-based approaches as well as dominant, special-purpose eye tracking equipment. To address this limitation, we evaluate the performance of state-of-the-art appearance-based gaze estimation for interaction scenarios with and without personal calibration, indoors and outdoors, for different sensing distances, as well as for users with and without glasses. We discuss the obtained findings and their implications for the most important gaze-based applications, namely explicit eye input, attentive user interfaces, gaze-based user modelling, and passive eye monitoring. To democratise the use of appearance-based gaze estimation and interaction in HCI, we finally present OpenGaze (www.opengaze.org), the first software toolkit for appearance-based gaze estimation and interaction.
The term implicit interaction is often used to denote interactions that differ from traditional purposeful and attention-demanding ways of interacting with computers. However, there is a lack of agreement about the term’s precise meaning. This paper develops implicit interaction further as an analytic concept and identifies the methodological challenges related to HCI’s particular design orientation. We first review meanings of implicit as unintentional, attentional background, unawareness, unconsciousness and implicature, and compare them with regard to the entity they qualify, the design motivation they emphasize and their constructive validity for what makes good interaction. We then demonstrate how the methodological challenges can be addressed with greater precision by using an updated, intentionality-based definition that specifies an input-effect relationship as the entity of implicit. We conclude by identifying a number of new considerations for design and evaluation, and by reflecting on the concepts of user and system agency in HCI.
Human-computer interaction is replete with ways of talking about qualities of interaction or interfaces, including if they are expressive, rich, fluid, or playful. An example of such a quality is subtle. While this word is frequently used in the literature, we lack a coherent account of what it means to be subtle, how to achieve subtleness in an interface, and what theoretical backing subtleness has. To create such an account, we analyze a sample of 55 publications that use the word subtle. We describe the variants of subtle interaction in the literature, including claimed benefits, empirical approaches, and ethical considerations. Not only does this create a basis for thinking about subtleness as a quality of interaction, it also works to show how to solidify varieties of quality in HCI. We conclude by outlining some open empirical and conceptual questions about subtleness.
Crowdworkers receive no formal training for managing their tasks, time or working environment. To develop tools that support such workers, an understanding of their preferences and the constraints they are under is essential. We asked 317 experienced Amazon Mechanical Turk workers about factors that influence their task and time management. We found that a large number of the crowdworkers score highly on a measure of polychronicity; this means that they prefer to frequently switch tasks and happily accommodate regular work and non-work interruptions. While a preference for polychronicity might equip people well to deal with the structural demands of crowdworking platforms, we also know that multitasking negatively affects workers’ productivity. This puts crowdworkers’ working preferences into conflict with the desire of requesters to maximize workers’ productivity. Combining the findings of prior research with the new knowledge obtained from our participants, we enumerate practical design options that could enable workers, requesters and platform developers to make adjustments that would improve crowdworkers’ experiences.
We propose a method for accurately and precisely measuring the intrinsic latency of input devices and document measurements for 36 keyboards, mice and gamepads connected via USB. Our research shows that devices differ not only in average latency, but also in the distribution of their latencies, and that forced polling at 1000 Hz decreases latency for some but not all devices. Existing practices – measuring end-to-end latency as a proxy of input latency and reporting only mean values and standard deviations – hide these characteristic latency distributions caused by device intrinsics and polling rates. A probabilistic model of input device latency demonstrates these issues and matches our measurements. Thus, our work offers guidance for researchers, engineers, and hobbyists who want to measure the latency of input devices or select devices with low latency.
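The polling-rate effect described above can be illustrated with a toy simulation (assumed parameters, not the paper's measured values): observed latency is modeled as a fixed device-intrinsic delay plus a delay uniformly distributed over one polling interval, so raising the polling rate both lowers the mean and narrows the distribution.

```python
import random

# Toy sketch of a probabilistic input-latency model (hypothetical
# parameters): an event occurring at a random point within a polling
# cycle waits, on average, half a polling interval before the host
# reads it, on top of the device's intrinsic processing time.
def sample_latency(intrinsic_ms, polling_hz, n=10000, rng=None):
    """Draw n simulated latency samples in milliseconds."""
    rng = rng or random.Random(0)
    interval = 1000.0 / polling_hz  # one polling period in ms
    return [intrinsic_ms + rng.uniform(0.0, interval) for _ in range(n)]
```

Under this sketch, a device with 5 ms intrinsic latency polled at 125 Hz yields a uniform spread of 8 ms on top of the intrinsic delay (mean around 9 ms), while forcing 1000 Hz polling compresses the spread to 1 ms — exactly the kind of distributional difference that a single mean-and-standard-deviation summary hides.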
Camera glasses enable people to capture point-of-view videos using a common accessory, hands-free. In this paper, we investigate how, when, and why people used one such product: Spectacles. We conducted 39 semi-structured interviews and surveys with 191 owners of Spectacles. We found that the form factor elicits sustained usage behaviors, and opens opportunities for new use-cases and types of content captured. We provide a usage typology, and highlight societal and individual factors that influence the classification of behaviors.
This paper explores how a design fiction can be designed to be used as a pragmatic user-centred design method to generate insights on future technology use. We built HawkEye, a design fiction probe that embodies a future fiction of dementia care. To learn how participants respond to the probe, we deployed it with eight participants for three weeks in their own homes and evaluated it with six HCI experts in 1.5-hour sessions. In addition to presenting the probe in detail, we share insights into the process of building it and discuss the utility of design fiction as a tool to elicit empathetic and rich discussions about potential outcomes of future technologies.
Although designing interactive media experiences for people with dementia has become a growing interest in HCI, family members have rarely been recognised as worthy of design intervention in their own right. This paper presents a research through design (RTD) approach working closely with families living with dementia in order to create personalised media experiences. Three families took part in day trips, which they co-planned, with data collection during these days providing insights into their shared social experiences. Workshops were also held in order to personalise the experience of the media created during these days out. Our qualitative analysis outlines themes focusing on individuality, relationships, and accepting changed realities. Furthermore, we outline directions for future research focusing on designing for contested realities, the personhood of carers, and the ageing body and immersion.
Understanding the Impact of TVIs on Technology Use and Selection by Children with Visual Impairments
The use of technology in educational settings is extremely common. For many visually impaired children, educational settings are the first place they are exposed to the assistive technology that they will need to access mainstream computing devices. Current laws provide support for students to receive training from Teachers of the Visually Impaired (TVIs) on these assistive devices. Therefore, TVIs play an important role in the selection and training of technology. Through our interviews with TVIs, we discovered the factors that impact which technologies they select, how they attempt to mitigate the stigma associated with certain technologies, and the challenges that students face in learning assistive technologies. Through this research, we identified three needs that future research on assistive technology should address: (1) increasing focus on built-in accessibility features, (2) providing support for independent learning and exploration, and (3) creating technologies that can support users with progressive vision loss.
Student Perspectives on Digital Phenotyping: The Acceptability of Using Smartphone Data to Assess Mental Health
There is a mental health crisis facing universities internationally. A growing body of interdisciplinary research has successfully demonstrated that using sensor and interaction data from students’ smartphones can give insight into stress, depression, mood, suicide risk and more. The approach, which is sometimes termed Digital Phenotyping, has potential to transform how mental health and wellbeing can be monitored and understood. The approach could also transform how interventions are designed, delivered and evaluated. To date, little work has addressed the human and ethical side of digital phenotyping, including how students feel about being monitored. In this paper we report findings from in-depth focus groups, prototyping and interviews with students. We find they are positive about mental health technology, but also that there are multi-layered issues to address if digital phenotyping is to become acceptable. Using an acceptability framework, we set out the key design challenges that need to be addressed.
This paper presents A-line, a 4D printing system for designing and fabricating morphing three-dimensional shapes out of simple linear elements. In addition to the commonly known benefit of 4D printing to save printing time, printing materials, and packaging space, A-line also takes advantage of the unique properties of thin lines, including their suitability for compliant mechanisms and ability to travel through narrow spaces and self-deploy or self-lock on site. A-line integrates a method of bending angle control in up to eight directions for one printed line segment, using a single type of thermoplastic material. A software platform to support the design, simulation and tool path generation is developed to support the design and manufacturing of various A-line structures. Finally, the design space of A-line is explored through four application areas, including line sculpting, compliant mechanisms, self-deploying, and self-locking structures.
Detecting Visuo-Haptic Mismatches in Virtual Reality using the Prediction Error Negativity of Event-Related Brain Potentials
Designing immersion is the key challenge in virtual reality; this challenge has driven advancements in displays, rendering and recently, haptics. To increase our sense of physical immersion, for instance, vibrotactile gloves render the sense of touching, while electrical muscle stimulation (EMS) renders forces. Unfortunately, the established metric to assess the effectiveness of haptic devices relies on the user’s subjective interpretation of unspecific, yet standardized, questions.
Here, we explore a new approach to detect a conflict in visuo-haptic integration (e.g., inadequate haptic feedback based on poorly configured collision detection) using electroencephalography (EEG). We propose analyzing event-related potentials (ERPs) during interaction with virtual objects. In our study, participants touched virtual objects in three conditions and received either no haptic feedback, vibration, or vibration and EMS feedback. To provoke a brain response in unrealistic VR interaction, we also presented the feedback prematurely in 25% of the trials.
We found that the early negativity component of the ERP (so called prediction error) was more pronounced in the mismatch trials, indicating we successfully detected haptic conflicts using our technique. Our results are a first step towards using ERPs to automatically detect visuo-haptic mismatches in VR, such as those that can cause a loss of the user’s immersion.
We present a new framework describing how teachers use ST Math, a curriculum-integrated, year-long educational game, in 3rd-4th grade classrooms. We combined authentic classroom observations with teacher interviews to identify teacher needs and practices. Our findings extended and contrasted with prior work on teachers’ behaviors around classroom games, identifying differences likely arising from a digital platform and year-long curricular integration. We suggest practical ways that curriculum-integrated games can be designed to help teachers support effective classroom culture and practice.
This paper presents a novel design for a large-scale interactive tactile display. Fast dynamic tactile effects are created at high spatial resolution on a flexible screen, using directable nozzles that spray water jets onto the rear of the screen. The screen further has back-projected visual content and touch interaction. The technology is demonstrated in Feeling Fireworks, a tactile firework show. The goal is to make fireworks more inclusive for the Blind and Low-Vision (BLV) community. A BLV focus group provided input during the development process, and a user study with BLV users showed that Feeling Fireworks is an enjoyable and meaningful experience. A user study with sighted users showed that users could accurately label the correspondence between the designed tactile firework effects and corresponding visual fireworks. Beyond the Feeling Fireworks application, this is a novel approach for scalable tactile displays with potential for broader use.
The general trend in exercise interventions, including those based on exergames, is to see high initial enthusiasm but significantly declining adherence. Social play is considered a core tenet of the design of exercise interventions to help foster motivation to play. To determine whether social play aids in adherence to exergames, we analyzed data from a study involving five waves of six-week exergame trials comparing a single-player and a multiplayer group. In this paper, we examine the multiplayer group to determine who might benefit from social play and why. We found that people who primarily engage in group play have superior adherence to people who primarily play alone. People who play alone in a multiplayer exergame have worse adherence than those playing a single-player version, which can undo any potential benefit of social play. The primary construct distinguishing group versus alone players is their sense of program belonging. Program belonging is, thus, crucial to multiplayer exergame design.
Virtual worlds are infinite environments in which the user can move around freely. When shifting from controller-based movement to regular walking as an input, the limitations of the real world also limit the virtual world. Tackling this challenge, we propose the use of electrical muscle stimulation to limit the necessary real-world space to create an unlimited walking experience. We actuate the users' legs so that they deviate from their straight route and thus walk in circles in the real world while still walking straight in the virtual world. We report on a study comparing this approach to vision shift – the state-of-the-art approach – as well as combining both approaches. The results show that combining both approaches in particular yields high potential to create an infinite walking experience.
Distance learners often experience social isolation and impoverished social interaction with their remote peers. To better understand the connections that distance learners are able to build with peers, we interviewed them about whether and how they perceive or cultivate connections with one another. Our analysis reveals how connections in an online learning environment are formed and experienced across different social contexts and technology affordances, and what strategies and practices enable and inhibit these connections. We discuss the implications of our findings for concepts of shared identity and evolving peer relationships among online learners and for design directions that might address their social needs.
Security managers are leading employees whose decisions shape security measures and thus influence the everyday work of all users in their organizations. To understand how security managers handle user requirements and behavior, we conducted semi-structured interviews with seven security managers from large-scale German companies. Our results indicate that due to the absence of organizational structures that include users in security development processes, security managers unintentionally develop a negative view of users. Their distrust towards users leads to the creation of technical security measures that cannot be influenced by users in any way. However, as previous research has repeatedly shown, rigid security measures lead to frustration and discouragement of users, and also to creative (but usually insecure) methods of security circumvention. We conclude that in order to break through this vicious cycle, security managers need organizational structures, methods and tools that facilitate systematic feedback from users.
Community radio can support the process of having a voice in one’s community as a part of civic action, and promote community dialogue. However, older adults are underrepresented as producers of community radio shows in the UK, and face different challenges to their younger colleagues. By working within the radio production group of an existing organisation of older adults, we identify the motivations and challenges in supporting this type of civic participation in media in later life. Key challenges were identified, including audience engagement, content persistence and process sustainability. In response, we 1) supported the group’s audience engagement using Facebook Live and a phone-in option, and 2) developed a digital production tool. Reporting on the continued use of the tool by the organisation, we discuss how tailored and non-intrusive processes mediated by digital technology can support older adults in delivering richer media experiences whilst serving their civic participatory interests.
Recording experiences and memories is an important role for digital photography, with smartphone cameras leading to individuals taking increasing numbers of pictures of everyday experiences. Increasingly, these are automatically stored in personal, cloud-backed, photo repositories. However, such experiences can be forgotten quickly, with images ‘lost’ within the user’s library, losing their role in supporting reminiscing. We investigate how users might be provoked to view these images and the benefits they bring through the development and evaluation of a proactive, location-based reminiscing tool, called Reveal. We outline how a location-based approach allowed participants to reflect more widely on their photo practice, and the potential of such reminiscing tools to support effective management and curation of individuals’ increasingly large personal photo collections.
"What is Fair Shipping, Anyway?": Using Design Fiction to Raise Ethical Awareness in an Industrial Context
The HCI community cares for the human and social aspects of technologies. Ethical discussions on the social implications of new technologies often happen among researchers, but it is important to raise them also in the industry that designs and implements new systems. In this paper, we introduce a case in which design fiction was used as an ethical discussion tool among company partners. We report the process of creating and prototyping a fictional world embedded with conflicting values that aimed to shift the focus from industrial merits towards societal values and raise discussion among participants. Moreover, we examine the challenges and propose suggestions for crafting critique and friction in the industrial context. Our findings suggest why and how one should use design fiction as a means to raise ethical awareness in a technology- and profit-focused context, to support further activities on developing more humane technological futures.
Participatory Video (PV) is emerging as a rich and valuable method for monitoring and evaluating (M&E) projects in the International Development sector. Although shown to be useful for engaging communities within short-term monitoring exercises or promotion, PV in these contexts presents significant complexity and logistical challenges for sustained uptake by Development organizations. In this paper, we present Our Story, a digitally mediated workflow iteratively designed and deployed on initiatives in Indonesia and Namibia. Developed in collaboration with the International Federation of Red Cross and Red Crescent (IFRC), it supports end-to-end PV production in the field, and was specifically developed to make PV a more sustainable tool for monitoring. We discuss and evaluate Our Story, reporting on how, by lowering skills barriers for facilitators and leveraging consumer technology, PV can be delivered at scale.
Online language lessons have adopted live broadcasted videos to provide more real-time interactive experiences between language teachers and learners. However, learner interactions are primarily limited to the built-in text chat in the live stream. Using text alone, learners cannot get feedback on important aspects of a language, such as speaking skills, that are afforded only by offering richer types of interactions. We present results from a 2-week in-the-wild study, in which we investigate the use of text, audio, video, image, and stickers as interaction tools for language teachers and learners in live streaming. Our language teacher explored three different teaching strategies over four live streamed English lessons, while nine students watched and interacted using multimodal tools. The findings reveal that multimodal communication yields instant feedback and increased engagement, but its use is dependent on factors such as group size, surroundings, time, and online identity.
Smart home technologies are becoming more widespread and common, even as their deployment and implementation remain complex and spread across competing commercial ecosystems. Looking beyond the middle-class, single-family home often at the center of the smart home narrative, we report on a series of participatory design workshops held with residents and building managers to better understand the role of smart home technologies in the context of public housing in the U.S. The design workshops enabled us to gather insight into the specific challenges and opportunities of deploying smart home technologies in a setting where issues of privacy, data collection and ownership, and autonomy collide with diverse living arrangements, and where income, age, and the consequences of monitoring and data aggregation set up an expanding collection of design implications for the ecosystems of smart home technologies.
This paper investigates how haptic and auditory stimulation can be playfully implemented as an accessible and stimulating form of interaction for children. We present the design of Mazi, a sonic Tangible User Interface (TUI) designed to encourage spontaneous and collaborative play among children with high support needs on the autism spectrum. We report on a five-week study of Mazi with five children aged between 6 and 9 years old at a Special Education Needs (SEN) school in London, UK. We found that collaborative play emerged from the interaction with the system, especially with regard to socialization and engagement. Our study contributes to exploring the potential of user-centered TUI development as a channel to facilitate social interaction while providing sensory regulation for children with SENs.
Younger children (under 9 years) with type-1 diabetes are often very passive in the management of their condition and can face difficulties in accessing and understanding basic diabetes related information. This can make transitioning to self-management in later years very challenging. Previous research has mostly focused on educational interventions for older children.
To create an educational tool which can support the diabetes educational process of younger children, we conducted a multiphase and multi-stakeholder user-centred design process. The result is an interactive tool that illustrates diabetes concepts in an age-appropriate way with the use of tangible toys. The tool was evaluated inside a paediatric diabetes clinic with clinicians, children and parents and was found to be engaging, acceptable and effective. In addition to providing implications for the design and adoption of educational tools for children in a clinical setting, we discuss the challenges for conducting user-centred design in such a setting.
People receive a tremendous number of messages through mobile instant messaging (MIM), which generates crowded notifications. This study highlights our attempt to create a new notification rule to reduce this crowdedness, which can be recognized by both senders and recipients. We developed an MIM app that provides only one notification per conversation session, which is a group of consecutive messages distinguished based on a ten-minute silence period. Through a two-week field study, 20,957 message logs and interview data from 17 participants revealed that MIM notifications affect not only the recipients’ experiences before opening the app but also the entire conversation experience, including that of the senders. The new notification rule created new social norms for the participants’ use of MIM. We report themes about the changes in the MIM experience, which will expand the role of notifications for future MIM apps.
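The sessionization rule described above can be sketched as follows (a hypothetical reconstruction for illustration, not the authors' app code): consecutive messages separated by less than ten minutes of silence form one session, and only the message that opens a new session triggers a notification.

```python
from datetime import datetime, timedelta

# Ten minutes of silence ends a conversation session (per the rule
# described in the abstract; the helper name is our own).
SILENCE = timedelta(minutes=10)

def first_messages_of_sessions(timestamps):
    """Given chronologically sorted message timestamps, return the ones
    that start a new session, i.e. the ones that would notify."""
    notify = []
    last = None
    for t in timestamps:
        if last is None or t - last >= SILENCE:
            notify.append(t)
        last = t
    return notify
```

For example, messages at minutes 0, 2, 15, 16, and 30 would collapse into three sessions (and thus three notifications) instead of five.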
Virtual reality (VR) strives to replicate the sensation of the physical environment by mimicking people’s perceptions and experience of being elsewhere. These experiences are often mediated by the objects and tools we interact with in the virtual world (e.g., a controller). Evidence from psychology posits that when using a tool proficiently, it becomes embodied (i.e., an extension of one’s body). There is little work, however, on how to measure this phenomenon in VR, and on how different types of tools and controllers can affect the experience of interaction. In this work, we leverage cognitive psychology and philosophy literature to construct the Locus-of-Attention Index (LAI), a measure of tool embodiment. We designed and conducted a study that measures readiness-to-hand and unreadiness-to-hand for three VR interaction techniques: hands, a physical tool, and a VR controller. The study shows that LAI can measure differences in embodiment with working and broken tools and that using the hand directly results in more embodiment than using controllers.
We present DMove, directional motion-based interaction for Augmented Reality (AR) Head-Mounted Displays (HMDs) that is both hands- and device-free. It uses directional walking as a way to interact with virtual objects. To use DMove, a user needs to perform directional motions such as moving one foot forward or backward. In this research, we first investigate the recognition accuracy of the motion directions of our method and the social acceptance of this type of interaction, together with users’ comfort rating for each direction. We then optimize its design and conduct a second study to compare DMove in task performance and user preferences (workload, motion sickness, user experience) with two approaches, Hand interaction (Meta 2-like) and Head+Hand interaction (HoloLens-like), for menu selection tasks. Based on the results of these two studies, we provide a set of guidelines for DMove and further demonstrate two applications that utilize directional motions.
In the coming years humanoid robots will be increasingly used in a variety of contexts, thereby presenting many opportunities to exploit their capabilities in terms of what they can sense and do. One main challenge is to design technologies that enable those who are not programming experts to personalize robot behaviour. We propose an end user development solution based on trigger-action personalization rules. We describe how it supports editing such rules and its underlying software architecture, and report on a user test that involved end user developers. The test results show that users were able to perform the robot personalization tasks with limited effort, and found the trigger-action environment usable and suitable for the proposed tasks. Overall, we show the potential for using trigger-action programming to make robot behaviour personalization possible even to people who are not professional software developers.
We rely on our sight when manipulating objects. When objects are occluded, manipulation becomes difficult. Such occluded objects can be shown via augmented reality to re-enable visual guidance. However, it is unclear how to do so to best support object manipulation. We compare four views of occluded objects and their effect on performance and satisfaction across a set of everyday manipulation tasks of varying complexity. The best performing views were a see-through view and a displaced 3D view. The former enabled participants to observe the manipulated object through the occluder, while the latter showed the 3D view of the manipulated object offset from the object’s real location. The worst performing view showed remote imagery from a simulated hand-mounted camera. Our results suggest that alignment of virtual objects with their real-world location is less important than an appropriate point-of-view and view stability.
A is for Artificial Intelligence: The Impact of Artificial Intelligence Activities on Young Children’s Perceptions of Robots
We developed a novel early childhood artificial intelligence (AI) platform, PopBots, where preschool children train and interact with social robots to learn three AI concepts: knowledge-based systems, supervised machine learning, and generative AI. We evaluated how much children learned by using AI assessments we developed for each activity. The median score on the cumulative assessment was 70%, and children understood knowledge-based systems the best. Then, we analyzed the impact of the activities on children’s perceptions of robots. Younger children came to see robots as toys that were smarter than them, but their older counterparts saw them more as people that were not as smart as them. Children who performed worse on the AI assessments believed that robots were like toys that were not as smart as them; however, children who did better on the assessments saw robots as people who were smarter than them. We believe early AI education can empower children to understand the AI devices that are increasingly in their lives.
Entering text without having to pay attention to the keyboard is compelling but challenging due to the lack of visual guidance. We propose i’sFree to enable eyes-free gesture typing on a distant display from a touch-enabled remote control. i’sFree does not display the keyboard or gesture trace but decodes gestures drawn on the remote control into text according to an invisible and shifting Qwerty layout. i’sFree decodes gestures similar to a general gesture typing decoder, but learns from the instantaneous and historical input gestures to dynamically adjust the keyboard location. We designed it based on the understanding of how users perform eyes-free gesture typing. Our evaluation shows eyes-free gesture typing is feasible: reducing visual guidance on the distant display hardly affects the typing speed. Results also show that the i’sFree gesture decoding algorithm is effective, enabling an input speed of 23 WPM, 46% faster than the baseline eyes-free condition built on a general gesture decoder. Finally, i’sFree is easy to learn: participants reached 22 WPM in the first ten minutes, even though 40% of them were first-time gesture typing users.
Community intergenerational mentorship offers an opportunity to address older adults’ social isolation while providing valuable one-on-one or small group learning experiences for elementary school students. Current organizations that support this kind of engagement focus on in-person visits that place the burden of logistics and transportation on the older adult. However, as older adults become less independent while aging, coming to schools in person becomes more challenging. We present a qualitative analysis of current intergenerational mentorship practices to understand opportunities for technology to expand access to this experience. We highlight elements critical for building successful mentorship: the importance of relationship building between older adults and children during mentoring activities, the skills mentors acquired to carry out mentoring activities, and support needed from teachers and schools. We contribute a rich description of current intergenerational mentorship practices and provide insights for opportunities for novel HCI technologies in this context.
Chatbots have grown as a space for research and development in recent years due both to the realization of their commercial potential and to advancements in language processing that have facilitated more natural conversations. However, nearly all chatbots to date have been designed for dyadic, one-on-one communication with users. In this paper we present a comprehensive review of research on chatbots supplemented by a review of commercial and independent chatbots. We argue that chatbots’ social roles and conversational capabilities beyond dyadic interactions have been underexplored, and that expansion into this design space could support richer social interactions in online communities and help address the longstanding challenges of maintaining, moderating, and growing these communities. In order to identify opportunities beyond dyadic interactions, we used research-through-design methods to generate more than 400 concepts for new social chatbots, and we present seven categories that emerged from analysis of these ideas.
In fields where in situ performance cannot be measured, ecological validity is difficult to estimate. Drawing on theory from social psychology and virtual reality, we argue that face validity can be a useful proxy for ecological validity. We provide illustrative examples of this relationship from work in search-and-rescue HRI, and conclude with some practical guidelines for the construction of immersive simulations in general.
PaCaPa: A Handheld VR Device for Rendering Size, Shape, and Stiffness of Virtual Objects in Tool-based Interactions
We present PaCaPa, a handheld device that renders haptics on a user’s palm when the user interacts with virtual objects using virtual tools such as a stick. PaCaPa is a cuboid device with two wings that open and close. As the user’s stick makes contact with a virtual object, the wings open by a specific degree to dynamically change the pressure on the palm and fingers. The open angle of the wings is calculated from the angle between the virtual stick and hand direction. As the stick bites into the target object, a large force is generated. Our device enables three kinds of renderings: size, shape, and stiffness. We conducted user studies to evaluate the performance of our device. We also evaluated our device in two application scenarios. User feedback and qualitative ratings indicated that our device can make indirect interaction with handheld tools more realistic.
This paper presents an algorithm audit of the Google Top Stories box, a prominent component of search engine results and powerful driver of traffic to news publishers. As such, it is important in shaping user attention towards news outlets and topics. By analyzing the number of appearances of news article links we contribute a series of novel analyses that provide an in-depth characterization of news source diversity and its implications for attention via Google search. We present results indicating a considerable degree of source concentration (with variation among search terms), a slight exaggeration in the ideological skew of news in comparison to a baseline, and a quantification of how the presentation of items translates into traffic and attention for publishers. We contribute insights that underscore the power that Google wields in exposing users to diverse news information, and raise important questions and opportunities for future work on algorithmic news curation.
./trilaterate: A Fabrication Pipeline to Design and 3D Print Hover-, Touch-, and Force-Sensitive Objects
Hover, touch, and force are promising input modalities that get increasingly integrated into screens and everyday objects. However, these interactions are often limited to flat surfaces and the integration of suitable sensors is time-consuming and costly. To alleviate these limitations, we contribute Trilaterate: A fabrication pipeline to 3D print custom objects that detect the 3D position of a finger hovering, touching, or forcing them by combining multiple capacitance measurements via capacitive trilateration. Trilaterate places and routes actively-shielded sensors inside the object and operates on consumer-level 3D printers. We present technical evaluations and example applications that validate and demonstrate the wide applicability of Trilaterate.
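The geometric core of capacitive trilateration can be sketched as a least-squares solve over distance estimates at known sensor positions. This is a generic sketch, not the paper's pipeline: converting capacitance readings into distances is device-specific and omitted here, and the sensor layout is invented for illustration.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Recover a 3D point from distances to known anchor positions.

    Linearizes |x - p_i|^2 = d_i^2 by subtracting the first equation,
    then solves the resulting linear system in a least-squares sense.
    """
    p = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Four non-coplanar sensors (hypothetical layout) and a finger position
sensors = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
finger = np.array([0.3, 0.2, 0.5])
dists = [np.linalg.norm(finger - np.array(s)) for s in sensors]
print(trilaterate(sensors, dists))  # ≈ [0.3, 0.2, 0.5]
```

With four non-coplanar sensors and exact distances the system is fully determined; with more sensors or noisy capacitance-derived distances, the same least-squares solve averages out measurement error.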
Designing for Reproducibility: A Qualitative Study of Challenges and Opportunities in High Energy Physics
Reproducibility should be a cornerstone of scientific research and is a growing concern among the scientific community and the public. Understanding how to design services and tools that support documentation, preservation and sharing is required to maximize the positive impact of scientific research. We conducted a study of user attitudes towards systems that support data preservation in High Energy Physics, one of science’s most data-intensive branches. We report on our interview study with 12 experimental physicists, studying requirements and opportunities in designing for research preservation and reproducibility. Our findings suggest that we need to design for motivation and benefits in order to stimulate contributions and to address the observed scalability challenge. Therefore, researchers’ attitudes towards communication, uncertainty, collaboration and automation need to be reflected in design. Based on our findings, we present a systematic view of user needs and constraints that define the design space of systems supporting reproducible practices.
Color themes or palettes are popular for sharing color combinations across many visual domains. We present a novel interface for creating color themes through direct manipulation of color swatches. Users can create and rearrange swatches, and combine them into smooth and step-based gradients and three-color blends — all using seamless touch or mouse input. Analysis of existing solutions reveals a fragmented color design workflow, where separate software is used for swatches, for smooth and discrete gradients, and for in-context color visualization. Our design unifies these tasks, while encouraging playful creative exploration. Adjusting a color using a standard color picker can break this interaction flow with mechanical slider manipulation. To keep interaction seamless, we additionally design an in situ color tweaking interface for freeform exploration of an entire color neighborhood. We evaluate our interface with a group of professional designers and students majoring in this field.
A Review & Analysis of Mindfulness Research in HCI: Framing Current Lines of Research and Future Opportunities
Mindfulness is a term seen with increasing frequency in HCI literature, and yet the term itself is used almost as variously as the number of papers in which it appears. This diversity makes comparing or evaluating HCI approaches around mindfulness or understanding the design space itself a challenging task. We conducted a structured ACM literature search based on the term mindfulness. Our selection process yielded 38 relevant papers, which we analyzed for their definition, motivation, practice, evaluation and technology use around mindfulness. We identify similarities, divergences and areas of interest for each aspect, resulting in a framework composed of four perspectives and seven lines of research. We highlight challenges and opportunities for future HCI research and design.
The U.S. healthcare infrastructure is fragmented and prone to breakdowns. Patients and caregivers have to overcome barriers and fix breakdowns on their own in order to obtain necessary services; that is, they perform infrastructuring work to make the healthcare infrastructure work for them. So far, little attention has been paid to such infrastructuring work in healthcare. We present an interview study of 32 U.S. parents of young children, examining the infrastructuring work our participants carry out to deal with breakdowns within the healthcare infrastructure. We report how they repaired unexpected failures at the individual level, aligned components at the organizational and cross-organizational levels, and circumvented infrastructural constraints (e.g., policy and financial ones) that they perceived as ambiguous and demanding. We discuss infrastructuring work in light of the literature on patients’ and caregivers’ work, reflect upon the notion of patient engagement, and explore nuances along several dimensions of infrastructuring work.
There is increasing interest in multisensory experiences in HCI. However, little research considers how sensory modalities interact with each other and how this may impact interactive experiences. We investigate how children associate emotions with scents and 3D shapes. 14 participants (10-17yrs) completed crossmodal association tasks to attribute emotional characteristics to variants of the “Bouba/Kiki” stimuli, presented as 3D tangible models, in conjunction with lemon and vanilla scents. Our findings support pre-existing mappings between shapes and scents, and confirm the associations between the combination of angular shapes (“Kiki”) and lemon scent with arousing emotion, and of round shapes (“Bouba”) and vanilla scent with calming emotion. This extends prior work on crossmodal correspondences in terms of stimuli (3D as opposed to 2D shapes), sample (children), and conveyed content (emotions). We outline how these findings can contribute to designing more inclusive interactive multisensory technologies.
The need for data preservation and reproducible research is widely recognized in the scientific community. Yet, researchers often struggle to find the motivation to contribute to data repositories and to use tools that foster reproducibility. In this paper, we explore possible uses of gamification to support reproducible practices in High Energy Physics. To understand how gamification can be effective in research tools, we participated in a workshop and performed interviews with data analysts. We then designed two interactive prototypes of a research preservation service that use contrasting gamification strategies. The evaluation of the prototypes showed that gamification needs to address core scientific challenges, in particular the fair reflection of quality and individual contribution. Through thematic analysis, we identified four themes which describe perceptions and requirements of gamification in research: Contribution, Metrics, Applications and Scientific practice. Based on these, we discuss design implications for gamification in science.
Long-distance running events (LDRE) are increasingly popular, attracting not just runners but a rapidly growing number of spectators. Due to the long duration and broad geographic spread of such events, interactions between runners (R) and their supporting spectators (S) are limited to the brief moments when runners pass by. Current technology offers limited support for these interactions, mainly measuring and displaying basic running information that spectators passively consume. In this paper, we present qualitative studies that build an in-depth understanding of R&S’ shared experience during LDRE and of how technology can enrich this experience. Drawing on these findings, we propose a two-layer DyPECS framework that highlights the rich dynamics of R&S’ multi-faceted running journey and of their micro-encounters. We conclude with design implications for the multi-faceted co-experience of R&S during LDRE.
Local Standards for Anonymization Practices in Health, Wellness, Accessibility, and Aging Research at CHI
When studying technologies pertaining to health, wellness, accessibility, and aging, researchers are often required to perform a balancing act between controlling and sharing sensitive data of the people in their studies and protecting the privacy of these participants. If the data can be anonymized and shared, it can boost the impact of the research by facilitating replication and extension. Despite anonymization, data reporting and sharing may lead to re-identification of participants, which can be particularly problematic when the research deals with sensitive topics, such as health. We analyzed 509 CHI papers in the domains of health, wellness, accessibility, and aging to examine data reporting and sharing practices. Our analysis revealed notable patterns and trends regarding the reporting of age, gender, participant types, sample sizes, methodology, ethical considerations, anonymization techniques, and data sharing. Based on our findings, we propose several suggestions for community standards and practices that could facilitate data reporting and sharing while limiting the privacy risks for study participants.
Difficulty is one of the major motivational pulls of video games, and thus many games use Dynamic Difficulty Adjustment (DDA) systems to improve the game experience. This paper describes our research investigating the influence of DDA systems on players’ confidence, evaluated using an in-game bet system. Our hypothesis is that DDA systems may lead players to overconfidence, revealed by an overestimation of their chances of success when betting. This boost of confidence may be part of the positive impact of DDA systems on the quality of the game experience. We explain our method for evaluating players’ confidence and implement it in three games involving logical, motor, and sensory difficulty. We describe two experimental conditions in which difficulty is either randomly chosen or adapted using a DDA algorithm. Results show how DDA systems can lead players to high levels of overconfidence.
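The abstract does not specify the DDA algorithm itself; a minimal sketch of one common family, nudging a difficulty parameter after each win or loss, looks like this. The function name, step size, and bounds are hypothetical, not taken from the paper.

```python
def adjust_difficulty(difficulty, won, step=0.1, lo=0.0, hi=1.0):
    """Raise difficulty after a win, lower it after a loss, clamped to
    [lo, hi] (hypothetical minimal DDA rule, not the paper's algorithm)."""
    difficulty += step if won else -step
    return max(lo, min(hi, difficulty))

d = 0.5
for won in [True, True, False, True]:  # simulated win/loss sequence
    d = adjust_difficulty(d, won)
print(round(d, 2))  # → 0.7
```

A rule of this shape drives a player's success rate toward roughly 50%, which is exactly why, as the paper hypothesizes, players under DDA may overestimate their own chances.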
Continuous Alertness Assessments: Using EOG Glasses to Unobtrusively Monitor Fatigue Levels In-The-Wild
As the day progresses, cognitive functions are subject to fluctuations. While the circadian process results in diurnal peaks and drops, the homeostatic process manifests itself in a steady decline of alertness across the day. Awareness of these changes allows the design of proactive recommender and warning systems, which encourage demanding tasks during periods of high alertness and flag accident-prone activities in low alertness states. In contrast to conventional alertness assessments, which are often limited to lab conditions, bulky hardware, or interruptive self-assessments, we base our approach on eye blink frequency data known to directly relate to fatigue levels. Using electrooculography sensors integrated into regular glasses’ frames, we recorded the eye movements of 16 participants over the course of two weeks in-the-wild and built a robust model of diurnal alertness changes. Our proposed method allows for unobtrusive and continuous monitoring of alertness levels throughout the day.
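As a rough illustration of the blink-frequency signal such a system builds on, blinks per minute can be computed over a trailing window of blink timestamps. This is a deliberately simplified sketch (hypothetical function and window size), not the EOG processing used in the study.

```python
def blink_rate(blink_times, window_s=60.0):
    """Blinks per minute in a trailing window ending at each blink time
    (timestamps in seconds, assumed sorted ascending)."""
    return [
        sum(1 for b in blink_times if t - window_s < b <= t) * 60.0 / window_s
        for t in blink_times
    ]

print(blink_rate([0, 10, 20, 70]))  # → [1.0, 2.0, 3.0, 2.0]
```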
"Everyone Brings Their Grain of Salt": Designing for Low-Literate Parental Engagement with a Mobile Literacy Technology in Côte d’Ivoire
Significant research has demonstrated the crucial role that parents play in supporting the development of children’s literacy, but in contexts where adults may lack sufficient literacy in the target language, it is not clear how to most effectively scaffold parental support for children’s literacy. Prior work has designed technologies to teach children literacy directly, but this work has not focused on designing for low-literate parents, particularly for multilingual and developing contexts. In this paper, we describe findings from a qualitative study conducted in several regions of rural Côte d’Ivoire to understand Ivorian parents’ beliefs, desires, and preferences for French literacy. We discuss themes that emerged from these interviews, surrounding ideas of trust, collaboration, and culturally-responsive design, and we highlight implications for the design of technology to scaffold low-literate parental support for children’s literacy.
Internet use and online services underpin everyday life, and the resultant energy demand is almost entirely hidden, yet significant and growing: it is anticipated to reach 21% of global electricity demand by 2030 and to eclipse half the greenhouse gas emissions of transportation by 2040. Driving this growth, real-time video streaming (‘watching’) is estimated at around 50% of all peak data traffic. Using a mixed-methods analysis of the use of 66 devices (e.g. smart TVs, tablets) across 20 participants in 9 households, we reveal the online activity of domestic watching and provide a detailed exploration of video-on-demand activities. We identify new ways in which watching is transitioning in more rather than less data demanding directions; and explore the role HCI may play in reducing this growing data demand. We further highlight implications for key HCI and societal stakeholders (policy makers, service providers, network engineers) to tackle this important issue.
Overcoming Distractions during Transitions from Break to Work using a Conversational Website-Blocking System
Work breaks, both physical and digital, play an important role in productivity and workplace wellbeing. Yet, the growing availability of digital distractions from online content can turn breaks into prolonged “cyberloafing”. In this paper, we present UpTime, a system that aims to support workers’ transitions from breaks back to work, moments susceptible to digital distractions. UpTime combines a browser extension and a chatbot; users interact with it through proactive and reactive chat prompts. By sensing transitions from inactivity, UpTime helps workers avoid distractions by automatically blocking distracting websites temporarily, while still giving them control to take necessary digital breaks. We report findings from a 3-week comparative field study with 15 workers. Our results show that automatic, temporary blocking at transition points can significantly reduce digital distractions and stress without sacrificing workers’ sense of control. Our findings, however, also emphasize that overloading users’ existing communication channels for chatbot interaction should be done thoughtfully.
Techies Against Facebook: Understanding Negative Sentiment Toward Facebook via User Generated Content
Researchers have recognized the need to pay attention to negative aspects and non-use of social media services to uncover usage barriers and surface shortcomings of these systems. We contribute to these efforts by analyzing comments on posts related to Facebook on two blogs with a technically savvy readership: Slashdot and Schneier on Security. Our analysis indicates that technically savvy individuals exhibit notably large negative sentiment toward Facebook with nearly 45% of the 3,000 reader comments we coded expressing such views. Qualitative coding revealed Privacy and Security, User Experience, and Personal Disposition as key factors underlying the negative views. Our findings suggest that negative sentiment is an explicit higher level factor driving non-use practices. Further, we confirm several non-use practices reported in the literature and identify additional aspects connected to recent technological and societal developments. Our results demonstrate that analysis of user generated content can be useful for surfacing usage practices on a large scale.
Fourth-age residents in long-term care facilities (LTCF) are known to suffer declines in well-being due to their advanced age and resulting loss of independence. Using an action research approach, we set up a makerspace in a New Jersey LTCF for eight weeks to see whether it could improve residents’ well-being. Based on engaged observation over 280 hours and semi-structured interviews with participants, we find that people aged 80-99 years will spend (sometimes significant) time in a makerspace for the purposes of making and companionship; that makerspaces can contribute to both autonomy and well-being for older residents; and that participants produced not only decorative art but also novel artifacts that solved real challenges in their daily lives. We situate these findings in the literature on art and activity therapy for fourth-age people, and make recommendations for makerspaces in long-term care facilities.
Supporting Communication About Values Between People with Multiple Chronic Conditions and their Providers
People with multiple chronic conditions (MCC) often disagree with healthcare providers on priorities for care, leading to worse health outcomes. To align priorities, there is a need to support patient-provider communication about what patients consider important for their well-being and health (i.e., their personal values). To address barriers to communication about values, we conducted a two-part study with key stakeholders in MCC care: patients, informal caregivers, and providers. In Part I, co-design activities generated seven dimensions that characterize stakeholders’ diverse ideas for supporting communication about values: explicitness, effort, disclosure, guidance, intimacy, scale, and synchrony. In Part II, we used the dimensions to generate three design concepts and presented them in focus groups to further scrutinize findings from Part I. Based on these findings we outline directions for research and design to improve patient-provider communication about patients’ personal values.
SmartEye: Assisting Instant Photo Taking via Integrating User Preference with Deep View Proposal Network
Instant photo taking and sharing has become one of the most popular forms of social networking. However, taking high-quality photos is difficult, as it requires knowledge and skill in photography that most non-expert users lack. In this paper we present SmartEye, a novel mobile system that helps users take well-composed photos in-situ. The back-end of SmartEye integrates the View Proposal Network (VPN), a deep-learning-based model that outputs composition suggestions in real time, with a novel, interactively updated module (P-Module) that adjusts the VPN outputs to account for personalized composition preferences. We also design a novel front-end interface that enables real-time, informative interaction during photo taking. We conduct two user studies to investigate SmartEye qualitatively and quantitatively. Results show that SmartEye effectively models and predicts personalized composition preferences, provides instant high-quality compositions in-situ, and significantly outperforms non-personalized systems.
Wikipedia is one of the most successful online communities in history, yet it struggles to attract and retain women editors, a phenomenon known as the gender gap. We investigate this gap by focusing on the voices of experienced women Wikipedians. In this interview-based study (N=25), we identify a core theme among these voices: safety. We reveal how our participants perceive safety within their community, how they manage their safety both conceptually and physically, and how they act on this understanding to create safe spaces on and off Wikipedia. Our analysis shows that Wikipedia functions as both a multidimensional and porous space encompassing a spectrum of safety. Navigating this space requires these women to employ sophisticated tactics related to identity management, boundary management, and emotion work. We conclude with a set of provocations to spur the design of future online environments that encourage equity, inclusivity, and safety for historically marginalized users.
In this work, we challenge the Gaze interaction paradigm “What you see is what you get” by introducing “playing with peripheral vision”. We developed a conceptual framework for this novel gaze-aware game dynamic and illustrated the concept with SuperVision, a collection of three games that play with peripheral vision. We propose perceptual and interaction challenges that require players not to look directly, relying instead on their periphery. To validate the game dynamic and experience, we conducted a user study with twenty-four participants. Results show that the game concept created an engaging and playful experience. Participants showed proficiency in overcoming the game challenges, developing clear strategies to succeed. Moreover, we found evidence that playing the game can affect players’ visual skills, fostering greater peripheral awareness.
Visual and verbal cues can reinforce barriers to access for women in science, technology, engineering, and math (STEM) disciplines. Psychologically inclusive design is an evidence-based approach to reducing psychological barriers by strategically placing content and design cues in the environment. Two large field experiments provide estimates of the behavioral impact of psychologically inclusive cues on women’s and men’s enrollment behaviors in an online learning environment. First, a gender-inclusive photo and statement in an online advertisement for a STEM course increased the click-through rate by 26% among women but not men (N=209,000). Second, adding an inclusivity statement with a gender-inclusive course image to the enrollment page raised the proportion of women enrolling in a STEM course by up to 18% (N=63,000). These findings contribute evidence of the behavioral impact of psychologically inclusive design to the literature and yield practical implications for the presentation of STEM opportunities.
Conversational agents promise conversational interaction but fail to deliver. Efforts often emulate functional rules from human speech, without considering key characteristics that conversation must encapsulate. Given its potential in supporting long-term human-agent relationships, it is paramount that HCI focuses efforts on delivering this promise. We aim to understand what people value in conversation and how this should manifest in agents. Findings from a series of semi-structured interviews show people make a clear dichotomy between social and functional roles of conversation, emphasising the long-term dynamics of bond and trust along with the importance of context and relationship stage in the types of conversations they have. People fundamentally questioned the need for bond and common ground in agent communication, shifting to more utilitarian definitions of conversational qualities. Drawing on these findings we discuss key challenges for conversational agent design, most notably the need to redefine the design parameters for conversational agent interaction.
Guitars are physical instruments that require skillful two-handed use. Their use is also supported by diverse digital and physical resources, such as videos and chord charts. To understand the challenges of interacting with supporting resources while playing, we conducted an ethnographic study of the preparation activities of working musicians. We observe successive stages of individual and collaborative preparation, in which working musicians engage with a diverse range of digital and physical resources to support their preparation. Interaction with this complex ecology of digital and physical resources is finely interwoven into their embodied musical practices, which are usually encumbered by having their instrument in hand, and often by playing. We identify challenges for augmenting guitars within the rehearsal process by supporting interaction that is encumbered, contextual and connected, and suggest a range of possible responses.
From voice commands and air taps to touch gestures on frames: Various techniques for interacting with head-mounted displays (HMDs) have been proposed. While these techniques have both benefits and drawbacks dependent on the current situation of the user, research on interacting with HMDs has not concluded yet. In this paper, we add to the body of research on interacting with HMDs by exploring foot-tapping as an input modality. Through two controlled experiments with a total of 36 participants, we first explore direct interaction with interfaces that are displayed on the floor and require the user to look down to interact. Secondly, we investigate indirect interaction with interfaces that, although operated by the user’s feet, are always visible as they are floating in front of the user. Based on the results of the two experiments, we provide design recommendations for direct and indirect foot-based user interfaces.
Only one item left?: Heuristic Information Trumps Calorie Count When Supporting Healthy Snacking Under Low Self-Control
Pursuing the goal of a healthy diet may be challenging, especially when self-control resources are low. Yet many persuasive user interfaces fostering healthy choices are designed for situations with ample self-control, e.g. showing nutritional information to support reflective decision making. In this paper we propose that under low self-control, persuasive user interfaces need to rely on simple heuristic decision making to be successful. We report an experiment that tested this assumption in a 2 (low vs. high self-control) x 2 (calorie vs. heuristic information) design. The results reveal a significant interaction effect. Participants with low self-control resources chose the healthy snack more often when snacks were labelled with heuristic information than when they were labelled with calorie information. Both strategies were about equally successful for participants with high self-control. Exploiting situations of low self-control with heuristic information is a new and promising approach to designing persuasive technology for healthy eating.
Bayesian statistical analysis has gained attention in recent years, including in HCI. The Bayesian approach has several advantages over traditional statistics, including producing results with more intuitive interpretations. Despite growing interest, few papers in CHI use Bayesian analysis. Existing tools to learn Bayesian statistics require significant time investment, making it difficult to casually explore Bayesian methods. Here, we present a tool that lowers the barrier to exploration: a set of R code templates that guide Bayesian novices through their first analysis. The templates are tailored to CHI, supporting analyses found to be most common in recent CHI papers. In a user study, we found that the templates were easy to understand and use. However, we found that participants without a statistical background were not confident in their use. Together our contributions provide a concise analysis tool and empirical results for understanding and addressing barriers to using Bayesian analysis in HCI.
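The paper's templates are written in R; purely as an illustration of how lightweight a first Bayesian analysis can be, here is a conjugate Beta-Binomial update for a success rate, sketched in Python. The function and example numbers are not taken from the paper's templates.

```python
def beta_binomial_posterior(successes, trials, prior_a=1.0, prior_b=1.0):
    """Posterior Beta(a, b) for a success probability after observing
    `successes` out of `trials`, starting from a Beta(prior_a, prior_b)
    prior (Beta(1, 1) is the uniform prior)."""
    a = prior_a + successes
    b = prior_b + (trials - successes)
    return a, b, a / (a + b)  # posterior parameters and posterior mean

# e.g. a participant completes 7 of 10 trials successfully
a, b, mean = beta_binomial_posterior(7, 10)
print(a, b, round(mean, 3))  # → 8.0 4.0 0.667
```

The intuitive reading, "given a uniform prior and 7 successes in 10 trials, the expected success probability is about 0.67", is exactly the kind of direct interpretation the Bayesian approach offers over p-values.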
Menopause is a major life change affecting roughly half of the population, resulting in physiological, emotional, and social changes. To understand experiences with menopause holistically, we conducted a study of a subreddit forum. The project was informed by feminist social science methodologies, which center knowledge production on women’s lived experiences. Our central finding is that the lived experience of menopause is social: menopause is less about bodily experiences by themselves and more about how experiences with the body become meaningful over time in the social context. We find that gendered marginalization shapes diverse social relationships, leading to widespread feelings of alienation and negative transformation – often expressed in semantically dense figurative language. Research and design can accordingly address menopause not only as a women’s health concern, but also as a matter of facilitating social support and a social justice issue.
Turn to the Self in Human-Computer Interaction: Care of the Self in Negotiating the Human-Technology Relationship
Everyday life is increasingly mediated by technology. Technology is rapidly growing in capacity and complexity, as is especially evident in developments in artificial intelligence and big data analytics. As human-computer interaction (HCI) endeavors to examine and theorize how people act and interact with ever-evolving technology, an important, emerging concern is how the self (the totality of internal qualities such as consciousness and agency) plays out in relation to the technology-mediated external world. To analyze this question, we draw from Michel Foucault’s ethics of “care of the self,” which examines how the self is constituted through conscious and reflective work on self-transformation. We present three case studies to illustrate how individuals carry out practices of the self to reflect upon and negotiate their relationship with technology. We discuss the importance of examining the self and foreground the notion of care of the self in HCI research and design.
We created a quiz-based intervention to help secondary school students in Cameroon with exam practice. We sent regularly-spaced, multiple-choice questions to students’ own mobile devices and examined factors which influenced quiz participation. These quizzes were delivered via either SMS or WhatsApp, according to each student’s preference. We conducted a 3-week deployment with 546 students at 3 schools during their month of independent study prior to their graduating exam. We found that participation rates were heavily impacted by trust in the intervening organization and perceptions of personal security in the socio-technical environment. Parents also played a key gate-keeping role on students’ digital activities. We describe how this role – along with different perceptions of smartphones versus basic phones – may manifest in lower participation rates among WhatsApp-based users as compared to SMS users. Finally, we discuss design implications for future educational interventions that target students’ personal cellphones outside of the classroom.
Centuries of development in optics have given us passive devices (i.e. lenses, mirrors and filters) to enrich audience immersivity with light effects, but there is nothing similar for sound. Beam-forming in concert halls and outdoor gigs still requires a large number of speakers, while headphones remain the state of the art for personalized audio immersivity in VR. In this work, we show how 3D printed acoustic meta-surfaces, assembled into the equivalent of optical systems, may offer a different solution. We demonstrate how to build them and how to use simple design tools, like the thin-lens equation, for sound as well. We present some key acoustic devices: a “collimator”, which transforms a standard computer speaker into an acoustic “spotlight”; and a “magnifying glass”, which creates sound sources that appear to come from locations distinct from the speaker itself. Finally, we demonstrate an acoustic varifocal lens, discussing applications equivalent to auto-focus cameras and VR headsets, and the limitations of the technology.
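The thin-lens equation mentioned above is the standard relation from geometrical optics, stated here as general background rather than taken from the paper:

```latex
\frac{1}{f} = \frac{1}{s_o} + \frac{1}{s_i}
```

where $f$ is the focal length and $s_o$, $s_i$ are the source and image distances. Placing the source at the focal point ($s_o = f$) forces $1/s_i \to 0$, i.e. the output is collimated into a parallel beam, which is the principle behind turning a speaker into an acoustic “spotlight”.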
Intelligent agents have become prevalent in everyday IT products and services. To improve an agent’s knowledge of a user and the quality of personalized service experiences, it is important for the agent to cooperate with the user (e.g., asking users to provide their information and feedback). However, few studies have examined how to support such user-agent co-performance from a human-centered perspective. To fill this gap, we devised Co-Performing Agent, a Wizard-of-Oz-based research probe of an agent that cooperates with a user to learn, helping users develop a partnership mindset. Incorporating the probe, we conducted a two-month exploratory study aiming to understand how users experience co-performing with their agent over time. Based on the findings, this paper presents the factors that affected users’ co-performing behaviors and discusses design implications for supporting constructive co-performance and building a resilient user-agent partnership over time.
Many touch-based interactions provide limited opportunities for direct tactile feedback; examples include multi-user touch displays, augmented-reality projections on passive surfaces, and mid-air input. In this paper, we consider distal feedback, delivered through vibrotactile stimulation on a smartwatch worn on the user’s non-dominant wrist, as an alternative to interaction-location vibrotactile feedback under the user’s finger. We compare the effectiveness of interaction-location feedback vs. distal feedback through a Fitts’s Law task completed on a smartphone. Results show that distal and interaction-location feedback both reduce errors in target acquisition and exhibit statistically comparable performance, suggesting that distal vibrotactile feedback is a suitable alternative when interaction-location feedback is not readily available.
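The Fitts’s Law task used above models pointing difficulty; in the commonly used Shannon formulation, movement time grows with target distance and shrinks with target width (the constants $a$ and $b$ are fitted per condition; the formulation choice is our assumption, not stated in the abstract):

```latex
% MT = movement time, D = distance to target centre, W = target width;
% a, b = regression constants fitted empirically per feedback condition.
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

Comparing the fitted $a$ and $b$ across feedback conditions is the usual way such a study quantifies "statistically comparable performance".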
Growing HCI interest in developing contexts and cultural craft practices makes this a ripe moment to focus on their under-explored, homegrown sociotechnical infrastructures. This paper explores the creative infrastructural actions embedded within the practices of songket’s supply chain in Terengganu, Malaysia. We report on contextual interviews with 92 participants including preparation workers, weavers, designers, merchants, and customers. Findings indicate that increased creative infrastructural actions are reflected in these actors’ resourcefulness in mobilizing information, materials, and equipment, and in making creative artifacts through new technologies woven into traditional practices. We propose two novel approaches to design in this craft-based infrastructure. First, we explore designing for the social layer of infrastructure and its mutually advantageous exploitative relationships rooted in culture and traditions. Second, we suggest designing for roaming value-creation artifacts, which blend physical and digital materializations of songket textile design. Developed through a collaborative and asynchronous process, these artifacts, we argue, represent less-explored vehicles for value co-creation, and sociotechnical infrastructures as emerging sites of innovation could benefit from HCI research.
Trust in a Recommender System (RS) is crucial for its overall success. However, it remains underexplored whether users trust personal recommendation sources (i.e., other humans) more than impersonal sources (i.e., conventional RS), and, if they do, whether the perceived quality of the explanations provided accounts for the difference. We conducted an empirical study in which we compared these two sources of recommendations and explanations. Human advisors were asked to explain the movies they recommended in short texts, while the RS created explanations based on item similarity. Our experiment comprised two rounds of recommending. Over both rounds, the quality of explanations provided by users was assessed as higher than the quality of the system’s explanations. Moreover, explanation quality significantly influenced perceived recommendation quality as well as trust in the recommendation source. Consequently, we suggest that RS should provide richer explanations in order to increase their perceived recommendation quality and trustworthiness.
We introduce Springlets, expressive, non-vibrating mechanotactile interfaces on the skin. Using embedded shape-memory alloy springs, we implement Springlets as thin, flexible stickers that can be worn on various body locations, including the neck and head, thanks to their silent operation. We present a technically simple and rapid technique for fabricating a wide range of Springlet interfaces and computer-generated tactile patterns. We developed Springlets for six tactile primitives: pinching, directional stretching, pressing, pulling, dragging, and expanding. A study placing Springlets on the arm and near the head demonstrates Springlets’ effectiveness and wearability in both stationary and mobile situations. We explore new interactive experiences in tactile social communication, physical guidance, health interfaces, navigation, and virtual reality gaming, enabled by Springlets’ unique and scalable form factor.
As videos become increasingly ubiquitous, so does video-based commenting. To contextualize comments, people often reference specific audio/visual content within a video. However, the literature falls short of explaining the types of video content people refer to, how they establish references and identify referents, how video characteristics (e.g., genre) impact referencing behaviors, and how references impact social engagement. We present a taxonomy for classifying video references by referent type and temporal specificity. Using our taxonomy, we analyzed 2.5K references with quotations and timestamps collected from public YouTube comments. We found that: 1) people reference intervals of video more frequently than time-points, 2) visual entities are referenced more often than sounds, and 3) comments with quotes are more likely to receive replies but not more “likes”. We discuss the need for in-situ dereferencing user interfaces, illustrate design concepts for typed referencing features, and provide a dataset for future studies.
Do We Care About Diversity in Human Computer Interaction: A Comprehensive Content Analysis on Diversity Dimensions in Research
In Human-Computer Interaction (HCI) research, awareness of the relevance of user diversity is increasing. In this work, we analyze whether the articulated need for more diversity-sensitive research has indeed led to greater consideration of diversity in HCI research. Based on a comprehensive collection of diversity dimensions, we present the results of a quantitative content analysis of articles accepted to the Proceedings of the Conference on Human Factors in Computing Systems in 2006, 2011, and 2016. The results demonstrate how many diversity dimensions were considered and how intensively, and moreover highlight those dimensions that have so far received less attention. Uncovering continuous and discontinuous trends across time and differences between subfields of research, we identify research gaps and aim to contribute to a comprehensive understanding of diversity that supports diversity-sensitive research in HCI.
Exploring a city panorama from a vantage point is a popular tourist activity. Typical audio guides that support this activity are limited by their lack of responsiveness to user behavior and by the difficulty of matching audio descriptions to the panorama. These limitations can inhibit the acquisition of information and negatively affect user experience. This paper proposes Gaze-Guided Narratives as a novel interaction concept that helps tourists find specific features in the panorama (gaze guidance) while adapting the audio content to what has been previously looked at (content adaptation). Results from a controlled study in a virtual environment (n=60) revealed that a system featuring both gaze guidance and content adaptation obtained better user experience, lower cognitive load, and led to better performance in a mapping task compared to a classic audio guide. A second study with tourists situated at a vantage point (n=16) further demonstrated the feasibility of this approach in the real world.
Block-based programming languages can support novice programmers through features such as simplified code syntax and user-friendly libraries. However, most block-based programming languages are highly visual, which makes them inaccessible to blind and visually impaired students. To address the inaccessibility of block-based languages, we introduce StoryBlocks, a tangible block-based game that enables blind programmers to learn basic programming concepts by creating audio stories. In this paper, we document the design of StoryBlocks and report on a series of design activities with groups of teachers, Braille experts, and students. Participants in our design sessions worked together to create accessible stories, and their feedback offers insights for the future development of accessible, tangible programming tools.
This paper examines the entangled development of governance strategies and networked technologies in the pervasive but under-examined domain of public restrooms. Drawing on a mix of archival materials, participant observation, and interviews within and beyond the city of Seattle, Washington, we look at the motivations of public restroom facilities managers as they introduce (or consider introducing) networked technology in the spaces they administer. Over the course of the research, we found internet of things technologies (connected devices imbued with computational capacity) became increasingly tied up with cost-reducing efficiencies and exploitative regulatory techniques. Drawing from this case study, we develop the concept of managerial visions: ways of seeing that structure labor, enforce compliance, and define access to resources. We argue that these ways of seeing prove increasingly critical to HCI research as it attends to computer-mediated collaboration beyond white-collar settings.
Algorithms exert great power in curating online information, yet are often opaque in their operation, and even in their existence. Since opaque algorithms sometimes make biased or deceptive decisions, many have called for increased transparency. However, little is known about how users perceive and interact with potentially biased and deceptive opaque algorithms. What factors are associated with these perceptions, and how does adding transparency to algorithmic systems change user attitudes? To address these questions, we conducted two studies: 1) an analysis of 242 users’ online discussions about the Yelp review filtering algorithm and 2) an interview study with 15 Yelp users in which the algorithm’s existence was disclosed via a tool. We found that users question or defend this algorithm and its opacity depending on their engagement with and personal gain from the algorithm. We also found that adding transparency to the algorithm changed users’ attitudes towards it: users reported their intention to either write for the algorithm in future reviews or leave the platform.
Emojis are becoming an increasingly popular mode of communication between individuals worldwide, with researchers claiming them to be a type of “ubiquitous language” that can span different languages due to their pictorial nature. Our study uses a combination of methods to examine how emojis are adopted and perceived by individuals from diverse cultural backgrounds across 45 countries. Our survey and interview findings point to the existence of a cultural gap between user perceptions and the current emoji standard. Using participatory design, we sought to address this gap by designing 40 emojis and conducting another survey to evaluate their acceptability compared to existing Japanese emojis. We also draw on participant observation from a Unicode Consortium meeting on emoji addition. Our analysis leads us to discuss how emojis might be made more inclusive, diverse, and representative of the populations that use them.
Time to Scale: Generalizable Affect Detection for Tens of Thousands of Students across An Entire School Year
We developed generalizable affect detectors using 133,966 instances of 18 affective states collected from 69,174 students who interacted with an online math learning platform called Algebra Nation over the entire school year. To enable scalability and generalizability, we used generic interaction features (e.g., viewing a video, taking a quiz), which do not require specialized sensors and are domain- and (to a certain extent) system-independent. We experimented with standard classifiers, recurrent neural networks, and genetically evolved neural networks for affect modeling. Prediction accuracies, quantified with Spearman’s rho, were modest and ranged from .08 (for surprise) to .34 (for happiness) with a mean of .25. Our model trained on Algebra students generalized to a different set of Geometry students (n = 28,458) on the same platform. We discuss implications for scaling up affect detection for affect-sensitive online learning environments which aim to improve engagement and learning by detecting and responding to student affect.
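Spearman’s rho, used above to quantify prediction accuracy, is simply the Pearson correlation computed on rank-transformed values. As a minimal illustrative sketch (pure Python, with average ranks for ties; not the paper’s evaluation code), it might look like:

```python
def _ranks(xs):
    """1-based ranks of xs, averaging the ranks of tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of values tied with xs[order[i]]
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A rho of .25, as reported for the mean across affective states, thus reflects a modest but consistent monotonic association between predicted and observed affect.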
Online collaborative writing is an increasingly common practice. Despite its positive effect on productivity and quality of work, it poses challenges to co-authors in remote settings because of limitations in conversational grounding and activity awareness. This paper presents Eye-Write, a novel system which allows two co-authors to see at will the location of their partner’s gaze within a text editor. To investigate the effect of shared gaze on collaboration, we conducted a study on synchronous remote collaborative writing in academic settings with 20 dyads. Gaze sharing improved five aspects of perceived collaboration quality: mutual understanding, level of joint attention, flow of communication, level of negotiation, and awareness of the co-author’s activity. Furthermore, dyads whose participants deactivated the gaze visualization showed a smaller degree of collaboration. Our findings offer insights for future text editors by outlining the benefits of at-will gaze sharing in collaborative writing.
Students and hobbyists build embedded systems that combine sensing, actuation and microcontrollers on solderless breadboards. To help students debug such circuits, experienced teachers apply visual inspection, targeted measurements, and circuit modifications to diagnose and localize the problem(s). However, experienced helpers may not always be available to review student projects in person. To enable remote debugging of circuit problems, we introduce Heimdall, a remote electronics workbench that allows experts to visually inspect a student’s circuit; perform measurements; and to re-wire and inject test signals. These interactions are enabled by an actuated inspection camera; an augmented breadboard that enables flexible configuration of row connectivity and measurement/injection lines; and a web-based UI that teachers can use to perform measurements through interaction with the captured images. We demonstrate that common issues arising in embedded electronics classes can be successfully diagnosed remotely and report on preliminary user feedback from teaching assistants who frequently debug circuits.
HOPE for Computing Education: Towards the Infrastructuring of Support for University-School Partnerships
The state of computing education in the UK has been described as “patchy and fragile”, with universities tasked to provide further support to schools. However, little guidance exists on the provision of this support. To explore the development of university-school partnerships, we present findings from an extended educational engagement coordinated by Newcastle University as part of the national “Create, Learn and Inspire with the micro:bit and the BBC” initiative. Following an action research approach, we explore the experiences of undergraduate students, schoolteachers, and an educational broker through the process, including recruitment, content development, and the delivery of over 30 computing lessons by nine undergraduates. We identify a number of design considerations for the development of High Opportunity Progression Ecosystems for the improvement of computing education, such as student identity, workload model, and process visibility. We then discuss the potential role of technology in infrastructuring support for university-school partnerships.
Recent research suggests that a robot’s motors make sounds that can influence users’ perception of the robot’s characteristics. To more deeply understand users’ associations with specific sonic characteristics, we adapted methods from sensory science including Check All That Apply (CATA) questions and Polarized Sensory Positioning (PSP) to tease out small differences in motor sounds in an online survey. These methods are straightforward for untrained people to do in an online setting, mathematically rigorous, and can explore a variety of subtle auditory and perceptual stimuli. We describe how to use these methods, interpret the results with several intuitive visual representations, and show that the results align with a previous study of the same dataset. We close by discussing benefits and limitations of applying these methods to study subtle phenomena in the HCI community.
Intensive exploration and navigation of hierarchical lists on smartphones can be tedious and time-consuming, as it often requires users to frequently switch between multiple views. To overcome this limitation, we present PinchList, a novel interaction design that leverages pinch gestures to support seamless exploration of multi-level list items in hierarchical views. With PinchList, sub-lists are accessed with a pinch-out gesture, whereas a pinch-in gesture navigates back to the previous level. Additionally, pinch and flick gestures are used to navigate lists consisting of more than two levels. We conduct a user study to refine the design parameters of PinchList, such as a suitable item size, and quantitatively evaluate target acquisition performance using pinch-in/out gestures in both scrolling and non-scrolling conditions. In a second study, we compare the performance of PinchList in a hierarchical navigation task with two commonly used touch interfaces for list browsing: pagination and expand-and-collapse interfaces. The results reveal that PinchList is significantly faster than the other two interfaces in accessing items located in hierarchical list views. Finally, we demonstrate that PinchList enables a host of novel applications in list-based interaction.
In this paper, we investigate the use of mobile technology in an underexplored context, the bed that couples share. Despite large amounts of research on the impact of pre-bedtime technology use on our sleep and mental state, scant research in the HCI field focuses on the physical bed as a negotiated site of technology use by couples. This paper explores (a) the meaning of the bed accessed by mobile technology and (b) the strategies of both individual and shared technology use in bed, in the context of couples’ relationships. We investigate the effects of mobile technology on couples’ bed-sharing practices through in-depth interviews (n = 12) and an online survey (n = 117). We report on creative and negotiated bodily practices of mobile technology use by couples in bed, and the perceived effects on couples’ verbal and physical interaction and the intimacy of the bed.
Ten years ago, Thaler and Sunstein introduced the notion of nudging to talk about how subtle changes in the ‘choice architecture’ can alter people’s behaviors in predictable ways. This idea was eagerly adopted in HCI and applied in multiple contexts, including health, sustainability and privacy. Despite this, we still lack an understanding of how to design effective technology-mediated nudges. In this paper we present a systematic review of the use of nudging in HCI research with the goal of laying out the design space of technology-mediated nudging – the why (i.e., which cognitive biases do nudges combat) and the how (i.e., what exact mechanisms do nudges employ to incur behavior change). All in all, we found 23 distinct mechanisms of nudging, grouped in 6 categories, and leveraging 15 different cognitive biases. We present these as a framework for technology-mediated nudging, and discuss the factors shaping nudges’ effectiveness and their ethical implications.
Assistive technologies such as screen readers and text editors have been used in the past to improve the accessibility and authoring of scientific and mathematical documents. However, most screen readers fail to narrate complex mathematical notations and expressions, as they skip symbols and information necessary for accurate narration of mathematical content. This study evaluates a new Accessible LaTeX Based Mathematical Document Authoring and Presentation (ALAP) tool, which assists people with visual impairments in reading and writing mathematical documents. ALAP includes features such as assistive debugging, a Math Mode for reading and writing mathematical notations, and automatic generation of an accessible PDF document. These features aim to improve the LaTeX debugging experience and make it simple for blind users to author mathematical content by narrating it in natural language through an integrated text-to-speech (TTS) engine. We evaluated ALAP by conducting a study with 18 visually impaired LaTeX users. The results showed that users preferred ALAP over another comparable LaTeX-based authoring tool and were relatively more comfortable completing the tasks while using ALAP.
Embodied Imagination: An Approach to Stroke Recovery Combining Participatory Performance and Interactive Technology
Participatory performance provides methods for exploring social identities and situations in ways that can help people to imagine new ways of being. Digital technologies provide tools that can help people envision these possibilities. We explore this combination through a performance workshop process designed to help stroke survivors imagine new physical and social possibilities by enacting fantasies of “things they always wanted to do”. This process uses performance methods combined with specially designed real-time movement visualisations to progressively build fantasy narratives that are enacted with and for other workshop participants. Qualitative evaluations suggest this process successfully stimulates participants’ embodied imagination and generates a diverse range of fantasies. The interactive and communal aspects of the workshop process appear to be especially important in achieving these effects. This work highlights how the combination of performance methods and interactive tools can bring a rich, prospective and political understanding of people’s lived experience to design.
Behind the Curtain of the "Ultimate Empathy Machine": On the Composition of Virtual Reality Nonfiction Experiences
Virtual Reality nonfiction (VRNF) is an emerging form of immersive media experience created for consumption using panoramic “Virtual Reality” headsets. VRNF promises nonfiction content producers the potential to create new ways for audiences to experience “the real”; allowing viewers to transition from passive spectators to active participants. Our current project is exploring VRNF through a series of ethnographic and experimental studies. In order to document the content available, we embarked on an analysis of VR documentaries produced to date. In this paper, we present an analysis of a representative sample of 150 VRNF titles released between 2012-2018. We identify and quantify 64 characteristics of the medium over this period, discuss how producers are exploiting the affordances of VR, and shed light on new audience roles. Our findings provide insight into the current state of the art in VRNF and provide a digital resource for other researchers in this area.
Empirical studies are a cornerstone of HCI research. Technical progress constantly enables new study methods. Online surveys, for example, make it possible to collect feedback from remote users. Progress in augmented and virtual reality enables collecting feedback on early designs. In-situ studies enable researchers to gather feedback in natural environments. While these methods have unique advantages and disadvantages, it is unclear if and how using a specific method affects the results. Therefore, we conducted a study with 60 participants comparing five different methods (online, virtual reality, augmented reality, lab setup, and in-situ) for evaluating early prototypes of smart artifacts. We asked participants to assess four different smart artifacts using standardized questionnaires. We show that the method significantly affects the study results and discuss implications for HCI research. Finally, we highlight further directions for overcoming the effect of the method used.
Effortless reading remains an issue for many Web users, despite a large number of readability guidelines available to designers. This paper presents a study of manual and automatic use of 39 readability guidelines in webpage evaluation. The study collected the ground-truth readability for a set of 50 webpages using eye-tracking with average and dyslexic readers (n = 79). It then matched the ground truth against human-based (n = 35) and automatic evaluations. The results validated 22 guidelines as being connected to readability. The comparison between human-based and automatic results also revealed a complex framework: algorithms were better or as good as human experts at evaluating webpages on specific guidelines – particularly those about low-level features of webpage legibility and text formatting. However, multiple guidelines still required a human judgment related to understanding and interpreting webpage content. These results contribute a guideline categorization laying the ground for future design evaluation methods.
Automated vehicles have to make decisions, such as driving maneuvers or rerouting, based on environment data and decision algorithms. An open question is whether ethical aspects should be considered in these algorithms. When all available decisions within a situation have fatal consequences, this leads to a dilemma. Contemporary discourse surrounding this issue is dominated by the trolley problem, a specific version of such a dilemma. Based on an outline of its origins, we discuss the trolley problem and its viability for helping to resolve questions of ethical decision making in automated vehicles. We show that the trolley problem serves several important functions but is an ill-suited benchmark for the success or failure of an automated algorithm. We argue that research and design should focus on avoiding trolley-like problems altogether rather than trying to solve an unsolvable dilemma, and we discuss alternative approaches for feasibly addressing ethical issues in automated agents.
Depression is an affective disorder with distinctive autobiographical memory impairments, including negative bias, overgeneralization and reduced positivity. Several clinical therapies address these impairments, and there is an opportunity to develop new supports for treatment by considering depression-associated memory impairments within design. We report on interviews with ten experts in treating depression, with expertise in both neuropsychology and cognitive behavioral therapies. The interviews explore approaches for addressing each of these memory impairments. We found consistent use of positive memories for treating all memory impairments, the challenge of direct retrieval, and the need to support the experience of positive memories. We aim to sensitize HCI researchers to the limitations of memory technologies, broaden their awareness of memory impairments beyond episodic memory recall, and inspire them to engage with this less explored design space. Our findings open up new design opportunities for memory technologies for depression, including positive memory banks for active encoding and selective retrieval, novel cues for supporting generative retrieval, and novel interfaces to strengthen the reliving of positive memories.
Kinaesthetic creativity refers to the body’s ability to generate alternate futures in activities such as role-playing in participatory design workshops. This has relevance not only to the design of methods for inspiring creativity but also to the design of systems that promote engaging experiences via bodily interaction. This paper probes this creative process by studying how dancers interact with technology to generate ideas. We developed a series of parameterized interactive visuals and asked dance practitioners to use them in generating movement materials. From our study, we define a taxonomy that comprises different relationships and movement responses dancers form with the visuals. Against this taxonomy, we describe six types of interaction patterns and demonstrate how dance creativity is driven by the ability to shift between these patterns. We then propose a set of interaction design qualities to support kinaesthetic creativity.
Rankings distill a large number of factors into simple comparative models to facilitate complex decision making. Yet key questions remain in the design of mixed-initiative systems for ranking, in particular how best to collect users’ preferences to produce high-quality rankings that users trust and employ in the real world. To address this challenge we evaluate the relative merits of three preference collection methods for ranking in a crowdsourced study. We find that with a categorical binning technique, users interact with a large amount of data quickly, organizing information using broad strokes. Alternative interaction modes using pairwise comparisons or sub-lists result in smaller, targeted input from users. We consider how well each interaction mode addresses design goals for interactive ranking systems. Our study indicates that the categorical approach provides the best value-added benefit to users, requiring minimal effort to create sufficient training data for the underlying ranking algorithm.
Auditory alarms that repeatedly interrupt users until they react are common. However, when an alarm repeats, our brains habituate to it and perceive it less and less, with reductions in both perception and attention-shifting: a phenomenon known as the repetition-suppression (RS) effect. To retain users’ perception and attention, this paper proposes and tests pitch- and intensity-modulated alarms. Its experimental findings suggest that the proposed modulated alarms can reduce RS, albeit in different patterns depending on whether pitch or intensity is the focus of the modulation. Specifically, pitch-modulated alarms were found to reduce RS more when the number of repetitions was small, while intensity-modulated alarms reduced it more as the number of repetitions increased. Based on these results, we make several recommendations for the design of improved repeating alarms, including which modulation approach should be adopted in various situations.
Computer science education is widely viewed as a path to empowerment for young people, potentially leading to higher education, careers, and development of computational thinking skills. However, few resources exist for people with cognitive disabilities to learn computer science. In this paper, we document our observations of a successful program in which young adults with cognitive disabilities are trained in computing concepts. Through field observations and interviews, we identify instructional strategies used by this group, accessibility challenges encountered by this group, and how instructors and students leverage peer learning to support technical education. Our findings lead to guidelines for developing tools and curricula to support young adults with cognitive disabilities in learning computer science.
Obtaining meaningful user consent is increasingly problematic in a world of numerous, heterogeneous digital services. Current approaches (e.g. agreeing to Terms and Conditions) are rooted in the idea of individual control despite growing evidence that users do not (or cannot) exercise such control in informed ways. We consider an alternative approach whereby users can opt to delegate consent decisions to an ecosystem of third-parties including friends, experts, groups and AI entities. We present the results of a study that used a technology probe at a large festival to explore initial public responses to this reframing — focusing on when and to whom users would delegate such decisions. The results reveal substantial public interest in delegating consent and identify differing preferences depending on the privacy context, highlighting the need for alternative decision mechanisms beyond the current focus on individual choice.
People with dyslexia face challenges expressing themselves in writing on social networking sites (SNSs). Such challenges come not only from the technicality of writing, but also from the self-representation aspect of sharing and communicating publicly on social networking sites such as Facebook. To empower people with dyslexia-style writing to express themselves more confidently on SNSs, we designed and implemented Additional Writing Help (AWH) – a writing assistance tool that proofreads text produced by users with dyslexia before they post on Facebook. AWH was powered by a neural machine translation (NMT) model that translates dyslexia-style to non-dyslexia-style writing. We evaluated the performance and the design of AWH through a week-long field study with 19 people with dyslexia and received highly positive feedback. Our field study demonstrated the value of providing better and more extensive writing support on SNSs, and the potential of AI for building a more inclusive Internet.
VIPBoard: Improving Screen-Reader Keyboard for Visually Impaired People with Character-Level Auto Correction
Modern touchscreen keyboards all rely on word-level auto-correction to handle input errors. Unfortunately, visually impaired users are deprived of this benefit because a screen-reader keyboard offers only character-level input and provides no correction ability. In this paper, we present VIPBoard, a smart keyboard for visually impaired people, which aims at improving the underlying keyboard algorithm without altering the current input interaction. Upon each tap, VIPBoard predicts the probability of each key considering both the touch location and a language model, and reads the most likely key, which saves calibration time when the touchdown point misses the target key. Meanwhile, the keyboard layout automatically scales according to users’ touch point location, which enables them to select other keys easily. A user study shows that compared with the current keyboard technique, VIPBoard can reduce the touch error rate by 63.0% and increase text entry speed by 12.6%.
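The key-prediction step described in this abstract can be sketched as a Bayesian combination of a spatial touch model and a language-model prior. The sketch below is illustrative only, not VIPBoard's actual implementation; all function names, the Gaussian touch model, and the example layout and prior are assumptions.

```python
import math

def spatial_likelihood(touch, key_center, sigma=1.0):
    """Gaussian likelihood of a touch point given a key's center (assumed model)."""
    dx = touch[0] - key_center[0]
    dy = touch[1] - key_center[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))

def predict_key(touch, key_centers, lm_prior):
    """Return the key maximizing P(key | touch) ∝ P(touch | key) * P(key),
    plus the normalized posterior over all keys."""
    scores = {k: spatial_likelihood(touch, c) * lm_prior.get(k, 1e-9)
              for k, c in key_centers.items()}
    total = sum(scores.values())
    posterior = {k: s / total for k, s in scores.items()}
    return max(posterior, key=posterior.get), posterior

# A touch lands between 'q' and 'w'; the language model strongly favors 'w',
# so 'w' is selected despite the spatially ambiguous touchdown point.
keys = {"q": (0.0, 0.0), "w": (1.0, 0.0), "e": (2.0, 0.0)}
prior = {"q": 0.05, "w": 0.90, "e": 0.05}
best, posterior = predict_key((0.45, 0.1), keys, prior)
```

A purely spatial keyboard would pick whichever key center is nearest to the touch; weighting by the language-model prior is what lets a character-level keyboard recover from a missed touchdown point.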
Phishing emails often disguise a link’s actual URL. Thus, common anti-phishing advice is to check a link’s URL before clicking, but email clients do not support this well. Automated phishing detection enables email clients to warn users that an email is suspicious, but current warnings are often not specific. We evaluated the effects on phishing susceptibility of (1) moving phishing warnings close to the suspicious link in the email, (2) displaying the warning on hover interactions with the link, and (3) forcing attention to the warning by deactivating the original link, forcing users to click the URL in the warning. We assessed the effectiveness of such link-focused phishing warning designs in a between-subjects online experiment (n=701). We found that link-focused phishing warnings reduced phishing click-through rates compared to email banner warnings; forced attention warnings were most effective. We discuss the implications of our findings for phishing warning design.
Data is changing how we design consumer products. Shoe production is a prime example: foot size, footstep pressure, and personal preferences can be used to design personalized shoes. Research on metamaterials, programmable materials, and computational composites illustrates the possibilities of creating complex data and material relationships. These new relationships allow us to look at future products almost like software apps, becoming a kind of product-service system whose focus is on iterative personalization improvement over time. Can we create systems of such data-driven objects that in turn allow us to design new objects informed by the data trail? In this paper we report on four Research through Design (RtD) project iterations that explore this challenge and provide a set of insights on how to close this new iterative loop.
When automating tasks using some form of artificial intelligence, some inaccuracy in the result is virtually unavoidable. In many cases, the user must decide whether to try the automated method again, or fix it themselves using the available user interface. We argue this decision is influenced by both perceived automation accuracy and degree of task “controllability” (how easily and to what extent an automated result can be manually modified). This relationship between accuracy and controllability is investigated in a 750-participant crowdsourced experiment using a controlled, gamified task. With high controllability, self-reported satisfaction remained constant even under very low accuracy conditions, and overall, a strong preference was observed for using manual control rather than automation, despite much slower performance and regardless of very poor controllability.
Jewelry weaves into our everyday lives as no other wearable does. It comes in many wearable forms, is fashionable, and can adorn any part of the body. In this paper, through an exploratory, Research through Design (RtD) process, we tap into this vast potential space of input interaction that jewelry can enable. We do so by first identifying a small set of fundamental structural elements — called Jewelements — that any jewelry is composed of, and then defining their properties that enable the interaction. We leverage this synthesis along with observational data and literature to formulate a design space of jewelry-enabled input techniques. This work encapsulates both the extensions of common existing input methods (e.g., touch) as well as new ones inspired by jewelry. Furthermore, we discuss our prototypical sensor-based implementations. Through this work, we invite the community to engage in the conversation on how jewelry as a material can help shape wearable-based input.
Advances in tracking technology and wireless headsets enable walking as a means of locomotion in Virtual Reality. When exploring virtual environments larger than room-scale, it is often desirable to increase users’ perceived walking speed, for which we investigate three methods. (1) Ground-Level Scaling increases users’ avatar size, allowing them to walk farther. (2) Eye-Level Scaling enables users to walk through a World in Miniature, while maintaining a street-level view. (3) Seven-League Boots amplifies users’ movements along their walking path. We conduct a study comparing these methods and find that users feel most embodied using Ground-Level Scaling and consequently increase their stride length. Using Seven-League Boots, unlike the other two methods, diminishes positional accuracy at high gains, and users modify their walking behavior to compensate for the lack of control. We conclude with a discussion of each technique’s strengths and weaknesses and the types of situations for which it might be appropriate.
Since the release of the first activity tracker, there has been a steady increase in the number of sensors embedded in wearable devices and with it in the amount and diversity of information that can be derived from these sensors. This development leads to novel privacy threats for users. In a web survey with 248 participants, we explored whether users’ willingness to share private data is dependent on how the data is requested by an application. Specifically, requests can be formulated as access to sensor data or as access to information derived from the sensor data (e.g., accelerometer vs. sleep quality). We show that non-expert users lack an understanding of how the two representation levels relate to each other. The results suggest that the willingness to share sensor data over derived information is governed by whether the derived information has positive or negative connotations (e.g., training intensity vs. life expectancy). Using the results of the survey, we derive implications for supporting users in protecting their private data collected via wearable sensors.
As reading increasingly shifts from paper to online media, many web browsers now provide a “Reader View,” which modifies web page layout and design for better readability. However, research has yet to establish whether Reader Views are effective in improving readability and how they might change the user experience. We characterize how Mozilla Firefox’s Reader View significantly reduces the visual complexity of websites by excluding menus, images, and other content. We then conducted an online study with 391 participants (including 42 who self-reported having been diagnosed with dyslexia), showing that compared to standard websites the Reader View increased reading speed by 5% for readers on average, and significantly improved perceived readability and visual appeal. We suggest guidelines for the design of websites and browsers that better support people with varying reading skills.
Many popular social networking and microblogging sites support verified accounts—user accounts that are deemed of public interest and whose owners have been authenticated by the site. Importantly, the content of messages contributed by verified account owners is not verified. Such messages may be factually correct, or not. This paper investigates whether users confuse authenticity with credibility by posing the question: Are users more likely to believe content from verified accounts than from non-verified accounts? We conduct two online studies, a year apart, with 748 and 2041 participants respectively, to assess how the presence or absence of verified account indicators influences users’ perceptions of tweets. Surprisingly, across both studies, we find that—in the context of unfamiliar accounts—most users can effectively distinguish between authenticity and credibility. The presence or absence of an authenticity indicator has no significant effect on willingness to share a tweet or take action based on its contents.
Does Who Matter?: Studying the Impact of Relationship Characteristics on Receptivity to Mobile IM Messages
This study examines the characteristics of mobile instant-messaging users’ relationships with their social contacts and the effects of both relationship and interruption context on four measures of receptivity: Attentiveness, Responsiveness, Interruptibility, and Opportuneness. Overall, interruption context overshadows relationship characteristics as predictors of all four of these facets of receptivity; this overshadowing was most acute for Interruptibility and Opportuneness, but existed for all factors. In addition, while Mobile Maintenance Expectation and Activity Engagement were negatively correlated with all receptivity measures, each such measure had its own set of predictors, highlighting the conceptual differences among the measures. Finally, delving more deeply into potential relationship effects, we found that a single, simple closeness question was as effective at predicting receptivity as the 12-item Unidimensional Relationship Closeness Scale.
Twitter continues to be used increasingly for communication related to advocacy, activism, and social change. This is also the case for the disability community. In light of the recently proposed ADA Education and Reform Act in the United States, we investigate factors for the effectiveness of sharing or retweeting messages about topics affecting the rights of people with disabilities. We perform a multifaceted study of the #HandsOffMyADA campaign against the proposed H.R.620 bill to: (1) explore how communication via Twitter compares to previous disability rights movements; (2) characterize the campaign in terms of hashtags, user groups, and content such as accessible multimedia that contribute to the dissemination of campaign messages; (3) identify major themes in tweets and responses, and their variation among user groups; and (4) understand how the disability community mobilized for this campaign compared to previous Twitter initiatives.
To better understand the issues designers face as they interact with developers and use developer tools to create websites, we conducted a formative investigation consisting of interviews, a survey, and an analysis of professional design documents. Based on insights gained from these efforts, we developed Poirot, a web inspection tool for designers that enables them to make style edits to websites using a familiar graphical interface. We compared Poirot to Chrome DevTools in a lab study with 16 design professionals. We observed common problems designers experience when using Chrome DevTools and found that when using Poirot, designers were more successful in accomplishing typical design tasks (97% vs. 63%). In addition, we found that Poirot had a significantly lower perceived cognitive load and was overwhelmingly preferred by the designers in our study.
We used content analysis of in-app driver survey responses, customer support tickets, and tweets, as well as face-to-face interviews of deaf and hard-of-hearing (DHH) Uber drivers, to better understand the DHH driver experience. Here we describe the challenges DHH drivers experience and how they address those difficulties via Uber’s accessibility features and their own workarounds. We also identify and discuss design and product opportunities to improve the DHH driver experience on Uber.
Errors and biases are earning algorithms increasingly malignant reputations in society. A central challenge is that algorithms must bridge the gap between high-level policy and on-the-ground decisions, making inferences in novel situations where the policy or training data do not readily apply. In this paper, we draw on the theory of street-level bureaucracies, which describes how human bureaucrats such as police and judges interpret policy to make on-the-ground decisions. We present by analogy a theory of street-level algorithms, the algorithms that bridge the gaps between policy and decisions about people in a socio-technical system. We argue that unlike street-level bureaucrats, who reflexively refine their decision criteria as they reason through a novel situation, street-level algorithms at best refine their criteria only after the decision is made. This loop-and-a-half delay results in illogical decisions when handling new or extenuating circumstances. This theory suggests designs for street-level algorithms that draw on historical design patterns for street-level bureaucracies, including mechanisms for self-policing and recourse in the case of error.
Traditional approaches for ensuring high-quality crowdwork have failed to achieve high accuracy on difficult problems. Aggregating redundant answers often fails on the hardest problems, when the majority is confused. Argumentation has been shown to be effective in mitigating these drawbacks. However, existing argumentation systems only support limited interactions and show workers general justifications, not context-specific arguments targeted to their reasoning. This paper presents Cicero, a new workflow that improves crowd accuracy on difficult tasks by engaging workers in multi-turn, contextual discussions through real-time, synchronous argumentation. Our experiments show that compared to previous argumentation systems, which only improve average individual worker accuracy by 6.8 percentage points on the Relation Extraction domain, our workflow achieves a 16.7 percentage point improvement. Furthermore, previous argumentation approaches do not apply to tasks with many possible answers; in contrast, Cicero works well in these cases, raising accuracy from 66.7% to 98.8% on the Codenames domain.
Personalising the TV Experience using Augmented Reality: An Exploratory Study on Delivering Synchronised Sign Language Interpretation
Augmented Reality (AR) technology has the potential to extend the screen area beyond the rigid frames of televisions. The additional display area can be used to augment televisions (TVs) with extra information tailored to individuals, for instance, the provision of access services like sign language interpretations. We invited 23 users of signed content (11 in the UK, 12 in Germany) to evaluate three methods of watching a sign language interpreted programme – one traditional in-vision method with signed programme content on TV and two AR-enabled methods in which an AR sign language interpreter (a ‘half-body’ version and a ‘full-body’ version) is projected just outside the frame of the TV presenting the programme. In the UK, participants were split three ways in their preferences, while in Germany, half the participants preferred the traditional method, followed closely by the ‘half-body’ version. We discuss our participants’ reasoning behind their preferences and implications for future research.
FTVR in VR: Evaluation of 3D Perception With a Simulated Volumetric Fish-Tank Virtual Reality Display
Spherical fish tank virtual reality (FTVR) displays attempt to create a virtual “crystal ball” experience using head-tracked rendering. Almost all of these systems have omitted stereo cues, making them easy to build, but it is not clear how much this omission degrades the 3D experience. In this study, we evaluate performance and subjective effects of stereo on 3D perception and interaction tasks with a spherical FTVR display. To control for calibration error and tracking latency, we perform the evaluation on a simulated spherical display in VR. The results of our study provide a clear recommendation for the design and use of spherical FTVR displays: while omitting stereo may not be readily apparent for users, their performance will be significantly degraded (20% – 91% increase in median task time). Therefore, including stereo viewing in spherical displays is critical for use in FTVR.
Despite growing concerns about the security and privacy of Internet of Things (IoT) devices, consumers generally do not have access to security and privacy information when purchasing these devices. We interviewed 24 participants about IoT devices they purchased. While most had not considered privacy and security prior to purchase, they reported becoming concerned later due to media reports, opinions shared by friends, or observing unexpected device behavior. Those who sought privacy and security information before purchase reported that it was difficult or impossible to find. We asked interviewees to rank factors they would consider when purchasing IoT devices; after features and price, privacy and security were ranked among the most important. Finally, we showed interviewees our prototype privacy and security label. Almost all found it to be accessible and useful, encouraging them to incorporate privacy and security in their IoT purchase decisions.
An Explanation of Fitts’ Law-like Performance in Gaze-Based Selection Tasks Using a Psychophysics Approach
Eye gaze as an input method has been studied since the 1990s, with varied results: some studies found gaze to be more efficient than traditional input methods like a mouse, others far behind. Comparisons are often backed up by Fitts’ Law without explicitly acknowledging the ballistic nature of saccadic eye movements. Using a vision science-inspired model, we here show that a Fitts’-like distribution of movement times can arise due to the execution of secondary saccades, especially when targets are small. Study participants selected circular targets using gaze. Seven different target sizes and two saccade distances were used. We then determined performance across target sizes for different sampling windows (“dwell times”) and predicted an optimal dwell time range. Best performance was achieved for large targets reachable by a single saccade. Our findings highlight that Fitts’ Law, while a suitable approximation in some cases, is an incomplete description of gaze interaction dynamics.
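For reference, the formulation of Fitts' Law conventionally used in HCI (the Shannon formulation) predicts movement time $MT$ from the distance $D$ to a target and its width $W$, with empirically fitted constants $a$ and $b$:

$$MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)$$

The logarithmic term is the task's index of difficulty in bits; the abstract's point is that saccadic eye movements are ballistic, so a Fitts'-like fit for gaze can emerge from corrective secondary saccades rather than from continuously controlled movement.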
This paper presents MultiTrack, a commodity WiFi-based human sensing system that can track multiple users and recognize the activities they perform simultaneously. Such a system can enable easy and large-scale deployment for multi-user tracking and sensing without the need for additional sensors, through the use of existing WiFi devices (e.g., desktops, laptops, and smart appliances). The basic idea is to identify and extract the signal reflection corresponding to each individual user with the help of multiple WiFi links and all the available WiFi channels at 5GHz. Given the extracted signal reflection of each user, MultiTrack examines the path of the reflected signals at multiple links to simultaneously track multiple users. It further reconstructs the signal profile of each user as if only a single user had performed the activity in the environment, to facilitate multi-user activity recognition. We evaluate MultiTrack in different multipath environments with up to 4 users for multi-user tracking and up to 3 users for activity recognition. Experimental results show that our system can achieve decimeter localization accuracy and over 92% activity recognition accuracy under multi-user scenarios.
What is Mixed Reality (MR)? To revisit this question given the many recent developments, we conducted interviews with ten AR/VR experts from academia and industry, as well as a literature survey of 68 papers. We find that, while there are prominent examples, there is no universally agreed on, one-size-fits-all definition of MR. Rather, we identified six partially competing notions from the literature and experts’ responses. We then started to isolate the different aspects of reality relevant for MR experiences, going beyond the primarily visual notions and extending to audio, motion, haptics, taste, and smell. We distill our findings into a conceptual framework with seven dimensions to characterize MR applications in terms of the number of environments, number of users, level of immersion, level of virtuality, degree of interaction, input, and output. Our goal with this paper is to support classification and discussion of MR applications’ design and provide a better means to researchers to contextualize their work within the increasingly fragmented MR landscape.
In this day and age of identity theft, are we likely to trust machines more than humans for handling our personal information? We answer this question by invoking the concept of “machine heuristic,” which is a rule of thumb that machines are more secure and trustworthy than humans. In an experiment (N = 160) that involved making airline reservations, users were more likely to reveal their credit card information to a machine agent than a human agent. We demonstrate that cues on the interface trigger the machine heuristic by showing that those with higher cognitive accessibility of the heuristic (i.e., stronger prior belief in the rule of thumb) were more likely than those with lower accessibility to disclose to a machine, but they did not differ in their disclosure to a human. These findings have implications for design of interface cues conveying machine vs. human sources of our online interactions.
Critter: Augmenting Creative Work with Dynamic Checklists, Automated Quality Assurance, and Contextual Reviewer Feedback
Checklists and guidelines have played an increasingly important role in complex tasks ranging from the cockpit to the operating theater. Their role in creative tasks like design is less explored. In a needfinding study with expert web designers, we identified designers’ challenges in adhering to a checklist of design guidelines. We built Critter, which addressed these challenges with three components: Dynamic Checklists that progressively disclose guideline complexity with a self-pruning hierarchical view, AutoQA to automate common quality assurance checks, and guideline-specific feedback provided by a reviewer to highlight mistakes as they appear. In an observational study, we found that the more engaged a designer was with Critter, the fewer mistakes they made in following design guidelines. Designers rated the AutoQA and contextual feedback experience highly, and provided feedback on the tradeoffs of the hierarchical Dynamic Checklists. We additionally found that a majority of designers rated the AutoQA experience as excellent and felt that it increased the quality of their work. Finally, we discuss broader implications for supporting complex creative tasks.
It is reasonable to expect trusted news organizations to have more engaged users. However, given low levels of trust in media and the several intermediaries involved in digital news consumption, recent studies posit that trust and usage may not be related. We argue that while trust may not relate to overall news usage, given that much of it is incidental, it could still explain intentional usage. We correlated passively metered usage from digital trace data on 35 national news outlets in the US with their trustworthiness from a nationally representative survey, for three discrete months. We find no association between trust and overall user engagement, but a positive relationship between trustworthiness and direct visits, the latter a measure of intentional usage. These relationships held for outlets regardless of their partisan leanings, multi-platform presence, and mainstream nature.
We report the results of a crowdsourced experiment that measured the accuracy of motion outlier detection in multivariate, animated scatterplots. The targets were outliers either in speed or direction of motion, and were presented with varying levels of saliency in dimensions that are irrelevant to the task of motion outlier detection (e.g., color, size, position). We found that participants had trouble finding the outlier when it lacked irrelevant salient features and that visual channels contribute unevenly to the odds of an outlier being correctly detected. Direction of motion contributes the most to accurate detection of speed outliers, and position contributes the most to accurate detection of direction outliers. We introduce the concept of saliency deficit in which item importance in the data space is not reflected in the visualization due to a lack of saliency. We conclude that motion outlier detection is not well supported in multivariate animated scatterplots.
While previous studies of Conversational Agents (CAs; e.g., Siri, Google Assistant, Alexa, and Cortana) have focused on evaluating usability and exploring capabilities of these systems, little work has examined users’ affective experiences. In this paper we present a survey study with 171 participants to examine CA users’ affective experiences. Specifically, we present four major usage scenarios, users’ affective responses in these scenarios, and the factors that influenced the affective responses. We found that users’ overall experience was positive, with interest being the most salient positive emotion. Affective responses differed depending on the scenarios. Both pragmatic and hedonic qualities influenced affect. The factors underlying pragmatic quality are: helpfulness, proactivity, fluidity, seamlessness, and responsiveness. The factors underlying hedonic quality are: comfort in human-machine conversation, pride of using cutting-edge technology, fun during use, perception of having a human-like assistant, concern about privacy, and fear of causing distraction.
The increased use of machine learning in recent years led to large volumes of data being manually labeled via crowdsourcing microtasks completed by humans. This brought about dehumanization effects, namely, when task requesters overlook the humans behind the task, leading to issues of ethics (e.g., unfair payment) and amplification of human biases, which are transferred into training data and affect machine learning in the real world. We propose a framework that allocates microtasks considering human factors of workers such as demographics and compensation. We deployed our framework to a popular crowdsourcing platform and conducted experiments with 1,919 workers collecting 160,345 human judgments. By routing microtasks to workers based on demographics and appropriate pay, our framework mitigates biases in the contributor sample and increases the hourly pay given to contributors. We discuss potential extensions and how it can promote transparency in crowdsourcing.
Emerging technologies such as Augmented Reality (AR) have the potential to radically transform education by making challenging concepts visible and accessible to novices. In this project, we designed a HoloLens-based system in which collaborators engage in an unstructured learning activity about the invisible physics involved in audio speakers. They learned topics ranging from spatial knowledge, such as the shape of magnetic fields, to abstract conceptual knowledge, such as the relationships between electricity and magnetism. We compared participants’ learning, attitudes, and collaboration with a tangible interface across multiple experimental conditions containing varying layers of AR information. We found that educational AR representations were beneficial for learning specific knowledge and for increasing participants’ self-efficacy (i.e., their confidence in their ability to learn physics concepts). However, we also found that participants in conditions that did not contain AR educational content learned some concepts better than other groups and became more curious about physics. We discuss learning and collaboration differences, as well as the benefits and detriments of implementing augmented reality for unstructured learning activities.
We introduce Serpentine, a self-powered sensor that is a reversibly deformable cord capable of sensing a variety of human input. The material properties and structural design of Serpentine allow it to be flexible, twistable, stretchable and squeezable, enabling a broad variety of expressive input modalities. The sensor operates using the principle of Triboelectric Nanogenerators (TENG), which allows it to sense mechanical deformation without an external power source. The affordances of the cord include six interactions—Pluck, Twirl, Stretch, Pinch, Wiggle and Twist. Serpentine demonstrates the ability to simultaneously recognize these inputs through a single physical interface. A 12-participant user study illustrates 95.7% accuracy for a user-dependent recognition model using a realtime system and 92.17% for user-independent offline detection. We conclude by demonstrating how Serpentine can be employed in everyday ubiquitous computing applications.
While Virtual Reality continues to increase in fidelity, it remains an open question how to effectively reflect the user’s movements and provide congruent feedback in virtual environments. We present VRsneaky, a system for producing auditory movement feedback, which helps participants orient themselves in a virtual environment by providing footstep sounds. The system reacts to the user’s specific gait features and adjusts the audio accordingly. In a user study with 28 participants, we found that VRsneaky increases users’ sense of presence as well as awareness of their own posture and gait. Additionally, we find that increasing auditory realism significantly influences certain characteristics of participants’ gait. Our work shows that gait-aware audio feedback is a means to increase presence in virtual environments. We discuss opportunities and design requirements for future scenarios where users walk through immersive virtual worlds.
This mixed-methods study examines the effects of a tablet-based checklist system on team performance during the dynamic and safety-critical process of trauma resuscitation. We compared team performance from 47 resuscitations that used a paper checklist to that from 47 cases with a digital checklist to determine if digitizing a checklist led to improvements in task completion rates and in how fast tasks were initiated for the 18 most critical assessment and treatment tasks. We also examined whether checklist compliance increased with the digital design. We found that using the digital checklist led to more frequent completions of the initial airway assessment task but fewer completions of ear and lower extremities exams. We did not observe any significant differences in time to task performance, but found increased compliance with the checklist. Although improvements in team performance with the digital checklist were minor, our findings are important because they showed no adverse effects as a result of the digital checklist introduction. We conclude by discussing the takeaways and implications of these results for effective digitization of medical work.
Understanding users’ behavior at home is central to behavioral research. For example, social researchers are interested in studying domestic abuse, and healthcare professionals are interested in caregiver-patient interaction. Today, such studies rely on diaries and questionnaires, which are subjective, error-prone, and hard to sustain in longitudinal studies. We introduce Marko, a system that automatically collects behavior-related data without asking people to write diaries or wear sensors. Marko transmits a low-power wireless signal and analyzes its reflections from the environment. It maps those reflections to how users interact with the environment (e.g., accessing the medication cabinet) and with each other (e.g., watching TV together). It provides novel algorithms for identifying who-does-what and for bootstrapping the system in new homes without asking users for new annotations. We evaluate Marko with a one-month deployment in six homes, and demonstrate its value for studying couple relationships and caregiver-patient interaction.
Using gamepad-driven devices like games consoles is an activity frequently shared with others. Thus, shoulder-surfing is a serious threat. To address this threat, we present the first investigation of shoulder-surfing resistant text password entry on gamepads by (1) identifying the requirements of this context; (2) assessing whether shoulder-surfing resistant authentication schemes proposed in non-gamepad contexts can be viably adapted to meet these requirements; (3) proposing “Colorwheels”, a novel shoulder-surfing resistant authentication scheme specifically geared towards this context; (4) using two different methodologies proposed in the literature for evaluating shoulder-surfing resistance to compare “Colorwheels”, on-screen keyboards (the de facto standard in this context), and an existing shoulder-surfing resistant scheme which we identified during our assessment and adapted for the gamepad context; (5) evaluating all three schemes regarding their usability. Having applied different methodologies to measure shoulder-surfing resistance, we discuss their strengths and pitfalls and derive recommendations for future research.
Quality, diversity, and size of training data are critical factors for learning-based gaze estimators. We create two datasets satisfying these criteria for near-eye gaze estimation under infrared illumination: a synthetic dataset using anatomically-informed eye and face models with variations in face shape, gaze direction, pupil and iris, skin tone, and external conditions (2M images at 1280×960), and a real-world dataset collected with 35 subjects (2.5M images at 640×480). Using these datasets we train neural networks performing with sub-millisecond latency. Our gaze estimation network achieves 2.06(±0.44)° of accuracy across a wide 30°×40° field of view on real subjects excluded from training and 0.5° best-case accuracy (across the same FOV) when explicitly trained for one real subject. We also train a pupil localization network which achieves higher robustness than previous methods.
Player engagement within a game is often influenced by its difficulty curve: the pace at which in-game challenges become harder. Thus, finding an optimal difficulty curve is important. In this paper, we present a flexible and formal approach to transforming game difficulty curves by leveraging function composition. This allows us to describe changes to difficulty curves, such as making them “smoother”, in a more precise way. In an experiment with 400 players, we used function composition to modify the existing difficulty curve of the puzzle game Paradox to generate new curves. We found that transforming difficulty curves in this way impacted player engagement, including the number of levels completed and the estimated skill needed to complete those levels, as well as perceived competence. Further, we found some transformed curves dominated others with respect to engagement, indicating that different design goals can be traded-off by considering a subset of curves.
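The function-composition idea can be sketched in a few lines of code. The base curve and the smoothing transform below are purely illustrative assumptions, not the authors' actual curves from Paradox:

```python
# Illustrative sketch of transforming a difficulty curve via function
# composition; the curve values and the transform are hypothetical.

def base_difficulty(level: int) -> float:
    """A hypothetical base curve: difficulty rises in uneven steps."""
    steps = [1.0, 1.2, 2.5, 2.6, 4.0, 4.1, 6.0]
    return steps[min(level, len(steps) - 1)]

def smooth(curve, window: int = 3):
    """Compose a moving-average transform with an existing curve,
    yielding a new, 'smoother' difficulty curve."""
    def smoothed(level: int) -> float:
        lo = max(0, level - window + 1)
        samples = [curve(l) for l in range(lo, level + 1)]
        return sum(samples) / len(samples)
    return smoothed

smoother = smooth(base_difficulty)  # the composed curve: smooth ∘ base
print([round(smoother(l), 2) for l in range(7)])
```

Because each transform takes a curve and returns a curve, transforms can be chained, which is what makes the description of changes like "smoother" precise and reusable.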
Trigger-action programming (TAP) is a programming model enabling users to connect services and devices by writing if-then rules. As such systems are deployed in increasingly complex scenarios, users must be able to identify programming bugs and reason about how to fix them. We first systematize the temporal paradigms through which TAP systems could express rules. We then identify ten classes of TAP programming bugs related to control flow, timing, and inaccurate user expectations. We report on a 153-participant online study where participants were assigned to a temporal paradigm and shown a series of pre-written TAP rules. Half of the rules exhibited bugs from our ten bug classes. For most of the bug classes, we found that the presence of a bug made it harder for participants to correctly predict the behavior of the rule. Our findings suggest directions for better supporting end-user programmers.
Early childhood developmental screening is critical for timely detection and intervention. babyTRACKS (formerly Baby CROINC: CROwd INtelligence Curation) is a free, live, interactive developmental tracking mobile app with over 3,000 children’s diaries. Parents write or select short milestone texts, like “began taking first steps,” to record their babies’ developmental achievements, and receive crowd-based percentiles to evaluate development and catch potential delays.
Currently, an expert-based Curated Crowd Intelligence (CCI) process manually groups incoming novel parent-authored milestone texts according to their similarity to existing milestones in the database (for example, starting to walk), or determines that a milestone represents a new developmental concept not seen before in another child’s diary. CCI cannot scale well, however, and babyTRACKS is mature enough, with a rich enough database of existing milestone texts, to now consider machine learning tools to replace or assist the human curators. Three new studies explore (1) the usefulness of automation, by analyzing the human cost of CCI and how the work is currently broken down; (2) the validity of automation, by testing the inter-rater reliability of curators; and (3) the value of automation, by appraising the “real world” clinical value of milestones when assessing child development.
We conclude that automation can indeed be appropriate and helpful for a large percentage, though not all, of CCI work. We further establish realistic upper bounds for algorithm performance; confirm that the babyTRACKS milestones dataset is valid for training and testing purposes; and verify that it represents clinically meaningful developmental information.
Players’ physical experience is a critical factor to consider in designing motion-based games played through motion-sensing game consoles or virtual reality devices. However, adjusting the physical challenge involved in a motion-based game is difficult and tedious, as it is typically done manually by level designers on a trial-and-error basis. In this paper, we propose a novel approach for automatically synthesizing levels for motion-based games that can achieve desired physical movement goals. By formulating the level design problem as a trans-dimensional optimization problem which is solved by a reversible-jump Markov chain Monte Carlo technique, we show that our approach can automatically synthesize a variety of game levels, each carrying the desired physical movement properties. To demonstrate the generality of our approach, we synthesize game levels for two different types of motion-based games and conduct a user study to validate the effectiveness of our approach.
Time-lapse videos are often navigated by scrubbing with a slider. When networks are slow or images are large, however, even thumbnail versions load so slowly that scrubbing is limited to the start of the video. We developed a frame-loading technique called spread-loading that enables scrubbing regardless of delivery rate. Spread-loading orders frame delivery to maximize coverage of the entire sequence; this provides a temporal overview of the entire video that can be fully navigated at any time during delivery. The overview initially has a coarse temporal resolution, becoming finer-grained with each new frame. We compared spread-loading with traditional linear loading in a study where participants were asked to find specific episodes in a long time-lapse sequence, using three views with increasing levels of detail. Results show that participants found target episodes significantly and substantially faster with spread-loading, regardless of whether they could click to change the load point. Users rated spread-loading as requiring less effort, and strongly preferred the new technique.
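One way to order frames so that coverage of the whole sequence grows as evenly as possible is breadth-first bisection: load both endpoints, then repeatedly load the midpoints of the remaining gaps. This scheme is a plausible sketch of the idea, not necessarily the paper's exact frame-ordering algorithm:

```python
# Hypothetical sketch of a spread-loading frame order: each newly delivered
# frame bisects a remaining gap, so temporal coverage refines level by level.

def spread_order(n: int) -> list[int]:
    """Return the indices of n frames, ordered to maximize temporal coverage."""
    if n < 2:
        return list(range(n))
    order = [0, n - 1]                     # anchor both ends first
    intervals = [(0, n - 1)]
    while intervals:
        next_intervals = []
        for lo, hi in intervals:
            if hi - lo < 2:                # gap already fully covered
                continue
            mid = (lo + hi) // 2           # bisect this gap
            order.append(mid)
            next_intervals += [(lo, mid), (mid, hi)]
        intervals = next_intervals
    return order

print(spread_order(9))  # → [0, 8, 4, 2, 6, 1, 3, 5, 7]
```

After the first k frames arrive in this order, the loaded frames are roughly evenly spaced across the video, so scrubbing anywhere yields a nearby frame whose temporal distance halves with each completed level.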
Pan and zoom timelines and sliders help us navigate large time series data. However, designing efficient interactions can be difficult. We study pan and zoom methods via crowd-sourced experiments on mobile and computer devices, asking which designs and interactions provide faster target acquisition. We find that visual context should be limited for low-distance navigation, but added for far-distance navigation; that timelines should be oriented along the longer axis, especially on mobile; and that, as compared to default techniques, double click, hold, and rub zoom appear to scale worse with task difficulty, whereas brush and especially ortho zoom seem to scale better. Software and data used in this research are available as open source.
Investigating Implicit Gender Bias and Embodiment of White Males in Virtual Reality with Full Body Visuomotor Synchrony
Previous research has shown that when White people embody a Black avatar in virtual reality (VR) with full body visuomotor synchrony, this can reduce their implicit racial bias. In this paper, we put men in female and male avatars in VR with full visuomotor synchrony using wearable trackers and investigated implicit gender bias and embodiment. We found that participants embodied in female avatars displayed significantly higher levels of implicit gender bias than those embodied in male avatars. The implicit gender bias actually increased after exposure to female embodiment in contrast to male embodiment. Results also showed that participants felt embodied in their avatars regardless of gender matching, demonstrating that wearable trackers can be used for a realistic sense of avatar embodiment in VR. We discuss the future implications of these findings for both VR scenarios and embodiment technologies.
Creating haptic experiences often entails inventing, modifying, or selecting specialized hardware. However, interaction designers are rarely engineers, and 30 years of haptic inventions are buried in a fragmented literature that describes devices mechanically rather than by potential purpose. We conceived of Haptipedia to unlock this trove of examples: Haptipedia presents a device corpus for exploration through metadata that matter to both device and interaction designers. It is a taxonomy of device attributes that go beyond physical description to capture potential utility, applied to a growing database of 105 grounded force-feedback devices, and accessed through a public visualization that links utility to morphology. Haptipedia’s design was driven by both systematic review of the haptic device literature and rich input from diverse haptic designers. We describe Haptipedia’s reception (including hopes it will redefine device reporting standards) and our plans for its sustainability through community participation.
Increasingly, algorithms are used to make important decisions across society. However, these algorithms are usually poorly understood, which can reduce transparency and evoke negative emotions. In this research, we seek to learn design principles for explanation interfaces that communicate how decision-making algorithms work, in order to help organizations explain their decisions to stakeholders, or to support users’ “right to explanation”. We conducted an online experiment where 199 participants used different explanation interfaces to understand an algorithm for making university admissions decisions. We measured users’ objective and self-reported understanding of the algorithm. Our results show that both interactive explanations and “white-box” explanations (i.e. that show the inner workings of an algorithm) can improve users’ comprehension. Although the interactive approach is more effective at improving comprehension, it comes with a trade-off of taking more time. Surprisingly, we also find that users’ trust in algorithmic decisions is not affected by the explanation interface or their level of comprehension of the algorithm.
Effects of unintended latency on gamer performance have been widely reported. End-to-end latency can be corrected by post-input manipulation of activation times, but this gives the player an unnatural gameplay experience. For moving-target selection games such as Flappy Bird, this paper presents a predictive model of latency's effect on error rate and a novel method that compensates for latency effects by adjusting the game's geometry design — e.g., by modifying the size of the selection region. Without manipulating the game clock, this can keep the user's error rate constant even if the end-to-end latency of the system changes. The approach extends the current model of moving-target selection with two additional assumptions about the effects of latency: (1) latency reduces players' cue-viewing time and (2) latency pushes the mean of the input distribution backward. The proposed model and method have been validated through precise experiments.
This paper draws on a collaborative project called the Africatown Activation to examine the role design practices play in contributing to (or conspiring against) the flourishing of the Black community in Seattle, Washington. Specifically, we describe the efforts of a community group called Africatown to design and build an installation that counters decades of disinvestment and ongoing displacement in the historically Black Central Area neighborhood. Our analysis suggests that despite efforts to include community, conventional design practices may perpetuate forms of institutional racism: enabling activities of community engagement that may further legitimate racialized forms of displacement. We discuss how focusing on amplifying the legacies of imagination already at work may help us move beyond a simple reading of design as the solution to systemic forms of oppression.
Cross-Device Taxonomy: Survey, Opportunities and Challenges of Interactions Spanning Across Multiple Devices
Designing interfaces or applications that move beyond the bounds of a single device screen enables new ways to engage with digital content. Research addressing the opportunities and challenges of interactions with multiple devices in concert is of continued focus in HCI research. To inform the future research agenda of this field, we contribute an analysis and taxonomy of a corpus of 510 papers in the cross-device computing domain. For both new and experienced researchers in the field we provide: an overview, historic trends and unified terminology of cross-device research; discussion of major and under-explored application areas; mapping of enabling technologies; synthesis of key interaction techniques spanning across multiple devices; and review of common evaluation strategies. We close with a discussion of open issues. Our taxonomy aims to create a unified terminology and common understanding for researchers in order to facilitate and stimulate future cross-device research.
Millions of people worldwide contribute content to peer production repositories that serve human information needs and provide vital world knowledge to prominent artificial intelligence systems. Yet, extreme gender participation disparities exist in which men significantly outnumber women. A central concern has been that due to self-focus bias, these disparities can lead to corresponding gender content disparities, in which content of interest to men is better represented than content of interest to women. This paper investigates the relationship between participation and content disparities in OpenStreetMap. We replicate findings that women are dramatically under-represented as OSM contributors, and observe that men and women contribute different types of content and do so about different places. However, the character of these differences confounds simple narratives about self-focus bias: we find that on a proportional basis, men produced a higher proportion of contributions in feminized spaces compared to women, while women produced a higher proportion of contributions in masculinized spaces compared to men.
Commercial social VR applications represent a diverse and evolving ecology with competing models of what it means to be social in VR. Drawing from expert interviews, this paper examines how the creators of different social VR applications think about how their platforms frame, support, shape, or constrain social interaction. The study covers a range of applications including: Rec Room, High Fidelity, VRChat, Mozilla Hubs, Altspace VR, AnyLand, and Facebook Spaces. We contextualize design choices underlying these applications, with particular attention paid to the ways that industry experts perceive, and seek to shape, the relationship between user experiences and design choices. We underscore considerations related to: (1) aesthetics of place, (2) embodied affordances, (3) social mechanics, and (4) tactics for shaping social norms and mitigating harassment. Drawing on this analysis, we discuss the stakes of these choices, suggest future research directions, and propose an emerging design framework for shaping pro-social behavior in VR.
Casters commentate on a live, streamed video game for a large online audience. Drawing from 20 semi-structured interviews with amateur casters of either the Dota 2 or Rocket League video games and over 20 hours of participant observations, we describe the distinctive practices of two types of casters, play-by-play and color commentary. Play-by-play casters are adept at improvising a rich narrative of hype on top of live games, whereas color commentators methodically prepare to fill in the gaps of live play with informative analysis. Casters often start out alone, relying upon reflective practice to hone their craft. Through examining challenges faced by amateur casters, we identified three design opportunities for game designers to support casters and would-be casters as first-class users. Such designs would provide an antidote to the challenges amateur casters face: the lack of social support for casting, camerawork, and data availability.
We present an interactive editing system for laser cutting called kyub. Kyub allows users to create models efficiently in 3D, which it then unfolds into the 2D plates laser cutters expect. Unlike earlier systems, such as FlatFitFab, kyub affords construction based on closed box structures, which allows users to turn very thin material, such as 4mm plywood, into objects capable of withstanding large forces, such as chairs users can actually sit on. To afford such sturdy construction, every kyub project begins with a simple finger-joint “boxel” — a structure we found to be capable of withstanding over 500kg of load. Users then extend their model by attaching additional boxels. Boxels merge automatically, resulting in larger, yet equally strong structures. While the concept of stacking boxels allows kyub to offer the strong affordance and ease of use of a voxel-based editor, boxels are not confined to a grid and readily combine with kyub’s various geometry deformation tools. In our technical evaluation, objects built with kyub withstood hundreds of kilograms of loads. In our user study, non-engineers rated the learnability of kyub 6.1/7.
FiberWire: Embedding Electronic Function into 3D Printed Mechanically Strong, Lightweight Carbon Fiber Composite Objects
3D printing offers significant potential for creating highly customized interactive and functional objects. At present, however, the ability to manufacture functional objects is limited by the available materials (e.g., various polymers) and their process properties. For instance, many functional objects need stronger materials, a need that metal printers may satisfy. However, to create wholly interactive devices, we need both conductors and insulators to create wiring, and electronic components to complete circuits. Unfortunately, the single-material nature of metal printing, and its inherently high temperatures, preclude this. Thus, in 3D printed devices, we have had a choice of strong materials or embedded interactivity, but not both. In this paper, we introduce a set of techniques we call FiberWire, which leverages a new commercially available capability to 3D print carbon fiber composite objects. These objects are lightweight and mechanically strong, and our techniques demonstrate a means to embed circuitry for interactive devices within them. With FiberWire, we describe a fabrication pipeline that takes advantage of laser etching and fiber printing between layers of carbon-fiber composite to form low-resistance conductors, thereby enabling electronics to be embedded directly into mechanically strong objects. Using this fabrication pipeline, we show a range of sensor designs, characterize their performance on these new materials, and finally present three fully printed example objects that are both interactive and mechanically strong — a bicycle handlebar with interactive controls, a swing- and impact-sensing golf club, and an interactive game controller (Figure 1).
Tool use extends people’s representations of the immediately actionable space around them. Physical tools thereby become integrated in people’s body schemas. We introduce a measure for tool extension in HCI by using a visual-tactile interference paradigm. In this paradigm, an index of tool extension is given by response time differences between crossmodally congruent and incongruent stimuli; tactile on the hand and visual on the tool. We use this measure to examine if and how findings on tool extension apply to interaction with computer-based tools. Our first experiment shows that touchpad and mouse both provide tool extension over a baseline condition without a tool. A second experiment shows a higher degree of tool extension for a realistic avatar hand compared to an abstract pointer for interaction in virtual reality. In sum, our measure can detect tool extension with computer-based tools and differentiate interfaces by their degree of extension.
Virtual reality (VR) applications for exposure therapy predominantly use computer-generated imagery to create controlled environments in which users can be exposed to their fears. Creating 3D animations, however, is demanding and time-consuming. This paper presents a participatory approach for prototyping VR scenarios that are enabled by 360° video and grounded in lived experiences. We organized a participatory workshop with adolescents to prototype such scenarios, consisting of iterative phases of ideation, storyboarding, live-action plays recorded by a 360° camera, and group evaluation. Through an analysis of the participants’ interactions, we outline how they worked to design prototypes that depict situations relevant to those with a fear of public speaking. Our analysis also explores how participants used their experiences and reflections as resources for design. Six clinical psychologists evaluated the prototypes from the workshop and concluded they were viable therapeutic tools, emphasizing the immersive, realistic experience they presented. We argue that our approach makes the design of VR scenarios more accessible.
In steering law tasks, deviating from the path is immediately considered an error operation. However, in navigating a hierarchical menu item, which is a representative application of the law, a deviation within a short duration is sometimes permitted. We tested the validity of the steering law model with various durations of such error-accepting delays and found that it showed high fits for each delay condition (R2 > 0.96) but poor fits if the delay values were not separated (R2 = 0.58). Because the average movement speed linearly increased as the delay increased, we refined the model by taking the delay into account, and the fitness was significantly improved (R2 = 0.97). Our model will help GUI designers estimate the average operational time on the basis of the menu item length, width, and error-accepting delay.
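For reference, the classic steering law for a straight path of length $A$ and width $W$ (the model whose fits the $R^2$ values above describe) takes the following form; the delay-refined variant shown afterwards is only an illustrative guess at how an error-accepting delay $d$ could enter the model, not the paper's exact formulation:

```latex
% Classic steering law (straight tunnel), with empirical constants a and b:
T = a + b\,\frac{A}{W}

% Since average movement speed rises linearly with the error-accepting
% delay d, one *illustrative* refinement (an assumption, not the paper's
% model) scales the slope by a delay-dependent factor:
T(d) = a + \frac{b}{1 + c\,d}\,\frac{A}{W}
```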
We explore how usage data captured from ideation cards can enable reflection on design. We deployed a deck of ideation cards on a Masters level module over two years, developing the means to capture the students’ designs into a digital repository. We created two visualisations to reveal the relative co-occurrences of the cards as concept space and the relative proximity of designs (through cards used in common) as design space. We used these to elicit reflections from the perspectives of students, teachers and card designers. Our findings inspire ideas for extending the data-driven use of ideation cards throughout the design process; informing the redesign of cards, the rules for using them and their live connection to supporting materials and enabling stakeholders to reflect and recognise challenges and opportunities. We also identified the need, and potential ways, to capture a richer design rationale, including annotations, discarded cards and varying card interpretations.
Executive coaching has been drawing increasing attention as a way to develop corporate managers. While conversing with managers, coach practitioners must also understand the internal states of coachees through objective observation. In this paper, we present REsCUE, an automated system that aids coach practitioners in detecting unconscious behaviors of their clients. Using an unsupervised anomaly detection algorithm applied to multimodal behavior data, such as the subject’s posture and gaze, REsCUE notifies coaches of behavioral cues via intuitive and interpretable feedback in real time. Our evaluation with actual coaching scenes confirms that REsCUE provides informative cues for understanding the internal states of coachees. Since REsCUE is based on an unsupervised method and assumes no prior knowledge, further applications beyond executive coaching are conceivable using our framework.
Data analysts apply machine learning and statistical methods to timestamped event sequences to tackle various problems but face unique challenges when interpreting the results. Especially in event sequence prediction, it is difficult to convey uncertainty and possible alternative paths or outcomes. In this work, informed by interviews with five machine learning practitioners, we iteratively designed a novel visualization for exploring event sequence predictions of multiple records where users are able to review the most probable predictions and possible alternatives alongside uncertainty information. Through a controlled study with 18 participants, we found that users are more confident in making decisions when alternative predictions are displayed and they consider the alternatives more when deciding between two options with similar top predictions.
Poor sleep has been acknowledged as an increasingly prevalent global health concern; however, how to design for promoting sleep is relatively underexplored. We propose that neurofeedback technology may facilitate restfulness and sleep onset, and we explore this through the creation and study of “Inter-Dream”, a novel multisensory interactive artistic experience driven by neurofeedback. Twelve participants individually rested while augmented by Inter-Dream. Results demonstrated statistically significant decreases in pre-sleep cognitive arousal (p = .01), negative emotion (p = .008), and negative affect (p = .004). EEG readings were also indicative of restorative restfulness and cognitive stillness, while interview responses described experiences of mindfulness and playful self-exploration. Taken together, our work highlights neurofeedback as a potential pathway for future research in the promotion of sleep, while also suggesting strategies for designing towards this within the context of pre-sleep.
Text messages are sometimes prompts that lead to information-related tasks, e.g. checking one’s schedule, creating reminders, or sharing content. We introduce MessageOnTap, a suggestive interface for smartphones that uses the text in a conversation to suggest task shortcuts that can streamline likely next actions. When activated, MessageOnTap uses word embeddings to rank relevant external apps, and parameterizes associated task shortcuts using key phrases mentioned in the conversation, such as times, persons, or events. MessageOnTap also tailors the auto-complete dictionary based on text in the conversation, to streamline any text input. We first conducted a month-long study of messaging behaviors (N=22) that informed our design. We then conducted a lab study to evaluate the effectiveness of MessageOnTap’s suggestive interface, and found that participants can complete tasks 3.1x faster with MessageOnTap than with their typical task flow.
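Ranking apps by embedding similarity is commonly done with cosine similarity between a message vector and per-app vectors. The following toy sketch illustrates the mechanism; the app names and vectors are made up, and a real system like MessageOnTap would use trained word embeddings rather than hand-written ones:

```python
import math

# Toy sketch: rank candidate apps by cosine similarity between a message
# embedding and each app's embedding. All vectors here are hypothetical.

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

message_vec = [0.9, 0.1, 0.3]        # e.g. embedding of "dinner at 7pm?"
app_vecs = {
    "Calendar": [0.8, 0.2, 0.4],
    "Maps":     [0.1, 0.9, 0.2],
    "Photos":   [0.2, 0.1, 0.9],
}
ranked = sorted(app_vecs, key=lambda a: cosine(message_vec, app_vecs[a]),
                reverse=True)
print(ranked)  # most relevant app first
```

With these toy vectors, "Calendar" ranks first because its vector points in nearly the same direction as the message vector.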
Ingestible sensors are pill-like sensors that people swallow mainly for medical purposes. We propose that ingestible sensors also offer unique opportunities to facilitate intriguing bodily experiences in a playful manner. To explore this, we present “HeatCraft”, a two-player system that translates the user’s body temperature measured by an ingestible sensor to localized thermal stimuli delivered through a waist belt equipped with heating pads. We conducted a study with 16 participants. The study revealed three design themes (Integration of body and technology, Integration of internal body and outside world, and Integration of play and life) along with some open challenges. In summary, this work contributes knowledge to the future design of playful experiences with ingestible sensors.
Group Support Systems provide ways to review and edit shared content during meetings, but typically require participants to explicitly generate the content. Recent advances in speech-to-text conversion and language processing now make it possible to automatically record and review spoken information. We present the iterative design and evaluation of TalkTraces, a real-time visualization that helps teams identify themes in their discussions and obtain a sense of agenda items covered. We use topic modeling to identify themes within the discussions and word embeddings to compute the discussion “relatedness” to items in the meeting agenda. We evaluate TalkTraces iteratively: we first conduct a comparative between-groups study between two teams using TalkTraces and two teams using traditional notes, over four sessions. We translate the findings into changes in the interface, further evaluated by one team over four sessions. Based on our findings, we discuss design implications for real-time displays of discussion content.
Technology allows us to scale the number of jobs we search for and apply to, train for work, and earn money online. However, these technologies do not benefit all job seekers equally and must be designed to better support the needs of underserved job seekers. Research suggests that underserved job seekers prefer employment technologies that can support them in articulating their skills and experiences and in identifying pathways to achieve their career goals. Therefore, we present the design, implementation, and evaluation of DreamGigs, a tool that identifies the skills job seekers need to reach their dream jobs and presents volunteer and employment opportunities for them to acquire those skills. Our evaluation results show that DreamGigs aids in the process of personal empowerment. We contribute design implications for mitigating aspects of powerlessness that low-resource job seekers experience and discuss ways to promote action-taking in these job seekers.
Without good models and the right tools to interpret them, data scientists risk making decisions based on hidden biases, spurious correlations, and false generalizations. This has led to a rallying cry for model interpretability. Yet the concept of interpretability remains nebulous, such that researchers and tool designers lack actionable guidelines for how to incorporate interpretability into models and accompanying tools. Through an iterative design process with expert machine learning researchers and practitioners, we designed a visual analytics system, Gamut, to explore how interactive interfaces could better support model interpretation. Using Gamut as a probe, we investigated why and how professional data scientists interpret models, and how interface affordances can support data scientists in answering questions about model interpretability. Our investigation showed that interpretability is not a monolithic concept: data scientists have different reasons to interpret models and tailor explanations for specific audiences, often balancing competing concerns of simplicity and completeness. Participants also asked to use Gamut in their work, highlighting its potential to help data scientists understand their own data.
Guerilla Warfare and the Use of New (and Some Old) Technology: Lessons from FARC’s Armed Struggle in Colombia
Studying armed political struggles from a CSCW perspective can throw the complex interactions between culture, technology, materiality and political conflict into sharp relief. Such studies highlight interrelations that otherwise remain under-remarked upon, despite their severe consequences. The present paper provides an account of the armed struggle of one of the Colombian guerrillas, FARC-EP, with the Colombian army. We document how radio-based communication became a crucial but ambiguous infrastructure of war. The sudden introduction of localization technologies by the Colombian army presented a lethal threat to the guerrilla group. Our interviewees report a severe learning process to diminish this new risk, relying on a combination of informed beliefs and significant technical understanding. We end with a discussion of the role of HCI in considerations of ICT use in armed conflicts and introduce the concept of counter-appropriation as a process of adapting one’s practices to others’ appropriation of technology in conflict.
Overlaying virtual worlds onto existing physical rides and altering the sensations of motion can deliver new experiences of thrill, but designing how motion is mapped between physical ride and virtual world is challenging. In this paper, we present the notion of an abstract machine, a new form of intermediate design knowledge that communicates motion mappings at the level of metaphor, mechanism and implementation. Following a performance-led, in-the-wild approach we report lessons from creating and touring VR Playground, a ride that overlays four distinct abstract machines and virtual worlds on a playground swing. We compare the artist’s rationale with riders’ reported experiences and analysis of their physical behaviours to reveal the distinct thrills of each abstract machine. Finally, we discuss how to make and use abstract machines in terms of heuristics for designing motion mappings, principles for virtual world design and communicating experiences to riders.
Learning games that address targeted curriculum areas are widely used in schools. Within games, productive learning episodes can result from breakdowns when followed by a breakthrough, yet their role in children’s learning has not been investigated. This paper examines the role of game and instructional design during and after breakdowns. We observed 26 young children playing several popular learning games and conducted a moment-by-moment analysis of breakdown episodes. Our findings show children achieve productive breakthroughs independently less than half of the time. In particular, breakdowns caused by game actions are difficult for children to overcome independently and prevent engagement with the domain skills. Importantly, we identify specific instructional game components and their role in fostering strategies that result in successful breakthroughs. We conclude with intrinsic and extrinsic instructional design implications for both game designers and primary teachers to better enable children’s games-based learning.
Robotic-assisted Minimally Invasive Surgery (MIS) is increasingly adopted, as it overcomes the shortcomings of classic MIS for surgeons while keeping the benefits of small incisions for patients. However, introducing new technology oftentimes affects the work of skilled practitioners. Our goals are to investigate the impacts of telemanipulated surgical robots on the work practices of surgical teams and to understand their causes. We conducted a field study, observing 21 surgeries, conducting 12 interviews, and performing 3 data validation sessions with surgeons. Using Thematic Analysis, we find that physically separating surgeons from their teams makes them more autonomous, shifts their use of perceptual senses, and turns the surgeon’s assistant into the robot’s assistant. We open design opportunities for the HCI field by questioning the telemanipulated approach and discussing alternatives that keep surgeons in the surgical field.
When designing comfort and usability in products, designers need to evaluate aspects ranging from anthropometrics to use scenarios. Therefore, virtual and poseable mannequins are employed as a reference in early-stage tools and for evaluation in the later stages. However, tools to intuitively interact with virtual humans are lacking. In this paper, we introduce SmartManikin, a mannequin with agency that responds to high-level commands and to real-time design changes. We first captured human poses with respect to desk configurations, identified key features of the pose, and trained regression functions to estimate the optimal features at a given desk setup. The SmartManikin’s pose is generated from the predicted features using forward and inverse kinematics. We present our design, implementation, and an evaluation with expert designers. The results revealed that SmartManikin enhances the design experience by providing feedback concerning comfort and health in real time.
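The regression step described here can be illustrated in miniature: fit a least-squares line mapping one desk parameter to one pose feature, then query it for a new desk setup. The feature name, units, and data points below are entirely hypothetical; the paper’s actual regression functions operate over richer pose features:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a * x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    a = num / den
    return a, my - a * mx

# Hypothetical captures: desk height (cm) -> observed elbow angle (degrees).
heights = [65, 70, 75, 80]
angles = [95, 100, 105, 110]

a, b = fit_line(heights, angles)

def predict_elbow_angle(desk_height):
    """Estimate the optimal pose feature for a new desk configuration."""
    return a * desk_height + b
```

A pose solver (forward and inverse kinematics, as the abstract notes) would then realise the predicted feature values as an actual mannequin pose.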
The latest generation of consumer-market head-mounted displays (HMDs) now includes self-contained inside-out tracking of head motions, which makes them suitable for mobile applications. However, 3D tracking of input devices is either not included at all or requires keeping the device in sight, so that it can be observed from a sensor mounted on the HMD. Both approaches make natural interactions cumbersome in mobile applications. TrackCap, a novel approach for 3D tracking of input devices, turns a conventional smartphone into a precise 6DOF input device for an HMD user. The device can be conveniently operated both inside and outside the HMD’s field of view, while it provides additional 2D input and output capabilities.
With the increased reach and impact of video lectures, it is crucial to understand how they are experienced. Previous studies typically present questionnaires at the end of the lecture, which fails to capture students’ experience at sufficient granularity. In this paper, we propose recording lecture difficulty in real time with a physical slider, enabling continuous and fine-grained analysis of the learning experience. We evaluated our approach in a study with 100 participants viewing two variants of two short lectures. We demonstrate that our approach helps paint a more complete picture of the learning experience. Our analysis has design implications for instructors, providing them with a method to compare their expectations with students’ beliefs about the lectures and to better understand the specific effects of different instructional design decisions.
Raiding is a format in digital gaming that requires groups of people to collaborate and/or compete for a common goal. In 2017, the raiding format was introduced in the location-based mobile game Pokémon GO, which offers a mixed reality experience to friends and strangers coordinating for in-person raids. To understand this technology-mediated social phenomenon, we conducted over a year of participant observations, surveys with 510 players, and interviews with 25 players who raid in Pokémon GO. Using the analytical lens of Arrow, McGrath, and Berdahl’s theory of small groups as complex systems, we identify global, local, and contextual dynamics in location-based raiding that support and challenge ad-hoc group formation in real life. Based on this empirical and theoretical understanding, we discuss implications to design for transparency, social affordances, and bridging gaps between global and contextual dynamics for increased positive and inclusive community interactions.
Our observations of landscape architecture students revealed a new phenomenon: interstices. Their bimanual interactions with a pen and touch surface involved various sustained hand gestures, interleaved between their regular commands. The positioning of the non-preferred hand indicates anticipated actions, including: sustained hovering near the surface; pulling back but still floating above the surface; and resting in the lap. We ran a second study with 14 landscape architecture students, which confirmed our observations and uncovered a new interstice: stabilizing the preferred hand while handwriting. We conclude with directions for future research and challenges for designers and researchers.
Unauthorized physical access to personal devices by people known to the owner of the device is a common concern, and a common occurrence. But how do people experience incidents of unauthorized access? Using an online survey, we collected 102 accounts of unauthorized access. Participants wrote stories about past situations in which either they accessed the smartphone of someone they know, or someone they know accessed theirs. We describe the context leading up to these incidents, the course of events, and the consequences. We then identify two orthogonal themes in how participants conceptualized these incidents. First, participants understood trust as performative vulnerability: trust was necessary to sustain relationships, but building trust required displaying vulnerability to breaches. Second, participants were self-serving in their sensemaking: they blamed the circumstances, or the other person’s shortcomings, but rarely themselves. We discuss the implications of our findings for security design and practice.
The news landscape has been changing dramatically over the past few years. Whereas news once came from a small set of highly edited sources, now people can find news from thousands of news sites online, through a variety of channels such as web search, social media, email newsletters, or direct browsing. We set out to understand how Americans read news online using web browser logs collected from 174 diverse participants. We found that 20% of all news sessions started with a web search, that 16% started from social media, that 61% of news sessions only involved a single news domain, and that 47% of our participants read news from both sides of the political spectrum. We conclude with key implications for online news, social media, and search sites to encourage more balanced news browsing.
Virtual keyboard typing is typically aided by an auto-correct method that decodes a user’s noisy taps into their intended text. This decoding process can reduce error rates and possibly increase entry rates by allowing users to type faster but less precisely. However, virtual keyboard decoders sometimes make mistakes that change a user’s desired word into another. This is particularly problematic for challenging text such as proper names. We investigate whether users can guess words that are likely to cause auto-correct problems and whether users can adjust their behavior to assist the decoder. We conduct computational experiments to decide what predictions to offer in a virtual keyboard and design a smartwatch keyboard named VelociWatch. Novice users were able to use the features of VelociWatch to enter challenging text at 17 words-per-minute with a corrected error rate of 3%. Interestingly, they wrote slightly faster and just as accurately on a simpler keyboard with limited correction options. Our findings suggest users may be able to type difficult words on a smartwatch simply by tapping precisely, without the use of auto-correct.
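A decoder of this kind can be sketched as maximum a posteriori inference: a Gaussian noise model over tap positions combined with a word prior. Everything below (the 1-D key layout, the three-word vocabulary, and the prior probabilities) is hypothetical; real keyboards use 2-D key geometry and large language models:

```python
import math

# Hypothetical 1-D key positions and a tiny word list with prior
# probabilities standing in for a language model.
KEY_X = {"q": 0.0, "w": 1.0, "e": 2.0}
WORD_PRIOR = {"we": 0.5, "ew": 0.01, "qe": 0.02}

def tap_log_likelihood(tap_xs, word, sigma=0.5):
    """Log-probability of the observed tap positions given the word's keys,
    under Gaussian tap noise centred on each intended key."""
    return sum(-((x - KEY_X[ch]) ** 2) / (2 * sigma ** 2)
               for x, ch in zip(tap_xs, word))

def decode(tap_xs):
    """Maximum a posteriori decoding: tap likelihood plus log word prior."""
    candidates = [w for w in WORD_PRIOR if len(w) == len(tap_xs)]
    return max(candidates,
               key=lambda w: tap_log_likelihood(tap_xs, w)
               + math.log(WORD_PRIOR[w]))
```

Note that in this toy model, taps landing exactly on “q” and “e” are still decoded as the higher-prior word “we”: precisely the auto-correct failure mode for proper names and other unlikely words that the paper investigates.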
Co-Designing Food Trackers with Dietitians: Identifying Design Opportunities for Food Tracker Customization
We report co-design workshops with registered dietitians conducted to identify opportunities for designing customizable food trackers. Dietitians typically see patients who have different dietary problems, thus having different information needs. However, existing food trackers such as paper-based diaries and mobile apps are rarely customizable, making it difficult to capture necessary data for both patients and dietitians. During the co-design sessions, dietitians created representative patient personas and designed food trackers for each persona. We found a wide range of potential tracking items such as food, reflection, symptom, activity, and physical state. Depending on patients’ dietary problems and dietitians’ practice, the necessity and importance of these tracking items vary. We identify opportunities for patients and healthcare providers to collaborate around data tracking and sharing through customization. We also discuss how to structure co-design workshops to solicit the design considerations of self-tracking tools for patients with specific health problems.
Engaging Low-Income African American Older Adults in Health Discussions through Community-based Design Workshops
Community-based approaches to participatory design, such as the design workshop, promise to engage underserved populations in collaborative dialog and provide a platform for promoting the views of communities who are not typically given a space to engage in design. Yet, we know little about how design workshops as a research site can engage underserved individuals (i.e., due to class, race, or age status) or address personal concerns (e.g., health). As a way of exploring these issues, we conducted a series of five design workshops with low-income African-American older adults to understand their health experiences. Our findings reveal three insights associated with the design workshop and the topic of health: comfort with community versus personal health; the sociocultural configuration of interaction; and empowerment in the context of systematic inequality of opportunity. We discuss the importance of understanding the situated nature of design workshops, particularly when engaging underserved groups in the topic of health, and the potential of the design workshop as a mechanism for activism.
Early warning dashboards in higher education analyze student data to enable early identification of underperforming students, allowing timely interventions by faculty and staff. To understand perceptions regarding the ethics and impact of such learning analytics applications, we conducted a multi-stakeholder analysis of an early-warning dashboard deployed at the University of Michigan through semi-structured interviews with the system’s developers, academic advisors (the primary users), and students. We identify multiple tensions among and within the stakeholder groups, especially with regard to awareness, understanding, access, and use of the system. Furthermore, ambiguity in data provenance and data quality results in differing levels of reliance on and concerns about the system among academic advisors and students. While students see the system’s benefits, they argue for more involvement, control, and informed consent regarding the use of student data. We discuss our findings’ implications for the ethical design and deployment of learning analytics applications in higher education.
Automating the Administration and Analysis of Psychiatric Tests: The Case of Attachment in School Age Children
This article presents the School Attachment Monitor (SAM), a novel interactive system that can reliably administer the Manchester Child Attachment Story Task (a standard psychiatric test for the assessment of attachment in children) without the supervision of trained professionals. Attachment problems in children cause significant mental health issues and costs to society, which technology has the potential to reduce. SAM collects, through instrumented doll-play games, enough information to allow a human assessor to manually identify the attachment status of children. Experiments show that the system successfully does this in 87.5% of cases. In addition, the experiments show that an automatic approach based on deep neural networks can map the collected information to the attachment condition of the children. SAM’s automatic assessment matches the judgment of expert human assessors in 82.8% of cases. This is the first time an automated tool has been successful in measuring attachment. This work has significant implications for psychiatry, as it allows professionals to assess many more children cost-effectively and to direct healthcare resources more accurately and efficiently to improve mental health.
We explore 360 paper prototyping to rapidly create AR/VR prototypes from paper and bring them to life on AR/VR devices. Our approach is based on a set of emerging paper prototyping templates specifically for AR/VR. These templates resemble the key components of many AR/VR interfaces, including 2D representations of immersive environments, AR marker overlays and face masks, VR controller models and menus, and 2D screens and HUDs. To make prototyping with these templates effective, we developed 360proto, a suite of three novel physical–digital prototyping tools: (1) the 360proto Camera for capturing paper mockups of all components simply by taking a photo with a smartphone and seeing 360-degree panoramic previews on the phone or stereoscopic previews in Google Cardboard; (2) the 360proto Studio for organizing and editing captures, for composing AR/VR interfaces by layering the captures, and for making them interactive with Wizard of Oz via live video streaming; (3) the 360proto App for running and testing the interactive prototypes on AR/VR capable mobile devices and headsets. Through five student design jams with a total of 86 participants and our own design space explorations, we demonstrate that our approach with 360proto is useful to create relatively complex AR/VR applications.
A Walk on the Child Side: Investigating Parents’ and Children’s Experience and Perspective on Mobile Technology for Outdoor Child Independent Mobility
Technology offers parents ever more opportunities to monitor children, reshaping the way control and autonomy are negotiated within families. This paper investigates the views of parents and primary school children on mobile technology designed to support child independent mobility in the context of local walking school buses. Based on a school-year-long field study, we report findings on children’s and parents’ experience with proximity detection devices. The results provide insights into how the parents and children accepted and socially appropriated the technology into the walking school bus activity, shedding light on the way they understand and conceptualize a technology that collects data on children’s proximity to the volunteers’ smartphone. We discuss parents’ needs and concerns toward monitoring technologies and the related challenges in terms of trust-control balance. These insights are elaborated to inform the future design of technology for child independent mobility.
Voice assistants are quickly being upgraded to support advanced, security-critical commands such as unlocking devices, checking emails, and making payments. In this paper, we explore the feasibility of using users’ text-converted voice command utterances as classification features to help identify genuine commands and detect suspicious ones. To maintain high detection accuracy, our approach starts with a globally trained attack detection model (immediately available for new users) and gradually switches to a user-specific model tailored to the utterance patterns of a target user. To evaluate accuracy, we used a real-world voice assistant dataset consisting of about 34.6 million voice commands collected from 2.6 million users. Our evaluation results show that this approach achieves an equal error rate (EER) of about 3.4%, detecting 95.7% of attacks when an optimal threshold value is used. Even for users who frequently issue security-critical (attack-like) commands, we still achieve an EER below 5%.
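The equal error rate metric reported here can be reproduced in miniature: sweep a decision threshold over “genuineness” scores and find the point where the false-accept rate (attack commands passed as genuine) and the false-reject rate (genuine commands flagged) are closest. The scores below are made up for illustration:

```python
def equal_error_rate(genuine, attacks):
    """Approximate EER: sweep thresholds over both score sets, find the
    threshold where false-accept and false-reject rates are closest, and
    return their average there. Higher score = more likely genuine."""
    best_gap, eer = float("inf"), 1.0
    for t in sorted(set(genuine) | set(attacks)):
        frr = sum(s < t for s in genuine) / len(genuine)   # genuine rejected
        far = sum(s >= t for s in attacks) / len(attacks)  # attacks accepted
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

On perfectly separable scores the EER is 0; the 3.4% figure in the abstract reflects the residual overlap between genuine and attack utterance patterns at the system’s operating point.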
While digital technologies are increasingly being used to provide support and diagnoses remotely, it is unclear whether they offer adequate emotional support and appropriate messages in navigating complex, stigmatised and sensitive conditions that can have a momentous impact on people’s lives. In this paper, we investigate how and why people access existing HIV resources, and their experiences of using these resources through a survey with 197 respondents and an interview and think-aloud study with 28 participants. Our findings indicate that many HIV-related resources do not address the anxiety-provoking reasons for access, reinforce stigma and neglect to provide important information and emotional support. We finally discuss potential ways of addressing these issues in the current environment where more sexual health services are being delivered online.
The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention. A surge of recent work has focused on the development of algorithmic tools to assess and mitigate such unfairness. If these tools are to have a positive impact on industry practice, however, it is crucial that their design be informed by an understanding of real-world needs. Through 35 semi-structured interviews and an anonymous survey of 267 ML practitioners, we conduct the first systematic investigation of commercial product teams’ challenges and needs for support in developing fairer ML systems. We identify areas of alignment and disconnect between the challenges faced by teams in practice and the solutions proposed in the fair ML research literature. Based on these findings, we highlight directions for future ML and HCI research that will better address practitioners’ needs.
From healthcare to criminal justice, artificial intelligence (AI) is increasingly supporting high-consequence human decisions. This has spurred the field of explainable AI (XAI). This paper seeks to strengthen empirical application-specific investigations of XAI by exploring theoretical underpinnings of human decision making, drawing from the fields of philosophy and psychology. In this paper, we propose a conceptual framework for building human-centered, decision-theory-driven XAI based on an extensive review across these fields. Drawing on this framework, we identify pathways along which human cognitive patterns drive needs for building XAI and how XAI can mitigate common cognitive biases. We then put this framework into practice by designing and implementing an explainable clinical diagnostic tool for intensive care phenotyping and conducting a co-design exercise with clinicians. Thereafter, we draw insights into how this framework bridges algorithm-generated explanations and human decision-making theories. Finally, we discuss implications for XAI design and development.
Experience Centered Design (ECD) implores us to develop empathic relationships and understanding of participants, to actively work with our senses and emotions within the design process. However, theories of experience-centered design do little to account for emotion work undertaken by design researchers when doing this. As a consequence, how a design researcher’s emotions are experienced, navigated and used as part of an ECD process are rarely published. So, while emotion is clearly a tool that we use, we don’t share with one another how, why and when it gets used. This has a limiting effect on how we understand design processes, and opportunities for training. Here, we share some of our experiences of working with ECD. We analyse these using Hochschild’s framework of emotion work to show how and where this work occurs. We use our analysis to question current ECD practices and provoke debate.
While there is a renewed interest in voice user interfaces (VUI) in HCI, little attention has been paid to the design of VUI voice output beyond intelligibility and naturalness. We draw on the field of sociophonetics – the study of the social factors that influence the production and perception of speech – to highlight how current VUIs are based on a limited and homogenised set of voice outputs. We argue that current systems do not adequately consider the diversity of peoples’ speech, how that diversity represents sociocultural identities, and how voices have the potential to shape user perceptions and experiences. Ultimately, as other technological developments have influenced the ideologies of language, the voice outputs of VUIs will influence the ideologies of speech. Based on our argument, we pose three design strategies for VUI voice output design – individualisation, context awareness, and diversification – to motivate new ways of conceptualising and designing these technologies.
Commitment devices are a technique from behavioral economics that have been shown to mitigate the effects of present bias—the tendency to discount future risks and gains in favor of immediate gratifications. In this paper, we explore the feasibility of using commitment devices to nudge users towards complying with varying online security mitigations. Using two online experiments, with over 1,000 participants total, we offered participants the option to be reminded or to schedule security tasks in the future. We find that both reminders and commitment nudges can increase users’ intentions to install security updates and enable two-factor authentication, but not to configure automatic backups. Using qualitative data, we gain insights into the reasons for postponement and how to improve future nudges. We posit that current nudges may not live up to their full potential, as the timing options offered to users may be too rigid.
An unsolved debate in the field of usable security concerns whether security mechanisms should be visible, or blackboxed away from the user for the sake of usability. However, tying this question to pragmatic usability factors only might be simplistic. This study aims at researching the impact of displaying security mechanisms on User Experience (UX) in the context of e-voting. Two versions of an e-voting application were designed and tested using a between-group experimental protocol (N=38). Version D displayed security mechanisms, while version ND did not reveal any security-related information. We collected data on UX using standardised evaluation scales and semi-structured interviews. Version D performed better overall in terms of UX and need fulfilment. Qualitative analysis of the interviews gives further insights into factors impacting perceived security. Our study adds to existing research suggesting a conceptual shift from usability to UX and discusses implications for designing and evaluating secure systems.
Designing User Interface Elements to Improve the Quality and Civility of Discourse in Online Commenting Behaviors
Ensuring high-quality, civil social interactions remains a vexing challenge in many online spaces. In the present work, we introduce a novel approach to address this problem: using psychologically “embedded” CAPTCHAs containing stimuli intended to prime positive emotions and mindsets. An exploratory randomized experiment (N = 454 Mechanical Turk workers) tested the impact of eight new CAPTCHA designs implemented on a simulated, politically charged comment thread. Results revealed that the two interventions that were the most successful at activating positive affect also significantly increased the positivity of tone and analytical complexity of argumentation in participants’ responses. A focused follow-up experiment (N = 120 Mechanical Turk workers) revealed that exposure to CAPTCHAs featuring image sets previously validated to evoke low-arousal positive emotions significantly increased the positivity of sentiment and the levels of complexity and social connectedness in participants’ posts. We offer several explanations for these results and discuss the practical and ethical implications of designing interfaces to influence discourse in online forums.
The “Comadre” Project: An Asset-Based Design Approach to Connecting Low-Income Latinx Families to Out-of-School Learning Opportunities
Participation in out-of-school learning programs has been shown to generate significant academic, social/emotional, and institutional benefits for young learners, and today’s wealthy families are disproportionately reaping these benefits. This paper presents the results of an asset-based/human-centered design research process and pilot aimed at connecting low-income families in a Southern California city with local low-cost out-of-school learning opportunities. Based on background research including qualitative interviewing, home visits, technology inventories and use walkthroughs with 40 low-income, majority Latinx families, we created and piloted a free subscription SMS service that automatically pushes bilingual SMS messages with curated information on local low-cost enrichment learning opportunities to low-income families. We framed our human-centered design process through an intersectional, “asset-based approach,” which recognizes that marginalized communities have already developed robust, culturally-specific social practices to enable them to navigate the world, seeks to amplify them, and refrains from imposing a top-down or pre-conceived “idea” of intervention.
When engaged in communication, people often rely on pointing gestures to refer to out-of-reach content. However, observers frequently misinterpret the target of a pointing gesture. Previous research suggests that to perform a pointing gesture, people place the index finger on or close to a line connecting the eye to the referent, while observers interpret pointing gestures by extrapolating the referent using a vector defined by the arm and index finger. In this paper we present Warping Deixis, a novel approach to improving the perception of pointing gestures and facilitating communication in collaborative Extended Reality environments. By warping the virtual representation of the pointing individual, we are able to match the pointing expression to the observer’s perception. We evaluated our approach in a co-located, side-by-side virtual reality scenario. Results suggest that our approach is effective in improving the interpretation of pointing gestures in shared virtual environments.
Humans involuntarily move their eyes when retrieving an image from memory. This motion is often similar to actually observing the image. We propose exploiting this behavior as a new modality in human-computer interaction, using the motion of the eyes as a descriptor of the image. Interaction requires the user’s eyes to be tracked but no voluntary physical activity. We perform a controlled experiment and develop matching techniques using machine learning to investigate whether images can be discriminated based on the gaze patterns recorded while users merely think about an image. Our results indicate that image retrieval is possible with an accuracy significantly above chance. We also show that this result generalizes to images not used during training of the classifier and extends to uncontrolled settings in a realistic scenario.
While there have been considerable developments in designing for dementia within HCI, there is still a lack of empirical understanding of the experience of people with advanced dementia and the ways in which design can support and enrich their lives. In this paper, we present our findings from a long-term ethnographic study, which aimed to gain an understanding of their lived experience and inform design practices for and with people with advanced dementia in residential care. We present our findings using the social theory of recognition as an analytic lens to account for recognition in practice and its challenges in care and research. We discuss how we, as the HCI community, can pragmatically engage with people with advanced dementia and propose a set of considerations for those who wish to design for and with the values of recognition theory to promote collaboration, agency and social identity in advanced dementia care.
Multi-user input over a shared display has been shown to support group process and improve performance. However, current gesturing systems for instructional collaborative tasks limit the input to experts and overlook the needs of novices in making references on a shared display. In this paper, we investigate the effects of a single-user gesturing tool on the communication between trainer and trainees in laparoscopic surgical training. By comparing the structure and content of communication in training sessions with and without the gesturing tool, we show that communication becomes more imbalanced and trainees become less active when using the single-user gesturing tool. Our findings highlight the need to grant all parties the same level of access to a shared display and suggest further directions in designing shared displays for instructional collaborative tasks.
Target disambiguation is a common problem in gaze interfaces, as eye tracking has accuracy and precision limitations. In 3D environments this is compounded by objects overlapping in the field of view as a result of their positioning at different depths with partial occlusion. We introduce VOR depth estimation, a method based on the vestibulo-ocular reflex, by which the eyes compensate for head movement, and explore its application to resolving target ambiguity. The method estimates gaze depth by comparing the rotations of the eye and the head when users look at a target and deliberately rotate their head. We show that VOR eye movement presents an alternative to vergence for gaze depth estimation, one that is also feasible with monocular tracking. In an evaluation of its use for target disambiguation, our method outperforms vergence for targets presented at greater depth.
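The geometric intuition behind depth-from-VOR can be sketched as follows. Because the eyes sit in front of the head's rotation axis, the compensatory eye rotation needed to hold fixation grows as the target gets closer, so the VOR gain (eye rotation per unit of head rotation) is depth-dependent: under a small-angle approximation, gain ≈ 1 + r/d. The code below is our own simplified illustration with a hypothetical eye-offset value; it is not the paper's calibration or estimation procedure.

```python
def vor_gain(eye_rotation_deg, head_rotation_deg):
    # VOR gain: degrees of compensatory eye rotation per degree of head rotation
    return eye_rotation_deg / head_rotation_deg

def estimate_depth(gain, eye_offset_m=0.1):
    # Small-angle approximation: gain ~ 1 + r/d, where r is the eye's
    # offset from the head's rotation axis and d is the fixated depth.
    # Solving for d gives d ~ r / (gain - 1). A gain near 1 means the
    # target is effectively at optical infinity.
    if gain <= 1.0:
        return float("inf")
    return eye_offset_m / (gain - 1.0)

# A near target forces more compensatory eye rotation than head rotation:
near = estimate_depth(vor_gain(12.0, 10.0))  # gain 1.2 -> 0.5 m
far = estimate_depth(vor_gain(10.0, 10.0))   # gain 1.0 -> infinity
```

Note how this formulation needs only the rotation of a single eye, which is why the method remains feasible with monocular tracking, unlike vergence-based depth estimation.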
Prolonged static and unbalanced sitting postures during computer usage contribute to musculoskeletal discomfort. In this paper, we investigated the use of a very slowly moving monitor for unobtrusive posture correction. In a first study, we identified display velocities below the perception threshold and observed how users (without being aware) responded by gradually following the monitor’s motion. From these results, we designed a robotic monitor that moves imperceptibly to counterbalance unbalanced sitting postures and induce posture correction. In an evaluation study (n=12), we had participants work for four hours each without and with our prototype (eight hours in total). Results showed that actuation increased the frequency of non-disruptive swift posture corrections and significantly reduced the duration of unbalanced sitting. Most users appreciated the monitor correcting their posture and reported less physical fatigue. With slow robots, we take a first step toward using actuated objects for unobtrusive behavioral change.
Understanding Digitally-Mediated Empathy: An Exploration of Visual, Narrative, and Biosensory Informational Cues
Digitally sharing our experiences engages a process of empathy shaped by available informational cues. Biosensory data is one informative cue, but the relationship to empathy is underexplored. In this study, we investigate this process by showing a video of a “target” person’s visual perspective watching a virtual reality film to sixty “observers”. We vary information available to observers via three experimental conditions: a baseline unmodified video, video with narrative text, or with a graph of electrodermal activity (EDA) of the target. Compared to baseline, narrative text increased empathic accuracy (EA) while EDA had an opposite, negative effect. Qualitatively, observers describe their empathic processes as using their own feelings supplemented with the information presented depending on the interpretability of that information. Both narration and EDA prompted observers to reconsider assumptions about another’s experience. Our findings lead to a discussion of digitally-mediated empathy with implications for associated research and product development.
Understanding Personal Productivity: How Knowledge Workers Define, Evaluate, and Reflect on Their Productivity
Productivity tracking tools often determine productivity based on the time spent interacting with work-related applications. To deconstruct productivity’s diverse and nebulous nature, we investigate how knowledge workers conceptualize personal productivity and delimit productive tasks in both work and non-work contexts. We report a 2-week diary study followed by a semi-structured interview with 24 knowledge workers. Participants captured productive activities and provided the rationale for why the activities were assessed to be productive. They reported a wide range of productive activities beyond typical desk-bound work, ranging from having a personal conversation with their dad to getting a haircut. We found six themes that characterize productivity assessment (work product, time management, worker’s state, attitude toward work, impact & benefit, and compound task) and identified how participants interleaved multiple facets when assessing their productivity. We discuss how these findings could inform the design of a comprehensive productivity tracking system that covers a wide range of productive activities.
We present Vistribute, a framework for the automatic distribution of visualizations and UI components across multiple heterogeneous devices. Our framework consists of three parts: (i) a design space considering properties and relationships of interactive visualizations, devices, and user preferences in multi-display environments; (ii) specific heuristics incorporating these dimensions for guiding the distribution for a given interface and device ensemble; and (iii) a web-based implementation instantiating these heuristics to automatically generate a distribution as well as providing interaction mechanisms for user-defined adaptations. In contrast to existing UI distribution systems, we are able to infer all required information by analyzing the visualizations and devices without relying on additional input provided by users or programmers. In a qualitative study, we let experts create their own distributions and rate both other manual distributions and our automatic ones. We found that all distributions provided comparable quality, hence validating our framework.
Movement-based interactions are gaining traction, requiring a better understanding of how such expressions are shaped by designers. Through an analysis of an artistic process aimed at delivering a commissioned opera in which custom-built drones perform on stage alongside human performers, we observed the importance of achieving an intercorporeal understanding to shape body-based emotional expressivity. Our analysis reveals how the choreographer moves herself to: (1) imitate and feel the affordances and expressivity of the drones’ ‘otherness’ through her own bodily experience; (2) communicate to the engineer of the team how she wants to alter the drones’ behaviors to be more expressive; (3) enact and interactively alter her choreography. Through months of intense development and creative work, such an intercorporeal understanding was achieved by carefully crafting the drones’ behaviors, but also by the choreographer adjusting her own somatics and expressions. The choreography arose as a result of the expressivity they enabled together.
This paper presents the first investigation into using the goal-crossing paradigm for object selection with virtual reality (VR) head-mounted displays. Two experiments were carried out to evaluate ray-casting crossing tasks, with target discs in 3D space and goal lines on a 2D plane respectively, in comparison to ray-casting pointing tasks. Five factors, i.e., task difficulty, the direction of movement constraint (collinear vs. orthogonal), the nature of the task (discrete vs. continuous), field of view of VR devices, and target depth, were considered in both experiments. Our findings are: (1) crossing was generally as fast as or faster than pointing, with similar or higher accuracy, indicating that crossing can complement or substitute for pointing; (2) crossing tasks can be well modelled with Fitts’ Law; (3) crossing performance depended on target depth; (4) crossing target discs in 3D space differed from crossing goal lines on a 2D plane in many aspects, such as time and error performance, the effects of target depth, and the parameters of the Fitts’ models. Based on these findings, we formulate a number of design recommendations for crossing-based interaction in VR.
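Finding (2), that crossing tasks are well modelled by Fitts' Law, can be illustrated with a minimal sketch using the Shannon formulation of the index of difficulty. The condition geometry and the intercept/slope values below are hypothetical placeholders for illustration, not data from the experiments.

```python
import math

def index_of_difficulty(distance, width):
    # Shannon formulation of Fitts' index of difficulty, in bits
    return math.log2(distance / width + 1)

def fit_fitts(ids, times):
    # Ordinary least-squares fit of the Fitts model MT = a + b * ID
    n = len(ids)
    mean_id = sum(ids) / n
    mean_mt = sum(times) / n
    b = sum((i - mean_id) * (t - mean_mt) for i, t in zip(ids, times)) / \
        sum((i - mean_id) ** 2 for i in ids)
    a = mean_mt - b * mean_id
    return a, b

# Hypothetical crossing conditions: (distance, width); units arbitrary
conditions = [(100, 40), (200, 40), (400, 40), (400, 20)]
ids = [index_of_difficulty(d, w) for d, w in conditions]
# Illustrative movement times generated from MT = 0.2 + 0.15 * ID (seconds)
times = [0.2 + 0.15 * i for i in ids]
a, b = fit_fitts(ids, times)  # the fit recovers the intercept and slope
```

In a real analysis the movement times would come from trial data, and the quality of the linear fit (e.g. R²) is what supports the claim that crossing follows Fitts' Law.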
Modeling in Augmented Reality (AR) lets users create and manipulate virtual objects in mid-air that are aligned to their real environment. We present ARPen, a bimanual input technique for AR modeling that combines a standard smartphone with a 3D-printed pen. Users sketch with the pen in mid-air, while holding their smartphone in the other hand to see the virtual pen traces in the live camera image. ARPen combines the pen’s higher 3D input precision with the rich interactive capabilities of the smartphone touchscreen. We studied subjective preferences for this bimanual input technique, such as how people hold the smartphone while drawing, and analyzed the performance of different bimanual techniques for selecting and moving virtual objects. Users preferred a bimanual technique casting a ray through the pen tip for both selection and translation. We provide initial design guidelines for this new class of bimanual AR modeling systems.
Navigation systems for cyclists are commonly screen-based devices mounted on the handlebar which show map information. Typically, adult cyclists have to explicitly look down for directions. This can be distracting and challenging for children, given their developmental differences in motor and perceptual-motor abilities compared with adults. To address this issue, we designed different unimodal cues and explored their suitability for child cyclists through two experiments. In the first experiment, we developed an indoor bicycle simulator and compared auditory, light, and vibrotactile navigation cues. In the second experiment, we investigated these navigation cues in-situ in an outdoor practice test track using a mid-size tricycle. To simulate road distractions, children were given an additional auditory task in both experiments. We found that auditory navigational cues were the most understandable and the least prone to navigation errors. However, light and vibrotactile cues might be useful for educating younger child cyclists.
As 360° cameras and virtual reality headsets become more popular, panorama images have become increasingly ubiquitous. While sounds are essential in delivering immersive and interactive user experiences, most panorama images do not come with native audio. In this paper, we propose an automatic algorithm to augment static panorama images through realistic audio assignment. We accomplish this goal through object detection, scene classification, object depth estimation, and audio source placement. We built an audio file database composed of over 500 audio files to facilitate this process. We designed and conducted a user study to verify the efficacy of various components in our pipeline. We ran our method on a large variety of panorama images of indoor and outdoor scenes. By analyzing the statistics, we learned the relative importance of these components, which can be used to prioritize them for power-sensitive, time-critical tasks like mobile augmented reality (AR) applications.
We present a system that augments live presentation videos with interactive graphics to create a powerful and expressive storytelling environment. Using our system, the presenter interacts with the graphical elements in real-time with gestures and postures, thus leveraging our innate, everyday skills to enhance our communication capabilities with the audience. However, crafting such an interactive and expressive performance typically requires programming, or highly-specialized tools tailored for experts. Our core contribution is a flexible, direct manipulation UI which enables amateurs and experts to craft such presentations beforehand by mapping a variety of body movements to a wide range of graphical manipulations. By simplifying the mapping between gestures, postures, and their corresponding output effects, our UI enables users to craft customized, rich interactions with the graphical elements. Our user study demonstrates the potential usage and unique affordance of this mixed-reality medium for storytelling and presentation across a range of application domains.
Couples exhibit special communication practices, but apps rarely offer couple-specific functionality. Research shows that sharing streams of contextual information (e.g. location, motion) helps couples coordinate and feel more connected. Most studies explored a single, ephemeral stream; we study how couples’ communication changes when sharing multiple, persistent streams. We designed Lifelines, a mobile-app technology probe that visualizes up to six streams on a shared timeline: closeness to home, battery level, steps, media playing, texts and calls. A month-long study with nine couples showed that partners interpreted information mostly from individual streams, but also combined them for more nuanced interpretations. Persistent streams allowed missing data to become meaningful and provided new ways of understanding each other. Unexpected patterns from any stream can trigger calls and texts, whereas seeing expected data can replace direct communication, which may improve or disrupt established communication practices. We conclude with design implications for mediating awareness within couples.
Friend, Collaborator, Student, Manager: How Design of an AI-Driven Game Level Editor Affects Creators
Machine learning advances have afforded an increase in algorithms capable of creating art, music, stories, games, and more. However, it is not yet well understood how machine learning algorithms might best collaborate with people to support creative expression. To investigate how practicing designers perceive the role of AI in the creative process, we developed a game level design tool for Super Mario Bros.-style games with a built-in AI level designer. In this paper we discuss our design of the Morai Maker intelligent tool through two mixed-methods studies with a total of over one hundred participants. Our findings are as follows: (1) level designers vary in their desired interactions with, and role of, the AI, (2) the AI prompted the level designers to alter their design practices, and (3) the level designers perceived the AI as having potential value in their design practice, varying based on their desired role for the AI.
Augmented and virtual reality (AR/VR) has entered the mass market, and with it, eye tracking will soon become a core technology for next-generation head-mounted displays (HMDs). In contrast to existing gaze interfaces, the 3D nature of AR and VR requires estimating a user’s gaze in 3D. While first applications, such as foveated rendering, hint at the compelling potential of combining HMDs and gaze, a systematic analysis is missing. To fill this gap, we present the first design space for gaze interaction on HMDs. Our design space covers human depth perception and technical requirements in two dimensions, aiming to identify challenges and opportunities for interaction design. As such, our design space provides a comprehensive overview and serves as an important guideline for researchers and practitioners working on gaze interaction on HMDs. We further demonstrate how our design space is used in practice by presenting two interactive applications: EyeHealth and XRay-Vision.
Understanding the validity of user behaviour in Virtual Environments (VEs) is critical, as they are increasingly used for serious health and safety applications such as predicting human behaviour and training for hazardous situations. This paper presents a comparative study exploring user behaviour in VE-based fire evacuation and investigates whether this behaviour is affected by the addition of thermal and olfactory simulation. Participants (N=43) were exposed to a virtual fire in an office building. Quantitative and qualitative analyses of participant attitudes and behaviours found deviations from those we would expect in real life (e.g. pre-evacuation actions), but also valid behaviours like fire avoidance. Potentially important differences were found between multisensory and audiovisual-only conditions (e.g. perceived urgency). We conclude that VEs have significant potential in safety-related applications, and that multimodality may afford additional uses in this context, but that the identified limitations of behavioural validity must be carefully considered to avoid misapplication of the technology.
Can a chatbot enable us to change our conceptions, to be critically reflective? To what extent can interaction with a technologically ‘minimal’ medium such as a chatbot evoke emotional engagement in ways that can challenge us to act on the world? In this paper, we discuss the design of a provocative bot, a ‘bot of conviction’, aimed at triggering conversations on complex topics (e.g. death, wealth distribution, gender equality, privacy) and, ultimately, soliciting specific actions from the user it converses with. We instantiate our design with a use case in the cultural sector, specifically a Neolithic archaeological site that acts as a stage of conversation on such hard themes. Our larger contributions include an interaction framework for bots of conviction, insights gained from an iterative process of participatory design and evaluation, and a vision for bot interaction mechanisms that can apply to the HCI community more widely.
We present FoldTronics, a 2D-cutting based fabrication technique for integrating electronics into 3D folded objects. The key idea is to cut and perforate a 2D sheet with a cutting plotter to make it foldable into a honeycomb structure; before folding the sheet into a 3D structure, users place the electronic components and circuitry onto the sheet. The fabrication process takes only a few minutes, allowing users to rapidly prototype functional interactive devices. The resulting objects are lightweight and rigid, thus allowing for weight-sensitive and force-sensitive applications. Finally, due to the nature of the honeycomb structure, the objects can be folded flat along one axis and thus can be efficiently transported in this compact form factor. We describe the structure of the foldable sheet, and present a design tool that enables users to quickly prototype the desired objects. We showcase a range of examples made with our design tool, including objects with integrated sensors and display elements.
Bitcoin blockchain technology is a distributed ledger whose nodes authorize transactions between anonymous parties. Its key actors are miners, who use computational power to solve mathematical problems that validate transactions. Mining shares the blockchain’s characteristics: it is a decentralized, transparent, and unregulated practice. It has been little explored in HCI, so we know little about miners’ motivations and experiences, and how these may impact different dimensions of trust. This paper reports on interviews with 20 bitcoin miners about their practices and trust challenges. Findings contribute to HCI theories by extending the exploration of blockchain’s characteristics relevant to trust with the competitiveness dimension underpinning the social organization of mining. We discuss the risks of collaborative mining due to centralization and dishonest administrators, and conclude with design implications highlighting the need for tools monitoring the distribution of rewards in collaborative mining, tools tracking data centers’ authorization and reputation, and tools supporting the development of decentralized pools.
JourneyCam: Exploring Experiences of Accessibility and Mobility among Powered Wheelchair Users through Video and Data
Recent HCI research has investigated how digital technologies might enable citizens to identify and express matters of civic concern. We extend this work by describing JourneyCam, a smartphone-based system that enables powered wheelchair users to capture video and sensor data about their experiences of mobility. Thirteen participants used JourneyCam to document journeys, after which the data they collected was used to support discussions around their experiences. Our findings highlight how the system facilitated the articulation of complex embodied experiences, and how the collected data might have particular value in surfacing these experiences to help inform urban design and policymaking. Participants valued the ways in which JourneyCam’s moving image and sensor data made hard-to-express sensations apparent, as well as how it enabled them to surface previously unrecognised issues. We conclude by highlighting future opportunities for how such tools might enable citizens to inform and influence civic governance.
Board games present accessibility barriers for players with visual impairment since they often employ visuals alone to communicate gameplay information. Our research focuses on board game accessibility for those with visual impairment. This paper describes a three-phase study conducted to develop board game accessibility adaptation guidelines. These guidelines were developed through a user-centered design approach that included in-depth interviews and a series of user studies using two adapted board games. Our findings indicate that participants with and without visual impairment were able to play the adapted games, exhibiting a balanced experience whereby participants had complete autonomy and were provided with equal chances of victory. Our paper also contributes to the game and accessibility communities through the development of adaptation guidelines that allow board games to become inclusive irrespective of a player’s visual impairment.
Wearables have emerged as an increasingly promising interactive platform, imbuing the human body with always-available computational capabilities. This unlocks a wide range of applications, including discreet information access, health monitoring, fitness, and fashion. However, unlike previous platforms, wearable electronics require structural conformity, must be comfortable for the wearer, and should be soft, elastic, and aesthetically appealing. We envision a future where electronics can be temporarily attached to the body (like bandages or party masks), but in functional and aesthetically pleasing ways. Towards this vision, we introduce ElectroDermis, a fabrication approach that simplifies the creation of highly functional and stretchable wearable electronics that are conformal and fully untethered, by discretizing rigid circuit boards into individual components. These individual components are wired together using stretchable electrical wiring and assembled on a spandex-blend fabric, providing high functionality in a robust, reusable form factor. We describe our system in detail, including our fabrication parameters and its operational limits, which we hope researchers and practitioners can leverage. We describe a series of example applications that illustrate the feasibility and utility of our system. Overall, we believe ElectroDermis offers a complementary approach to wearable electronics, one that places value on the notion of impermanence (i.e., unlike tattoos and implants) and better conforms to the dynamic nature of the human body.
Design ideation is a prime creative activity in design. However, it is challenging to support computationally due to its quickly evolving and exploratory nature. This paper presents cooperative contextual bandits (CCB) as a machine-learning method for interactive ideation support. A CCB can learn to propose domain-relevant contributions and adapt its exploration/exploitation strategy. We developed a CCB for an interactive design ideation tool that 1) suggests inspirational and situationally relevant materials (“may AI?”); 2) explores and exploits inspirational materials with the designer; and 3) explains its suggestions to aid reflection. The application case of digital mood board design is presented, wherein visual inspirational materials are collected and curated in collages. In a controlled study, 14 of 16 professional designers preferred the CCB-augmented tool. The CCB approach holds promise for ideation activities wherein adaptive and steerable support is welcome but designers must retain full outcome control.
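The exploration/exploitation behaviour the abstract describes can be illustrated with a minimal bandit sketch. Note that this uses plain UCB1 over discrete arms, without the context features or the cooperative mechanism of the paper's CCB formulation, and the arm names are invented for illustration.

```python
import math

class UCBSuggester:
    """Minimal UCB1 bandit over inspirational-material categories.

    A sketch of the explore/exploit idea only: the suggester tries each
    category once, then balances observed designer acceptance (mean reward)
    against uncertainty (the square-root bonus shrinks as an arm is tried
    more often)."""

    def __init__(self, arms):
        self.counts = {a: 0 for a in arms}
        self.rewards = {a: 0.0 for a in arms}
        self.t = 0

    def suggest(self):
        self.t += 1
        # Explore: try every arm at least once
        for arm, count in self.counts.items():
            if count == 0:
                return arm
        # Exploit with an exploration bonus (UCB1 rule)
        return max(self.counts, key=lambda a:
                   self.rewards[a] / self.counts[a]
                   + math.sqrt(2 * math.log(self.t) / self.counts[a]))

    def feedback(self, arm, reward):
        # Designer accepts (1.0) or rejects (0.0) the suggested material
        self.counts[arm] += 1
        self.rewards[arm] += reward
```

A CCB extends this kind of rule with contextual features of the current design state and cooperation with the designer's own exploration, which is what lets it stay "steerable" while the designer retains outcome control.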
Online question pools like LeetCode provide hands-on exercises of skills and knowledge. However, due to the large volume of questions and the intent of hiding the tested knowledge behind them, many users find it hard to decide where to start or how to proceed based on their goals and performance. To overcome these limitations, we present PeerLens, an interactive visual analysis system that enables peer-inspired learning path planning. PeerLens can recommend a customized, adaptable sequence of practice questions to individual learners, based on the exercise history of other users in a similar learning scenario. We propose a new way to model the learning path by submission types and a novel visual design to facilitate the understanding and planning of the learning path. We conducted a within-subject experiment to assess the efficacy and usefulness of PeerLens in comparison with two baseline systems. Experiment results show that users are more confident in arranging their learning path via PeerLens and find it more informative and intuitive.
Electronic Health Records Are More Than a Work Tool: Conflicting Needs of Direct and Indirect Stakeholders
The involvement of stakeholders is crucial when designing IT in highly complex application domains, such as healthcare. Stakeholder relationships are complex and can include strongly conflicting needs and value tensions. In this case study, we investigate the different perspectives of patients and physicians related to Patient Accessible Electronic Health Records (PAEHR) in Sweden. Generally, the introduction of this service has been heavily criticised by healthcare professionals, but welcomed by patients. The paper presents an innovative study design where themes from interviews with physicians are used as a lens to analyse survey data from patients. The findings highlight the necessity to understand stakeholders’ perspectives about other stakeholder groups by contrasting assumptions and expectations of physicians (indirect stakeholders) with experience of use by patients (direct stakeholders), and discusses practical challenges when designing large-scale health information systems.
Human-computer input performance inherently involves speed-accuracy tradeoffs: the faster users act, the more inaccurate those actions are. Therefore, comparing speeds and accuracies separately can result in ambiguous outcomes: does a fast but inaccurate technique perform better or worse overall than a slow but accurate one? For pointing, speed and accuracy have been unified for over 60 years as throughput (bits/s) (Crossman 1957, Welford 1968), but to date, no similar metric has been established for text entry. In this paper, we introduce a method-independent text entry throughput metric based on Shannon information theory (1948). To explore the practical usability of the metric, we conducted an experiment in which 16 participants typed on a laptop keyboard using different cognitive sets, i.e., speed-accuracy biases. Our results show that as a performance metric, text entry throughput remains relatively stable under different speed-accuracy conditions. We also evaluated a smartphone keyboard with 12 participants, finding that throughput varied least compared to other text entry metrics. This work allows researchers to characterize text entry performance with a single unified measure of input efficiency.
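The information-theoretic idea can be sketched as follows: treat presented and transcribed characters as the input and output of a noisy channel, estimate the mutual information per character, and scale by entry rate. This is only a generic illustration of Shannon throughput, not the specific metric derived in the paper, which accounts for method-dependent effects this sketch ignores.

```python
import math
from collections import Counter

def mutual_information(pairs):
    # I(X;Y) = sum over (x,y) of p(x,y) * log2( p(x,y) / (p(x) * p(y)) )
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

def throughput(presented, transcribed, seconds):
    # Bits of presented-text information transmitted per second
    pairs = list(zip(presented, transcribed))
    bits_per_char = mutual_information(pairs)
    return bits_per_char * len(pairs) / seconds

# Perfect transcription transmits the full character entropy;
# transcribing everything as the same character transmits nothing.
perfect = throughput("abab", "abab", 2.0)   # 1 bit/char * 2 char/s = 2 bits/s
garbled = throughput("abab", "aaaa", 2.0)   # 0 bits/s
```

The appeal of such a measure is visible even in this toy form: typing faster raises the chars-per-second factor while errors lower the bits-per-character factor, so the two effects trade off inside a single number.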
Advances in automotive sensing systems and speech interfaces provide new opportunities for smarter driving assistants and infotainment systems. For both safety and consumer satisfaction reasons, any new system which interacts with drivers must do so at appropriate times. We asked 63 drivers, “Is now a good time?” to receive non-driving information during a 50-minute drive. Analyzing 2,734 responses together with synchronized automotive and video data, we show that while easily accessible automotive data improves the chances of choosing a good time, certain nuances in the problem require a richer understanding of the driver and environment states in order to achieve higher performance. We illustrate several of these nuances with quantitative and qualitative analyses to contribute to the understanding of how to design a system that might simultaneously minimize the risk of interacting at a bad time while maximizing the window of allowable interruption.
Recent HCI work on digital games has highlighted the advantage, for designers, of taking a 1st person perspective on the human body (referring to the phenomenological “lived” body) and a 3rd person perspective (the material “fleshy” body, similar to looking in the mirror). This is useful when designing bodily play; however, we note that there is little game design discussion of the 2nd person social perspective, which highlights the unique interplay between human bodies. To guide designers interested in supporting players to experience their bodies as play, we describe how game designers can engage with the 2nd person social perspective through a set of design tactics based on four of our own play systems. With this work, we hope to aid designers in embracing the 2nd person perspective so that more people can benefit from engaging their bodies through games and play.
Predictions of people’s behaviour increasingly drive interactions with a new generation of IoT services designed to support everyday life in the home, from shopping to heating. Based on the premise that such automation is difficult due to the contingent nature of people’s practices, in this work we explore the nature of these contingencies in depth. We have designed and conducted a technology probe that made use of simple linear predictions as a provocation, and invited people to track the life of their household essentials over a two-month period. Through a mixed-method approach we demonstrate the challenges of simple predictions, and in turn identify eight categories of contingencies that influenced prediction accuracy. We discuss strategies for how designers of future predictive IoT systems may take the contingencies into account by removing, hiding, revealing, managing, or exploiting the system uncertainty at the core of the issue.
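A "simple linear prediction" of the kind the probe used as a provocation might look like the sketch below: fit a least-squares line to remaining-stock readings and extrapolate to the day the line hits zero. This is our own minimal illustration of linear run-out prediction, not the probe's actual implementation; the input values are invented.

```python
def predict_runout_day(readings):
    """Least-squares line through (day, remaining) readings,
    extrapolated to the day when the remaining stock hits zero."""
    n = len(readings)
    mean_day = sum(d for d, _ in readings) / n
    mean_rem = sum(r for _, r in readings) / n
    slope = sum((d - mean_day) * (r - mean_rem) for d, r in readings) / \
            sum((d - mean_day) ** 2 for d, _ in readings)
    intercept = mean_rem - slope * mean_day
    if slope >= 0:
        # Stock not decreasing: exactly the kind of contingency
        # (restocking, sharing, pausing use) that breaks simple predictions
        return None
    return -intercept / slope

# Steady usage: 100 units, ~10 used per day -> runs out around day 10
runout = predict_runout_day([(0, 100), (1, 90), (2, 80)])
```

The eight categories of contingency the study identifies are precisely the household behaviours that make real readings deviate from such a straight line, which is why the paper argues for removing, hiding, revealing, managing, or exploiting that uncertainty rather than assuming it away.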
Visual notifications are integral to interactive computing systems. With large displays, however, much of the content is in the user’s visual periphery, where human capacity to notice visual effects is diminished. One design strategy for enhancing noticeability is to combine visual features, such as motion and colour. Yet little is known about how feature combinations affect noticeability across the visual field, or about how peripheral noticeability changes when a user’s primary task involves the same visual features as the notification. We addressed these questions by conducting two studies. Results of the first study showed that the noticeability of feature combinations was approximately equal to that of the better of the individual features. Results of the second study suggest that there can be interference between the features of primary tasks and the visual features of the notifications. Our findings contribute to a better understanding of how visual features operate when used as peripheral notifications.
Current haptic feedback techniques on handheld devices are applied to the finger pad or the palm of the user. These state-of-the-art approaches are coarse-grained and tend to be intrusive rather than subtle. In contrast, we present a new feedback technique that applies stimuli around the periphery of the finger pulp, demonstrating how this can provide rich, nuanced haptic information. We use a reconfigurable haptic device employing a ferromagnetic marble for back-of-the-device handheld use which, for the first time, delivers localised stimulation to the periphery of the distal phalanx without instrumenting the user. We present the design space afforded by this new technique and evaluate the human factors of finger-peripheral touch interaction in a controlled user study. We report results with marbles of different diameters and speeds, and with combinations of poking, lateral vibration and patterns; present the resulting design guidelines for finger-periphery haptic feedback; and illustrate its potential with use-case scenarios.
Development of a Checklist for the Prevention of Intradialytic Hypotension in Hemodialysis Care: Design Considerations Based on Activity Theory
Hemodialysis is life-saving therapy for end-stage renal disease; yet, 20% of hemodialysis sessions are complicated by intradialytic hypotension (“IDH”). There is a need for approaches to preventing IDH that account for their implementation contexts. Using Activity Theory, we outline the design of a digital diagnostic checklist to identify patients at risk of IDH. Checklists were chosen a priori as an outcome due to prior evidence of effectiveness. Drawing on individual interviews with 20 clinicians and three focus groups with 17 patients, we describe four activity systems within hemodialysis care. We then outline a novel design process that includes co-design activities with clinicians, and four rapid-cycle iterations that progressively incorporated activity system elements into checklist design. We contribute a new type of checklist design to HCI: one that supports diagnostic thinking rather than consistent task completion. We further broaden checklist design by including a formal role for patients in checklist completion.
Preemptive Action: Accelerating Human Reaction using Electrical Muscle Stimulation Without Compromising Agency
We enable preemptive force-feedback systems to speed up human reaction time without fully compromising the user’s sense of agency. Typically, these interfaces actuate by means of electrical muscle stimulation (EMS) or mechanical actuators; they preemptively move the user to perform a task, such as to improve movement performance (e.g., EMS-assisted drumming). Unfortunately, when using preemptive force-feedback, users do not feel in control and lose their sense of agency. We address this by actuating the user’s body, using EMS, within a particular time window (160 ms after the visual stimulus), which we found to speed up reaction time by 80 ms in our first study. With this preemptive timing, when the user and system move congruently, the user feels that they initiated the motion, yet their reaction time is faster than usual. As our second study demonstrated, this particular timing significantly increased agency when compared to the current practice in EMS-based devices. We conclude by illustrating, using examples from the HCI literature, how to leverage our findings to provide more agency to automated haptic interfaces.
Experts in different domains rely increasingly on simulation models of complex processes to reach insights, make decisions, and plan future projects. These models are often used to study possible trade-offs, as experts try to optimise multiple conflicting objectives in a single investigation. Understanding all the model intricacies, however, is challenging for a single domain expert. We propose a simple approach to support multiple experts when exploring complex model results. First, we reduce the model exploration space, then present the results on a shared interactive surface, in the form of a scatterplot matrix and linked views. To explore how multiple experts analyse trade-offs using this setup, we carried out an observational study focusing on the link between expertise and insight generation during the analysis process. Our results reveal the different exploration strategies and multi-storyline approaches that domain experts adopt during trade-off analysis, and inform our recommendations for collaborative model exploration systems.
Protection, Productivity and Pleasure in the Smart Home: Emerging Expectations and Gendered Insights from Australian Early Adopters
Interest in and uptake of smart home technologies have been lower than anticipated, particularly among women. Reporting on an academic-industry partnership, we present findings from an ethnographic study with 31 Australian smart home early adopters. The paper analyses these households’ experiences in relation to three concepts central to Intel’s ambient computing vision for the home: protection, productivity and pleasure, or ‘the 3Ps’. We find that protection is a form of caregiving; productivity provides ‘small conveniences’, energy savings and multi-tasking possibilities; and pleasure is derived from ambient and aesthetic features, and the joy of ‘playing around’ with tech. Our analysis identifies three design challenges and opportunities for the smart home: internal threats to household protection; feminine desires for the smart home; and increased ‘digital housekeeping’. We conclude by suggesting how HCI designers can and should respond to these gendered challenges.
The study of rumors has garnered wider attention as regulators and researchers turn towards problems of misinformation on social media. One goal has been to discover and implement mechanisms that promote healthy information ecosystems. Classically defined as regarding ambiguous situations, rumors pose the unique difficulty of intrinsic uncertainty around their veracity. Further complicating matters, rumors can serve the public when they do spread valuable true information. To address these challenges, we develop an approach that reifies “rumor proportions” as central to the theory of systems for managing rumors. We use this lens to advocate for systems that, rather than aiming to stifle rumors entirely or aiming to stop only false rumors, aim to prevent rumors from growing out of proportion relative to normative benchmark representations of intrinsic uncertainty.
In this paper, we establish the foundations of mechanisms that are composed of cell structures—known as metamaterial mechanisms. Such metamaterial mechanisms were previously shown to implement complete mechanisms in the cell structure of a 3D printed material, without the need for assembly. However, their design is highly challenging. A mechanism consists of many cells that are interconnected and impose constraints on each other. This leads to non-obvious and non-linear behavior of the mechanism, which impedes user design. In this work, we investigate the underlying topological constraints of such cell structures and their influence on the resulting mechanism. Based on these findings, we contribute a computational design tool that automatically creates a metamaterial mechanism from user-defined motion paths. This tool is only feasible because our novel abstract representation of the global constraints greatly reduces the search space of possible cell arrangements.
Life transitions are an integral part of the human experience. However, research shows that lack of support during life transitions can result in adverse health outcomes. To better understand the support needs and structures of low-income women during the transition to motherhood, we interviewed 10 women and their 14 supporters during the transition. Our findings suggest that the support needs and structures of mothers evolve during the transition, and that they also vary by socio-economic context. In this paper, we detail our study design and findings. Informed by our findings, we posit that not all life transitions are the same, and that, therefore, the optimal support intervention point varies for different life transitions. Currently there are no tools available to identify optimal support intervention points during life transitions. To this end, we also introduce a preliminary framework – the Strength-Stress-Analysis (SSA) framework – to identify optimal support intervention points during life transitions.
Online Grocery Delivery Services: An Opportunity to Address Food Disparities in Transportation-scarce Areas
Online grocery delivery services present new opportunities to address food disparities, especially in underserved areas. However, such services have not been systematically evaluated. This study evaluates such services’ potential to provide healthy-food access and influence healthy-food purchases among individuals living in transportation-scarce and low-resource areas. We conducted a pilot experiment with 20 participants, consisting of a randomly assigned group’s 1-month use of an online grocery delivery service, a control group’s 1-month collection of grocery receipts, and a set of semi-structured interviews. We found that online grocery delivery services (a) serve as a feasible model for healthy-food access if they are affordable and amenable to multiple payment forms and (b) could lead to healthier selections. We contribute policy recommendations to bolster the affordability of healthy-food access and design opportunities to promote healthy foods, to support the adoption and use of these services among low-resource and transportation-scarce groups.
Personas are powerful tools for designing technology and envisioning its usage. They are widely used to imagine archetypal users around whom to orient design work. We have been exploring co-created personas as a technique to use in co-design with users who have diverse needs. Our vision was that this would broaden the demographic and liberate co-designers from their personal relationship with a health condition. This paper reports three studies where we investigated using co-created personas with people who had Parkinson’s disease, dementia or aphasia. Observational data of co-design sessions were collected and analysed. Findings revealed that the co-created personas encouraged users with diverse needs to engage with co-designing. Importantly, they also afforded additional benefits, including empowering users within a more accessible design process. Reflecting on the outcomes from the different user groups, we conclude with a discussion of the potential for co-created personas to be applied more broadly.
Eating disorders (EDs) are a worldwide public health concern that impact approximately 10% of the U.S. population. Our previous research characterized these behaviors across online spaces. These characterizations have used clinical terminology, and their lexical variants, to identify ED content online. However, previous HCI research on EDs (including our own) suffers from a lack of gender and cultural diversity. In this paper, we designed a follow-up study of online ED characterizations, extending our previous methodologies to focus specifically on male/masculine-related content. We highlight the similarities and differences found in the terminology utilized and the media archetypes associated with the social media content. Finally, we discuss other considerations, highlighted through our analysis of the male-related content, that are missing from previous research.
This paper provides analysis and insight from a collaborative process with a Canadian sex worker rights organization called Stella, l’amie de Maimie, where we reflect on the use of and potential for digital technologies in service delivery. We analyze the Bad Client and Aggressor List – a reporting tool co-produced by sex workers in the community and Stella staff to reduce violence against sex workers. We analyze its current and potential future formats as an artefact for communication, in a context of sex work criminalization and the exclusion of sex workers from traditional routes for reporting violence and accessing governmental systems for justice. This paper addresses a novel aspect of HCI research that relates to digital technologies and social justice. Reflecting on the Bad Client and Aggressor List, we discuss how technologies can interact with justice-oriented service delivery and develop three implications for design.
Although people frequently seek mentoring or advice for their career, most mentoring is performed in person. Little research has examined the nature and quality of career mentoring online. To address this gap, we study how people use online Q&A forums for career advice. We develop a taxonomy of career advice requests based on a qualitative analysis of posts in a career-related online forum, identifying three key types: best practices, career threats, and time-sensitive requests. Our quantitative analysis of responses shows that both requesters and external viewers value general information, encouragement, and guidance, but not role modeling. We found no relation between the type of requests and features of responses, nor differences in responses valued by requesters versus external viewers. We present design recommendations for supporting online career advice exchange.
Cognitive Aids in Acute Care: Investigating How Cognitive Aids Affect and Support In-hospital Emergency Teams
Cognitive aids – artefacts that support a user in completing a task at the time of use – have attracted great interest as a way to support healthcare staff during medical emergencies. However, the mechanisms of how cognitive aids support or affect staff remain understudied. We describe the iterative development of a tablet-based cognitive aid application to support in-hospital resuscitation team leaders. We report a summative evaluation of two different versions of the application. Finally, we outline the limitations of current explanations of how cognitive aids work and suggest an approach based on embodied cognition. We discuss how cognitive aids alter the task of the team leader (distributed cognition), the importance of the present team situation (socially situated), and the result of the interaction between mind and environment (sensorimotor coupling). Understanding and considering the implications of introducing cognitive aids may help to increase the acceptance and effectiveness of cognitive aids and eventually improve patient safety.
Individuals with multiple chronic conditions (MCC) experience the overwhelming burden of treating MCC and frequently disagree with their providers on priorities for care. Aligning self-care with patients’ values may improve healthcare for these patients. However, patients’ values are not routinely discussed in clinical conversations and patients may not actively share this information with providers. In a qualitative field study, we interviewed 15 patients in their homes to investigate techniques that encourage patients to articulate values, self-care, and how they relate. Study activities facilitated self-reflection on values and self-care and produced varying responses, including: raising consciousness, evolving perspectives, identifying misalignments, and considering changes. We discuss how our findings extend prior work on supporting reflection in HCI and inform the design of tools for improving care for people with MCC.
While problem solving is a crucial aspect of programming, few learning opportunities in computer science focus on teaching problem-solving skills like planning. I