underlined indicates students/postdocs I directly supervised;
† indicates equal contribution.

2024

[J36]

Yue Lyu, Di Liu, Pengcheng An, Xin Tong, Huan Zhang, Keiko Katsuragawa, Jian Zhao. EMooly: Supporting Autistic Children in Collaborative Social-Emotional Learning with Caregiver Participation through Interactive AI-infused and AR Activities. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 8(4), pp. 203:1-203:36, 2024.

Abstract: Children with autism spectrum disorder (ASD) have social-emotional deficits that lead to difficulties in recognizing emotions as well as understanding and responding to social interactions. This study presents EMooly, a tablet game that actively involves caregivers and leverages augmented reality (AR) and generative AI (GenAI) to enhance social-emotional learning for autistic children. Through a year of collaborative effort with five domain experts, we developed EMooly, which engages children through personalized social stories, interactive and fun activities, and enhanced caregiver participation, focusing on emotion understanding and facial expression recognition. A controlled study with 24 autistic children and their caregivers showed that, compared with a baseline, EMooly significantly improved children's emotion recognition skills, and its novel features were preferred and appreciated. EMooly demonstrates the potential of AI and AR in enhancing social-emotional development for autistic children via prompt-driven personalization and engagement, and highlights the importance of caregiver involvement for optimal learning outcomes.

[J35]

Pengcheng An, Chaoyu Zhang, Haichen Gao, Ziqi Zhou, Yage Xiao, Jian Zhao. AniBalloons: Animated Chat Balloons as Affective Augmentation for Social Messaging and Chatbot Interaction. International Journal of Human-Computer Studies, 194, pp. 103365:1-103365:16, 2025 (Accepted in 2024).

Abstract: Despite being prominent and ubiquitous, message-based communication is limited in nonverbally conveying emotions. Besides emoticons or stickers, messaging users continue to seek richer options for affective communication. Recent research has explored using chat balloons' shape and color to communicate emotional states. However, little work has explored whether and how chat-balloon animations could be designed to convey emotions. We present the design of AniBalloons, 30 chat-balloon animations conveying Joy, Anger, Sadness, Surprise, Fear, and Calmness. Using AniBalloons as a research vehicle, we conducted three studies to assess the animations' affect recognizability and emotional properties (N = 40), and to probe how animated chat balloons would influence communication experience in typical scenarios, including instant messaging (N = 72) and chatbot service (N = 70). Our exploration contributes a set of chat-balloon animations to complement nonverbal affective communication for a range of text-message interfaces, and empirical insights into how animated chat balloons might mediate particular conversation experiences (e.g., perceived interpersonal closeness, or chatbot personality).

[J34]

Shaikh Shawon Arefin Shimon, Ali Neshati, Junwei Sun, Qiang Xu, Jian Zhao. Exploring Uni-manual Around Ear Off-Device Gestures for Earables. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 8(1), pp. 3:1-3:29, 2024.

Abstract: The small form factor of earable (i.e., ear-mounted wearable) devices limits their physical input space. Off-device earable inputs in alternate mid-air and on-skin around-ear interaction spaces using uni-manual gestures can address this input space limitation. Segmenting these alternate interaction spaces to create multiple gesture regions for reusing off-device gestures can expand the earable input vocabulary by a large margin. Although prior earable interaction research has explored off-device gesture preferences and recognition techniques in such interaction spaces, supporting gesture reuse over multiple gesture regions needs further exploration. We collected and analyzed 7560 uni-manual gesture motion samples from 18 participants to explore earable gesture reuse by segmentation of on-skin and mid-air spaces around the ear. Our results show that gesture performance degrades significantly beyond 3 mid-air and 5 on-skin around-ear gesture regions for different uni-manual gesture classes (e.g., swipe, pinch, tap). We also present qualitative findings on the most and least preferred regions (and associated boundaries) by end users for different uni-manual gesture shapes across both interaction spaces for earable devices. Our results complement earlier elicitation studies and interaction technologies for earables, helping expand the gestural input vocabulary and potentially drive future commercialization of such devices.

[C39]

Liwei Wu, Yilin Zhang, Justin Leung, Jingyi Gao, April Li, Jian Zhao. Planar or Spatial: Exploring Design Aspects and Challenges for Presentations in Virtual Reality with No-coding Interface. Proceedings of the ACM Interactive Surfaces and Spaces Conference, pp. 528:1-528:23, 2024.

Abstract: The proliferation of virtual reality (VR) has led to its increasing adoption as an immersive medium for delivering presentations, distinct from other VR experiences like games and 360-degree videos in that it shares information in richly interactive environments. However, creating engaging VR presentations remains a challenging and time-consuming task for users, hindering the full realization of VR presentations' capabilities. This research aims to explore the potential of VR presentations, analyze users' opinions, and investigate these aspects by providing a user-friendly no-coding authoring tool. Through an examination of popular presentation software and interviews with seven professionals, we identified five design aspects and four design challenges for VR presentations. Based on the findings, we developed VRStory, a prototype for presentation authoring without coding, to explore the design aspects and strategies for addressing the challenges. VRStory offers a variety of predefined and customizable VR elements, as well as modules for layout design, navigation control, and asset generation. A user study was then conducted with 12 participants to investigate their opinions and authoring experience with VRStory. Our results demonstrated that, while acknowledging the advantages of immersive and spatial features in VR, users often hold a consistent mental model based on traditional 2D presentations and may still prefer planar and static formats in VR for better accessibility and efficient communication. We finally share our learned design considerations for future development of VR presentation tools, emphasizing the importance of balancing the promotion of immersive features with ensuring accessibility.

[C38]

Temiloluwa Paul Femi-Gege, Matthew Brehmer, Jian Zhao. VisConductor: Affect-Varying Widgets for Animated Data Storytelling in Gesture-Aware Augmented Video Presentation. Proceedings of the ACM Interactive Surfaces and Spaces Conference, pp. 531:1-531:22, 2024.

Abstract: Augmented video presentation tools provide a natural way for presenters to interact with their content, resulting in engaging experiences for remote audiences, such as when a presenter uses hand gestures to manipulate and direct attention to visual aids overlaid on their webcam feed. However, authoring and customizing these presentations can be challenging, particularly when presenting dynamic data visualizations (i.e., animated charts). To this end, we introduce VisConductor, an authoring and presentation tool that equips presenters with the ability to configure gestures that control affect-varying visualization animations, foreshadow visualization transitions, direct attention to notable data points, and animate the disclosure of annotations. These gestures are integrated into configurable widgets, allowing presenters to trigger content transformations by executing gestures within widget boundaries, with feedback visible only to them. Altogether, our palette of widgets provides a level of flexibility appropriate for improvisational presentations and ad-hoc content transformations, such as when responding to audience engagement. To evaluate VisConductor, we conducted two studies focusing on presenters (N = 11) and audience members (N = 11). Our findings indicate that the approach taken with VisConductor can facilitate interactive and engaging remote presentations with dynamic visual aids. Reflecting on our findings, we also offer insights to inform the future of augmented video presentation tools.

[C37]

Ryan Yen, Jian Zhao. Reifying the Reuse of User-AI Conversational Memories. Proceedings of the ACM Symposium on User Interface Software and Technology, pp. 58:1-58:22, 2024.

Abstract: As users engage more frequently with AI conversational agents, conversations may exceed the agents' 'memory' capacity, leading to failures in correctly leveraging certain memories for tailored responses. However, to find past memories that can be reused or referenced, users need to retrieve relevant information across various conversations and articulate to the AI their intention to reuse these memories. To support this process, we introduce Memolet, an interactive object that reifies memory reuse. Users can directly manipulate Memolet to specify which memories to reuse and how to use them. We developed a system demonstrating Memolet's interaction across various memory reuse stages, including memory extraction, organization, prompt articulation, and generation refinement. We examine the system's usefulness with an N=12 within-subject study and provide design implications for future systems that support user-AI conversational memory reuse.

[C36]

Ryan Yen, Jiawen Stefanie Zhu, Sangho Suh, Haijun Xia, Jian Zhao. CoLadder: Supporting Programmers with Hierarchical Code Generation in Multi-Level Abstraction. Proceedings of the ACM Symposium on User Interface Software and Technology, pp. 11:1-11:20, 2024.

Abstract: This paper adopted an iterative design process to gain insights into programmers' strategies when using LLMs for programming. We proposed CoLadder, a novel system that supports programmers by facilitating hierarchical task decomposition, direct code segment manipulation, and result evaluation during prompt authoring. A user study with 12 experienced programmers showed that CoLadder is effective in helping programmers externalize their problem-solving intentions flexibly, improving their ability to evaluate and modify code across various abstraction levels, from their task's goal to final code implementation.

[C35]

Maoyuan Sun, Yuanxin Wang, Courtney Bolton, Yue Ma, Tianyi Li, Jian Zhao. Investigating User Estimation of Missing Data in Visual Analysis. Proceedings of the Graphics Interface Conference, pp. 30:1-30:13, 2024.

Abstract: Missing data is a pervasive issue in real-world analytics, stemming from a multitude of factors (e.g., device malfunctions and network disruptions), making it a ubiquitous challenge in many domains. Misperception of missing data impacts decision-making and causes severe consequences. To mitigate risks from missing data and facilitate proper handling, computing methods (e.g., imputation) have been studied, which often culminate in the visual representation of data for analysts to further check. Yet, the influence of these computed representations on user judgment regarding missing data remains unclear. To study potential influencing factors and their impact on user judgment, we conducted a crowdsourcing study. We controlled 4 factors: the distribution, imputation, and visualization of missing data, and the prior knowledge of data. We compared users' estimations of missing data with computed imputations under different combinations of these factors. Our results offer useful guidance for visualizing missing data and their imputations, which informs future studies on developing trustworthy computing methods for visual analysis of missing data.

[C34]

Xinyu Shi, Mingyu Liu, Ziqi Zhou, Ali Neshati, Ryan Rossi, Jian Zhao. Exploring Interactive Color Palettes for Abstraction-Driven Exploratory Image Colorization. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 146:1-146:16, 2024.

Abstract: Color design is essential in areas such as product, graphic, and fashion design. However, current tools like Photoshop, with their concrete-driven color manipulation approach, often stumble during early ideation, favoring polished end results over initial exploration. We introduced Mondrian as a test-bed for an abstraction-driven approach that uses interactive color palettes for image colorization. Through a formative study with six design experts, we selected three design options for visual abstractions in color design and developed Mondrian, where humans work with abstractions and AI manages the concrete aspects. We carried out a user study to understand the benefits and challenges of each abstraction format and to compare Mondrian with Photoshop. A survey involving 100 participants further examined the influence of each abstraction format on color composition perceptions. Findings suggest that interactive visual abstractions encourage a non-linear exploration workflow and an open mindset during ideation, thus providing better creative affordance.

[C33]

Xinyu Shi, Yinghou Wang, Yun Wang, Jian Zhao. Piet: Facilitating Color Authoring for Motion Graphics Video. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 148:1-148:17, 2024.
 Best Paper

Abstract: Motion graphic (MG) videos are effective and compelling for presenting complex concepts through animated visuals, and colors are important to convey desired emotions, maintain visual continuity, and signal narrative transitions. However, current video color authoring workflows are fragmented, lacking contextual previews, hindering rapid theme adjustments, and misaligning with designers' progressive authoring flows. To bridge this gap, we introduce Piet, the first tool tailored for MG video color authoring. Piet features an interactive palette to visually represent color distributions, support controllable focus levels, and enable quick theme probing via grouped color shifts. We interviewed 6 domain experts to identify the frustrations with current tools and to inform the design of Piet. An in-lab user study with 13 expert designers showed that Piet effectively simplified MG video color authoring and reduced the friction in creative color theme exploration.

[C32]

Li Feng, Ryan Yen, Yuzhe You, Mingming Fan, Jian Zhao, Zhicong Lu. CoPrompt: Supporting Prompt Sharing and Referring in Collaborative Natural Language Programming. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 934:1-934:21, 2024.

Abstract: Natural language (NL) programming has become more approachable due to the powerful code-generation capability of large language models (LLMs). This shift to using NL to program enhances collaborative programming by reducing communication barriers and context-switching among programmers from varying backgrounds. However, programmers may face challenges during prompt engineering in a collaborative setting, as they need to stay aware of their collaborators' progress and intents. In this paper, we aim to investigate ways to assist programmers' prompt engineering in a collaborative context. We first conducted a formative study to understand the workflows and challenges of programmers when using NL for collaborative programming. Based on our findings, we implemented a prototype, CoPrompt, to support collaborative prompt engineering by providing referring, requesting, sharing, and linking mechanisms. Our user study indicates that CoPrompt assists programmers in comprehending collaborators' prompts and building on their collaborators' work, reducing repetitive updates and communication costs.

[C31]

Pengcheng An, Jiawen Stefanie Zhu, Zibo Zhang, Yifei Yin, Qingyuan Ma, Che Yan, Linghao Du, Jian Zhao. EmoWear: Exploring Emotional Teasers for Voice Message Interaction on Smartwatches. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 279:1-279:16, 2024.

Abstract: Voice messages, by nature, prevent users from gauging the emotional tone without fully diving into the audio content. This hinders the shared emotional experience at the pre-retrieval stage. Research has scarcely explored "Emotional Teasers," pre-retrieval cues offering a glimpse into an awaiting message's emotional tone without disclosing its content. We introduce EmoWear, a smartwatch voice messaging system enabling users to apply 30 animation teasers on message bubbles to reflect emotions. EmoWear eases senders' choices by prioritizing emotions based on semantic and acoustic processing. EmoWear was evaluated in comparison with a mirroring system using color-coded message bubbles as emotional cues (N=24). Results showed that EmoWear significantly enhanced the emotional communication experience in both receiving and sending messages. The animated teasers were considered intuitive and valued for their diverse expressions. Desirable interaction qualities and practical implications are distilled for future design. We thereby contribute both a novel system and empirical knowledge concerning emotional teasers for voice messaging.

[C30]

Xizi Wang, Ben Lafreniere, Jian Zhao. Exploring Visualizations for Precisely Guiding Bare Hand Gestures in Virtual Reality. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 636:1-636:19, 2024.

Abstract: Bare-hand interaction in augmented or virtual reality (AR/VR) systems, while intuitive, often results in errors and frustration. However, existing methods, such as a static icon or a dynamic tutorial, can only guide simple and coarse hand gestures and lack corrective feedback. This paper explores various visualizations for enhancing precise hand interaction in VR. Through a comprehensive two-part formative study with 11 participants, we identified four types of essential information for visual guidance and designed different visualizations that manifest these information types. We further distilled four visual designs and conducted a controlled lab study with 15 participants to assess their effectiveness for various single- and double-handed gestures. Our results demonstrate that visual guidance significantly improved users' gesture performance, reducing time and workload while increasing confidence. Moreover, we found that the visualizations did not disrupt most users' immersive VR experience or their perceptions of hand tracking and gesture recognition reliability.

[W18]

Ryan Yen, Jian Zhao, Daniel Vogel. Code Shaping: Iterative Code Editing with Free-form Sketching. Adjunct Proceedings of the ACM Symposium on User Interface Software and Technology (Poster), pp. 101:1-101:3, 2024.
 Jury Best Poster Honorable Mention

Abstract: We present an initial step towards building a system for programmers to edit code using free-form sketch annotations drawn directly onto editor and output windows. Using a working prototype system as a technical probe, an exploratory study (N = 6) examines how programmers sketch to annotate Python code to communicate edits for an AI model to perform. The results reveal personalized workflow strategies and how similar annotations vary in abstractness and intention across different scenarios and users.

[W17]

Ryan Yen, Yelizaveta Brus, Leyi Yan, Jimmy Lin, Jian Zhao. Scholarly Exploration via Conversations with Scholars-Papers Embedding. Proceedings of the IEEE Visualization and Visual Analytics Conference (Poster), 2024.

Abstract: The rapid expansion of academic publications across various sub-domains necessitates advanced visual analytics systems to help researchers efficiently navigate and explore the academic landscape. Recent advancements in retrieval-augmented generation enable users to engage with data through complex, context-driven question-answering capabilities. However, existing approaches fail to provide adequate user control over the retrieval and generation process and do not reconcile visualizations with question-answering mechanisms. To address these limitations, we propose a system that supports contextually aware, controllable, and interactive exploration of academic publications and scholars. By enabling bidirectional interaction between the question-answering components and Scholets, 2D projections of scholarly works' embeddings, our system allows users to textually and visually interact with large collections of publications. We report the system design and demonstrate its utility through an exploratory study with graduate researchers.

[W16]

Ryan Yen, Nicole Sultanum, Jian Zhao. To Search or To Gen? Exploring the Synergy between Generative AI and Web Search in Programming. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, pp. 327:1-327:8, 2024.

Abstract: The convergence of generative AI and web search is reshaping problem-solving for programmers. However, the lack of understanding regarding their interplay in the information-seeking process often leads programmers to perceive them as alternatives rather than complementary tools. To analyze this interaction and explore their synergy, we conducted an interview study with eight experienced programmers. Drawing from the results and literature, we have identified three major challenges and proposed three decision-making stages, each with its own relevant factors. Additionally, we present a comprehensive process model that captures programmers' interaction patterns. This model encompasses decision-making stages, the information-foraging loop, and cognitive activities during system interaction, offering a holistic framework to comprehend and optimize the use of these convergent tools in programming.

[W15]

Jiawen Stefanie Zhu, Zibo Zhang, Jian Zhao. Facilitating Mixed-Methods Analysis with Computational Notebooks. Proceedings of the First Workshop on Human-Notebook Interactions, 2024.

Abstract: Data exploration is an important aspect of the workflow of mixed-methods researchers, who conduct both qualitative and quantitative analysis. However, few tools currently exist that adequately support both types of analysis simultaneously, forcing researchers to context-switch between different tools and increasing their mental burden when integrating the results. To address this gap, we propose a unified environment that facilitates mixed-methods analysis in a computational notebook-based setting. We conduct a scenario study with three HCI mixed-methods researchers to gather feedback on our design concept and to understand our users' needs and requirements.

[W14]

Yue Lyu, Pengcheng An, Huan Zhang, Keiko Katsuragawa, Jian Zhao. Designing AI-Enabled Games to Support Social-Emotional Learning for Children with Autism Spectrum Disorders. Proceedings of the Second Workshop on Child-Centred AI, 2024.

Abstract: Children with autism spectrum disorder (ASD) experience challenges in grasping social-emotional cues, which can result in difficulties in recognizing emotions and in understanding and responding to social interactions. Social-emotional intervention is an effective method to improve emotional understanding and facial expression recognition among individuals with ASD. Existing work emphasizes the importance of personalizing interventions to meet individual needs and motivate engagement for optimal outcomes in daily settings. We design a social-emotional game for children with ASD that generates personalized stories by leveraging recent advances in artificial intelligence. Via a co-design process with five domain experts, this work offers several design insights into developing future AI-enabled gamified systems for families with autistic children. We also propose a fine-tuned AI model and a dataset of social stories for different basic emotions.

[W13]

Negar Arabzadeh, Kiarash Golzadeh, Christopher Risi, Charles Clarke, Jian Zhao. KnowFIRES: a Knowledge-graph Framework for Interpreting Retrieved Entities from Search. Advances in Information Retrieval (Proceedings of ECIR 2024, Demo), pp. 182-188, 2024.

Abstract: Entity retrieval is essential in information access domains where people search for specific entities, such as individuals, organizations, and places. While entity retrieval is an active research topic in Information Retrieval, its explainability and interpretability need to be explored more extensively. KnowFIRES addresses this by offering a knowledge graph-based visual representation of entity retrieval results, focusing on contrasting different retrieval methods. KnowFIRES allows users to better understand these differences through the juxtaposition and superposition of retrieved sub-graphs.

2023

[J33]

Xuejun Du, Pengcheng An, Justin Leung, April Li, Linda Chapman, Jian Zhao. DeepThInk: Designing and Probing Human-AI Co-Creation in Digital Art Therapy. International Journal of Human-Computer Studies, 181, pp. 103139:1-103139:17, 2024 (Accepted in 2023).

Abstract: Art therapy has been an essential form of psychotherapy to facilitate psychological well-being, and it has been promoted and transformed by recent technological advances into digital art therapy. However, the potential of digital technologies has not been fully leveraged; in particular, applying AI technologies in digital art therapy is still under-explored. In this paper, we propose an AI-infused art-making system, DeepThInk, to investigate the potential of introducing a human-AI co-creative process into art therapy, developed by collaborating with five experienced registered art therapists over ten months. DeepThInk offers a range of tools that can lower the expertise threshold for art-making while improving users' creativity and expressivity. We gathered insights on DeepThInk through expert reviews and a two-part user evaluation with both synchronous and asynchronous therapy setups. This longitudinal iterative design process helped us derive and contextualize design principles of human-AI co-creation for art therapy, shedding light on future design in relevant domains.

[J32]

Yue Lyu, Pengcheng An, Yage Xiao, Zibo Zhang, Huan Zhang, Keiko Katsuragawa, Jian Zhao. Eggly: Designing Mobile Augmented Reality Neurofeedback Training Games for Children with Autism Spectrum Disorder. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 7(2), pp. 67:1-67:29, 2023.

Abstract: Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder that affects how children communicate and relate to other people and the world around them. Emerging studies have shown that neurofeedback training (NFT) games are an effective and playful intervention to enhance social and attentional capabilities for autistic children. However, NFT is primarily available in clinical settings, which are hard to scale. Moreover, the intervention demands deliberately designed gamified feedback that offers fun and enjoyment, an area in which little knowledge has been accumulated in the HCI community. Through a ten-month iterative design process with four domain experts, we developed Eggly, a mobile NFT game based on a consumer-grade EEG headband and a tablet. Eggly uses novel augmented reality (AR) techniques to offer engagement and personalization, enhancing children's training experience. We conducted two field studies (a single-session study and a three-week multi-session study) with a total of five autistic children to assess Eggly in practice at a special education center. Both quantitative and qualitative results indicate the effectiveness of the approach and contribute to the design knowledge of creating mobile AR NFT games.

[J31]

Andrea Batch, Yipeng Ji, Mingming Fan, Jian Zhao, Niklas Elmqvist. uxSense: Supporting User Experience Analysis with Visualization and Computer Vision. IEEE Transactions on Visualization and Computer Graphics, 2023 (In Press).

Abstract: Analyzing user behavior from usability evaluation can be a challenging and time-consuming task, especially as the number of participants and the scale and complexity of the evaluation grow. We propose uxSense, a visual analytics system using machine learning methods to extract user behavior from audio and video recordings as parallel time-stamped data streams. Our implementation draws on pattern recognition, computer vision, natural language processing, and machine learning to extract user sentiment, actions, posture, spoken words, and other features from such recordings. These streams are visualized as parallel timelines in a web-based front-end, enabling researchers to search, filter, and annotate data across time and space. We present the results of a user study involving professional UX researchers evaluating user data using uxSense; in fact, we used uxSense itself to evaluate their sessions.

[C29]

Liwei Wu, Qing Liu, Jian Zhao, Edward Lank. Interactions across Displays and Space: A Study of Virtual Reality Streaming Practices on Twitch. Proceedings of the ACM Interactive Surfaces and Spaces Conference, pp. 437:1-437:24, 2023.
 Best Paper Honorable Mention

Abstract: The growing live streaming economy and virtual reality (VR) technologies have sparked interest in VR streaming among streamers and viewers. However, limited research has been conducted to understand this emerging streaming practice. To address this gap, we conducted an in-depth thematic analysis of 34 streaming videos from 12 VR streamers with varying levels of experience, to explore current practices, interaction styles, and strategies, as well as to investigate the challenges and opportunities of VR streaming. Our findings indicate that VR streamers face challenges in building emotional connections and maintaining streaming flow due to technical problems, a lack of fluid transitions between physical and virtual environments, and game scenes not intentionally designed for streaming. In response, we propose six design implications to encourage collaboration between game designers and streaming app developers, facilitating fluid, rich, and broad interactions for an enhanced streaming experience. In addition, we discuss the use of streaming videos as user-generated data for research, highlighting the lessons learned and emphasizing the need for tools to support streaming video analysis. Our research sheds light on the unique aspects of VR streaming, which combines interactions across displays and space.

[C28]

Qing Liu, Gustavo Alves, Jian Zhao. Challenges and Opportunities for Software Testing in Virtual Reality Application Development. Proceedings of the Graphics Interface Conference, 2023 (In Press).

Abstract: Testing is a core process in the development of Virtual Reality (VR) software, ensuring the delivery of high-quality VR products and experiences. As VR applications have become more popular in different fields, more challenges and difficulties have arisen during the testing phase. However, few studies have explored the challenges of software testing in VR development in detail. This paper aims to fill this gap through a qualitative interview study with 14 professional VR developers and a survey study with 33 additional participants. As a result, we derived 10 key challenges that VR developers often confront during software testing. Our study also sheds light on potential design directions for VR development tools, based on the identified challenges and the needs of VR developers, to alleviate existing issues in testing.

[C27]

Xinyu Shi, Ziqi Zhou, Jingwen Zhang, Ali Neshati, Anjul Tyagi, Ryan Rossi, Shunan Guo, Fan Du, Jian Zhao. De-Stijl: Facilitating Graphics Design with Interactive 2D Color Palette Recommendation. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 122:1-122:19, 2023.

Abstract: Selecting a proper color palette is critical in crafting a high-quality graphic design to gain visibility and communicate ideas effectively. To facilitate this process, we propose De-Stijl, an intelligent and interactive color authoring tool to assist novice designers in crafting harmonic color palettes, achieving quick design iterations, and fulfilling design constraints. Through De-Stijl, we contribute a novel 2D color palette concept that allows users to intuitively perceive color designs in context with their proportions and proximities. Further, De-Stijl implements a holistic color authoring system that supports 2D palette extraction, theme-aware and spatial-sensitive color recommendation, and automatic graphical elements (re)colorization. We evaluated De-Stijl through an in-lab user study by comparing the system with existing industry standard tools, followed by in-depth user interviews. Quantitative and qualitative results demonstrate that De-Stijl is effective in assisting novice design practitioners to quickly colorize graphic designs and easily deliver several alternatives.

[C26]

Fengjie Wang, Xuye Liu, Oujing Liu, Ali Neshati, Tengfei Ma, Min Zhu, Jian Zhao. Slide4N: Creating Presentation Slides from Computational Notebooks with Human-AI Collaboration. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 364:1-364:18, 2023.

Abstract: Data scientists often have to use separate presentation tools (e.g., Microsoft PowerPoint) to create slides that communicate the analysis obtained using computational notebooks. Much tedious and repetitive work is needed to transfer the routines of notebooks (e.g., code, plots) to presentable content on slides (e.g., bullet points, figures). We propose a human-AI collaborative approach and operationalize it within Slide4N, an interactive AI assistant for data scientists to create slides from computational notebooks. Slide4N leverages advanced natural language processing techniques to distill key information from user-selected notebook cells and then renders it in appropriate slide layouts. The tool also provides intuitive interactions that allow further refinement and customization of the generated slides. We evaluated Slide4N with a two-part user study, in which participants appreciated this human-AI collaborative approach compared to fully-manual or fully-automatic methods. The results also indicate the usefulness and effectiveness of Slide4N in slide creation tasks from notebooks.

[C25]

Chang Liu, Arif Usta, Jian Zhao, Semih Salihoglu. Governor: Turning Open Government Data Portals into Interactive Databases. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 415:1-415:16, 2023.

Abstract: The launch of open governmental data portals (OGDPs) has popularized the open data movement of the last decade. Although the amount of data in OGDPs is increasing, their functionalities are limited to finding datasets by titles/descriptions and downloading the actual files. This makes it hard for end users, especially those without technical skills, to find open data tables and make use of them. We present Governor, an open-source web application developed to make OGDPs more accessible to end users by facilitating searching actual records in the tables, previewing them directly without downloading, and suggesting joinable and unionable tables based on users' latest working tables. Governor also manages the provenance of integrated tables, allowing users and their collaborators to easily trace back to the original tables in the OGDP. We evaluate Governor with a two-part user study, and the results demonstrate its value and effectiveness in finding and integrating tables in OGDPs.

[C24]

Emily Kuang, Ehsan Jahangirzadeh Soure, Mingming Fan, Jian Zhao, Kristen Shinohara. Collaboration with Conversational AI Assistants for UX Evaluation: Questions and How to Ask them (Voice vs. Text). Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 116:1-116:15, 2023.

Abstract: AI is promising in assisting UX evaluators with analyzing usability tests, but its judgments are typically presented as non-interactive visualizations. Evaluators may have questions about test recordings, but have no way of asking them. Interactive conversational assistants provide a Q&A dynamic that may improve analysis efficiency and evaluator autonomy. To understand the full range of analysis-related questions, we conducted a Wizard-of-Oz design probe study with 20 participants who interacted with simulated AI assistants via text or voice. We found that participants asked for five categories of information: user actions, user mental model, help from the AI assistant, product and task information, and user demographics. Those who used the text assistant asked more questions, but the question lengths were similar. The text assistant was perceived as significantly more efficient, but both were rated equally in satisfaction and trust. We also provide design considerations for future conversational AI assistants for UX evaluation.

[W12]

Catherine Thomas, Xuejun Du, Kai Wang, Jayant Rai, Kenichi Okamoto, Miles Li, Jian Zhao. A Novel Data Analysis Pipeline for Fiber-based in Vivo Calcium Imaging. Neuroscience Reports, 15(1), pp. S342-S343, 2023.

Abstract: Examining in vivo neural circuit dynamics in relation to behaviour is crucial to advancing our understanding of how the brain works. Two techniques that are often used to examine these dynamics are one-photon calcium imaging and optogenetics. Fiber-based micro-endoscopy provides a versatile, modular, and lightweight option for combining in vivo calcium imaging and optogenetics in freely behaving animals. One challenge with this technique is that the data collected from such an approach are often complex and dense. Extracting meaningful conclusions from these data can be computationally challenging and often requires coding experience. While numerous powerful analysis pipelines exist for the detection and extraction of one-photon calcium imaging data from head-mounted mini microscopes, few options are available for data from fiber-based imaging techniques. Further, the available options for fiber-based imaging are not optimized, often requiring significant troubleshooting and providing limited results. Lastly, the existing pipelines cannot combine in vivo calcium imaging data with optogenetics and behavioural parameters collected in the same experimental system (hardware and software). As such, as a collaborative endeavour between behavioural neuroscientists, optical engineers, and computer science visual processing experts, we have developed a novel pipeline for the extraction, examination, and visualization of calcium imaging data for fiber-based approaches. This pipeline offers a user-friendly, code-free interface with customizable features and parameters, capable of integrating imaging, optogenetics, and behavioural measures for holistic experimental visualization and analysis. This pipeline significantly expands the opportunities afforded to behavioural neuroscience researchers and shifts forward the possible research opportunities when examining circuit dynamics in freely behaving animals.

[W11]

Pengcheng An, Chaoyu Zhang, Haicheng Gao, Ziqi Zhou, Linghao Du, Che Yan, Yage Xiao, Jian Zhao. Affective Affordance of Message Balloon Animations: An Early Exploration of AniBalloons. Companion Publication of the ACM Conference on Computer-Supported Cooperative Work and Social Computing, pp. 138-143, 2023.

Abstract: We introduce the preliminary exploration of AniBalloons, a novel form of chat balloon animations aimed at enriching nonverbal affective expression in text-based communications. AniBalloons were designed using extracted motion patterns from affective animations and mapped to six commonly communicated emotions. An evaluation study with 40 participants assessed their effectiveness in conveying intended emotions and their perceived emotional properties. The results showed that 80% of the animations effectively conveyed the intended emotions. AniBalloons covered a broad range of emotional parameters, comparable to frequently used emojis, offering potential for a wide array of affective expression in daily communication. The findings suggest AniBalloons' promise for enhancing emotional expressiveness in text-based communication and provide early insights for future affective design.

[W10]

Pengcheng An, Chaoyu Zhang, Haicheng Gao, Ziqi Zhou, Zibo Zhang, Jian Zhao. Animating Chat Balloons to Convey Emotions: the Design Exploration of AniBalloons. Proceedings of the Graphics Interface Conference (Poster), 2023.

Abstract: Text message-based communication has limitations in conveying nonverbal emotional expressions, resulting in a weaker sense of connectedness and an increased likelihood of miscommunication. While emoticons may partially compensate for this limitation, we argue that chat balloon animations could be a new and unique channel to further complement affective cues in text messages. In this paper, we present the design of AniBalloons, a set of 30 chat-balloon animations conveying six types of emotions, and evaluate their affect recognizability and emotional properties. Our results show that animated chat balloons, independent of the message content, are effective in communicating intended emotions and cover a variety of valence-arousal parameters for daily communication. Our results thereby suggest the potential of chat-balloon animations as a unique affective channel for text messages.

2022

[J30]

Xingjun Li, Yizhi Zhang, Justin Leung, Chengnian Sun, Jian Zhao. EDAssistant: Supporting Exploratory Data Analysis in Computational Notebooks with In-Situ Code Search and Recommendation. ACM Transactions on Interactive Intelligent Systems, 13(1), pp. 1:1-1:27, 2023 (Accepted in 2022).

Abstract: Using computational notebooks (e.g., Jupyter Notebook), data scientists rationalize their exploratory data analysis (EDA) based on their prior experience and external knowledge such as online examples. For novices or data scientists who lack specific knowledge about the dataset or problem to investigate, effectively obtaining and understanding the external information is critical to carrying out EDA. This paper presents EDAssistant, a JupyterLab extension that supports EDA with in-situ search of example notebooks and recommendation of useful APIs, powered by novel interactive visualization of search results. The code search and recommendation are enabled by advanced machine learning models, trained on a large corpus of EDA notebooks collected online. A user study is conducted to investigate both EDAssistant and data scientists' current practice (i.e., using external search engines). The results demonstrate the effectiveness and usefulness of EDAssistant, and participants appreciated its smooth and in-context support of EDA. We also report several design implications regarding code recommendation tools.

[J29]

Mingliang Xue, Yunhai Wang, Chang Han, Jian Zhang, Zheng Wang, Kaiyi Zhang, Christophe Hurter, Jian Zhao, Oliver Deussen. Target Netgrams: An Annulus-constrained Stress Model for Radial Graph Visualization. IEEE Transactions on Visualization and Computer Graphics, 29(10), pp. 4256-4268, 2023 (Accepted in 2022).

Abstract: We present Target Netgrams as a visualization technique for radial layouts of graphs. Inspired by manually created target sociograms, we propose an annulus-constrained stress model that aims to position nodes onto the annuli between adjacent circles to indicate their radial hierarchy, while maintaining the network structure (clusters and neighborhoods) and improving readability as much as possible. This is achieved by providing more space on the annuli than traditional layout techniques do. By adapting stress majorization to this model, the layout is computed as a constrained least-squares optimization problem. Additional constraints (e.g., parent-child preservation, attribute-based clusters, and structure-aware radii) are provided for exploring nodes, edges, and levels of interest. We demonstrate the effectiveness of our method through a comprehensive evaluation, a user study, and a case study.
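
Note: For context, the standard stress model that this annulus-constrained variant builds on can be sketched as follows (the constraint notation below is illustrative, not the authors' exact formulation):

    minimize over X:  \sigma(X) = \sum_{i<j} w_{ij} (\|x_i - x_j\| - d_{ij})^2,  with w_{ij} = d_{ij}^{-2}
    subject to:       r_k <= \|x_i - c\| <= r_{k+1}  for each node i assigned to the annulus between circles k and k+1 (centered at c)

Stress majorization minimizes \sigma(X) by iteratively solving a sequence of simpler quadratic subproblems, which is what allows the annulus constraints to be folded into each constrained least-squares step.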

[J28]

Anjul Tyagi, Jian Zhao, Pushkar Patel, Swasti Khurana, Klaus Mueller. Infographics Wizard: Flexible Infographics Authoring and Design Exploration. Computer Graphics Forum (Proceedings of EuroVis 2022), 41(3), pp. 121-132, 2022.

Abstract: Infographics are aesthetic visual representations of information that follow specific design principles of human perception. Designing infographics can be a tedious process for non-experts and time-consuming even for professional designers. With the help of designers, we propose a semi-automated infographic framework for general structured and flow-based infographic design generation. For novice designers, our framework automatically creates and ranks infographic designs for a user-provided text with no requirement for design input. However, expert designers can still provide custom design inputs to customize the infographics. We also contribute a dataset of individual visual group (VG) designs (in SVG), along with a dataset of 1k complete infographic images with segmented VGs. Evaluation results confirm that, by using our framework, designers of all expertise levels can generate generic infographic designs faster than with existing methods while maintaining the same quality as hand-designed infographic templates.

[J27]

Takanori Fujiwara, Jian Zhao, Francine Chen, Yaoliang Yu, Kwan-Liu Ma. Network Comparison with Interpretable Contrastive Network Representation Learning. Journal of Data Science, Statistics, and Visualization, 2(5), pp. 1-35, 2022.

Abstract: Identifying unique characteristics in a network through comparison with another network is an essential network analysis task. For example, with networks of protein interactions obtained from normal and cancer tissues, we can discover unique types of interactions in cancer tissues. This analysis task could be greatly assisted by contrastive learning, which is an emerging analysis approach to discover salient patterns in one dataset relative to another. However, existing contrastive learning methods cannot be directly applied to networks as they are designed only for high-dimensional data analysis. To address this problem, we introduce a new analysis approach called contrastive network representation learning (cNRL). By integrating two machine learning schemes, network representation learning and contrastive learning, cNRL enables embedding of network nodes into a low-dimensional representation that reveals the uniqueness of one network compared to another. Within this approach, we also design a method, named i-cNRL, which offers interpretability in the learned results, allowing for understanding which specific patterns are only found in one network. We demonstrate the effectiveness of i-cNRL for network comparison with multiple network models and real-world datasets. Furthermore, we compare i-cNRL and other potential cNRL algorithm designs through quantitative and qualitative evaluations.
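
Note: As a brief illustration of the contrastive-learning component, the widely used contrastive PCA objective, which this line of work builds on, seeks directions that maximize target-data variance while suppressing background-data variance (a sketch; i-cNRL's exact algorithm and feature construction may differ):

    v* = argmax_{\|v\| = 1}  v^T C_T v - \alpha v^T C_B v

Here C_T and C_B are covariance matrices of node features (e.g., learned network representations) from the target and background networks, and \alpha >= 0 controls how strongly patterns dominant in the background network are discounted.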

[S6]

Maoyuan Sun, Yue Ma, Yuanxin Wang, Tianyi Li, Jian Zhao, Yujun Liu, Ping-Shou Zhong. Toward Systematic Considerations of Missingness in Visual Analytics. Proceedings of the IEEE Visualization and Visual Analytics Conference, pp. 110-114, 2022.
 Best Paper Honorable Mention

Abstract: Data-driven decision making has become a common task in today's big data era, from simple choices such as finding a fast way to drive home, to complex decisions on medical treatment. It is often supported by visual analytics. For various reasons (e.g., system failure, interrupted network, intentional information hiding, or bias), visual analytics for sensemaking of data involves missingness (e.g., data loss and incomplete analysis), which impacts human decisions. For example, missing data can cost a business millions of dollars, and failing to recognize key evidence can put an innocent person in jail. Being aware of missingness is critical to avoiding such catastrophes. As an initial step toward this goal, we consider missingness in visual analytics from two aspects: data-centric and human-centric. The former emphasizes missingness in three data-related categories: data composition, data relationship, and data usage. The latter focuses on human-perceived missingness at three levels: observed-level, inferred-level, and ignored-level. Based on these, we discuss possible roles of visualizations in handling missingness, and conclude our discussion with future research opportunities.

[C23]

Sangho Suh, Jian Zhao, Edith Law. CodeToon: Story Ideation, Auto Comic Generation, and Structure Mapping for Code-Driven Storytelling. Proceedings of the ACM Symposium on User Interface Software and Technology, pp. 13:1-13:16, 2022.

Abstract: Recent work demonstrated how we can design and use coding strips, a form of comic strips with corresponding code, to enhance teaching and learning in programming. However, creating coding strips is a creative, time-consuming process. Creators have to generate stories from code (code→story) and design comics from stories (story→comic). We contribute CodeToon, a comic authoring tool that facilitates this code-driven storytelling process with two mechanisms: (1) story ideation from code using metaphor and (2) automatic comic generation from the story. We conducted a two-part user study that evaluates the tool and the comics generated by participants to test whether CodeToon facilitates the authoring process and helps generate quality comics. Our results show that CodeToon helps users create accurate, informative, and useful coding strips in a significantly shorter time. Overall, this work contributes methods and design guidelines for code-driven storytelling and opens up opportunities for using art to support computer science education.

[C22]

Nikhita Joshi, Matthew Lakier, Daniel Vogel, Jian Zhao. A Design Framework for Contextual and Embedded Information Visualizations in Spatial Augmented Reality. Proceedings of the Graphics Interface Conference, pp. 24:1-24:12, 2022.

Abstract: Spatial augmented reality (SAR) displays digital content in a real environment in ways that are situated, peripheral, and viewable by multiple people. These capabilities change how embedded information visualizations are used, designed, and experienced. But a comprehensive design framework that captures the specific characteristics and parameters relevant to SAR is missing. We contribute a new design framework for leveraging context and surfaces in the environment for SAR visualizations. An accompanying design process shows how designers can apply the framework to generate and describe SAR visualizations. The framework captures how the user's intent, interaction, and six environmental and visualization factors can influence SAR visualizations. The potential of this design framework is illustrated through eighteen exemplar application scenarios and accompanying envisionment videos.

[C21]

Gloria Fernandez-Nieto, Pengcheng An, Jian Zhao, Simon Buckingham Shum, Roberto Martinez-Maldonado. Classroom Dandelions: Visualising Participants' Position, Trajectories and Body Orientation Augments Teachers' Sensemaking. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 564:1-564:17, 2022.

Abstract: Despite the digital revolution, physical space remains the site for teaching and learning embodied knowledge and skills. Both teachers and students must develop spatial competencies to effectively use classroom spaces, enabling fluid verbal and non-verbal interaction. While video permits rich activity capture, it provides no support for quickly seeing activity patterns that can assist learning. In contrast, position tracking systems permit the automated modelling of spatial behaviour, opening new possibilities for feedback. This paper introduces the design rationale for Dandelion Diagrams, which integrate location, trajectory, and body orientation over a variable period. Applied in two authentic teaching contexts (a science laboratory and a nursing simulation), we show how heatmaps showing only teacher/student location led to misinterpretations that were resolved by overlaying Dandelion Diagrams. Teachers also identified a variety of ways the diagrams could aid professional development. We conclude that Dandelion Diagrams assisted sensemaking, but discuss the ethical risks of over-interpretation.

[C20]

Pengcheng An, Ziqi Zhou, Qing Liu, Yifei Yin, Linghao Du, Da-Yuan Huang, Jian Zhao. VibEmoji: Exploring User-authoring Multi-modal Emoticons in Social Communication. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 493:1-493:17, 2022.

Abstract: Emoticons are indispensable in online communications. With users' growing needs for more customized and expressive emoticons, recent messaging applications have begun to support (limited) multi-modal emoticons: e.g., enhancing emoticons with animations or vibrotactile feedback. However, little empirical knowledge has been accumulated concerning how people create, share, and experience multi-modal emoticons in everyday communication, and how to better support them through design. To tackle this, we developed VibEmoji, a user-authoring multi-modal emoticon interface for mobile messaging. Extending existing designs, VibEmoji grants users greater flexibility to combine various emoticons, vibrations, and animations on-the-fly, and offers non-aggressive recommendations based on these components' emotional relevance. Using VibEmoji as a probe, we conducted a four-week field study with 20 participants to gain new understandings from in-the-wild usage and experience, and to extract implications for design. We thereby contribute both a novel system and various insights for supporting users' creation and communication of multi-modal emoticons.

[C19]

Mingming Fan, Xianyou Yang, Tsz Tung Yu, Vera Q. Liao, Jian Zhao. Human-AI Collaboration for UX Evaluation: Effects of Explanation and Synchronization. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW1), pp. 96:1-96:32, 2022.

Abstract: Analyzing usability test videos is arduous. Although recent research showed the promise of AI in assisting with such tasks, it remains largely unknown how AI should be designed to facilitate effective collaboration between user experience (UX) evaluators and AI. Inspired by the concepts of agency and work context in the human-AI collaboration literature, we studied two corresponding design factors for AI-assisted UX evaluation: explanations and synchronization. Explanations allow AI to further inform humans how it identifies UX problems from a usability test session; synchronization refers to the two ways humans and AI collaborate: synchronously and asynchronously. We iteratively designed a tool, AI Assistant, with four versions of UIs corresponding to the two levels of explanations (with/without) and synchronization (sync/async). By adopting a hybrid wizard-of-oz approach to simulating an AI with reasonable performance, we conducted a mixed-method study with 24 UX evaluators identifying UX problems from usability test videos using AI Assistant. Our quantitative and qualitative results show that AI with explanations, regardless of being presented synchronously or asynchronously, provided better support for UX evaluators' analysis and was perceived more positively; when without explanations, synchronous AI better improved UX evaluators' performance and engagement compared to the asynchronous AI. Lastly, we present the design implications for AI-assisted UX evaluation and facilitating more effective human-AI collaboration.

[W9]

Zejiang Shen, Jian Zhao, Melissa Dell, Yaoliang Yu, Weining Li. OLALA: Object-Level Active Learning Based Layout Annotation. Proceedings of the EMNLP 5th Workshop on NLP and Computational Social Science, 2022.

Abstract: Document images often have intricate layout structures, with numerous content regions (e.g., texts, figures, tables) densely arranged on each page. This makes the manual annotation of layout datasets expensive and inefficient. These characteristics also challenge existing active learning methods, as image-level scoring and selection suffer from the overexposure of common objects. Inspired by recent progress in semi-supervised learning and self-training, we propose OLALA, an Object-Level Active Learning framework for efficient document layout Annotation. In this framework, only regions with the most ambiguous object predictions within an image are selected for annotators to label, optimizing the use of the annotation budget. For unselected predictions, a semi-automatic correction algorithm is proposed to identify certain errors based on prior knowledge of layout structures and rectify them with minor supervision. Additionally, we carefully design a perturbation-based object scoring function for document images. It governs the object selection process by evaluating prediction ambiguities, and considers both the positions and categories of predicted layout objects. Extensive experiments show that OLALA can significantly boost model performance and improve annotation efficiency, given the same labeling budget.

2021

[J26]

Jian Zhao, Shenyu Xu, Senthil Chandrasegaran, Chris Bryan, Fan Du, Aditi Mishra, Xin Qian, Yiran Li, Kwan-Liu Ma. ChartStory: Automated Partitioning, Layout, and Captioning of Charts into Comic-Style Narratives. IEEE Transactions on Visualization and Computer Graphics, 29(2), pp. 1384-1399, 2023 (Accepted in 2021).

Abstract: Visual data storytelling is gaining importance as a means of presenting data-driven information or analysis results, especially to the general public. This has resulted in design principles being proposed for data-driven storytelling, and new authoring tools being created to aid such storytelling. However, data analysts typically lack sufficient background in design and storytelling to make effective use of these principles and authoring tools. To assist this process, we present ChartStory for crafting data stories from a collection of user-created charts, using a style akin to comic panels to imply the underlying sequence and logic of data-driven narratives. Our approach operationalizes established design principles into an advanced pipeline that characterizes charts by their properties and similarity, and recommends ways to partition, lay out, and caption story pieces to serve a narrative. ChartStory also augments this pipeline with intuitive user interactions for visual refinement of generated data comics. We extensively and holistically evaluate ChartStory via a trio of studies. We first assess how the tool supports data comic creation in comparison to a manual baseline tool. Data comics from this study are subsequently compared to and evaluated against ChartStory's automated recommendations by a team of narrative visualization practitioners. This is followed by a pair of interview studies with data scientists who used their own datasets and charts to provide an additional assessment of the system. We find that ChartStory provides cogent recommendations for narrative generation, resulting in data comics that compare favorably to manually created ones.

[J25]

Ying Zhao, Jingcheng Shi, Jiawei Liu, Jian Zhao, Fangfang Zhou, Wenzhi Zhang, Kangyi Chen, Xin Zhao, Chunyao Zhu, Wei Chen. Evaluating Effects of Background Stories on Graph Perception. IEEE Transactions on Visualization and Computer Graphics, 28(12), pp. 4839-4854, 2022 (Accepted in 2021).

Abstract: A graph is an abstract model that represents relations among entities, for example, the interactions of characters in a novel. A background story endows entities and relations with real-world meanings and describes the semantics and context of the abstract model, for example, the actual story that the novel presents. Considering practical experience and relevant research, human viewers who know the background story of a graph and those who do not may perform differently when perceiving the same graph. However, no previous studies have adequately addressed this problem. This paper presents an evaluation study that investigates the effects of background stories on graph perception. We formulate three hypotheses on different aspects, including visual focus areas, graph structure identification, and mental model formation, and design three controlled experiments to test our hypotheses using real-world graphs with background stories. We analyze our experimental data to compare the performance of participants who had read the background stories and those who had not, and obtain a set of instructive findings. First, our results show that knowing the background stories affects participants' focus areas during interactive graph exploration to a certain extent. Second, it significantly affects the performance of identifying community structures but not high-degree and bridge structures. Third, it has a significant impact on graph recognition under blurred visual conditions. These findings can bring new considerations to the design of storytelling visualizations and interactive graph explorations.

[J24]

Maoyuan Sun, Akhil Namburi, David Koop, Jian Zhao, Tianyi Li, Haeyong Chung. Towards Systematic Design Considerations for Visualizing Cross-View Data Relationships. IEEE Transactions on Visualization and Computer Graphics, 28(12), pp. 4741-4756, 2022 (Accepted in 2021).

Abstract: Due to the scale of data and the complexity of analysis tasks, insight discovery often requires coordinating multiple visualizations (views), with each view displaying different parts of the data or the same data from different perspectives. For example, to analyze car sales records, a marketing analyst uses a line chart to visualize the trend of car sales, a scatterplot to inspect the price and horsepower of different cars, and a matrix to compare the transaction amounts across types of deals. To explore related information across multiple views, current visual analysis tools rely heavily on brushing and linking techniques, which may require a significant amount of user effort (e.g., many trial-and-error attempts). There may be other efficient and effective ways of displaying cross-view data relationships to support data analysis with multiple views, but currently there are no guidelines to address this design challenge. In this paper, we present systematic design considerations for visualizing cross-view data relationships, which leverage the descriptive aspects of relationships and the usable visual context of multi-view visualizations. We discuss the pros and cons of different designs for showing cross-view data relationships, and provide a set of recommendations for helping practitioners make design decisions.

[J23]

Ehsan Jahangirzadeh Soure, Emily Kuang, Mingming Fan, Jian Zhao. CoUX: Collaborative Visual Analysis of Think-Aloud Usability Test Videos for Digital Interfaces. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VIS'21), 28(1), pp. 643-653, 2022.

Abstract: Reviewing a think-aloud video is both time-consuming and demanding, as it requires UX (user experience) professionals to attend to many behavioral signals of the user in the video. Moreover, challenges arise when multiple UX professionals need to collaborate to reduce bias and errors. We propose a collaborative visual analytics tool, CoUX, to facilitate UX evaluators collectively reviewing think-aloud usability test videos of digital interfaces. CoUX seamlessly supports usability problem identification, annotation, and discussion in an integrated environment. To ease the discovery of usability problems, CoUX visualizes a set of problem indicators based on acoustic, textual, and visual features extracted from the video and audio of a think-aloud session with machine learning. CoUX further enables collaboration amongst UX evaluators for logging, commenting, and consolidating the discovered problems with a chatbox-like user interface. We designed CoUX based on a formative study with two UX experts and insights derived from the literature. We conducted a user study with six pairs of UX practitioners on collaborative think-aloud video analysis tasks. The results indicate that CoUX is useful and effective in facilitating both problem identification and collaborative teamwork. We provide insights into how different features of CoUX were used to support both independent analysis and collaboration. Furthermore, our work highlights opportunities to improve collaborative usability test video analysis.

[J22]

Takanori Fujiwara, Xinhai Wei, Jian Zhao, Kwan-Liu Ma. Interactive Dimensionality Reduction for Comparative Analysis. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VIS'21), 28(1), pp. 758-768, 2022.

Abstract: Finding the similarities and differences between two or more groups of datasets is a fundamental analysis task. For high-dimensional data, dimensionality reduction (DR) methods are often used to find the characteristics of each group. However, existing DR methods provide limited capability and flexibility for such comparative analysis as each method is designed only for a narrow analysis target, such as identifying factors that most differentiate groups. In this work, we introduce an interactive DR framework where we integrate our new DR method, called ULCA (unified linear comparative analysis), with an interactive visual interface. ULCA unifies two DR schemes, discriminant analysis and contrastive learning, to support various comparative analysis tasks. To provide flexibility for comparative analysis, we develop an optimization algorithm that enables analysts to interactively refine ULCA results. Additionally, we provide an interactive visualization interface to examine ULCA results with a rich set of analysis libraries. We evaluate ULCA and the optimization algorithm to show their efficiency as well as present multiple case studies using real-world datasets to demonstrate the usefulness of our framework.
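
The group contrast that ULCA operationalizes can be illustrated with a contrastive-PCA-style sketch; the function below is an illustrative assumption showing only the underlying idea of contrasting two groups' covariances, not ULCA's unified weighted formulation:

    # Contrastive-PCA-style sketch: find directions with high variance in the
    # target group but low variance in the background group.
    import numpy as np

    def contrastive_directions(target, background, alpha=1.0, n_components=2):
        ct = np.cov(target, rowvar=False)
        cb = np.cov(background, rowvar=False)
        # Top eigenvectors of (Cov_target - alpha * Cov_background).
        eigvals, eigvecs = np.linalg.eigh(ct - alpha * cb)
        order = np.argsort(eigvals)[::-1]
        return eigvecs[:, order[:n_components]]

    rng = np.random.default_rng(0)
    background = rng.normal(size=(200, 5))
    target = background.copy()
    target[:, 0] += rng.normal(scale=3.0, size=200)  # extra variance in dim 0
    W = contrastive_directions(target, background, alpha=1.5)
    print((target @ W).shape)  # 2-D embedding emphasizing the target group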

[J21]

Maoyuan Sun, Abdul Shaikh, Hamed Alhoori, Jian Zhao. SightBi: Exploring Cross-View Data Relationships with Biclusters. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VIS'21), 28(1), pp. 54-64, 2022.
 Best Paper Honorable Mention

Abstract: Multiple-view visualization (MV) has been heavily used in visual analysis tools for sensemaking of data in various domains (e.g., bioinformatics, cybersecurity, and text analytics). One common task of visual analysis with multiple views is to relate data across different views. For example, to identify threats, an intelligence analyst needs to link people from a social network graph with locations on a crime map, and then search and read relevant documents. Currently, exploring cross-view data relationships heavily relies on view-coordination techniques (e.g., brushing and linking), which may require significant user effort on many trial-and-error attempts, such as repetitiously selecting elements in one view, and observing and following elements highlighted in other views. To address this, we present SightBi, a visual analytics approach for supporting cross-view data relationship explorations. We discuss the design rationale of SightBi in detail, with identified user tasks regarding the usage of cross-view data relationships. SightBi formalizes cross-view data relationships as biclusters and computes them from a dataset. SightBi uses a bi-context design that highlights creating stand-alone relationship-views. This helps preserve existing views and serves as an overview of cross-view data relationships to guide user exploration. Moreover, SightBi allows users to interactively manage the layout of multiple views by using newly created relationship-views. With a usage scenario, we demonstrate the usefulness of SightBi for sensemaking of cross-view data relationships.
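
As a toy illustration of formalizing relationships as biclusters, the sketch below groups entities that share identical neighbor sets (data hypothetical; SightBi's relationship mining and bi-context layout are considerably richer):

    # Group entities with identical neighbor sets into simple biclusters;
    # each bicluster could back one stand-alone relationship-view.
    from collections import defaultdict

    relation = {  # e.g., person -> set of locations they are linked to
        "alice": {"park", "cafe"},
        "bob": {"park", "cafe"},
        "carol": {"cafe", "station"},
    }

    def biclusters(relation):
        groups = defaultdict(set)
        for entity, neighbors in relation.items():
            groups[frozenset(neighbors)].add(entity)
        return [(sorted(ents), sorted(nbrs)) for nbrs, ents in groups.items()]

    for ents, nbrs in biclusters(relation):
        print(ents, "<->", nbrs)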

[J20]

Linping Yuan, Ziqi Zhou, Jian Zhao, Yiqiu Guo, Fan Du, Huamin Qu. InfoColorizer: Interactive Recommendation of Color Palettes for Infographics. IEEE Transactions on Visualization and Computer Graphics, 28(12), pp. 4252-4266, 2022 (Accepted in 2021).

Abstract: When designing infographics, general users usually struggle with getting desired color palettes using existing infographic authoring tools, which sometimes sacrifice customizability, require design expertise, or neglect the influence of elements' spatial arrangement. We propose a data-driven method that provides flexibility by considering users' preferences, lowers the expertise barrier via automation, and tailors suggested palettes to the spatial layout of elements. We build a recommendation engine by utilizing deep learning techniques to characterize good color design practices from data, and further develop InfoColorizer, a tool that allows users to obtain color palettes for their infographics in an interactive and dynamic manner. To validate our method, we conducted a comprehensive four-part evaluation, including case studies, a controlled user study, a survey study, and an interview study. The results indicate that InfoColorizer can provide compelling palette recommendations with adequate flexibility, allowing users to effectively obtain high-quality color design for input infographics with low effort.

[C18]

Xingjun Li, Yuanxin Wang, Hong Wang, Yang Wang, Jian Zhao. NBSearch: Semantic Search and Visual Exploration of Computational Notebooks. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 308:1-308:14, 2021.

Abstract: Code search is an important and frequent activity for developers using computational notebooks (e.g., Jupyter). The flexibility of notebooks brings challenges for effective code search, where classic search interfaces for traditional software code may be limited. In this paper, we propose NBSearch, a novel system that supports semantic code search in notebook collections and interactive visual exploration of search results. NBSearch leverages advanced machine learning models to enable natural language search queries and intuitive visualizations to present complicated intra- and inter-notebook relationships in the returned results. We developed NBSearch through an iterative participatory design process with two experts from a large software company. We evaluated the models with a series of experiments and the whole system with a controlled user study. The results indicate the feasibility of our analytical pipeline and the effectiveness of NBSearch in supporting code search in large notebook collections.

[C17]

Siyuan Xia, Nafisa Anzum, Semih Salihoglu, Jian Zhao. KTabulator: Interactive Ad hoc Table Creation using Knowledge Graphs. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 100:1-100:14, 2021.

Abstract: The need to find or construct tables arises routinely to accomplish many tasks in everyday life, as a table is a common format for organizing data. However, when relevant data is found on the web, it is often scattered across multiple tables on different web pages, requiring tedious manual searching and copy-pasting to collect data. We propose KTabulator, an interactive system to effectively extract, build, or extend ad hoc tables from large corpora, by leveraging their computerized structures in the form of knowledge graphs. We developed and evaluated KTabulator using Wikipedia and its knowledge graph DBpedia as our testbed. Starting from an entity or an existing table, KTabulator allows users to extend their tables by finding relevant entities, their properties, and other relevant tables, while providing meaningful suggestions and guidance. The results of a user study indicate the usefulness and efficiency of KTabulator in ad hoc table creation.

[S5]

Jian Zhao, Maoyuan Sun, Patrick Chiu, Francine Chen, Bee Liew. Know-What and Know-Who: Document Searching and Exploration using Topic-Based Two-Mode Networks. Proceedings of the IEEE Pacific Visualization Symposium, pp. 81-85, 2021.

Abstract: This paper proposes a novel approach for analyzing search results of a document collection, which can help support know-what and know-who information seeking questions. Search results are grouped by topics, and each topic is represented by a two-mode network composed of related documents and authors (i.e., biclusters). We visualize these biclusters in a 2D layout to support interactive visual exploration of the analyzed search results, which highlights a novel way of organizing entities of biclusters. We evaluated our approach using a large academic publication corpus, by testing the distribution of the relevant documents and of lead and prolific authors. The results indicate the effectiveness of our approach compared to traditional 1D ranked lists. Moreover, a user study with 12 participants was conducted to compare our proposed visualization, a simplified variation without topics, and a text-based interface. We report on participants' task performance, their preference of the three interfaces, and the different strategies used in information seeking.

[W8]

Zhaoyi Yang, Pengcheng An, Jinchen Yang, Samuel Strojny, Zihui Zhang, Dongsheng Sun, Jian Zhao. Designing Mobile EEG Neurofeedback Games for Children with Autism: Implications from Industry Practice. Proceedings of the ACM International Conference on Mobile Human-Computer Interaction (Industry Perspectives), pp. 23:1-23:6, 2021.

Abstract: Neurofeedback games are an effective and playful approach to enhance certain social and attentional capabilities in children with autism, and they are becoming increasingly accessible with commercialized mobile EEG modules. However, little industry-based experience has been shared regarding how to better design neurofeedback games to fine-tune their playability and user experience for autistic children. In this paper, we review the experience we gained from industry practice, in which a series of mobile EEG neurofeedback games were developed for preschool autistic children. We briefly describe our design and development in a one-year collaboration with a special education center involving a group of stakeholders: children with autism and their caregivers and parents. We then summarize four concrete implications we learned concerning the design of game characters, game narratives, and gameplay elements, which aim to support future work in creating better neurofeedback games for preschool children with autism.

2020

[J19]

Jian Zhao, Maoyuan Sun, Francine Chen, Patrick Chiu. Understanding Missing Links in Bipartite Networks with MissBiN. IEEE Transactions on Visualization and Computer Graphics, 28(6), pp. 2457-2469, 2022 (Accepted in 2020).

Abstract: The analysis of bipartite networks is critical in a variety of application domains, such as exploring entity co-occurrences in intelligence analysis and investigating gene expression in bioinformatics. One important task is missing link prediction, which infers the existence of unseen links based on currently observed ones. In this paper, we propose a visual analysis system, MissBiN, to involve analysts in the loop for making sense of link prediction results. MissBiN is equipped with a novel method for link prediction in a bipartite network that leverages the information of bi-cliques in the network. It also provides an interactive visualization for understanding the algorithm outputs. The design of MissBiN is based on three high-level analysis questions (what, why, and how) regarding missing links, which are distilled from the literature and expert interviews. We conducted quantitative experiments to assess the performance of the proposed link prediction algorithm, and interviewed two experts from different domains to demonstrate the effectiveness of MissBiN as a whole. We also provide a comprehensive usage scenario to illustrate the usefulness of the tool in an application of intelligence analysis.
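
For intuition, the sketch below scores unseen links with simple two-hop evidence from the observed bipartite structure; this is a generic baseline, not MissBiN's bi-clique-based algorithm (all data hypothetical):

    # Score candidate (left, right) links by how many structurally similar
    # left-side nodes are already connected to the right-side node.
    from itertools import product

    edges = {("g1", "d1"), ("g1", "d2"), ("g2", "d1"), ("g2", "d3")}
    left = {u for u, _ in edges}
    right = {v for _, v in edges}

    def score(u, v):
        # Partners of u: left-side nodes sharing at least one right-side node.
        partners = {w for w in left if w != u
                    and any((w, r) in edges and (u, r) in edges for r in right)}
        return sum((w, v) in edges for w in partners)

    candidates = [(u, v) for u, v in product(left, right) if (u, v) not in edges]
    for u, v in sorted(candidates):
        print(u, v, score(u, v))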

[J18]

Jian Zhao, Mingming Fan, Mi Feng. ChartSeer: Interactive Steering Exploratory Visual Analysis with Machine Intelligence. IEEE Transactions on Visualization and Computer Graphics, 28(3), pp. 1500-1513, 2022 (Accepted in 2020).

Abstract: During exploratory visual analysis (EVA), analysts need to continually determine which subsequent activities to perform, such as which data variables to explore or how to present data variables visually. Due to the vast combinations of data variables and visual encodings that are possible, it is often challenging to make such decisions. Further, while performing local explorations, analysts often fail to attend to the holistic picture that is emerging from their analysis, leading them to improperly steer their EVA. These issues become even more impactful in real-world analysis scenarios where EVA occurs in multiple asynchronous sessions that could be completed by one or more analysts. To address these challenges, this work proposes ChartSeer, a system that uses machine intelligence to enable analysts to visually monitor the current state of an EVA and effectively identify future activities to perform. ChartSeer utilizes deep learning techniques to characterize analyst-created data charts to generate visual summaries and recommend appropriate charts for further exploration based on user interactions. A case study was first conducted to demonstrate the usage of ChartSeer in practice, followed by a controlled study to compare ChartSeer's performance with a baseline during EVA tasks. The results demonstrate that ChartSeer enables analysts to adequately understand the current EVA status and advance their analysis by creating charts with increased coverage and visual encoding diversity.

[C16]

Takanori Fujiwara, Jian Zhao, Francine Chen, Kwan-Liu Ma. A Visual Analytics Framework for Contrastive Network Analysis. Proceedings of the IEEE Conference on Visual Analytics Science and Technology, pp. 48-59, 2020.

Abstract: A common network analysis task is comparison of two networks to identify unique characteristics in one network with respect to the other. For example, when comparing protein interaction networks derived from normal and cancer tissues, one essential task is to discover protein-protein interactions unique to cancer tissues. However, this task is challenging when the networks contain complex structural (and semantic) relations. To address this problem, we design ContraNA, a visual analytics framework leveraging both the power of machine learning for uncovering unique characteristics in networks and also the effectiveness of visualization for understanding such uniqueness. The basis of ContraNA is cNRL, which integrates two machine learning schemes, network representation learning (NRL) and contrastive learning (CL), to generate a low-dimensional embedding that reveals the uniqueness of one network when compared to another. ContraNA provides an interactive visualization interface to help analyze the uniqueness by relating embedding results and network structures as well as explaining the learned features by cNRL. We demonstrate the usefulness of ContraNA with two case studies using real-world datasets. We also evaluate ContraNA through a controlled user study with 12 participants on network comparison tasks. The results show that participants were able to both effectively identify unique characteristics from complex networks and interpret the results obtained from cNRL.

[B1]

Jian Zhao, Fanny Chevalier, Christopher Collins. Designing Tree Visualization Techniques for Discourse Analysis. LingVis: Visual Analytics for Linguistics, M. Butt, A. Hautli-Janisz, and V. Lyding (Editors), Chapter 3, Center for the Study of Language and Information, 2020.

Abstract: A discourse parser is a natural language processing system that represents the organization of a document as a rhetorical structure tree, one of the key data structures enabling applications such as text summarization, question answering, and dialogue generation. Computational linguists currently rely on manually exploring and comparing discourse structures to get intuitions for improving parsing algorithms. In this paper, we revisit our earlier work on DAViewer, an interactive visualization system for assisting computational linguists to explore, compare, evaluate, and annotate the results of discourse parsers. We present an investigation of the rationales guiding design decisions for discourse analysis and compare three alternative representations of discourse parse trees. We report the results of an expert review of these design alternatives for the task of comparing discourse parsing algorithms.

[W7]

Brad Glasbergen, Michael Abebe, Khuzaima Daudjee, Daniel Vogel, Jian Zhao. Sentinel: Understanding Data Systems. Proceedings of the ACM SIGMOD Conference (Demo), pp. 2729-2732, 2020.
 Best Demo

Abstract: The complexity of modern data systems and applications greatly increases the challenge in understanding system behaviour and diagnosing performance problems. When these problems arise, system administrators are left with the difficult task of remedying them by relying on large debug log files, vast numbers of metrics, and system-specific tooling. We demonstrate the Sentinel system, which enables administrators to analyze systems and applications by building models of system execution and comparing them to derive key differences in behaviour. The resulting analyses are then presented as system reports to administrators and developers in an intuitive fashion. Users of Sentinel can locate, identify and take steps to resolve the reported performance issues. As Sentinel's models are constructed online by intercepting debug logging library calls, Sentinel's functionality incurs little overhead and works with all systems that use standard debug logging libraries.

2019

[J17]

Mingming Fan, Ke Wu, Jian Zhao, Yue Li, Winter Wei, Khai Truong. VisTA: Integrating Machine Intelligence with Visualization to Support the Investigation of Think-Aloud Sessions. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE InfoVis'19), 26(1), pp. 343-352, 2020.

Abstract: Think-aloud protocols are widely used by user experience (UX) practitioners in usability testing to uncover issues in user interface design. It is often arduous to analyze large amounts of recorded think-aloud sessions and few UX practitioners have an opportunity to get a second perspective during their analysis due to time and resource constraints. Inspired by the recent research that shows subtle verbalization and speech patterns tend to occur when users encounter usability problems, we take the first step to design and evaluate an intelligent visual analytics tool that leverages such patterns to identify usability problem encounters and present them to UX practitioners to assist their analysis. We first conducted and recorded think-aloud sessions, and then extracted textual and acoustic features from the recordings and trained machine learning (ML) models to detect problem encounters. Next, we iteratively designed and developed a visual analytics tool, VisTA, which enables dynamic investigation of think-aloud sessions with a timeline visualization of ML predictions and input features. We conducted a between-subjects laboratory study to compare three conditions, i.e., VisTA, VisTASimple (no visualization of the ML’s input features), and Baseline (no ML information at all), with 30 UX professionals. The findings show that UX professionals identified more problem encounters when using VisTA than Baseline by leveraging the problem visualization as an overview, anticipations, and anchors as well as the feature visualization as a means to understand what ML considers and omits. Our findings also provide insights into how they treated ML, dealt with (dis)agreement with ML, and reviewed the videos (i.e., play, pause, and rewind).

[J16]

Maoyuan Sun, Jian Zhao, Hao Wu, Kurt Luther, Chris North, Naren Ramakrishnan. The Effect of Edge Bundling and Seriation on Sensemaking of Biclusters in Bipartite Graphs. IEEE Transactions on Visualization and Computer Graphics, 25(10), pp. 2983-2998, 2019.

Abstract: Exploring coordinated relationships (e.g., shared relationships between two sets of entities) is an important analytics task in a variety of real-world applications, such as discovering similarly behaved genes in bioinformatics, detecting malware collusions in cyber security, and identifying product bundles in marketing analysis. Coordinated relationships can be formalized as biclusters. To support visual exploration of biclusters, bipartite-graph-based visualizations have been proposed, with edge bundling used to show biclusters. However, edge bundling suffers from edge crossings due to possible overlaps of biclusters, and there is little in-depth understanding of its impact on users exploring biclusters in bipartite graphs. To address this, we propose a novel bicluster-based seriation technique that can reduce edge crossings in bipartite graph drawings, and we conducted a user experiment to study the effect of edge bundling and the proposed technique on visualizing biclusters in bipartite graphs. We found that both had an impact on reducing entity visits for users exploring biclusters, and that edge bundles helped users find more justified answers. Moreover, we identified four key trade-offs that inform the design of future bicluster visualizations. The study results suggest that edge bundling is critical for exploring biclusters in bipartite graphs, as it helps to reduce low-level perceptual problems and support high-level inferences.
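
For intuition about ordering-based crossing reduction, the sketch below applies the standard barycenter heuristic to one side of a bipartite graph; the paper's contribution is a bicluster-based seriation, which this simple heuristic does not reproduce:

    # Barycenter heuristic: place each right-side node at the average
    # position of its left-side neighbors to reduce edge crossings.
    def barycenter_order(left_order, edges):
        pos = {node: i for i, node in enumerate(left_order)}
        neighbors = {}
        for u, v in edges:
            neighbors.setdefault(v, []).append(pos[u])
        return sorted(neighbors,
                      key=lambda v: sum(neighbors[v]) / len(neighbors[v]))

    edges = [("a", "x"), ("a", "y"), ("b", "y"), ("c", "x"), ("c", "z")]
    print(barycenter_order(["a", "b", "c"], edges))  # -> ['y', 'x', 'z']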

[J15]

Zhicong Lu, Mingming Fan, Yun Wang, Jian Zhao, Michelle Annett, Daniel Wigdor. InkPlanner: Supporting Prewriting via Intelligent Visual Diagramming. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VAST'18), 25(1), pp. 277-287, 2019.

Abstract: Prewriting is the process of generating and organizing ideas before drafting a document. Although often overlooked by novice writers and writing tool developers, prewriting is a critical process that improves the quality of a final document. To better understand current prewriting practices, we first conducted interviews with writing learners and experts. Based on the learners' needs and experts' recommendations, we then designed and developed InkPlanner, a novel pen-and-touch visualization tool that allows writers to utilize visual diagramming for ideation during prewriting. InkPlanner further allows writers to sort their ideas into a logical and sequential narrative by using a novel widget, NarrativeLine. Using a NarrativeLine, InkPlanner can automatically generate a document outline to guide later drafting exercises. InkPlanner is powered by machine-generated semantic and structural suggestions that are curated from various texts. To qualitatively review the tool and understand how writers use InkPlanner for prewriting, two writing experts were interviewed and a user study was conducted with university students. The results demonstrate that InkPlanner encouraged writers to generate more diverse ideas and also enabled them to think more strategically about how to organize their ideas for later drafting.

[C15]

John Wenskovitch, Jian Zhao, Scott Carter, Matthew Cooper, Chris North. Albireo: An Interactive Tool for Visually Summarizing Computational Notebook Structure. Proceedings of the IEEE Symposium on Visualization in Data Science, pp. 1-10, 2019.

Abstract: Computational notebooks have become a major medium for data exploration and insight communication in data science. Although expressive, dynamic, and flexible, in practice they are loose collections of scripts, charts, and tables that rarely tell a story or clearly represent the analysis process. This leads to a number of usability issues, particularly in the comprehension and exploration of notebooks. In this work, we design, implement, and evaluate Albireo, a visualization approach to summarize the structure of notebooks, with the goal of supporting more effective exploration and communication by displaying the dependencies and relationships between the cells of a notebook using a dynamic graph structure. We evaluate the system via a case study and expert interviews, with our results indicating that such a visualization is useful for an analyst's self-reflection during exploratory programming, and also effective for communication of narratives and collaboration between analysts.

[S4]

Cheonbok Park, Inyoup Na, Yongjang Jo, Sungbok Shin, Jaehyo Yoo, Bum Chul Kwon, Jian Zhao, Hyungjong Noh, Yeonsoo Lee, Jaegul Choo. SANVis: Visual Analytics for Understanding Self-Attention Networks. Proceedings of the IEEE Visualization and Visual Analytics Conference, pp. 146-150, 2019.

Abstract: Attention networks, a deep neural network architecture inspired by humans' attention mechanism, have seen significant success in image captioning, machine translation, and many other applications. Recently, they have been further evolved into an advanced approach called multi-head self-attention networks, which can encode a set of input vectors, e.g., word vectors in a sentence, into another set of vectors. Such encoding aims at simultaneously capturing diverse syntactic and semantic features within a set, each of which corresponds to a particular attention head, forming altogether multi-head attention. Meanwhile, the increased model complexity prevents users from easily understanding and manipulating the inner workings of models. To tackle the challenges, we present a visual analytics system called SANVis, which helps users understand the behaviors and the characteristics of multi-head self-attention networks. Using a state-of-the-art self-attention model called Transformer, we demonstrate usage scenarios of SANVis in machine translation tasks. Our system is available at http://short.sanvis.org.

[S3]

Jian Zhao, Maoyuan Sun, Francine Chen, Patrick Chiu. MissBiN: Visual Analysis of Missing Links in Bipartite Networks. Proceedings of the IEEE Visualization and Visual Analytics Conference, pp. 71-75, 2019.

Abstract: The analysis of bipartite networks is critical in a variety of application domains, such as exploring entity co-occurrences in intelligence analysis and investigating gene expression in bio-informatics. One important task is missing link prediction, which infers the existence of unseen links based on currently observed ones. In this paper, we propose MissBiN that involves analysts in the loop for making sense of link prediction results. MissBiN combines a novel method for link prediction and an interactive visualization for examining and understanding the algorithm outputs. Further, we conducted quantitative experiments to assess the performance of the proposed link prediction algorithm and a case study to evaluate the overall effectiveness of MissBiN.

[S2]

Maoyuan Sun, David Koop, Jian Zhao, Chris North, Naren Ramakrishnan. Interactive Bicluster Aggregation in Bipartite Graphs. Proceedings of the IEEE Visualization and Visual Analytics Conference, pp. 246-250, 2019.

Abstract: Exploring coordinated relationships is important for sensemaking of data in various fields, such as intelligence analysis. To support such investigations, visual analysis tools use biclustering to mine relationships in bipartite graphs and visualize the resulting biclusters with standard graph visualization techniques. Due to overlaps among biclusters, such visualizations can be cluttered (e.g., with many edge crossings), when there are a large number of biclusters. Prior work attempted to resolve this problem by automatically ordering nodes in a bipartite graph. However, visual clutter is still a serious problem, since the number of displayed biclusters remains unchanged. We propose bicluster aggregation as an alternative approach, and have developed two methods of interactively merging biclusters. These interactive bicluster aggregations help organize similar biclusters and reduce the number of displayed biclusters. Initial expert feedback indicates potential usefulness of these techniques in practice.

[C14]

Mona Loorak, Wei Zhou, Ha Trinh, Jian Zhao, Wei Li. Hand-Over-Face Input Sensing for Interaction with Smartphones through the Built-in Camera. Proceedings of the ACM International Conference on Human-Computer Interaction with Mobile Devices and Services, pp. 32:1-32:12, 2019.
 Best Paper

Abstract: This paper proposes using the face as a touch surface and employing hand-over-face (HOF) gestures as a novel input modality for interaction with smartphones, especially when touch input is limited. We contribute InterFace, a general system framework that enables the HOF input modality using advanced computer vision techniques. As an exemplar of the usage of this framework, we demonstrate the feasibility and usefulness of HOF with an Android application for improving single-user and group selfie-taking experiences through providing appearance customization in real-time. In a within-subjects study comparing HOF against touch input for single-user interaction, we found that HOF input led to significant improvements in accuracy and perceived workload, and was preferred by the participants. Qualitative results of an observational study also demonstrated the potential of the HOF input modality to improve the user experience in multi-user interactions. Based on the lessons learned from our studies, we propose a set of potential applications of HOF to support smartphone interaction. We envision that the affordances provided by this modality can expand the mobile interaction vocabulary and facilitate scenarios where touch input is limited or even not possible.

[C13]

Hao-Fei Cheng, Bowen Yu, Siwei Fu, Jian Zhao, Brent Hecht, Joseph Konstan, Loren Terveen, Svetlana Yarosh, Haiyi Zhu. Teaching UI Design at Global Scales: A Case Study of the Design of Collaborative Capstone Projects for MOOCs. Proceedings of the ACM Conference on Learning at Scale, pp. 11:1-11:11, 2019.

Abstract: Group projects are an essential component of teaching user interface (UI) design. We identified six challenges in transferring traditional group projects into the context of Massive Open Online Courses: managing dropout, avoiding free-riding, appropriate scaffolding, cultural and time zone differences, and establishing common ground. We present a case study of the design of a group project for a UI Design MOOC, in which we implemented technical tools and social structures to cope with the above challenges. Based on survey analysis, interviews, and team chat data from the students over a six-month period, we found that our socio-technical design addressed many of the obstacles that MOOC learners encountered during remote collaboration. We translate our findings into design implications for better group learning experiences at scale.

[W6]

Chidansh Bhatt, Jian Zhao, Hideto Oda, Francine Chen, Matthew Lee. OPaPi: Optimized Parts Pick-up Routing for Efficient Manufacturing. Proceedings of the ACM SIGMOD Workshop on Human-In-the-Loop Data Analytics, pp. 5:1-5:8, 2019.

Abstract: Manufacturing environments require changes in work procedures and settings based on changes in product demand affecting the types of products for production. Resource re-organization and the time needed for workers to adapt to such frequent changes can be expensive. For example, for each change, managers in a factory may be required to manually create a list of inventory items to be picked up by workers. Uncertainty in predicting the appropriate pick-up time, due to differences in worker-determined routes, may make it difficult for managers to generate a fixed schedule for delivery to the assembly line. To address these problems, we propose OPaPi, a human-centric system that improves the efficiency of manufacturing by optimizing parts pick-up routes and scheduling. OPaPi leverages frequent pattern mining and a traveling salesman problem solver to suggest rack placement for more efficient routes. The system further employs interactive visualization to incorporate an expert's domain knowledge and different manufacturing constraints for real-time adaptive decision making.
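
As a rough illustration of the routing component, the sketch below greedily walks to the nearest unvisited rack over hypothetical coordinates; OPaPi itself combines frequent pattern mining with a proper traveling salesman problem solver and interactive constraints:

    # Greedy nearest-neighbor pick-up route over hypothetical rack positions.
    import math

    racks = {"A": (0, 0), "B": (4, 0), "C": (4, 3), "D": (1, 3)}

    def pickup_route(start, items):
        route, current, remaining = [start], start, set(items) - {start}
        while remaining:
            # Walk to the closest unvisited rack at each step.
            current = min(remaining,
                          key=lambda r: math.dist(racks[current], racks[r]))
            route.append(current)
            remaining.remove(current)
        return route

    print(pickup_route("A", ["A", "B", "C", "D"]))  # -> ['A', 'D', 'C', 'B']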

2018

[J14]

Shenyu Xu, Chris Bryan, Kelvin Li, Jian Zhao, Kwan-Liu Ma. Chart Constellations: Effective Chart Summarization for Collaborative and Multi-User Analyses. Computer Graphics Forum (Proceedings of EuroVis'18), 37(3), pp. 75-86, 2018.

Abstract: Nowadays, many data problems in the real-world are complex and thus require multiple analysts working together to uncover embedded insights by creating chart-driven data stories. But how, as a subsequent analysis step, do we interpret and learn from these collections of charts? We present a new system called Chart Constellations to interactively support a single analyst in the review and analysis of data stories created by other collaborative analysts. Instead of iterating through the individual charts for each data story, the analyst can project, cluster, filter, and connect results from all users in a meta-visualization approach. This approach supports deriving summary insights about the investigations and supports the exploration of new, un-investigated regions in the dataset. To evaluate our system, we conduct a user study comparing it against data science notebooks. Results suggest that our system promotes the discovery of both broad and high-level insights, including theme and trend analysis, subjective evaluation, and hypothesis generation.

[J13]

Wen Zhong, Wei Xu, Kevin Yager, Gregory Doerk, Jian Zhao, Yunke Tian, Sungsoo Ha, Cong Xie, Yuan Zhong, Klaus Mueller, Kerstin Kleese Van Dam. MultiSciView: Multivariate Scientific X-ray Image Visual Exploration with Cross-Data Space Views. Visual Informatics (Proceedings of PacificVAST'18), 2(1), pp. 14-25, 2018.

Abstract: X-ray images obtained from synchrotron beamlines are large-scale, high-resolution and high-dynamic-range grayscale data encoding multiple complex properties of the measured materials. They are typically associated with a variety of metadata, which increases their inherent complexity. There is a wealth of information embedded in these data, but so far scientists have lacked modern exploration tools to unlock these hidden treasures. To bridge this gap, we propose MultiSciView, a multivariate scientific x-ray image visualization and exploration system for beamline-generated x-ray scattering data. Our system is composed of three complementary and coordinated interactive visualizations to enable a coordinated exploration across the images and their associated attribute and feature spaces. The first visualization features a multi-level scatterplot visualization dedicated to image exploration at attribute, image, and pixel scales. The second visualization is a histogram-based attribute cross filter by which users can extract desired subset patterns from data. The third one is an attribute projection visualization designed for capturing global attribute correlations. We demonstrate our framework by way of a case study involving a real-world material scattering dataset. We show that our system can efficiently explore large-scale x-ray images, accurately identify preferred image patterns, anomalous images, and erroneous experimental settings, and effectively advance the comprehension of material nanostructure properties.

[C12]

Chidansh Bhatt, Matthew Cooper, Jian Zhao. SeqSense: Video Recommendation Using Topic Sequence Mining. Proceedings of the International Conference on Multimedia Modeling, pp. 252-263, 2018.

Abstract: This paper examines content-based recommendation in domains exhibiting sequential topical structure. An example is educational video, including Massive Open Online Courses (MOOCs) in which knowledge builds within and across courses. Conventional content-based or collaborative filtering recommendation methods do not exploit courses' sequential nature. We describe a system for video recommendation that combines topic-based video representation with sequential pattern mining of inter-topic relationships. Unsupervised topic modeling provides a scalable and domain-independent representation. We mine inter-topic relationships from manually constructed syllabi that instructors provide to guide students through their courses. This approach also allows the inclusion of multi-video sequences among the recommendation results. Integrating the resulting sequential information with content-level similarity provides relevant as well as diversified recommendations. Quantitative evaluation indicates that the proposed system, SeqSense, recommends fewer redundant videos than baseline methods, and instead emphasizes results consistent with mined topic transitions.
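
A minimal sketch of blending content similarity with mined topic transitions when ranking candidate videos; the vectors, transition counts, and blending weight below are illustrative assumptions:

    # Blend topic-vector similarity with how often instructors sequence
    # two topics in syllabi (counts hypothetical).
    import math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))

    transitions = {("intro", "recursion"): 12, ("intro", "sorting"): 3}

    def score(current, candidate, weight=0.5):
        sim = cosine(current["topics"], candidate["topics"])
        seq = transitions.get((current["topic"], candidate["topic"]), 0)
        # Saturating bonus for frequently mined topic transitions.
        return (1 - weight) * sim + weight * seq / (1 + seq)

    current = {"topic": "intro", "topics": (0.9, 0.1, 0.0)}
    candidates = [
        {"id": "v1", "topic": "recursion", "topics": (0.6, 0.4, 0.0)},
        {"id": "v2", "topic": "sorting", "topics": (0.7, 0.3, 0.0)},
    ]
    print(max(candidates, key=lambda c: score(current, c))["id"])  # -> v1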

[C11]

Jian Zhao, Chidansh Bhatt, Matthew Cooper, David Shamma. Flexible Learning with Semantic Visual Exploration and Sequence-Based Recommendation of MOOC Videos. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 329:1-329:13, 2018.

Abstract: Massive Open Online Course (MOOC) platforms have scaled online education to unprecedented enrollments, but remain limited by their rigid, predetermined curricula. This paper presents MOOCex, a technique that can offer a more flexible learning experience for MOOCs. MOOCex can recommend lecture videos across different courses with multiple perspectives, and considers both the video content and also sequential inter-topic relationships mined from course syllabi. MOOCex is also equipped with interactive visualization allowing learners to explore the semantic space of recommendations within their current learning context. The results of comparisons to traditional methods, including content-based recommendation and ranked list representation, indicate the effectiveness of MOOCex. Further, feedback from MOOC learners and instructors suggests that MOOCex enhances both MOOC-based learning and teaching.

[C10]

Siwei Fu, Jian Zhao, Hao-Fei Cheng, Haiyi Zhu, Jennifer Marlow. T-Cal: Understanding Team Conversation Data with Calendar-based Visualization. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 500:1-500:13, 2018.

Abstract: Understanding team communication and collaboration patterns is critical for improving work efficiency in organizations. This paper presents an interactive visualization system, T-Cal, that supports the analysis of conversation data from modern team messaging platforms (e.g., Slack). T-Cal employs a user-familiar visual interface, a calendar, to enable seamless multi-scale browsing of data from different perspectives. T-Cal also incorporates a number of analytical techniques for disentangling interleaving conversations, extracting keywords, and estimating sentiment. The design of T-Cal is based on an iterative user-centered design process including field studies, requirements gathering, initial prototypes demonstration, and evaluation with domain users. The resulting two case studies indicate the effectiveness and usefulness of T-Cal in real-world applications, including student group chats during a MOOC and daily conversations within an industry research lab.

[W4]

Matthew Cooper, Jian Zhao, Chidansh Bhatt, David Shamma. Using Recommendation to Explore Educational Video. Proceedings of the ACM International Conference on Multimedia Retrieval (Demo), 2018.

Abstract: Massive Open Online Course (MOOC) platforms have scaled online education to unprecedented enrollments, but remain limited by their rigid, predetermined curricula. Increasingly, professionals consume this content to augment or update specific skills rather than complete degree or certification programs. To better address the needs of this emergent user population, we describe a visual recommender system called MOOCex. The system recommends lecture videos across multiple courses and content platforms to provide a choice of perspectives on topics. The recommendation engine considers both video content and sequential inter-topic relationships mined from course syllabi. Furthermore, it allows for interactive visual exploration of the semantic space of recommendations within a learner's current context.

2017

[J12]

Jian Zhao, Maoyuan Sun, Francine Chen, Patrick Chiu. BiDots: Visual Exploration of Weighted Biclusters. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VAST'17), 24(1), pp. 195-204, 2018.

Abstract: Discovering and analyzing biclusters, i.e., two sets of related entities with close relationships, is a critical task in many real-world applications, such as exploring entity co-occurrences in intelligence analysis and studying gene expression in bioinformatics. While the output of biclustering techniques can offer some initial low-level insights, visual approaches are required on top of that due to the complexity of the algorithmic output. This paper proposes a visualization technique, called BiDots, that allows analysts to interactively explore biclusters over multiple domains. BiDots overcomes several limitations of existing bicluster visualizations by encoding biclusters in a more compact and cluster-driven manner. A set of handy interactions is incorporated to support flexible analysis of biclustering results. More importantly, BiDots addresses the case of weighted biclusters, which has been underexploited in the literature. The design of BiDots is grounded in a set of analytical tasks derived from previous work. We demonstrate its usefulness and effectiveness for exploring computed biclusters with an investigative document analysis task, in which suspicious people and activities are identified from a text corpus.

[J11]

Jian Zhao, Michael Glueck, Petra Isenberg, Fanny Chevalier, Azam Khan. Supporting Handoff in Asynchronous Collaborative Sensemaking Using Knowledge-Transfer Graphs. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VAST'17), 24(1), pp. 340-350, 2018.
 Best Paper Honorable Mention

Abstract: During asynchronous collaborative analysis, handoff of partial findings is challenging because externalizations produced by analysts may not adequately communicate their investigative process. To address this challenge, we developed techniques to automatically capture and help encode tacit aspects of the investigative process based on an analyst’s interactions, and streamline explicit authoring of handoff annotations. We designed our techniques to mediate awareness of analysis coverage, support explicit communication of progress and uncertainty with annotation, and implicit communication through playback of investigation histories. To evaluate our techniques, we developed an interactive visual analysis system, KTGraph, that supports an asynchronous investigative document analysis task. We conducted a two-phase user study to characterize a set of handoff strategies and to compare investigative performance with and without our techniques. The results suggest that our techniques promote the use of more effective handoff strategies, help increase an awareness of prior investigative process and insights, as well as improve final investigative outcomes.

[J10]

Siwei Fu, Hao Dong, Weiwei Cui, Jian Zhao, Huamin Qu. How Do Ancestral Traits Shape Family Trees over Generations? IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VAST'17), 24(1), pp. 205-214, 2018.

Abstract: Whether and how does the structure of family trees differ by ancestral traits over generations? This is a fundamental question regarding the structural heterogeneity of family trees for multi-generational transmission research. However, previous work mostly focuses on parent-child scenarios, due to the lack of proper tools to handle the complexity of extending the research to multi-generational processes. Through an iterative design study with social scientists and historians, we develop TreeEvo, which assists users in generating and testing empirical hypotheses for multi-generational research. TreeEvo summarizes and organizes family trees by structural features in a dynamic manner based on a traditional Sankey diagram. A pixel-based technique is further proposed to compactly encode trees with complex structures in each Sankey node. Detailed information of trees is accessible through a space-efficient visualization with semantic zooming. Moreover, TreeEvo embeds a Multinomial Logit Model (MLM) to examine statistical associations between tree structure and ancestral traits. We demonstrate the effectiveness and usefulness of TreeEvo through an in-depth case study with domain experts using a real-world dataset (containing 54,128 family trees of 126,196 individuals).

[C9]

Mingqian Zhao, Yijia Su, Jian Zhao, Shaoyu Chen, Huamin Qu. Mobile Situated Analytics of Ego-centric Network Data. Proceedings of the ACM SIGGRAPH Asia Symposium on Visualization, pp. 14:1-14:8, 2017.

Abstract: Situated Analytics has become popular and important with the resurgence of augmented reality techniques and the prevalence of mobile platforms. However, existing Situated Analytics can only assist in simple visual analytical tasks such as data retrieval, and most visualization systems capable of aiding complex visual analytics are designed only for desktops. Thus, many open questions remain about how to adapt desktop visualization systems to mobile platforms. In this paper, we conduct a study to discuss the challenges and trade-offs in adapting an existing desktop system to a mobile platform. With a specific example of interest, egoSlider [Wu et al. 2016], a four-view dynamic ego-centric network visualization system is tailored to the iPhone platform. We study how different view management techniques and interactions influence the effectiveness of presenting multi-scale visualizations, including scatterplot and storyline visualizations. Simultaneously, a novel Main view+Thumbnails interface layout is devised to support smooth linking between multiple views on mobile platforms. We assess the effectiveness of our system through expert interviews with four experts in data visualization.

2016

[J9]

Jian Zhao, Michael Glueck, Simon Breslav, Fanny Chevalier, Azam Khan. Annotation Graphs: A Graph-Based Visualization for Meta-Analysis of Data based on User-Authored Annotations. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VAST'16), 23(1), pp. 261-270, 2017.

Abstract: User-authored annotations of data can support analysts in the activity of hypothesis generation and sensemaking, where it is not only critical to document key observations, but also to communicate thoughts between analysts. We present Annotation Graphs, a dynamic graph visualization that allows for high-level meta-analysis of data based on user-authored data annotations. Annotation graphs are implemented within C8, a system that enables visual exploratory analysis of a dataset and annotation authoring. Various layouts of the annotation graph are supported for viewing the annotation semantics from different perspectives. The space of annotation semantics includes data selections, comments, and tags, as well as their relationships. We propose a mixed-initiative approach to layout the annotation graph by integrating an analyst's manual manipulations with an automatic layout based on the inferred similarity of the annotation semantics. We apply principles of Exploratory Sequential Data Analysis (ESDA) in designing C8, and further link these to an existing task typology in the visualization literature. We develop and evaluate the system through an iterative user-centered design process with three experts, situated in the domain of analyzing HCI experiment data. The results suggest that annotation graphs are effective as a method of visually extending user-authored annotations to data meta-analysis for discovery and organization of ideas.
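
The mixed-initiative layout idea, integrating an analyst's manual placement with an automatic similarity-based layout, can be sketched as a weighted interpolation; the weighting below is an assumption, not the paper's exact integration:

    # Blend automatic positions with analyst-pinned positions.
    def blend(auto_pos, manual_pos, pinned, strength=0.8):
        out = {}
        for node, (ax, ay) in auto_pos.items():
            if node in pinned:
                mx, my = manual_pos[node]
                # Pinned annotations stay near where the analyst dragged them.
                out[node] = (strength * mx + (1 - strength) * ax,
                             strength * my + (1 - strength) * ay)
            else:
                out[node] = (ax, ay)
        return out

    auto = {"a1": (0.0, 0.0), "a2": (1.0, 1.0)}
    manual = {"a2": (2.0, 0.5)}
    print(blend(auto, manual, pinned={"a2"}))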

[J8]

Siwei Fu, Jian Zhao, Weiwei Cui, Huamin Qu. Visual Analysis of MOOC Forums with iForum. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VAST'16), 23(1), pp. 201-210, 2017.

Abstract: Discussion forums of Massive Open Online Courses (MOOC) provide great opportunities for students to interact with instructional staff as well as other students. Exploration of MOOC forum data can offer valuable insights for these staff to enhance the course and prepare the next release. However, it is challenging due to the large, complicated, and heterogeneous nature of relevant datasets, which contain multiple dynamically interacting objects such as users, posts, and threads, each one including multiple attributes. In this paper, we present a design study for developing an interactive visual analytics system, called iForum, that allows for effectively discovering and understanding temporal patterns in MOOC forums. The design study was conducted with three domain experts in an iterative manner over one year, including a MOOC instructor and two official teaching assistants. iForum offers a set of novel visualization designs for presenting the three interleaving aspects of MOOC forums (i.e., posts, users, and threads) at three different scales. To demonstrate the effectiveness and usefulness of iForum, we describe a case study involving field experts, in which they use iForum to investigate real MOOC forum data for a course on Java programming.

[C8]

Jian Zhao, Michael Glueck, Fanny Chevalier, Yanhong Wu, Azam Khan. Egocentric Analysis of Dynamic Networks with EgoLines. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 5003-5014, 2016.
 Best Paper Honorable Mention

Abstract: The egocentric analysis of dynamic networks focuses on discovering the temporal patterns of a subnetwork around a specific central actor (i.e., an ego-network). These types of analyses are useful in many application domains, such as social science and business intelligence, providing insights about how the central actor interacts with the outside world. We present EgoLines, an interactive visualization to support the egocentric analysis of dynamic networks. Using a "subway map" metaphor, a user can trace an individual actor over the evolution of the ego-network. The design of EgoLines is grounded in a set of key analytical questions pertinent to egocentric analysis, derived from our interviews with three domain experts and general network analysis tasks. We demonstrate the effectiveness of EgoLines in egocentric analysis tasks through a controlled experiment and a case study with a domain expert.

2015

[J7]

Yanhong Wu, Naveen Pitipornvivat, Jian Zhao, Sixiao Yang, Guowei Huang, Huamin Qu. egoSlider: Visual Analysis of Egocentric Network Evolution. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VAST'15), 22(1), pp. 260-269, 2016.

Abstract: Ego-network, which represents relationships between a specific individual, i.e., the ego, and people connected to it, i.e., alters, is a critical target to study in social network analysis. Evolutionary patterns of ego-networks over time provide valuable insights for many domains such as sociology, anthropology, and psychology. However, the analysis of dynamic ego-networks remains challenging due to their complicated time-varying graph structures, for example: alters come and leave, ties grow stronger and fade away, and alter communities merge and split. Most of the existing dynamic graph visualization techniques mainly focus on topological changes of the entire network, which is not adequate for egocentric analytical tasks. In this paper, we present egoSlider, a visual analysis system for exploring and comparing dynamic ego-networks. egoSlider provides a holistic picture of the data through multiple interactively coordinated views, revealing ego-network evolutionary patterns at three different layers: a macroscopic level for summarizing the entire ego-network data, a mesoscopic level for overviewing specific individuals' ego-network evolutions, and a microscopic level for displaying detailed temporal information of egos and their alters. We demonstrate the effectiveness of egoSlider with a usage scenario with the DBLP publication records. Also, a controlled user study indicates that in general egoSlider outperforms a baseline visualization of dynamic networks for completing egocentric analytical tasks.

[J6]

Jian Zhao, R. William Soukoreff, Ravin Balakrishnan. Exploring and Modeling Unimanual Object Manipulation on Multi-Touch Displays. International Journal of Human-Computer Studies, 78, pp. 68-80, 2015.

Abstract: Touch-sensitive devices are becoming increasingly widespread, and consequently gestural interfaces have become familiar to the public. Although many gestures frequently require dragging, pinching, spreading, and rotating the fingertips, there currently does not exist a human performance model describing this interaction. In this paper, a novel user performance model is derived for virtual object manipulation on touch-sensitive displays, which involves simultaneous translation, rotation, and scaling of the object. Two controlled experiments with dual-finger unimanual manipulations were conducted to validate the new model. The results indicate that the model fits the experimental data well, and performs the best among several alternative models. Moreover, based on the analysis of the empirical data, the simultaneous nature of manipulation in the task is explored and several design implications are provided.

[C7]

Jian Zhao, Zhicheng Liu, Mira Dontcheva, Aaron Hertzmann, Alan Wilson. MatrixWave: Visual Comparison of Event Sequence Data. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 259-268, 2015.
 Best Paper Honorable Mention

Abstract: Event sequence data analysis is common in many domains, including web and software development, transportation, and medical care. However, few visualization techniques have been investigated for the comparative analysis of multiple event sequence datasets. Grounded in the real-world characteristics of web clickstream data, we explore visualization techniques for comparison of two clickstream datasets collected on different days or from users with different demographics. Through iterative design with web analysts, we designed MatrixWave, a matrix-based representation that allows analysts to get an overview of differences in traffic patterns and interactively explore paths through the website. We use color to encode differences and size to offer context over traffic volume. User feedback on MatrixWave is positive: participants in a laboratory study were more accurate with MatrixWave than with the conventional Sankey diagram.
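
The quantities MatrixWave encodes are, at their core, differences between step-wise transition counts of two datasets. A minimal sketch of that computation follows; the clickstreams and page names are made up, and this is not the authors' code.

    # Count page-to-page transitions in two clickstream datasets and compute
    # the per-edge difference that MatrixWave maps to color (size would
    # encode traffic volume). Illustrative only.
    from collections import Counter

    def transitions(sessions):
        """Count (page, next_page) transitions over all sessions."""
        counts = Counter()
        for pages in sessions:
            counts.update(zip(pages, pages[1:]))
        return counts

    day_a = [["home", "search", "item"], ["home", "item", "cart"]]
    day_b = [["home", "search", "item", "cart"], ["home", "search"]]

    t_a, t_b = transitions(day_a), transitions(day_b)
    for edge in sorted(set(t_a) | set(t_b)):
        print(edge, "A:", t_a[edge], "B:", t_b[edge], "delta:", t_b[edge] - t_a[edge])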

[C6]

Fan Du, Nan Cao, Jian Zhao, Yu-Ru Lin. Trajectory Bundling for Animated Transitions. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 289-298, 2015.

Abstract: Animated transition has been a popular design choice when switching between different views or layouts, in which moving trajectories are created as cues for tracking objects as they shift location. Tracking moving objects, however, becomes difficult when their paths overlap or the number of tracking targets increases. In our work, we propose a new design to facilitate tracking moving objects in animated transitions. Instead of simply moving an object along a straight line, we create "bundled" moving trajectories for a group of objects that are close to one another and share similar moving directions. To study the effect of bundled trajectories, we untangle variations due to different aspects of tracking complexity in a comprehensive controlled user study. The results ascertain the effectiveness of using bundled trajectories, especially when the number of tracking targets grows and the object movement involves a high degree of occlusion. We discuss the implications of our new design and study.
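
The core geometric trick, routing a group of nearby, similarly moving objects through a shared control point so their paths form a bundle, can be sketched in a few lines. Coordinates are hypothetical; this is not the study's implementation.

    # Bundled transition paths: objects that start near one another and move
    # in similar directions share one quadratic Bezier control point, pulling
    # their trajectories together instead of drawing independent straight lines.
    import numpy as np

    def bezier(p0, p1, p2, n=20):
        """Sample a quadratic Bezier curve from p0 to p2 via control point p1."""
        t = np.linspace(0, 1, n)[:, None]
        return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

    starts = np.array([[0.0, 0.0], [0.1, 0.2], [0.0, 0.3]])  # one close-by group
    ends = np.array([[2.0, 1.0], [2.1, 1.2], [2.0, 1.3]])    # similar directions

    control = (starts.mean(axis=0) + ends.mean(axis=0)) / 2  # shared bundle point
    paths = [bezier(s, control, e) for s, e in zip(starts, ends)]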

2014

[J5]

Jian Zhao, Nan Cao, Zhen Wen, Yale Song, Yu-Ru Lin, Christopher Collins. #FluxFlow: Visual Analysis of Anomalous Information Spreading on Social Media. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VAST'14), 20(12), pp. 1773-1782, 2014.
 Best Paper Honorable Mention

Abstract: We present FluxFlow, an interactive visual analysis system for revealing and analyzing anomalous information spreading in social media. Every day, millions of messages are created, commented, and shared by people on social media websites, such as Twitter and Facebook. This provides valuable data for researchers and practitioners in many application domains, such as marketing, to inform decision-making. Distilling valuable social signals from the huge crowd's messages, however, is challenging, due to the heterogeneous and dynamic crowd behaviors. The challenge is rooted in the difficulty of discerning anomalous information behaviors, such as the spreading of rumors or misinformation, from more conventional patterns, such as popular topics and newsworthy events, in a timely fashion. FluxFlow incorporates advanced machine learning algorithms to detect anomalies, and offers a set of novel visualization designs for presenting the detected threads for deeper analysis. We evaluated FluxFlow with real datasets containing the Twitter feeds captured during significant events such as Hurricane Sandy. Through quantitative measurements of the algorithmic performance and qualitative interviews with domain experts, the results show that the back-end anomaly detection model is effective in identifying anomalous retweeting threads, and its front-end interactive visualizations are intuitive and useful for analysts to discover insights in data and comprehend the underlying analytical model.

[J4]

Jian Zhao, R. William Soukoreff, Xiangshi Ren, Ravin Balakrishnan. A Model of Scrolling on Touch-Sensitive Displays. International Journal of Human-Computer Studies, 72(12), pp. 805-821, 2014.

Abstract: Scrolling interaction is a common and frequent activity allowing users to browse content that is initially off-screen. With the increasing popularity of touch-sensitive devices, gesture-based scrolling interactions (e.g., finger panning and flicking) have become an important element in our daily interaction vocabulary. However, there are currently no comprehensive user performance models for scrolling tasks on touch displays. This paper presents an empirical study of user performance in scrolling tasks on touch displays. In addition to three geometrical movement parameters --- scrolling distance, display window size, and target width, we also investigate two other factors that could affect the performance, i.e., scrolling modes --- panning and flicking, and feedback techniques --- with and without distance feedback. We derive a quantitative model based on four formal assumptions that abstract the real-world scrolling tasks, which are drawn from the analysis and observations of user scrolling actions. The results of a controlled experiment reveal that our model generalizes well for direct-touch scrolling tasks, accommodating different movement parameters, scrolling modes and feedback techniques. Also, the supporting blocks of the model, the four basic assumptions and three important mathematical components, are validated by the experimental data. In-depth comparisons with existing models of similar tasks indicate that our model performs the best under different measurement criteria. Our work provides a theoretical foundation for modeling sophisticated scrolling actions, as well as offers insights into designing scrolling techniques for next-generation touch input devices.

[C5]

Jian Zhao, Liang Gou, Fei Wang, Michelle Zhou. PEARL: An Interactive Visual Analytic Tool for Understanding Personal Emotion Style Derived from Social Media. Proceedings of the IEEE Symposium on Visual Analytics Science and Technology, pp. 203-212, 2014.

Abstract: Hundreds of millions of people leave digital footprints on social media (e.g., Twitter and Facebook). Such data not only disclose a person's demographics and opinions, but also reveal one's emotional style. Emotional style captures a person's patterns of emotions over time, including one's overall emotional volatility and resilience. Understanding one's emotional style can provide great benefits for both individuals and businesses alike, including the support of self-reflection and delivery of individualized customer care. We present PEARL, a timeline-based visual analytic tool that allows users to interactively discover and examine a person's emotional style derived from this person's social media text. Compared to other visual text analytic systems, our work offers three unique contributions. First, it supports multi-dimensional emotion analysis from social media text to automatically detect a person's expressed emotions at different time points and summarize those emotions to reveal the person's emotional style. Second, it effectively visualizes complex, multi-dimensional emotion analysis results to create a visual emotional profile of an individual, which helps users browse and interpret one's emotional style. Third, it supports rich visual interactions that allow users to interactively explore and validate emotion analysis results. We have evaluated our work extensively through a series of studies. The results demonstrate the effectiveness of our tool both in emotion analysis from social media and in support of interactive visualization of the emotion analysis results.

[C4]

Ji Wang, Jian Zhao, Sheng Guo, Chris North, Naren Ramakrishnan. ReCloud: Semantics-based Word Cloud Visualization of User Reviews. Proceedings of the Graphics Interface Conference, pp. 151-158, 2014.

Abstract: User reviews, like those found on Yelp and Amazon, have become an important reference for decision making in daily life, for example, in dining, shopping and entertainment. However, large amounts of available reviews make the reading process tedious. Existing word cloud visualizations attempt to provide an overview. However, their randomized layouts do not reveal content relationships to users. In this paper, we present ReCloud, a word cloud visualization of user reviews that arranges semantically related words as spatially proximal. We use a natural language processing technique called grammatical dependency parsing to create a semantic graph of review contents. Then, we apply a force-directed layout to the semantic graph, which generates a clustered layout of words by minimizing an energy model. Thus, ReCloud can provide users with more insight about the semantics and context of the review content. We also conducted an experiment to compare the efficiency of our method with two alternative review reading techniques: random layout word cloud and normal text-based reviews. The results showed that the proposed technique improves user performance and experience of understanding a large number of reviews.
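
The pipeline described above (dependency parse, then semantic graph, then force-directed layout) maps directly onto common NLP and graph libraries. Below is a rough sketch with spaCy and networkx, assuming the en_core_web_sm model is installed; it stands in for, and is not, the authors' implementation.

    # ReCloud-style pipeline sketch: build a word graph from grammatical
    # dependencies in review text, then compute a force-directed ("energy
    # minimizing") layout so related words land near each other.
    import networkx as nx
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("The spicy ramen tasted amazing and arrived quickly.")

    G = nx.Graph()
    for token in doc:
        head = token.head
        if (token.is_alpha and not token.is_stop
                and head is not token and head.is_alpha and not head.is_stop):
            # A grammatical dependency suggests semantic relatedness.
            G.add_edge(token.lemma_, head.lemma_)

    # Word positions from the layout would then seed word-cloud placement.
    pos = nx.spring_layout(G, seed=42)
    for word, (x, y) in pos.items():
        print(f"{word:>10s}  ({x:+.2f}, {y:+.2f})")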

2013

[J3]

Jian Zhao, Christopher Collins, Fanny Chevalier, Ravin Balakrishnan. Interactive Exploration of Implicit and Explicit Relations in Faceted Datasets. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VAST'13), 19(12), pp. 2080-2089, 2013.

Abstract: Many datasets, such as scientific literature collections, contain multiple heterogeneous facets from which implicit relations can be derived, as well as explicit relational references between data items. The exploration of this data is challenging not only because of large data scales but also the complexity of resource structures and semantics. In this paper, we present PivotSlice, an interactive visualization technique which provides efficient faceted browsing as well as flexible capabilities to discover data relationships. With the metaphor of direct manipulation, PivotSlice allows the user to visually and logically construct a series of dynamic queries over the data, based on a multi-focus and multi-scale tabular view that subdivides the entire dataset into several meaningful parts with customized semantics. PivotSlice further facilitates the visual exploration and sensemaking process through features including live search and integration of online data, graphical interaction histories and smoothly animated visual state transitions. We evaluated PivotSlice through a qualitative lab study with university researchers and report the findings from our observations and interviews. We also demonstrate the effectiveness of PivotSlice using a scenario of exploring a repository of information visualization literature.

[C3]

Jian Zhao, Daniel Wigdor, Ravin Balakrishnan. TrailMap: Facilitating Information Seeking in a Multi-Scale Digital Map via Implicit Bookmarking. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 3009-3018, 2013.

Abstract: Web applications designed for map exploration in local neighborhoods have become increasingly popular and important in everyday life. During the information-seeking process, users often revisit previously viewed locations, repeat earlier searches, or need to memorize or manually mark areas of interest. To facilitate rapid returns to earlier views during map exploration, we propose a novel algorithm to automatically generate map bookmarks based on a user's interaction. TrailMap, a web application based on this algorithm, is developed, providing a fluid and effective neighborhood exploration experience. A one-week study is conducted to evaluate TrailMap in users' everyday web browsing activities. Results showed that TrailMap's implicit bookmarking mechanism is efficient for map exploration and the interactive and visual nature of the tool is intuitive to users.
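
The paper derives its bookmarks from user interaction; as a rough illustration of the general idea, here is a plausible dwell-time heuristic. It is hypothetical, not the algorithm proposed in the paper, and all names and thresholds are made up.

    # Hypothetical dwell-based heuristic illustrating implicit bookmarking:
    # a map viewport that the user holds for a while before moving on becomes
    # a candidate bookmark. TrailMap's actual algorithm is more sophisticated.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Viewport:
        lat: float
        lon: float
        zoom: int

    def implicit_bookmarks(events, min_dwell=5.0):
        """events: chronological (timestamp_seconds, Viewport) pairs."""
        bookmarks = []
        for (t0, v0), (t1, v1) in zip(events, events[1:]):
            if v1 != v0 and (t1 - t0) >= min_dwell:
                bookmarks.append(v0)  # the user lingered on v0, then moved on
        return bookmarks

    log = [(0.0, Viewport(43.66, -79.39, 15)),   # held 7.5 s -> bookmarked
           (7.5, Viewport(43.65, -79.38, 16)),   # held 0.7 s -> skipped
           (8.2, Viewport(43.64, -79.40, 16))]
    print(implicit_bookmarks(log))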

[W3]

Ji Wang, Jian Zhao, Sheng Guo, Chris North. Clustered Layout Word Cloud for User Generated Review. Yelp Dataset Challenge (Grand Prize Winner), 2013.

Abstract: User reviews, like those found on Yelp and Amazon, have become an important reference for decision making in daily life, for example, in dining, shopping and entertainment. However, large amounts of available reviews make the reading process tedious. Existing word cloud visualizations attempt to provide an overview. However, their randomized layouts do not reveal content relationships to users. In this paper, we present ReCloud, a word cloud visualization of user reviews that arranges semantically related words as spatially proximal. We use a natural language processing technique called grammatical dependency parsing to create a semantic graph of review contents. Then, we apply a force-directed layout to the semantic graph, which generates a clustered layout of words by minimizing an energy model. Thus, ReCloud can provide users with more insight about the semantics and context of the review content. We also conducted an experiment to compare the efficiency of our method with two alternative review reading techniques: random layout word cloud and normal text-based reviews. The results showed that the proposed technique improves user performance and experience of understanding a large number of reviews.

2012

[J2]

Jian Zhao, Fanny Chevalier, Christopher Collins, Ravin Balakrishnan. Facilitating Discourse Analysis with Interactive Visualization. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE InfoVis'12), 18(12), pp. 2639-2648, 2012.

Abstract: A discourse parser is a natural language processing system which can represent the organization of a document based on a rhetorical structure tree---one of the key data structures enabling applications such as text summarization, question answering and dialogue generation. Computational linguistics researchers currently rely on manually exploring and comparing the discourse structures to get intuitions for improving parsing algorithms. In this paper, we present DAViewer, an interactive visualization system that assists computational linguistics researchers in exploring, comparing, evaluating, and annotating the results of discourse parsers. An iterative user-centered design process with domain experts was conducted in the development of DAViewer. We report the results of an informal formative study of the system to better understand how the proposed visualization and interaction techniques are used in a real research environment.

[S1]

Jian Zhao, Steven Drucker, Danyel Fisher, Donald Brinkman. TimeSlice: Interactive Faceted Browsing of Timeline Data. Proceedings of the International Working Conference on Advanced Visual Interfaces, pp. 433-436, 2012.

Abstract: Temporal events with multiple sets of metadata attributes, i.e., facets, are ubiquitous across different domains. The capabilities of efficiently viewing and comparing events data from various perspectives are critical for revealing relationships, making hypotheses, and discovering patterns. In this paper, we present TimeSlice, an interactive faceted visualization of temporal events, which allows users to easily compare and explore timelines with different attributes on a set of facets. By directly manipulating the filtering tree, a dynamic visual representation of queries and filters in the facet space, users can simultaneously browse the focused timelines and their contexts at different levels of detail, which supports efficient navigation of multi-dimensional events data. Also presented is an initial evaluation of TimeSlice with two datasets - famous deceased people and US daily flight delays.

[W2]

Jian Zhao. A Particle Filter Based Approach of Visualizing Time-varying Volume. Proceedings of the IEEE Symposium on Large-Scale Data Analysis and Visualization (Poster), 2012.

Abstract: Extracting and presenting essential information of time-varying volumetric data is critical in many fields of science. This paper introduces a novel approach to identifying important aspects of the dataset under the particle filter framework from computer vision. Viewing time-varying volumes as dynamic voxels moving through time, an algorithm for computing the 3D voxel transition curves is derived. Based on the curves, which characterize the local temporal behavior of the data, this paper also introduces several post-processing techniques to visualize important features such as curve clusters obtained by k-means and curve variations computed from curve gradients.
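
The post-processing steps named above, clustering transition curves with k-means and measuring variation from curve gradients, can be sketched on synthetic curves. The particle-filter curve extraction itself is omitted, and this is not the paper's code.

    # Cluster per-voxel temporal curves with k-means and compute a gradient-
    # based variation measure. Synthetic curves stand in for the transition
    # curves the particle-filter stage would produce.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    T = 32
    rising = np.linspace(0, 1, T) + 0.05 * rng.standard_normal((100, T))
    flat = 0.5 + 0.05 * rng.standard_normal((100, T))
    curves = np.vstack([rising, flat])               # (200 voxels, T steps)

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(curves)
    print(np.bincount(labels))                       # two clusters of ~100 curves

    variation = np.abs(np.gradient(curves, axis=1)).mean(axis=1)
    print(variation[:5])                             # per-voxel temporal variation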

2011

[J1]

Jian Zhao, Fanny Chevalier, Emmanuel Pietriga, Ravin Balakrishnan. Exploratory Analysis of Time-Series with ChronoLenses. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE InfoVis'11), 17(12), pp. 2422-2431, 2011.

Abstract: Visual representations of time-series are useful for tasks such as identifying trends, patterns and anomalies in the data. Many techniques have been devised to make these visual representations more scalable, enabling the simultaneous display of multiple variables, as well as the multi-scale display of time-series of very high resolution or that span long time periods. There has been comparatively little research on how to support the more elaborate tasks associated with the exploratory visual analysis of time-series, e.g., visualizing derived values, identifying correlations, or discovering anomalies beyond obvious outliers. Such tasks typically require deriving new time-series from the original data, trying different functions and parameters in an iterative manner. We introduce a novel visualization technique called ChronoLenses, aimed at supporting users in such exploratory tasks. ChronoLenses perform on-the-fly transformation of the data points in their focus area, tightly integrating visual analysis with user actions, and enabling the progressive construction of advanced visual analysis pipelines.
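
A lens in this sense is essentially an on-the-fly transform applied only to the points inside its focus area, with lenses composable into pipelines. A minimal sketch follows; it is illustrative, not the paper's implementation.

    # A "lens" transforms only the samples inside its focus interval; chaining
    # lenses progressively builds a pipeline of derived time-series.
    import numpy as np

    def apply_lens(t, y, t_min, t_max, transform):
        """Return (t, derived y) for the samples inside the focus area."""
        mask = (t >= t_min) & (t <= t_max)
        return t[mask], transform(t[mask], y[mask])

    t = np.linspace(0, 10, 500)
    y = np.sin(t) + 0.1 * np.random.default_rng(1).standard_normal(500)

    # A first-derivative lens over the focus interval [2, 4]:
    tf, dy = apply_lens(t, y, 2.0, 4.0, lambda ts, ys: np.gradient(ys, ts))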

[C2]

R. William Soukoreff, Jian Zhao, Xiangshi Ren. The Entropy of a Rapid Aimed Movement: Fitts' Index of Difficulty versus Shannon's Entropy. Proceedings of the 13th IFIP TC 13 International Conference on Human-Computer Interaction (INTERACT), Part IV, pp. 222-239, 2011.

Abstract: A thought experiment is proposed that reveals a difference between Fitts' index of difficulty and Shannon's entropy, in the quantification of the information content of a series of rapid aimed movements. This implies that the contemporary Shannon formulation of the index of difficulty is similar to, but not identical to, entropy. Preliminary work is reported toward developing a model that resolves the problem. Starting from first principles (information theory), a formulation for the entropy of a Fitts' law style rapid aimed movement is derived, that is similar in form to the traditional formulation. Empirical data from Fitts' 1954 paper are analysed, demonstrating that the new model fits empirical data as well as the current standard approach. The novel formulation is promising because it accurately describes human movement data, while also being derived from first principles (using information theory), thus providing insight into the underlying cause of Fitts' law.
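
For reference, the two standard quantities the paper contrasts are given below (standard formulations only; the paper's own entropy-based derivation is not reproduced here).

    % Fitts' index of difficulty (Shannon formulation), for a movement of
    % amplitude D to a target of width W:
    \[ ID = \log_2\!\left(\frac{D}{W} + 1\right) \ \text{bits} \]
    % Shannon's entropy of a discrete source with outcome probabilities p_i:
    \[ H = -\sum_i p_i \log_2 p_i \ \text{bits} \]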

[C1]

Jian Zhao, Fanny Chevalier, Ravin Balakrishnan. KronoMiner: Using Multi-Foci Navigation for the Visual Exploration of Time-Series Data. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 1737-1746, 2011.

Abstract: The need for pattern discovery in long time-series data led researchers to develop interactive visualization tools and analytical algorithms for gaining insight into the data. Most of the literature on time-series data visualization focuses either on a small number of tasks or on a specific domain. We propose KronoMiner, a tool that embeds new interaction and visualization techniques as well as analytical capabilities for the visual exploration of time-series data. The interface design has been iteratively refined based on feedback from expert users. Qualitative evaluation with an expert user not involved in the design process indicates that our prototype is promising for further research.

[W1]

Jian Zhao, R. William Soukoreff, Ravin Balakrishnan. A Model of Multi-touch Manipulation. Proceedings of the 2nd Annual GRAND Conference (Poster), 2011.

Abstract: As touch-sensitive devices become increasingly popular, a fundamental understanding of human performance with multi-touch gestures is critical. However, there is currently no mathematical model for interpreting such gestures. In this paper, a novel model of multi-touch interaction is derived by combining the Mahalanobis distance metric and Fitts' law. The model describes the time required to complete an object manipulation task that includes translocation, rotation, and scaling. Empirical data is reported that validates the new model (R² > 0.9). A linear relationship between task difficulty and elapsed time is revealed, indicating that the model can provide guidelines for interface designers for empirically comparing gestures and devices.
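
The two standard ingredients the model combines are stated below (standard definitions only; the poster's combined formulation is not reproduced, and the interpretation of x as a point in the translation-rotation-scale task space is an assumption).

    % Fitts' law, with empirical constants a and b and index of difficulty ID:
    \[ MT = a + b \cdot ID, \qquad ID = \log_2\!\left(\frac{D}{W} + 1\right) \]
    % Mahalanobis distance of a point x from a distribution with mean \mu and
    % covariance \Sigma:
    \[ d_M(x) = \sqrt{(x - \mu)^\top \, \Sigma^{-1} \, (x - \mu)} \]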

Refereed Journal Articles

[J33]

Xuejun Du, Pengcheng An, Justin Leung, April Li, Linda Chapman, Jian Zhao. DeepThInk: Designing and Probing Human-AI Co-Creation in Digital Art Therapy. International Journal of Human-Computer Studies, 181, pp. 103139:1-103139:17, 2024 (Accepted in 2023).

Abstract: Art therapy has been an essential form of psychotherapy to facilitate psychological well-being, which has been promoted and transformed by recent technological advances into digital art therapy. However, the potential of digital technologies has not been fully leveraged; especially, applying AI technologies in digital art therapy is still under-explored. In this paper, we propose an AI-infused art-making system, DeepThInk, to investigate the potential of introducing a human-AI co-creative process into art therapy, by collaborating with five experienced registered art therapists over ten months. DeepThInk offers a range of tools that can lower the expertise threshold for art-making while improving users' creativity and expressivity. We gathered the insights of DeepThInk through expert reviews and a two-part user evaluation with both synchronous and asynchronous therapy setups. This longitudinal iterative design process helped us derive and contextualize design principles of human-AI co-creation for art therapy, shedding light on future design in relevant domains.

[J32]

Yue Lyu, Pengcheng An, Yage Xiao, Zibo Zhang, Huan Zhang, Keiko Katsuragawa, Jian Zhao. Eggly: Designing Mobile Augmented Reality Neurofeedback Training Games for Children with Autism Spectrum Disorder. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 7(2), pp.67:1-67:29, 2023.

Abstract: Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder that affects how children communicate and relate to other people and the world around them. Emerging studies have shown that neurofeedback training (NFT) games are an effective and playful intervention to enhance social and attentional capabilities for autistic children. However, NFT is primarily available in a clinical setting that is hard to scale. Also, the intervention demands deliberately designed gamified feedback with fun and enjoyment, where little knowledge has been acquired in the HCI community. Through a ten-month iterative design process with four domain experts, we developed Eggly, a mobile NFT game based on a consumer-grade EEG headband and a tablet. Eggly uses novel augmented reality (AR) techniques to offer engagement and personalization, enhancing children's training experience. We conducted two field studies (a single-session study and a three-week multi-session study) with a total of five autistic children to assess Eggly in practice at a special education center. Both quantitative and qualitative results indicate the effectiveness of the approach as well as contribute to the design knowledge of creating mobile AR NFT games.

[J31]

Andrea Batch, Yipeng Ji, Mingming Fan, Jian Zhao, Niklas Elmqvist. uxSense: Supporting User Experience Analysis with Visualization and Computer Vision. IEEE Transactions on Visualization and Computer Graphics, 2023 (In Press).

Abstract: Analyzing user behavior from usability evaluation can be a challenging and time-consuming task, especially as the number of participants and the scale and complexity of the evaluation grows. We propose uxSense, a visual analytics system using machine learning methods to extract user behavior from audio and video recordings as parallel time-stamped data streams. Our implementation draws on pattern recognition, computer vision, natural language processing, and machine learning to extract user sentiment, actions, posture, spoken words, and other features from such recordings. These streams are visualized as parallel timelines in a web-based front-end, enabling the researcher to search, filter, and annotate data across time and space. We present the results of a user study involving professional UX researchers evaluating user data using uxSense. In fact, we used uxSense itself to evaluate their sessions.

[J30]

Xingjun Li, Yizhi Zhang, Justin Leung, Chengnian Sun, Jian Zhao. EDAssistant: Supporting Exploratory Data Analysis in Computational Notebooks with In-Situ Code Search and Recommendation. ACM Transactions on Interactive Intelligent Systems, 13(1), pp. 1:1-1:27, 2023 (Accepted in 2022).

Abstract: Using computational notebooks (e.g., Jupyter Notebook), data scientists rationalize their exploratory data analysis (EDA) based on their prior experience and external knowledge such as online examples. For novices or data scientists who lack specific knowledge about the dataset or problem to investigate, effectively obtaining and understanding the external information is critical to carrying out EDA. This paper presents EDAssistant, a JupyterLab extension that supports EDA with in-situ search of example notebooks and recommendation of useful APIs, powered by novel interactive visualization of search results. The code search and recommendation are enabled by advanced machine learning models, trained on a large corpus of EDA notebooks collected online. A user study is conducted to investigate both EDAssistant and data scientists' current practice (i.e., using external search engines). The results demonstrate the effectiveness and usefulness of EDAssistant, and participants appreciated its smooth and in-context support of EDA. We also report several design implications regarding code recommendation tools.
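
The in-situ search described above boils down to embedding code and queries in a shared space and ranking by similarity. A toy sketch follows, with TF-IDF standing in for the paper's trained embedding models; the corpus, query, and tokenization are made up.

    # Embedding-based code search sketch: vectorize a notebook-code corpus,
    # embed the query in the same space, and rank by cosine similarity.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = [
        "df.groupby('city')['price'].mean().plot(kind='bar')",
        "sns.heatmap(df.corr(), annot=True)",
        "df.dropna(subset=['price']).describe()",
    ]
    vec = TfidfVectorizer(token_pattern=r"[A-Za-z_]+")
    index = vec.fit_transform(corpus)

    query = "plot mean price per city"
    scores = cosine_similarity(vec.transform([query]), index).ravel()
    for i in scores.argsort()[::-1]:
        print(f"{scores[i]:.2f}  {corpus[i]}")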

[J29]

Mingliang Xue, Yunhai Wang, Chang Han, Jian Zhang, Zheng Wang, Kaiyi Zhang, Christophe Hurter, Jian Zhao, Oliver Deussen. Target Netgrams: An Annulus-constrained Stress Model for Radial Graph Visualization. IEEE Transactions on Visualization and Computer Graphics, 29(10), pp. 4256-4268, 2023 (Accepted in 2022).

Abstract: We present Target Netgrams as a visualization technique for radial layouts of graphs. Inspired by manually created target sociograms, we propose an annulus-constrained stress model that aims to position nodes onto the annuli between adjacent circles for indicating their radial hierarchy, while maintaining the network structure (clusters and neighborhoods) and improving readability as much as possible. This is achieved by providing more space on the annuli than traditional layout techniques. By adapting stress majorization to this model, the layout is computed as a constrained least-squares optimization problem. Additional constraints (e.g., parent-child preservation, attribute-based clusters and structure-aware radii) are provided for exploring nodes, edges, and levels of interest. We demonstrate the effectiveness of our method through a comprehensive evaluation, a user study, and a case study.
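
In generic notation (not copied from the paper), the underlying stress model and the annulus constraint can be written as follows.

    % Standard stress over node positions x_i, with graph-theoretic distances
    % d_ij and weights w_ij (commonly w_ij = d_ij^{-2}):
    \[ \sigma(X) = \sum_{i<j} w_{ij} \left( \lVert x_i - x_j \rVert - d_{ij} \right)^2 \]
    % Annulus constraint: each node stays within the annulus of its hierarchy
    % level l(i), bounded by inner radius r_{l(i)} and outer radius R_{l(i)}:
    \[ r_{l(i)} \le \lVert x_i \rVert \le R_{l(i)} \]
    % Stress majorization minimizes sigma(X) by iteratively solving convex
    % quadratic subproblems, here subject to the radial constraints.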

[J28]

Anjul Tyagi, Jian Zhao, Pushkar Patel, Swasti Khurana, Klaus Mueller. Infographics Wizard: Flexible Infographics Authoring and Design Exploration. Computer Graphics Forum (Proceedings of EuroVis 2022), 41(3), pp. 121-132, 2022.

Abstract: Infographics are an aesthetic visual representation of information following specific design principles of human perception. Designing infographics can be a tedious process for non-experts and time-consuming even for professional designers. With the help of designers, we propose a semi-automated infographic framework for general structured and flow-based infographic design generation. For novice designers, our framework automatically creates and ranks infographic designs for a user-provided text with no requirement for design input. However, expert designers can still provide custom design inputs to customize the infographics. We also contribute a dataset of individual visual group (VG) designs (in SVG), along with a dataset of 1,000 complete infographic images with segmented VGs. Evaluation results confirm that by using our framework, designers of all expertise levels can generate generic infographic designs faster than existing methods while maintaining the same quality as hand-designed infographic templates.

[J27]

Takanori Fujiwara, Jian Zhao, Francine Chen, Yaoliang Yu, Kwan-Liu Ma. Network Comparison with Interpretable Contrastive Network Representation Learning. Journal of Data Science, Statistics, and Visualization, 2(5), pp. 1-35, 2022.

Abstract: Identifying unique characteristics in a network through comparison with another network is an essential network analysis task. For example, with networks of protein interactions obtained from normal and cancer tissues, we can discover unique types of interactions in cancer tissues. This analysis task could be greatly assisted by contrastive learning, which is an emerging analysis approach to discover salient patterns in one dataset relative to another. However, existing contrastive learning methods cannot be directly applied to networks as they are designed only for high-dimensional data analysis. To address this problem, we introduce a new analysis approach called contrastive network representation learning (cNRL). By integrating two machine learning schemes, network representation learning and contrastive learning, cNRL enables embedding of network nodes into a low-dimensional representation that reveals the uniqueness of one network compared to another. Within this approach, we also design a method, named i-cNRL, which offers interpretability in the learned results, allowing for understanding which specific patterns are only found in one network. We demonstrate the effectiveness of i-cNRL for network comparison with multiple network models and real-world datasets. Furthermore, we compare i-cNRL and other potential cNRL algorithm designs through quantitative and qualitative evaluations.
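
The contrastive step in a cNRL-style pipeline can be illustrated with contrastive PCA applied to node representations. In the sketch below, random matrices stand in for features learned by a network representation learning model; this is not the paper's i-cNRL method, and alpha and k are placeholder parameters.

    # Contrastive PCA on node representations: find directions with high
    # variance in the target network's features but low variance in the
    # background network's features (placeholder random data).
    import numpy as np

    rng = np.random.default_rng(0)
    target = rng.standard_normal((300, 8))
    target[:, 0] *= 3.0          # variation unique to the target network
    background = rng.standard_normal((300, 8))
    background[:, 1] *= 3.0      # variation dominating the background network

    def contrastive_pca(target, background, alpha=1.0, k=2):
        c_t = np.cov(target, rowvar=False)
        c_b = np.cov(background, rowvar=False)
        vals, vecs = np.linalg.eigh(c_t - alpha * c_b)
        return vecs[:, np.argsort(vals)[::-1][:k]]   # top-k contrastive axes

    axes = contrastive_pca(target, background)
    embedding = (target - target.mean(axis=0)) @ axes  # 2-D contrastive view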

[J26]

Jian Zhao, Shenyu Xu, Senthil Chandrasegaran, Chris Bryan, Fan Du, Aditi Mishra, Xin Qian, Yiran Li, Kwan-Liu Ma. ChartStory: Automated Partitioning, Layout, and Captioning of Charts into Comic-Style Narratives. IEEE Transactions on Visualization and Computer Graphics, 29(2), pp. 1384-1399, 2023 (Accepted in 2021).

Abstract: Visual data storytelling is gaining importance as a means of presenting data-driven information or analysis results, especially to the general public. This has resulted in design principles being proposed for data-driven storytelling, and new authoring tools being created to aid such storytelling. However, data analysts typically lack sufficient background in design and storytelling to make effective use of these principles and authoring tools. To assist this process, we present ChartStory for crafting data stories from a collection of user-created charts, using a style akin to comic panels to imply the underlying sequence and logic of data-driven narratives. Our approach is to operationalize established design principles into an advanced pipeline which characterizes charts by their properties and similarity, and recommends ways to partition, layout, and caption story pieces to serve a narrative. ChartStory also augments this pipeline with intuitive user interactions for visual refinement of generated data comics. We extensively and holistically evaluate ChartStory via a trio of studies. We first assess how the tool supports data comic creation in comparison to a manual baseline tool. Data comics from this study are subsequently compared to ChartStory's automated recommendations and evaluated by a team of narrative visualization practitioners. This is followed by a pair of interview studies with data scientists using their own datasets and charts who provide an additional assessment of the system. We find that ChartStory provides cogent recommendations for narrative generation, resulting in data comics that compare favorably to manually-created ones.

[J25]

Ying Zhao, Jingcheng Shi, Jiawei Liu, Jian Zhao, Fangfang Zhou, Wenzhi Zhang, Kangyi Chen, Xin Zhao, Chunyao Zhu, Wei Chen. Evaluating Effects of Background Stories on Graph Perception. IEEE Transactions on Visualization and Computer Graphics, 28(12), pp. 4839-4854, 2022 (Accepted in 2021).

Abstract: A graph is an abstract model that represents relations among entities, for example, the interactions of characters in a novel. Background story endows entities and relations with real-world meanings and describes semantics and contexts of the abstract model, for example, the actual story that the novel presents. Considering practical experience and relevant research, human viewers who know the background story of a graph and those not knowing the story may perform differently when perceiving the same graph. However, no previous studies have adequately addressed this problem. This paper presents an evaluation study that investigates the effects of background stories on graph perception. We formulate three hypotheses on different aspects including visual focus areas, graph structure identification, and mental model formation, and design three controlled experiments to test our hypotheses using real-world graphs with background stories. We analyze our experimental data to compare the performance of participants who have read and not read the background stories, and obtain a set of instructive findings. First, our results show that knowing the background stories affects participants' focus areas in interactive graph exploration to a certain extent. Second, it significantly affects the performance of identifying community structures but not high-degree and bridge structures. Third, it has a significant impact on graph recognition under blurred visual conditions. These findings can bring new considerations to the design of storytelling visualizations and interactive graph explorations.

[J24]

Maoyuan Sun, Akhil Namburi, David Koop, Jian Zhao, Tianyi Li, Haeyong Chung. Towards Systematic Design Considerations for Visualizing Cross-View Data Relationships. IEEE Transactions on Visualization and Computer Graphics, 28(12), pp. 4741-4756, 2022 (Accepted in 2021).

Abstract: Due to the scale of data and the complexity of analysis tasks, insight discovery often requires coordinating multiple visualizations (views), with each view displaying different parts of data or the same data from different perspectives. For example, to analyze car sales records, a marketing analyst uses a line chart to visualize the trend of car sales, a scatterplot to inspect the price and horsepower of different cars, and a matrix to compare the transaction amounts in types of deals. To explore related information across multiple views, current visual analysis tools heavily rely on brushing and linking techniques, which may require a significant amount of user effort (e.g., many trial-and-error attempts). There may be other efficient and effective ways of displaying cross-view data relationships to support data analysis with multiple views, but currently there are no guidelines to address this design challenge. In this paper, we present systematic design considerations for visualizing cross-view data relationships, which leverages descriptive aspects of relationships and usable visual context of multi-view visualizations. We discuss pros and cons of different designs for showing cross-view data relationships, and provide a set of recommendations for helping practitioners make design decisions.

[J23]

Ehsan Jahangirzadeh Soure, Emily Kuang, Mingming Fan, Jian Zhao. CoUX: Collaborative Visual Analysis of Think-Aloud Usability Test Videos for Digital Interfaces. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VIS'21), 28(1), pp. 643-653, 2022.

Abstract: Reviewing a think-aloud video is both time-consuming and demanding as it requires UX (user experience) professionals to attend to many behavioral signals of the user in the video. Moreover, challenges arise when multiple UX professionals need to collaborate to reduce bias and errors. We propose a collaborative visual analytics tool, CoUX, to facilitate UX evaluators collectively reviewing think-aloud usability test videos of digital interfaces. CoUX seamlessly supports usability problem identification, annotation, and discussion in an integrated environment. To ease the discovery of usability problems, CoUX visualizes a set of problem-indicators based on acoustic, textual, and visual features extracted from the video and audio of a think-aloud session with machine learning. CoUX further enables collaboration amongst UX evaluators for logging, commenting, and consolidating the discovered problems with a chatbox-like user interface. We designed CoUX based on a formative study with two UX experts and insights derived from the literature. We conducted a user study with six pairs of UX practitioners on collaborative think-aloud video analysis tasks. The results indicate that CoUX is useful and effective in facilitating both problem identification and collaborative teamwork. We provide insights into how different features of CoUX were used to support both independent analysis and collaboration. Furthermore, our work highlights opportunities to improve collaborative usability test video analysis.

[J22]

Takanori Fujiwara, Xinhai Wei, Jian Zhao, Kwan-Liu Ma. Interactive Dimensionality Reduction for Comparative Analysis. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VIS'21), 28(1), pp. 758-768, 2022.

Abstract: Finding the similarities and differences between two or more groups of datasets is a fundamental analysis task. For high-dimensional data, dimensionality reduction (DR) methods are often used to find the characteristics of each group. However, existing DR methods provide limited capability and flexibility for such comparative analysis as each method is designed only for a narrow analysis target, such as identifying factors that most differentiate groups. In this work, we introduce an interactive DR framework where we integrate our new DR method, called ULCA (unified linear comparative analysis), with an interactive visual interface. ULCA unifies two DR schemes, discriminant analysis and contrastive learning, to support various comparative analysis tasks. To provide flexibility for comparative analysis, we develop an optimization algorithm that enables analysts to interactively refine ULCA results. Additionally, we provide an interactive visualization interface to examine ULCA results with a rich set of analysis libraries. We evaluate ULCA and the optimization algorithm to show their efficiency as well as present multiple case studies using real-world datasets to demonstrate the usefulness of our framework.

[J21]

Maoyuan Sun, Abdul Shaikh, Hamed Alhoori, Jian Zhao. SightBi: Exploring Cross-View Data Relationships with Biclusters. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VIS'21), 28(1), pp. 54-64, 2022.
 Best Paper Honorable Mention

Abstract: Multiple-view visualization (MV) has been heavily used in visual analysis tools for sensemaking of data in various domains (e.g., bioinformatics, cybersecurity, and text analytics). One common task of visual analysis with multiple views is to relate data across different views. For example, to identify threats, an intelligence analyst needs to link people from a social network graph with locations on a crime map, and then search and read relevant documents. Currently, exploring cross-view data relationships heavily relies on view-coordination techniques (e.g., brushing and linking). They may require significant user effort on many trial-and-error attempts, such as repetitiously selecting elements in one view, and observing and following elements highlighted in other views. To address this, we present SightBi, a visual analytics approach for supporting cross-view data relationship explorations. We discuss the design rationale of SightBi in detail, with identified user tasks regarding the usage of cross-view data relationships. SightBi formalizes cross-view data relationships as biclusters and computes them from a dataset. SightBi uses a bi-context design that highlights creating stand-alone relationship-views. This helps to preserve existing views and serves as an overview of cross-view data relationships to guide user explorations. Moreover, SightBi allows users to interactively manage the layout of multiple views by using newly created relationship-views. With a usage scenario, we demonstrate the usefulness of SightBi for sensemaking of cross-view data relationships.

[J20]

Linping Yuan, Ziqi Zhou, Jian Zhao, Yiqiu Guo, Fan Du, Huamin Qu. InfoColorizer: Interactive Recommendation of Color Palettes for Infographics. IEEE Transactions on Visualization and Computer Graphics, 28(12), pp. 4252-4266, 2022 (Accepted in 2021).

Abstract: When designing infographics, general users usually struggle with getting desired color palettes using existing infographic authoring tools, which sometimes sacrifice customizability, require design expertise, or neglect the influence of elements' spatial arrangement. We propose a data-driven method that provides flexibility by considering users' preferences, lowers the expertise barrier via automation, and tailors suggested palettes to the spatial layout of elements. We build a recommendation engine by utilizing deep learning techniques to characterize good color design practices from data, and further develop InfoColorizer, a tool that allows users to obtain color palettes for their infographics in an interactive and dynamic manner. To validate our method, we conducted a comprehensive four-part evaluation, including case studies, a controlled user study, a survey study, and an interview study. The results indicate that InfoColorizer can provide compelling palette recommendations with adequate flexibility, allowing users to effectively obtain high-quality color design for input infographics with low effort.

[J19]

Jian Zhao, Maoyuan Sun, Francine Chen, Patrick Chiu. Understanding Missing Links in Bipartite Networks with MissBiN. IEEE Transactions on Visualization and Computer Graphics, 28(6), pp. 2457-2469, 2022 (Accepted in 2020).

Abstract: The analysis of bipartite networks is critical in a variety of application domains, such as exploring entity co-occurrences in intelligence analysis and investigating gene expression in bio-informatics. One important task is missing link prediction, which infers the existence of unseen links based on currently observed ones. In this paper, we propose a visual analysis system, MissBiN, to involve analysts in the loop for making sense of link prediction results. MissBiN equips a novel method for link prediction in a bipartite network by leveraging the information of bi-cliques in the network. It also provides an interactive visualization for understanding the algorithm outputs. The design of MissBiN is based on three high-level analysis questions (what, why, and how) regarding missing links, which are distilled from the literature and expert interviews. We conducted quantitative experiments to assess the performance of the proposed link prediction algorithm, and interviewed two experts from different domains to demonstrate the effectiveness of MissBiN as a whole. We also provide a comprehensive usage scenario to illustrate the usefulness of the tool in an application of intelligence analysis.
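
As a toy illustration of bipartite link prediction, the sketch below scores an unseen pair by counting supporting two-hop paths among observed edges; this simple shared-neighbor heuristic stands in for, and is not, the paper's bi-clique-based method, and the entities are made up.

    # Score an unseen (person, location) pair by counting two-hop support
    # paths person ~ q' ~ p' ~ location among observed edges.
    from itertools import product

    edges = {("alice", "cafe"), ("alice", "park"), ("bob", "cafe"),
             ("bob", "park"), ("carol", "park"), ("dave", "cafe")}
    people = {p for p, _ in edges}
    places = {q for _, q in edges}

    def score(p, q):
        return sum((p, q2) in edges and (p2, q2) in edges and (p2, q) in edges
                   for p2, q2 in product(people - {p}, places - {q}))

    missing = [e for e in product(people, places) if e not in edges]
    print(sorted(missing, key=lambda e: -score(*e)))  # highest score first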

[J18]

Jian Zhao, Mingming Fan, Mi Feng. ChartSeer: Interactive Steering Exploratory Visual Analysis with Machine Intelligence. IEEE Transactions on Visualization and Computer Graphics, 28(3), pp. 1500-1513, 2022 (Accepted in 2020).

Abstract: During exploratory visual analysis (EVA), analysts need to continually determine which subsequent activities to perform, such as which data variables to explore or how to present data variables visually. Due to the vast combinations of data variables and visual encodings that are possible, it is often challenging to make such decisions. Further, while performing local explorations, analysts often fail to attend to the holistic picture that is emerging from their analysis, leading them to improperly steer their EVA. These issues become even more impactful in real-world analysis scenarios where EVA occurs in multiple asynchronous sessions that could be completed by one or more analysts. To address these challenges, this work proposes ChartSeer, a system that uses machine intelligence to enable analysts to visually monitor the current state of an EVA and effectively identify future activities to perform. ChartSeer utilizes deep learning techniques to characterize analyst-created data charts to generate visual summaries and recommend appropriate charts for further exploration based on user interactions. A case study was first conducted to demonstrate the usage of ChartSeer in practice, followed by a controlled study to compare ChartSeer's performance with a baseline during EVA tasks. The results demonstrated that ChartSeer enables analysts to adequately understand current EVA status and advance their analysis by creating charts with increased coverage and visual encoding diversity.

[J17]

Mingming Fan, Ke Wu, Jian Zhao, Yue Li, Winter Wei, Khai Truong. VisTA: Integrating Machine Intelligence with Visualization to Support the Investigation of Think-Aloud Sessions. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE InfoVis'19), 26(1), pp. 343-352, 2020.

Abstract: Think-aloud protocols are widely used by user experience (UX) practitioners in usability testing to uncover issues in user interface design. It is often arduous to analyze large amounts of recorded think-aloud sessions and few UX practitioners have an opportunity to get a second perspective during their analysis due to time and resource constraints. Inspired by the recent research that shows subtle verbalization and speech patterns tend to occur when users encounter usability problems, we take the first step to design and evaluate an intelligent visual analytics tool that leverages such patterns to identify usability problem encounters and present them to UX practitioners to assist their analysis. We first conducted and recorded think-aloud sessions, and then extracted textual and acoustic features from the recordings and trained machine learning (ML) models to detect problem encounters. Next, we iteratively designed and developed a visual analytics tool, VisTA, which enables dynamic investigation of think-aloud sessions with a timeline visualization of ML predictions and input features. We conducted a between-subjects laboratory study to compare three conditions, i.e., VisTA, VisTASimple (no visualization of the ML’s input features), and Baseline (no ML information at all), with 30 UX professionals. The findings show that UX professionals identified more problem encounters when using VisTA than Baseline by leveraging the problem visualization as an overview, anticipations, and anchors as well as the feature visualization as a means to understand what ML considers and omits. Our findings also provide insights into how they treated ML, dealt with (dis)agreement with ML, and reviewed the videos (i.e., play, pause, and rewind).

[J16]

Maoyuan Sun, Jian Zhao, Hao Wu, Kurt Luther, Chris North, Naren Ramakrishnan. The Effect of Edge Bundling and Seriation on Sensemaking of Biclusters in Bipartite Graphs. IEEE Transactions on Visualization and Computer Graphics, 25(10), pp. 2983-2998, 2019.

Abstract: Exploring coordinated relationships (e.g., shared relationships between two sets of entities) is an important analytics task in a variety of real-world applications, such as discovering similarly behaved genes in bioinformatics, detecting malware collusions in cyber security, and identifying product bundles in marketing analysis. Coordinated relationships can be formalized as biclusters. In order to support visual exploration of biclusters, bipartite-graph-based visualizations have been proposed, with edge bundling used to show biclusters. However, edge bundling suffers from edge crossings due to possible overlaps of biclusters, and its impact on users exploring biclusters in bipartite graphs is not well understood. To address these issues, we propose a novel bicluster-based seriation technique that can reduce edge crossings in bipartite graph drawings, and we conducted a user experiment to study the effect of edge bundling and the proposed technique on visualizing biclusters in bipartite graphs. We found that both had an impact on reducing entity visits for users exploring biclusters, and that edge bundles helped them find more justified answers. Moreover, we identified four key trade-offs that inform the design of future bicluster visualizations. The study results suggest that edge bundling is critical for exploring biclusters in bipartite graphs, as it helps to reduce low-level perceptual problems and support high-level inferences.

[J15]

Zhicong Lu, Mingming Fan, Yun Wang, Jian Zhao, Michelle Annett, Daniel Wigdor. InkPlanner: Supporting Prewriting via Intelligent Visual Diagramming. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VAST'18), 25(1), pp. 277-287, 2019.

Abstract: Prewriting is the process of generating and organizing ideas before drafting a document. Although often overlooked by novice writers and writing tool developers, prewriting is a critical process that improves the quality of a final document. To better understand current prewriting practices, we first conducted interviews with writing learners and experts. Based on the learners' needs and experts' recommendations, we then designed and developed InkPlanner, a novel pen-and-touch visualization tool that allows writers to utilize visual diagramming for ideation during prewriting. InkPlanner further allows writers to sort their ideas into a logical and sequential narrative by using a novel widget, NarrativeLine. Using a NarrativeLine, InkPlanner can automatically generate a document outline to guide later drafting exercises. InkPlanner is powered by machine-generated semantic and structural suggestions that are curated from various texts. To qualitatively review the tool and understand how writers use InkPlanner for prewriting, two writing experts were interviewed and a user study was conducted with university students. The results demonstrated that InkPlanner encouraged writers to generate more diverse ideas and also enabled them to think more strategically about how to organize their ideas for later drafting.

[J14]

Shenyu Xu, Chris Bryan, Kelvin Li, Jian Zhao, Kwan-Liu Ma. Chart Constellations: Effective Chart Summarization for Collaborative and Multi-User Analyses. Computer Graphics Forum (Proceedings of EuroVis 2018), 37(3), pp. 75-86, 2018.

Abstract: Many data problems in the real world are complex and thus require multiple analysts working together to uncover embedded insights by creating chart-driven data stories. But how, as a subsequent analysis step, do we interpret and learn from these collections of charts? We present a new system called Chart Constellations to interactively support a single analyst in the review and analysis of data stories created by other collaborating analysts. Instead of iterating through the individual charts for each data story, the analyst can project, cluster, filter, and connect results from all users in a meta-visualization approach. This approach supports deriving summary insights about the investigations and the exploration of new, un-investigated regions in the dataset. To evaluate our system, we conduct a user study comparing it against data science notebooks. Results suggest that our system promotes the discovery of both broad and high-level insights, including theme and trend analysis, subjective evaluation, and hypothesis generation.

[J13]

Wen Zhong, Wei Xu, Kevin Yager, Gregory Doerk, Jian Zhao, Yunke Tian, Sungsoo Ha, Cong Xie, Yuan Zhong, Klaus Mueller, Kerstin Kleese Van Dam. MultiSciView: Multivariate Scientific X-ray Image Visual Exploration with Cross-Data Space Views. Visual Informatics (Proceedings of PacificVAST 2018), 2(1), pp. 14-25, 2018.

Abstract: X-ray images obtained from synchrotron beamlines are large-scale, high-resolution, and high-dynamic-range grayscale data encoding multiple complex properties of the measured materials. They are typically associated with a variety of metadata, which increases their inherent complexity. There is a wealth of information embedded in these data, but so far scientists have lacked modern exploration tools to unlock these hidden treasures. To bridge this gap, we propose MultiSciView, a multivariate scientific x-ray image visualization and exploration system for beamline-generated x-ray scattering data. Our system is composed of three complementary and coordinated interactive visualizations to enable coordinated exploration across the images and their associated attribute and feature spaces. The first is a multi-level scatterplot visualization dedicated to image exploration at the attribute, image, and pixel scales. The second is a histogram-based attribute cross-filter with which users can extract desired subset patterns from the data. The third is an attribute projection visualization designed to capture global attribute correlations. We demonstrate our framework by way of a case study involving a real-world material scattering dataset. We show that our system can efficiently explore large-scale x-ray images, accurately identify preferred image patterns, anomalous images, and erroneous experimental settings, and effectively advance the comprehension of material nanostructure properties.

[J12]

Jian Zhao, Maoyuan Sun, Francine Chen, Patrick Chiu. BiDots: Visual Exploration of Weighted Biclusters. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VAST'17), 24(1), pp. 195-204, 2018.

Abstract: Discovering and analyzing biclusters, i.e., two sets of related entities with close relationships, is a critical task in many real-world applications, such as exploring entity co-occurrences in intelligence analysis and studying gene expression in bioinformatics. While the output of biclustering techniques can offer some initial low-level insights, visual approaches are required on top of that due to the complexity of the algorithmic output. This paper proposes a visualization technique, called BiDots, that allows analysts to interactively explore biclusters over multiple domains. BiDots overcomes several limitations of existing bicluster visualizations by encoding biclusters in a more compact and cluster-driven manner. A set of handy interactions is incorporated to support flexible analysis of biclustering results. More importantly, BiDots addresses the case of weighted biclusters, which has been underexploited in the literature. The design of BiDots is grounded in a set of analytical tasks derived from previous work. We demonstrate its usefulness and effectiveness for exploring computed biclusters with an investigative document analysis task, in which suspicious people and activities are identified from a text corpus.

[J11]

Jian Zhao, Michael Glueck, Petra Isenberg, Fanny Chevalier, Azam Khan. Supporting Handoff in Asynchronous Collaborative Sensemaking Using Knowledge-Transfer Graphs. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VAST'17), 24(1), pp. 340-350, 2018.
 Best Paper Honorable Mention

Abstract: During asynchronous collaborative analysis, handoff of partial findings is challenging because externalizations produced by analysts may not adequately communicate their investigative process. To address this challenge, we developed techniques to automatically capture and help encode tacit aspects of the investigative process based on an analyst's interactions, and to streamline explicit authoring of handoff annotations. We designed our techniques to mediate awareness of analysis coverage, support explicit communication of progress and uncertainty with annotation, and enable implicit communication through playback of investigation histories. To evaluate our techniques, we developed an interactive visual analysis system, KTGraph, that supports an asynchronous investigative document analysis task. We conducted a two-phase user study to characterize a set of handoff strategies and to compare investigative performance with and without our techniques. The results suggest that our techniques promote the use of more effective handoff strategies, help increase awareness of the prior investigative process and insights, and improve final investigative outcomes.

[J10]

Siwei Fu, Hao Dong, Weiwei Cui, Jian Zhao, Huamin Qu. How Do Ancestral Traits Shape Family Trees over Generations? IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VAST'17), 24(1), pp. 205-214, 2018.

Abstract: Does the structure of family trees differ by ancestral traits over generations, and if so, how? This is a fundamental question regarding the structural heterogeneity of family trees in multi-generational transmission research. However, previous work mostly focuses on parent-child scenarios due to the lack of proper tools to handle the complexity of extending the research to multi-generational processes. Through an iterative design study with social scientists and historians, we develop TreeEvo, which assists users in generating and testing empirical hypotheses for multi-generational research. TreeEvo summarizes and organizes family trees by structural features in a dynamic manner based on a traditional Sankey diagram. A pixel-based technique is further proposed to compactly encode trees with complex structures in each Sankey node. Detailed information about trees is accessible through a space-efficient visualization with semantic zooming. Moreover, TreeEvo embeds a Multinomial Logit Model (MLM) to examine statistical associations between tree structure and ancestral traits. We demonstrate the effectiveness and usefulness of TreeEvo through an in-depth case study with domain experts using a real-world dataset (containing 54,128 family trees of 126,196 individuals).

[J9]

Jian Zhao, Michael Glueck, Simon Breslav, Fanny Chevalier, Azam Khan. Annotation Graphs: A Graph-Based Visualization for Meta-Analysis of Data based on User-Authored Annotations. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VAST'16), 23(1), pp. 261-270, 2017.

Abstract: User-authored annotations of data can support analysts in the activity of hypothesis generation and sensemaking, where it is not only critical to document key observations, but also to communicate thoughts between analysts. We present Annotation Graphs, a dynamic graph visualization that allows for high-level meta-analysis of data based on user-authored data annotations. Annotation graphs are implemented within C8, a system that enables visual exploratory analysis of a dataset and annotation authoring. Various layouts of the annotation graph are supported for viewing the annotation semantics from different perspectives. The space of annotation semantics includes data selections, comments, and tags, as well as their relationships. We propose a mixed-initiative approach to laying out the annotation graph by integrating an analyst's manual manipulations with an automatic layout based on the inferred similarity of the annotation semantics. We apply principles of Exploratory Sequential Data Analysis (ESDA) in designing C8, and further link these to an existing task typology in the visualization literature. We develop and evaluate the system through an iterative user-centered design process with three experts, situated in the domain of analyzing HCI experiment data. The results suggest that annotation graphs are effective as a method of visually extending user-authored annotations to data meta-analysis for discovery and organization of ideas.

[J8]

Siwei Fu, Jian Zhao, Weiwei Cui, Huamin Qu. Visual Analysis of MOOC Forums with iForum. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VAST'16), 23(1), pp. 201-210, 2017.

Abstract: Discussion forums of Massive Open Online Courses (MOOC) provide great opportunities for students to interact with instructional staff as well as other students. Exploration of MOOC forum data can offer valuable insights for the staff to enhance the course and prepare the next release. However, it is challenging due to the large, complicated, and heterogeneous nature of relevant datasets, which contain multiple dynamically interacting objects, such as users, posts, and threads, each with multiple attributes. In this paper, we present a design study for developing an interactive visual analytics system, called iForum, that allows for effectively discovering and understanding temporal patterns in MOOC forums. The design study was conducted with three domain experts in an iterative manner over one year, including a MOOC instructor and two official teaching assistants. iForum offers a set of novel visualization designs for presenting the three interleaving aspects of MOOC forums (i.e., posts, users, and threads) at three different scales. To demonstrate the effectiveness and usefulness of iForum, we describe a case study involving field experts, in which they use iForum to investigate real MOOC forum data for a course on Java programming.

[J7]

Yanhong Wu, Naveen Pitipornvivat, Jian Zhao, Sixiao Yang, Guowei Huang, Huamin Qu. egoSlider: Visual Analysis of Egocentric Network Evolution. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VAST'15), 22(1), pp. 260-269, 2016.

Abstract: An ego-network, which represents relationships between a specific individual, i.e., the ego, and people connected to it, i.e., alters, is a critical target to study in social network analysis. Evolutionary patterns of ego-networks over time provide valuable insights for many domains, such as sociology, anthropology, and psychology. However, the analysis of dynamic ego-networks remains challenging due to their complicated time-varying graph structures; for example, alters come and leave, ties grow stronger and fade away, and alter communities merge and split. Most existing dynamic graph visualization techniques mainly focus on topological changes of the entire network, which is not adequate for egocentric analytical tasks. In this paper, we present egoSlider, a visual analysis system for exploring and comparing dynamic ego-networks. egoSlider provides a holistic picture of the data through multiple interactively coordinated views, revealing ego-network evolutionary patterns at three different layers: a macroscopic level for summarizing the entire ego-network data, a mesoscopic level for overviewing specific individuals' ego-network evolutions, and a microscopic level for displaying detailed temporal information of egos and their alters. We demonstrate the effectiveness of egoSlider with a usage scenario based on DBLP publication records. Also, a controlled user study indicates that, in general, egoSlider outperforms a baseline visualization of dynamic networks in completing egocentric analytical tasks.

[J6]

Jian Zhao, R. William Soukoreff, Ravin Balakrishnan. Exploring and Modeling Unimanual Object Manipulation on Multi-Touch Displays. International Journal of Human-Computer Studies, 78, pp. 68-80, 2015.

Abstract: Touch-sensitive devices are becoming increasingly widespread, and consequently gestural interfaces have become familiar to the public. Despite the fact that many gestures require frequent dragging, pinching, spreading, and rotating of the fingertips, there currently does not exist a human performance model describing this interaction. In this paper, a novel user performance model is derived for virtual object manipulation on touch-sensitive displays, which involves simultaneous translation, rotation, and scaling of the object. Two controlled experiments with dual-finger unimanual manipulations were conducted to validate the new model. The results indicate that the model fits the experimental data well and performs the best among several alternative models. Moreover, based on the analysis of the empirical data, the simultaneous nature of manipulation in the task is explored and several design implications are provided.

[J5]

Jian Zhao, Nan Cao, Zhen Wen, Yale Song, Yu-Ru Lin, Christopher Collins. #FluxFlow: Visual Analysis of Anomalous Information Spreading on Social Media. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VAST'14), 20(12), pp. 1773-1782, 2014.
 Best Paper Honorable Mention

Abstract: We present FluxFlow, an interactive visual analysis system for revealing and analyzing anomalous information spreading in social media. Every day, millions of messages are created, commented on, and shared by people on social media websites, such as Twitter and Facebook. This provides valuable data for researchers and practitioners in many application domains, such as marketing, to inform decision-making. Distilling valuable social signals from the huge crowd's messages, however, is challenging due to the heterogeneous and dynamic crowd behaviors. The challenge lies in analysts' ability to discern anomalous information behaviors, such as the spreading of rumors or misinformation, from more conventional patterns, such as popular topics and newsworthy events, in a timely fashion. FluxFlow incorporates advanced machine learning algorithms to detect anomalies and offers a set of novel visualization designs for presenting the detected threads for deeper analysis. We evaluated FluxFlow with real datasets containing Twitter feeds captured during significant events such as Hurricane Sandy. Through quantitative measurements of the algorithmic performance and qualitative interviews with domain experts, the results show that the back-end anomaly detection model is effective in identifying anomalous retweeting threads, and that the front-end interactive visualizations are intuitive and useful for analysts to discover insights in data and comprehend the underlying analytical model.

[J4]

Jian Zhao, R. William Soukoreff, Xiangshi Ren, Ravin Balakrishnan. A Model of Scrolling on Touch-Sensitive Displays. International Journal of Human-Computer Studies, 72(12), pp. 805-821, 2014.

Abstract: Scrolling is a common and frequent activity allowing users to browse content that is initially off-screen. With the increasing popularity of touch-sensitive devices, gesture-based scrolling interactions (e.g., finger panning and flicking) have become an important element in our daily interaction vocabulary. However, there are currently no comprehensive user performance models for scrolling tasks on touch displays. This paper presents an empirical study of user performance in scrolling tasks on touch displays. In addition to three geometrical movement parameters --- scrolling distance, display window size, and target width --- we also investigate two other factors that could affect performance: scrolling modes (panning and flicking) and feedback techniques (with and without distance feedback). We derive a quantitative model based on four formal assumptions that abstract real-world scrolling tasks, drawn from the analysis and observation of user scrolling actions. The results of a controlled experiment reveal that our model generalizes well for direct-touch scrolling tasks, accommodating different movement parameters, scrolling modes, and feedback techniques. The supporting blocks of the model, the four basic assumptions and three important mathematical components, are also validated by the experimental data. In-depth comparisons with existing models of similar tasks indicate that our model performs the best under different measurement criteria. Our work provides a theoretical foundation for modeling sophisticated scrolling actions, as well as insights into designing scrolling techniques for next-generation touch input devices.

[J3]

Jian Zhao, Christopher Collins, Fanny Chevalier, Ravin Balakrishnan. Interactive Exploration of Implicit and Explicit Relations in Faceted Datasets. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE VAST'13), 19(12), pp. 2080-2089, 2013.

Abstract: Many datasets, such as scientific literature collections, contain multiple heterogeneous facets from which implicit relations can be derived, as well as explicit relational references between data items. The exploration of such data is challenging not only because of large data scales but also because of the complexity of resource structures and semantics. In this paper, we present PivotSlice, an interactive visualization technique that provides efficient faceted browsing as well as flexible capabilities to discover data relationships. With the metaphor of direct manipulation, PivotSlice allows the user to visually and logically construct a series of dynamic queries over the data, based on a multi-focus and multi-scale tabular view that subdivides the entire dataset into several meaningful parts with customized semantics. PivotSlice further facilitates the visual exploration and sensemaking process through features including live search and integration of online data, graphical interaction histories, and smoothly animated visual state transitions. We evaluated PivotSlice through a qualitative lab study with university researchers and report the findings from our observations and interviews. We also demonstrate the effectiveness of PivotSlice using a scenario of exploring a repository of information visualization literature.

[J2]

Jian Zhao, Fanny Chevalier, Christopher Collins, Ravin Balakrishnan. Facilitating Discourse Analysis with Interactive Visualization. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE InfoVis'12), 18(12), pp. 2639-2648, 2012.

Abstract: A discourse parser is a natural language processing system that represents the organization of a document as a rhetorical structure tree---one of the key data structures enabling applications such as text summarization, question answering, and dialogue generation. Computational linguistics researchers currently rely on manually exploring and comparing discourse structures to get intuitions for improving parsing algorithms. In this paper, we present DAViewer, an interactive visualization system that assists computational linguistics researchers in exploring, comparing, evaluating, and annotating the results of discourse parsers. An iterative user-centered design process with domain experts was conducted in the development of DAViewer. We report the results of an informal formative study of the system to better understand how the proposed visualization and interaction techniques are used in a real research environment.

[J1]

Jian Zhao, Fanny Chevalier, Emmanuel Pietriga, Ravin Balakrishnan. Exploratory Analysis of Time-Series with ChronoLenses. IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE InfoVis'11), 17(12), pp. 2422-2431, 2011.

Abstract: Visual representations of time-series are useful for tasks such as identifying trends, patterns, and anomalies in the data. Many techniques have been devised to make these visual representations more scalable, enabling the simultaneous display of multiple variables, as well as the multi-scale display of time-series of very high resolution or that span long time periods. There has been comparatively little research on how to support the more elaborate tasks associated with the exploratory visual analysis of time-series, e.g., visualizing derived values, identifying correlations, or discovering anomalies beyond obvious outliers. Such tasks typically require deriving new time-series from the original data, trying different functions and parameters in an iterative manner. We introduce a novel visualization technique called ChronoLenses, aimed at supporting users in such exploratory tasks. ChronoLenses perform on-the-fly transformation of the data points in their focus area, tightly integrating visual analysis with user actions, and enabling the progressive construction of advanced visual analysis pipelines.

Refereed Conference Papers

[C39]

Liwei Wu, Yilin Zhang, Justin Leung, Jingyi Gao, April Li, Jian Zhao. Planar or Spatial: Exploring Design Aspects and Challenges for Presentations in Virtual Reality with No-coding Interface. Proceedings of the ACM Interactive Surfaces and Spaces Conference, pp. 528:1-528:23, 2024.

Abstract: The proliferation of virtual reality (VR) has led to its increasing adoption as an immersive medium for delivering presentations, distinct from other VR experiences like games and 360-degree videos in that information is shared in richly interactive environments. However, creating engaging VR presentations remains a challenging and time-consuming task for users, hindering the full realization of VR presentations' capabilities. This research aims to explore the potential of VR presentations, analyze users' opinions, and investigate these by providing a user-friendly no-coding authoring tool. Through an examination of popular presentation software and interviews with seven professionals, we identified five design aspects and four design challenges for VR presentations. Based on the findings, we developed VRStory, a prototype for presentation authoring without coding, to explore the design aspects and strategies for addressing the challenges. VRStory offers a variety of predefined and customizable VR elements, as well as modules for layout design, navigation control, and asset generation. A user study was then conducted with 12 participants to investigate their opinions and authoring experience with VRStory. Our results demonstrated that, while acknowledging the advantages of immersive and spatial features in VR, users often hold a mental model consistent with traditional 2D presentations and may still prefer planar and static formats in VR for better accessibility and efficient communication. We finally share the design considerations we learned for the future development of VR presentation tools, emphasizing the importance of balancing immersive features with accessibility.

[C38]

Temiloluwa Paul Femi-Gege, Matthew Brehmer, Jian Zhao. VisConductor: Affect-Varying Widgets for Animated Data Storytelling in Gesture-Aware Augmented Video Presentation. Proceedings of the ACM Interactive Surfaces and Spaces Conference, pp. 531:1-531:22, 2024.

Abstract: Augmented video presentation tools provide a natural way for presenters to interact with their content, resulting in engaging experiences for remote audiences, such as when a presenter uses hand gestures to manipulate and direct attention to visual aids overlaid on their webcam feed. However, authoring and customizing these presentations can be challenging, particularly when presenting dynamic data visualizations (i.e., animated charts). To this end, we introduce VisConductor, an authoring and presentation tool that equips presenters with the ability to configure gestures that control affect-varying visualization animation, foreshadow visualization transitions, direct attention to notable data points, and animate the disclosure of annotations. These gestures are integrated into configurable widgets, allowing presenters to trigger content transformations by executing gestures within widget boundaries, with feedback visible only to them. Altogether, our palette of widgets provides a level of flexibility appropriate for improvisational presentations and ad-hoc content transformations, such as when responding to audience engagement. To evaluate VisConductor, we conducted two studies focusing on presenters (N = 11) and audience members (N = 11). Our findings indicate that the approach taken with VisConductor can facilitate interactive and engaging remote presentations with dynamic visual aids. Reflecting on our findings, we also offer insights to inform the future of augmented video presentation tools.

[C37]

Ryan Yen, Jian Zhao. Memolet: Reifying the Reuse of User-AI Conversational Memories. Proceedings of the ACM Symposium on User Interface Software and Technology, pp. 58:1-58:22, 2024.

Abstract: As users engage more frequently with AI conversational agents, conversations may exceed their 'memory' capacity, leading to failures in correctly leveraging certain memories for tailored responses. However, to find past memories that can be reused or referenced, users need to retrieve relevant information across various conversations and articulate to the AI their intention to reuse these memories. To support this process, we introduce Memolet, an interactive object that reifies memory reuse. Users can directly manipulate Memolet to specify which memories to reuse and how to use them. We developed a system demonstrating Memolet's interaction across various memory reuse stages, including memory extraction, organization, prompt articulation, and generation refinement. We examine the system's usefulness with an N=12 within-subjects study and provide design implications for future systems that support user-AI conversational memory reuse.

[C36]

Ryan Yen, Jiawen Stefanie Zhu, Sangho Suh, Haijun Xia, Jian Zhao. CoLadder: Supporting Programmers with Hierarchical Code Generation in Multi-Level Abstraction. Proceedings of ACM Symposium on User Interface Software and Technology, pp. 11:1-11:20, 2024.

Abstract: This paper adopted an iterative design process to gain insights into programmers' strategies when using LLMs for programming. We proposed CoLadder, a novel system that supports programmers by facilitating hierarchical task decomposition, direct code segment manipulation, and result evaluation during prompt authoring. A user study with 12 experienced programmers showed that CoLadder is effective in helping programmers externalize their problem-solving intentions flexibly, improving their ability to evaluate and modify code across various abstraction levels, from their task's goal to final code implementation.

[C35]

Maoyuan Sun, Yuanxin Wang, Courtney Bolton, Yue Ma, Tianyi Li, Jian Zhao. Investigating User Estimation of Missing Data in Visual Analysis. Proceedings of the Graphics Interface Conference, pp. 30:1-30:13, 2024.

Abstract: Missing data is a pervasive issue in real-world analytics, stemming from a multitude of factors (e.g., device malfunctions and network disruptions), making it a ubiquitous challenge in many domains. Misperception of missing data impacts decision-making and causes severe consequences. To mitigate risks from missing data and facilitate proper handling, computing methods (e.g., imputation) have been studied, which often culminate in the visual representation of data for analysts to further check. Yet, the influence of these computed representations on user judgment regarding missing data remains unclear. To study potential influencing factors and their impact on user judgment, we conducted a crowdsourcing study. We controlled 4 factors: the distribution, imputation, and visualization of missing data, and users' prior knowledge of the data. We compared users' estimations of missing data with computed imputations under different combinations of these factors. Our results offer useful guidance for visualizing missing data and their imputations, which informs future studies on developing trustworthy computing methods for visual analysis of missing data.

[C34]

Xinyu Shi, Mingyu Liu, Ziqi Zhou, Ali Neshati, Ryan Rossi, Jian Zhao. Exploring Interactive Color Palettes for Abstraction-Driven Exploratory Image Colorization. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 146:1-146:16, 2024.

Abstract: Color design is essential in areas such as product, graphic, and fashion design. However, current tools like Photoshop, with their concrete-driven color manipulation approach, often stumble during early ideation, favoring polished end results over initial exploration. We introduce Mondrian as a test-bed for an abstraction-driven approach using interactive color palettes for image colorization. Through a formative study with six design experts, we selected three design options for visual abstractions in color design and developed Mondrian, where humans work with abstractions and AI manages the concrete aspects. We carried out a user study to understand the benefits and challenges of each abstraction format and to compare Mondrian with Photoshop. A survey involving 100 participants further examined the influence of each abstraction format on color composition perceptions. Findings suggest that interactive visual abstractions encourage a non-linear exploration workflow and an open mindset during ideation, thus providing better creative affordances.

[C33]

Xinyu Shi, Yinghou Wang, Yun Wang, Jian Zhao. Piet: Facilitating Color Authoring for Motion Graphics Video. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 148:1-148:17, 2024.
 Best Paper

Abstract: Motion graphic (MG) videos are effective and compelling for presenting complex concepts through animated visuals; and colors are important to convey desired emotions, maintain visual continuity, and signal narrative transitions. However, current video color authoring workflows are fragmented, lacking contextual previews, hindering rapid theme adjustments, and not aligning with designers' progressive authoring flows. To bridge this gap, we introduce Piet, the first tool tailored for MG video color authoring. Piet features an interactive palette to visually represent color distributions, support controllable focus levels, and enable quick theme probing via grouped color shifts. We interviewed 6 domain experts to identify the frustrations in current tools and inform the design of Piet. An in-lab user study with 13 expert designers showed that Piet effectively simplified the MG video color authoring and reduced the friction in creative color theme exploration.

[C32]

Li Feng, Ryan Yen, Yuzhe You, Mingming Fan, Jian Zhao, Zhicong Lu. CoPrompt: Supporting Prompt Sharing and Referring in Collaborative Natural Language Programming. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 934:1-934:21, 2024.

Abstract: Natural language (NL) programming has become more approachable due to the powerful code-generation capability of large language models (LLMs). This shift to using NL to program enhances collaborative programming by reducing communication barriers and context-switching among programmers from varying backgrounds. However, programmers may face challenges during prompt engineering in a collaborative setting, as they need to actively stay aware of their collaborators' progress and intents. In this paper, we aim to investigate ways to assist programmers' prompt engineering in a collaborative context. We first conducted a formative study to understand the workflows and challenges of programmers when using NL for collaborative programming. Based on our findings, we implemented a prototype, CoPrompt, to support collaborative prompt engineering by providing referring, requesting, sharing, and linking mechanisms. Our user study indicates that CoPrompt assists programmers in comprehending collaborators' prompts and building on their collaborators' work, reducing repetitive updates and communication costs.

[C31]

Pengcheng An, Jiawen Stefanie Zhu, Zibo Zhang, Yifei Yin, Qingyuan Ma, Che Yan, Linghao Du, Jian Zhao. EmoWear: Exploring Emotional Teasers for Voice Message Interaction on Smartwatches. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 279:1-279:16, 2024.

Abstract: Voice messages, by nature, prevent users from gauging the emotional tone without fully diving into the audio content. This hinders the shared emotional experience at the pre-retrieval stage. Research has scarcely explored "Emotional Teasers"---pre-retrieval cues offering a glimpse into an awaiting message's emotional tone without disclosing its content. We introduce EmoWear, a smartwatch voice messaging system enabling users to apply 30 animation teasers on message bubbles to reflect emotions. EmoWear eases senders' choice by prioritizing emotions based on semantic and acoustic processing. EmoWear was evaluated in comparison with a mirroring system using color-coded message bubbles as emotional cues (N=24). Results showed that EmoWear significantly enhanced the emotional communication experience in both receiving and sending messages. The animated teasers were considered intuitive and valued for diverse expressions. Desirable interaction qualities and practical implications are distilled for future design. We thereby contribute both a novel system and empirical knowledge concerning emotional teasers for voice messaging.

[C30]

Xizi Wang, Ben Lafreniere, Jian Zhao. Exploring Visualizations for Precisely Guiding Bare Hand Gestures in Virtual Reality. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 636:1-636:19, 2024.

Abstract: Bare-hand interaction in augmented or virtual reality (AR/VR) systems, while intuitive, often results in errors and frustration. However, existing methods, such as a static icon or a dynamic tutorial, can only guide simple and coarse hand gestures and lack corrective feedback. This paper explores various visualizations for enhancing precise hand interaction in VR. Through a comprehensive two-part formative study with 11 participants, we identified four types of essential information for visual guidance and designed different visualizations that manifest these information types. We further distilled four visual designs and conducted a controlled lab study with 15 participants to assess their effectiveness for various single- and double-handed gestures. Our results demonstrate that visual guidance significantly improved users' gesture performance, reducing time and workload while increasing confidence. Moreover, we found that the visualizations did not disrupt most users' immersive VR experience or their perceptions of hand tracking and gesture recognition reliability.

[C29]

Liwei Wu, Qing Liu, Jian Zhao, Edward Lank. Interactions across Displays and Space: A Study of Virtual Reality Streaming Practices on Twitch. Proceedings of the ACM Interactive Surfaces and Spaces Conference, pp. 437:1-437:24, 2023.
 Best Paper Honorable Mention

Abstract: The growing live streaming economy and virtual reality (VR) technologies have sparked interest in VR streaming among streamers and viewers. However, limited research has been conducted to understand this emerging streaming practice. To address this gap, we conducted an in-depth thematic analysis of 34 streaming videos from 12 VR streamers with varying levels of experience, to explore current practices, interaction styles, and strategies, as well as to investigate the challenges and opportunities of VR streaming. Our findings indicate that VR streamers face challenges in building emotional connections and maintaining streaming flow due to technical problems, a lack of fluid transitions between physical and virtual environments, and game scenes that were not intentionally designed for streaming. In response, we propose six design implications to encourage collaboration between game designers and streaming app developers, facilitating fluid, rich, and broad interactions for an enhanced streaming experience. In addition, we discuss the use of streaming videos as user-generated data for research, highlighting the lessons learned and emphasizing the need for tools to support streaming video analysis. Our research sheds light on the unique aspects of VR streaming, which combines interactions across displays and space.

[C28]

Qing Liu, Gustavo Alves, Jian Zhao. Challenges and Opportunities for Software Testing in Virtual Reality Application Development. Proceedings of the Graphics Interface Conference, 2023 (In Press).

Abstract: Testing is a core process in the development of Virtual Reality (VR) software, ensuring the delivery of high-quality VR products and experiences. As VR applications have become more popular in different fields, more challenges and difficulties have been raised during the testing phase. However, few studies have explored the challenges of software testing in VR development in detail. This paper aims to fill this gap through a qualitative interview study with 14 professional VR developers and a survey study with 33 additional participants. As a result, we derived 10 key challenges that are often confronted by VR developers during software testing. Our study also sheds light on potential design directions for VR development tools, based on the identified challenges and the needs of VR developers, to alleviate existing issues in testing.

[C27]

Xinyu Shi, Ziqi Zhou, Jingwen Zhang, Ali Neshati, Anjul Tyagi, Ryan Rossi, Shunan Guo, Fan Du, Jian Zhao. De-Stijl: Facilitating Graphics Design with Interactive 2D Color Palette Recommendation. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 122:1-122:19, 2023.

Abstract: Selecting a proper color palette is critical in crafting a high-quality graphic design to gain visibility and communicate ideas effectively. To facilitate this process, we propose De-Stijl, an intelligent and interactive color authoring tool to assist novice designers in crafting harmonic color palettes, achieving quick design iterations, and fulfilling design constraints. Through De-Stijl, we contribute a novel 2D color palette concept that allows users to intuitively perceive color designs in context with their proportions and proximities. Further, De-Stijl implements a holistic color authoring system that supports 2D palette extraction, theme-aware and spatial-sensitive color recommendation, and automatic graphical elements (re)colorization. We evaluated De-Stijl through an in-lab user study by comparing the system with existing industry standard tools, followed by in-depth user interviews. Quantitative and qualitative results demonstrate that De-Stijl is effective in assisting novice design practitioners to quickly colorize graphic designs and easily deliver several alternatives.

[C26]

Fengjie Wang, Xuye Liu, Oujing Liu, Ali Neshati, Tengfei Ma, Min Zhu, Jian Zhao. Slide4N: Creating Presentation Slides from Computational Notebooks with Human-AI Collaboration. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 364:1-364:18, 2023.

Abstract: Data scientists often have to use other presentation tools (e.g., Microsoft PowerPoint) to create slides to communicate their analysis obtained using computational notebooks. Much tedious and repetitive work is needed to transfer the routines of notebooks (e.g., code, plots) into presentable content on slides (e.g., bullet points, figures). We propose a human-AI collaborative approach and operationalize it within Slide4N, an interactive AI assistant for data scientists to create slides from computational notebooks. Slide4N leverages advanced natural language processing techniques to distill key information from user-selected notebook cells and then render it in appropriate slide layouts. The tool also provides intuitive interactions that allow further refinement and customization of the generated slides. We evaluated Slide4N with a two-part user study, where participants appreciated this human-AI collaborative approach compared to fully-manual or fully-automatic methods. The results also indicate the usefulness and effectiveness of Slide4N in slide creation tasks from notebooks.

[C25]

Chang Liu, Arif Usta, Jian Zhao, Semih Salihoglu. Governor: Turning Open Government Data Portals into Interactive Databases. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 415:1-415:16, 2023.

Abstract: The launch of open governmental data portals (OGDPs) has popularized the open data movement of the last decade. Although the amount of data in OGDPs is increasing, their functionalities are limited to finding datasets with titles/descriptions and downloading the actual files. This hinders end users, especially those without technical skills, from finding open data tables and making use of them. We present Governor, an open-source web application developed to make OGDPs more accessible to end users by facilitating searching of actual records in the tables, previewing them directly without downloading, and suggesting joinable and unionable tables to users based on their latest working tables. Governor also manages the provenance of integrated tables, allowing users and their collaborators to easily trace back to the original tables in the OGDP. We evaluate Governor with a two-part user study and the results demonstrate its value and effectiveness in finding and integrating tables in OGDPs.

[C24]

Emily Kuang, Ehsan Jahangirzadeh Soure, Mingming Fan, Jian Zhao, Kristen Shinohara. Collaboration with Conversational AI Assistants for UX Evaluation: Questions and How to Ask them (Voice vs. Text). Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 116:1-116:15, 2023.

Abstract: AI is promising in assisting UX evaluators with analyzing usability tests, but its judgments are typically presented as non-interactive visualizations. Evaluators may have questions about test recordings, but have no way of asking them. Interactive conversational assistants provide a Q&A dynamic that may improve analysis efficiency and evaluator autonomy. To understand the full range of analysis-related questions, we conducted a Wizard-of-Oz design probe study with 20 participants who interacted with simulated AI assistants via text or voice. We found that participants asked for five categories of information: user actions, user mental model, help from the AI assistant, product and task information, and user demographics. Those who used the text assistant asked more questions, but the question lengths were similar. The text assistant was perceived as significantly more efficient, but both were rated equally in satisfaction and trust. We also provide design considerations for future conversational AI assistants for UX evaluation.

[S6]

Maoyuan Sun, Yue Ma, Yuanxin Wang, Tianyi Li, Jian Zhao, Yujun Liu, Ping-Shou Zhong. Toward Systematic Considerations of Missingness in Visual Analytics. Proceedings of the IEEE Visualization and Visual Analytics Conference, pp. 110-114, 2022.
 Best Paper Honorable Mention

Abstract: Data-driven decision making has been a common task in today's big data era, from simple choices such as finding a fast way to drive home, to complex decisions on medical treatment, and it is often supported by visual analytics. For various reasons (e.g., system failure, interrupted network, intentional information hiding, or bias), visual analytics for sensemaking of data involves missingness (e.g., data loss and incomplete analysis), which impacts human decisions. For example, missing data can cost a business millions of dollars, and failing to recognize key evidence can put an innocent person in jail. Being aware of missingness is critical to avoid such catastrophes. To this end, as an initial step, we consider missingness in visual analytics from two aspects: data-centric and human-centric. The former emphasizes missingness in three data-related categories: data composition, data relationship, and data usage. The latter focuses on human-perceived missingness at three levels: observed-level, inferred-level, and ignored-level. Based on these, we discuss possible roles of visualizations in handling missingness, and conclude our discussion with future research opportunities.

[C23]

Sangho Suh, Jian Zhao, Edith Law. CodeToon: Story Ideation, Auto Comic Generation, and Structure Mapping for Code-Driven Storytelling. Proceedings of the ACM Symposium on User Interface Software and Technology, pp. 13:1-13:16, 2022.

Abstract: Recent work demonstrated how we can design and use coding strips, a form of comic strips with corresponding code, to enhance teaching and learning in programming. However, creating coding strips is a creative, time-consuming process. Creators have to generate stories from code (code→story) and design comics from stories (story→comic). We contribute CodeToon, a comic authoring tool that facilitates this code-driven storytelling process with two mechanisms: (1) story ideation from code using metaphor and (2) automatic comic generation from the story. We conducted a two-part user study that evaluates the tool and the comics generated by participants to test whether CodeToon facilitates the authoring process and helps generate quality comics. Our results show that CodeToon helps users create accurate, informative, and useful coding strips in a significantly shorter time. Overall, this work contributes methods and design guidelines for code-driven storytelling and opens up opportunities for using art to support computer science education.

[C22]

Nikhita Joshi, Matthew Lakier, Daniel Vogel, Jian Zhao. A Design Framework for Contextual and Embedded Information Visualizations in Spatial Augmented Reality. Proceedings of the Graphics Interface Conference, pp. 24:1-24:12, 2022.

Abstract: Spatial augmented reality (SAR) displays digital content in a real environment in ways that are situated, peripheral, and viewable by multiple people. These capabilities change how embedded information visualizations are used, designed, and experienced. But a comprehensive design framework that captures the specific characteristics and parameters relevant to SAR is missing. We contribute a new design framework for leveraging context and surfaces in the environment for SAR visualizations. An accompanying design process shows how designers can apply the framework to generate and describe SAR visualizations. The framework captures how the user's intent, interaction, and six environmental and visualization factors can influence SAR visualizations. The potential of this design framework is illustrated through eighteen exemplar application scenarios and accompanying envisionment videos.

[C21]

Gloria Fernandez-Nieto, Pengcheng An, Jian Zhao, Simon Buckingham Shum, Roberto Martinez-Maldonado. Classroom Dandelions: Visualising Participants' Position, Trajectories and Body Orientation Augments Teachers' Sensemaking. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 564:1-564:17, 2022.

Abstract: Despite the digital revolution, physical space remains the site for teaching and learning embodied knowledge and skills. Both teachers and students must develop spatial competencies to effectively use classroom spaces, enabling fluid verbal and non-verbal interaction. While video permits rich activity capture, it provides no support for quickly seeing activity patterns that can assist learning. In contrast, position tracking systems permit the automated modelling of spatial behaviour, opening new possibilities for feedback. This paper introduces the design rationale for Dandelion Diagrams, which integrate location, trajectory, and body orientation over a variable period. Applying them in two authentic teaching contexts (a science laboratory and a nursing simulation), we show how heatmaps showing only teacher/student location led to misinterpretations that were resolved by overlaying Dandelion Diagrams. Teachers also identified a variety of ways the diagrams could aid professional development. We conclude that Dandelion Diagrams assisted sensemaking, but discuss the ethical risks of over-interpretation.

[C20]

Pengcheng An, Ziqi Zhou, Qing Liu, Yifei Yin, Linghao Du, Da-Yuan Huang, Jian Zhao. VibEmoji: Exploring User-authoring Multi-modal Emoticons in Social Communication. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 493:1-493:17, 2022.

Abstract: Emoticons are indispensable in online communications. With users' growing needs for more customized and expressive emoticons, recent messaging applications have begun to support (limited) multi-modal emoticons: e.g., enhancing emoticons with animations or vibrotactile feedback. However, little empirical knowledge has been accumulated concerning how people create, share, and experience multi-modal emoticons in everyday communication, and how to better support them through design. To tackle this, we developed VibEmoji, a user-authoring multi-modal emoticon interface for mobile messaging. Extending existing designs, VibEmoji grants users greater flexibility to combine various emoticons, vibrations, and animations on the fly, and offers non-aggressive recommendations based on these components' emotional relevance. Using VibEmoji as a probe, we conducted a four-week field study with 20 participants to gain new understandings from in-the-wild usage and experience, and to extract implications for design. We thereby contribute both a novel system and various insights for supporting users' creation and communication of multi-modal emoticons.

[C19]

Mingming Fan, Xianyou Yang, Tsz Tung Yu, Vera Q. Liao, Jian Zhao. Human-AI Collaboration for UX Evaluation: Effects of Explanation and Synchronization. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW1), pp. 96:1-96:32, 2022.

Abstract: Analyzing usability test videos is arduous. Although recent research showed the promise of AI in assisting with such tasks, it remains largely unknown how AI should be designed to facilitate effective collaboration between user experience (UX) evaluators and AI. Inspired by the concepts of agency and work context in the human-AI collaboration literature, we studied two corresponding design factors for AI-assisted UX evaluation: explanations and synchronization. Explanations allow AI to further inform humans how it identifies UX problems from a usability test session; synchronization refers to the two ways humans and AI collaborate: synchronously and asynchronously. We iteratively designed a tool, AI Assistant, with four versions of UIs corresponding to the two levels of explanations (with/without) and synchronization (sync/async). By adopting a hybrid Wizard-of-Oz approach to simulating an AI with reasonable performance, we conducted a mixed-methods study with 24 UX evaluators identifying UX problems from usability test videos using AI Assistant. Our quantitative and qualitative results show that AI with explanations, regardless of being presented synchronously or asynchronously, provided better support for UX evaluators' analysis and was perceived more positively; without explanations, synchronous AI better improved UX evaluators' performance and engagement compared to asynchronous AI. Lastly, we present design implications for AI-assisted UX evaluation and facilitating more effective human-AI collaboration.

[C18]

Xingjun Li, Yuanxin Wang, Hong Wang, Yang Wang, Jian Zhao. NBSearch: Semantic Search and Visual Exploration of Computational Notebooks. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 308:1-308:14, 2021.

Abstract: Code search is an important and frequent activity for developers using computational notebooks (e.g., Jupyter). The flexibility of notebooks brings challenges for effective code search, where classic search interfaces for traditional software code may be limited. In this paper, we propose NBSearch, a novel system that supports semantic code search in notebook collections and interactive visual exploration of search results. NBSearch leverages advanced machine learning models to enable natural language search queries and intuitive visualizations to present complicated intra- and inter-notebook relationships in the returned results. We developed NBSearch through an iterative participatory design process with two experts from a large software company. We evaluated the models with a series of experiments and the whole system with a controlled user study. The results indicate the feasibility of our analytical pipeline and the effectiveness of NBSearch to support code search in large notebook collections.

[C17]

Siyuan Xia, Nafisa Anzum, Semih Salihoglu, Jian Zhao. KTabulator: Interactive Ad hoc Table Creation using Knowledge Graphs. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 100:1-100:14, 2021.

Abstract: The need to find or construct tables arises routinely to accomplish many tasks in everyday life, as a table is a common format for organizing data. However, when relevant data is found on the web, it is often scattered across multiple tables on different web pages, requiring tedious manual searching and copy-pasting to collect data. We propose KTabulator, an interactive system to effectively extract, build, or extend ad hoc tables from large corpora, by leveraging their computerized structures in the form of knowledge graphs. We developed and evaluated KTabulator using Wikipedia and its knowledge graph DBpedia as our testbed. Starting from an entity or an existing table, KTabulator allows users to extend their tables by finding relevant entities, their properties, and other relevant tables, while providing meaningful suggestions and guidance. The results of a user study indicate the usefulness and efficiency of KTabulator in ad hoc table creation.

[S5]

Jian Zhao, Maoyuan Sun, Patrick Chiu, Francine Chen, Bee Liew. Know-What and Know-Who: Document Searching and Exploration using Topic-Based Two-Mode Networks. Proceedings of the IEEE Pacific Visualization Symposium, pp. 81-85, 2021.

Abstract: This paper proposes a novel approach for analyzing search results of a document collection, which can help support know-what and know-who information-seeking questions. Search results are grouped by topics, and each topic is represented by a two-mode network composed of related documents and authors (i.e., biclusters). We visualize these biclusters in a 2D layout to support interactive visual exploration of the analyzed search results, which highlights a novel way of organizing entities of biclusters. We evaluated our approach using a large academic publication corpus, by testing the distribution of the relevant documents and of lead and prolific authors. The results indicate the effectiveness of our approach compared to traditional 1D ranked lists. Moreover, a user study with 12 participants was conducted to compare our proposed visualization, a simplified variation without topics, and a text-based interface. We report on participants' task performance, their preferences among the three interfaces, and the different strategies used in information seeking.

[C16]

Takanori Fujiwara, Jian Zhao, Francine Chen, Kwan‑Liu Ma. A Visual Analytics Framework for Contrastive Network Analysis. Proceedings of the IEEE Conference on Visual Analytics Science and Technology, pp. 48-59, 2020.

Abstract: A common network analysis task is comparison of two networks to identify unique characteristics in one network with respect to the other. For example, when comparing protein interaction networks derived from normal and cancer tissues, one essential task is to discover protein-protein interactions unique to cancer tissues. However, this task is challenging when the networks contain complex structural (and semantic) relations. To address this problem, we design ContraNA, a visual analytics framework leveraging both the power of machine learning for uncovering unique characteristics in networks and also the effectiveness of visualization for understanding such uniqueness. The basis of ContraNA is cNRL, which integrates two machine learning schemes, network representation learning (NRL) and contrastive learning (CL), to generate a low-dimensional embedding that reveals the uniqueness of one network when compared to another. ContraNA provides an interactive visualization interface to help analyze the uniqueness by relating embedding results and network structures as well as explaining the learned features by cNRL. We demonstrate the usefulness of ContraNA with two case studies using real-world datasets. We also evaluate ContraNA through a controlled user study with 12 participants on network comparison tasks. The results show that participants were able to both effectively identify unique characteristics from complex networks and interpret the results obtained from cNRL.
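
The exact cNRL formulation is not reproduced here; the following contrastive-PCA-style sketch conveys the contrastive-learning idea of finding directions that are salient in one network's node embeddings relative to another's (data and parameters are illustrative):

    import numpy as np

    def contrastive_directions(target, background, alpha=1.0, k=2):
        # Directions with high variance in `target` but low variance in
        # `background`; the core contrastive idea, illustrated in the
        # style of contrastive PCA rather than cNRL itself.
        ct = np.cov(target, rowvar=False)
        cb = np.cov(background, rowvar=False)
        vals, vecs = np.linalg.eigh(ct - alpha * cb)
        return vecs[:, np.argsort(vals)[::-1][:k]]

    rng = np.random.default_rng(0)
    emb_a = rng.normal(size=(100, 8))   # e.g., node embeddings from NRL
    emb_b = rng.normal(size=(100, 8))
    proj = emb_a @ contrastive_directions(emb_a, emb_b)   # 2-D view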

[C15]

John Wenskovitch, Jian Zhao, Scott Carter, Matthew Cooper, Chris North. Albireo: An Interactive Tool for Visually Summarizing Computational Notebook Structure. Proceedings of the IEEE Symposium on Visualization in Data Science, pp. 1-10, 2019.

Abstract: Computational notebooks have become a major medium for data exploration and insight communication in data science. Although expressive, dynamic, and flexible, in practice they are loose collections of scripts, charts, and tables that rarely tell a story or clearly represent the analysis process. This leads to a number of usability issues, particularly in the comprehension and exploration of notebooks. In this work, we design, implement, and evaluate Albireo, a visualization approach to summarize the structure of notebooks, with the goal of supporting more effective exploration and communication by displaying the dependencies and relationships between the cells of a notebook using a dynamic graph structure. We evaluate the system via a case study and expert interviews, with our results indicating that such a visualization is useful for an analyst's self-reflection during exploratory programming, and also effective for communication of narratives and collaboration between analysts.

[S4]

Cheonbok Park, Inyoup Na, Yongjang Jo, Sungbok Shin, Yoo Jaehyo, Bum Chul Kwon, Jian Zhao, Hyungjong Noh, Yeonsoo Lee, Jaegul Choo. SANVis: Visual Analytics for Understanding Self-Attention Networks. Proceedings of the IEEE Visualization and Visual Analytics Conference, pp. 146-150, 2019.

Abstract: Attention networks, a deep neural network architecture inspired by humans' attention mechanism, have seen significant success in image captioning, machine translation, and many other applications. Recently, they have been further evolved into an advanced approach called multi-head self-attention networks, which can encode a set of input vectors, e.g., word vectors in a sentence, into another set of vectors. Such encoding aims at simultaneously capturing diverse syntactic and semantic features within a set, each of which corresponds to a particular attention head, forming altogether multi-head attention. Meanwhile, the increased model complexity prevents users from easily understanding and manipulating the inner workings of models. To tackle the challenges, we present a visual analytics system called SANVis, which helps users understand the behaviors and the characteristics of multi-head self-attention networks. Using a state-of-the-art self-attention model called Transformer, we demonstrate usage scenarios of SANVis in machine translation tasks. Our system is available at http://short.sanvis.org.

[S3]

Jian Zhao, Maoyuan Sun, Francine Chen, Patrick Chiu. MissBiN: Visual Analysis of Missing Links in Bipartite Networks. Proceedings of the IEEE Visualization and Visual Analytics Conference, pp. 71-75, 2019.

Abstract: The analysis of bipartite networks is critical in a variety of application domains, such as exploring entity co-occurrences in intelligence analysis and investigating gene expression in bio-informatics. One important task is missing link prediction, which infers the existence of unseen links based on currently observed ones. In this paper, we propose MissBiN that involves analysts in the loop for making sense of link prediction results. MissBiN combines a novel method for link prediction and an interactive visualization for examining and understanding the algorithm outputs. Further, we conducted quantitative experiments to assess the performance of the proposed link prediction algorithm and a case study to evaluate the overall effectiveness of MissBiN.
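
MissBiN's own link-prediction method is not detailed in the abstract; a generic low-rank baseline for scoring unseen links in a bipartite network might look like the following (a sketch, not the paper's algorithm):

    import numpy as np

    def score_missing_links(B, rank=5):
        # Reconstruct the biadjacency matrix at low rank; cells with high
        # reconstructed values but no observed link are candidate missing
        # links.
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        R = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        R[B > 0] = -np.inf   # mask links that are already observed
        return R

    B = np.random.default_rng(1).integers(0, 2, size=(20, 30)).astype(float)
    scores = score_missing_links(B, rank=3)
    i, j = np.unravel_index(np.argmax(scores), B.shape)   # top candidate pair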

[S2]

Maoyuan Sun, David Koop, Jian Zhao, Chris North, Naren Ramakrishnan. Interactive Bicluster Aggregation in Bipartite Graphs. Proceedings of the IEEE Visualization and Visual Analytics Conference, pp. 246-250, 2019.

Abstract: Exploring coordinated relationships is important for sensemaking of data in various fields, such as intelligence analysis. To support such investigations, visual analysis tools use biclustering to mine relationships in bipartite graphs and visualize the resulting biclusters with standard graph visualization techniques. Due to overlaps among biclusters, such visualizations can be cluttered (e.g., with many edge crossings), when there are a large number of biclusters. Prior work attempted to resolve this problem by automatically ordering nodes in a bipartite graph. However, visual clutter is still a serious problem, since the number of displayed biclusters remains unchanged. We propose bicluster aggregation as an alternative approach, and have developed two methods of interactively merging biclusters. These interactive bicluster aggregations help organize similar biclusters and reduce the number of displayed biclusters. Initial expert feedback indicates potential usefulness of these techniques in practice.

[C14]

Mona Loorak, Wei Zhou, Ha Trinh, Jian Zhao, Wei Li. Hand-Over-Face Input Sensing for Interaction with Smartphones through the Built-in Camera. Proceedings of the ACM International Conference on Human-Computer Interaction with Mobile Devices and Services, pp. 32:1-32:12, 2019.
 Best Paper

Abstract: This paper proposes using the face as a touch surface and employing hand-over-face (HOF) gestures as a novel input modality for interaction with smartphones, especially when touch input is limited. We contribute InterFace, a general system framework that enables the HOF input modality using advanced computer vision techniques. As an exemplar of the usage of this framework, we demonstrate the feasibility and usefulness of HOF with an Android application for improving single-user and group selfie-taking experience through providing appearance customization in real-time. In a within-subjects study comparing HOF against touch input for single-user interaction, we found that HOF input led to significant improvements in accuracy and perceived workload, and was preferred by the participants. Qualitative results of an observational study also demonstrated the potential of HOF input modality to improve the user experience in multi-user interactions. Based on the lessons learned from our studies, we propose a set of potential applications of HOF to support smartphone interaction. We envision that the affordances provided by this modality can expand the mobile interaction vocabulary and facilitate scenarios where touch input is limited or even not possible.

[C13]

Hao-Fei Cheng, Bowen Yu, Siwei Fu, Jian Zhao, Brent Hecht, Joseph Konstan, Loren Terveen, Svetlana Yarosh, Haiyi Zhu. Teaching UI Design at Global Scales: A Case Study of the Design of Collaborative Capstone Projects for MOOCs. Proceedings of the ACM Conference on Learning at Scale, pp. 11:1-11:11, 2019.

Abstract: Group projects are an essential component of teaching user interface (UI) design. We identified six challenges in transferring traditional group projects into the context of Massive Open Online Courses: managing dropout, avoiding free-riding, appropriate scaffolding, cultural and time zone differences, and establishing common ground. We present a case study of the design of a group project for a UI Design MOOC, in which we implemented technical tools and social structures to cope with the above challenges. Based on survey analysis, interviews, and team chat data from the students over a six-month period, we found that our socio-technical design addressed many of the obstacles that MOOC learners encountered during remote collaboration. We translate our findings into design implications for better group learning experiences at scale.

[C12]

Chidansh Bhatt, Matthew Cooper, Jian Zhao. SeqSense: Video Recommendation Using Topic Sequence Mining. Proceedings of the International Conference on Multimedia Modeling, pp. 252-263, 2018.

Abstract: This paper examines content-based recommendation in domains exhibiting sequential topical structure. An example is educational video, including Massive Open Online Courses (MOOCs) in which knowledge builds within and across courses. Conventional content-based or collaborative filtering recommendation methods do not exploit courses' sequential nature. We describe a system for video recommendation that combines topic-based video representation with sequential pattern mining of inter-topic relationships. Unsupervised topic modeling provides a scalable and domain-independent representation. We mine inter-topic relationships from manually constructed syllabi that instructors provide to guide students through their courses. This approach also allows the inclusion of multi-video sequences among the recommendation results. Integrating the resulting sequential information with content-level similarity provides relevant as well as diversified recommendations. Quantitative evaluation indicates that the proposed system, SeqSense, recommends fewer redundant videos than baseline methods, and instead emphasizes results consistent with mined topic transitions.
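
SeqSense's exact scoring is not given in the abstract; one minimal way to blend content similarity with mined topic transitions is sketched below (function and data names are hypothetical):

    def recommend(current_topic, candidates, content_sim, topic_follows, lam=0.5):
        # Boost candidates whose topic follows the current topic in the
        # transitions mined from syllabi; an illustrative blend only.
        def score(video, topic):
            bonus = lam if (current_topic, topic) in topic_follows else 0.0
            return content_sim[video] + bonus
        return sorted(candidates, key=lambda vt: score(*vt), reverse=True)

    ranked = recommend("regression",
                       [("v1", "classification"), ("v2", "regression")],
                       {"v1": 0.6, "v2": 0.9},
                       {("regression", "classification")})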

[C11]

Jian Zhao, Chidansh Bhatt, Matthew Cooper, David Shamma. Flexible Learning with Semantic Visual Exploration and Sequence-Based Recommendation of MOOC Videos. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 329:1-329:13, 2018.

Abstract: Massive Open Online Course (MOOC) platforms have scaled online education to unprecedented enrollments, but remain limited by their rigid, predetermined curricula. This paper presents MOOCex, a technique that can offer a more flexible learning experience for MOOCs. MOOCex can recommend lecture videos across different courses with multiple perspectives, and considers both the video content and also sequential inter-topic relationships mined from course syllabi. MOOCex is also equipped with interactive visualization allowing learners to explore the semantic space of recommendations within their current learning context. The results of comparisons to traditional methods, including content-based recommendation and ranked list representation, indicate the effectiveness of MOOCex. Further, feedback from MOOC learners and instructors suggests that MOOCex enhances both MOOC-based learning and teaching.

[C10]

Siwei Fu, Jian Zhao, Hao-Fei Cheng, Haiyi Zhu, Jennifer Marlow. T-Cal: Understanding Team Conversation Data with Calendar-based Visualization. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 500:1-500:13, 2018.

Abstract: Understanding team communication and collaboration patterns is critical for improving work efficiency in organizations. This paper presents an interactive visualization system, T-Cal, that supports the analysis of conversation data from modern team messaging platforms (e.g., Slack). T-Cal employs a user-familiar visual interface, a calendar, to enable seamless multi-scale browsing of data from different perspectives. T-Cal also incorporates a number of analytical techniques for disentangling interleaving conversations, extracting keywords, and estimating sentiment. The design of T-Cal is based on an iterative user-centered design process including field studies, requirements gathering, initial prototype demonstrations, and evaluation with domain users. The resulting two case studies indicate the effectiveness and usefulness of T-Cal in real-world applications, including student group chats during a MOOC and daily conversations within an industry research lab.

[C9]

Mingqian Zhao, Yijia Su, Jian Zhao, Shaoyu Chen, Huamin Qu. Mobile Situated Analytics of Ego-centric Network Data. Proceedings of the ACM SIGGRAPH Asia Symposium on Visualization, pp. 14:1-14:8, 2017.

Abstract: Situated Analytics has become popular and important with the resurgence of Augmented Reality techniques and the prevalence of mobile platforms. However, existing Situated Analytics systems only assist in simple visual analytical tasks such as data retrieval, and most visualization systems capable of aiding complex Visual Analytics are designed only for desktops. Thus, many open questions remain about how to adapt desktop visualization systems to mobile platforms. In this paper, we conduct a study to discuss challenges and trade-offs in the process of adapting an existing desktop system to a mobile platform. As a specific example, egoSlider [Wu et al. 2016], a four-view dynamic ego-centric network visualization system, is tailored to the iPhone platform. We study how different view management techniques and interactions influence the effectiveness of presenting multi-scale visualizations including Scatterplot and Storyline visualizations. Simultaneously, a novel Main view+Thumbnails interface layout is devised to support smooth linking between multiple views on mobile platforms. We assess the effectiveness of our system through expert interviews with four experts in data visualization.

[C8]

Jian Zhao, Michael Glueck, Fanny Chevalier, Yanhong Wu, Azam Khan. Egocentric Analysis of Dynamic Networks with EgoLines. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 5003-5014, 2016.
 Best Paper Honorable Mention

Abstract: The egocentric analysis of dynamic networks focuses on discovering the temporal patterns of a subnetwork around a specific central actor (i.e., an ego-network). These types of analyses are useful in many application domains, such as social science and business intelligence, providing insights about how the central actor interacts with the outside world. We present EgoLines, an interactive visualization to support the egocentric analysis of dynamic networks. Using a "subway map" metaphor, a user can trace an individual actor over the evolution of the ego-network. The design of EgoLines is grounded in a set of key analytical questions pertinent to egocentric analysis, derived from our interviews with three domain experts and general network analysis tasks. We demonstrate the effectiveness of EgoLines in egocentric analysis tasks through a controlled experiment and a case study with a domain expert.

[C7]

Jian Zhao, Zhicheng Liu, Mira Dontcheva, Aaron Hertzmann, Alan Wilson. MatrixWave: Visual Comparison of Event Sequence Data. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 259-268, 2015.
 Best Paper Honorable Mention

Abstract: Event sequence data analysis is common in many domains, including web and software development, transportation, and medical care. Few have investigated visualization techniques for comparison analysis of multiple event sequence datasets. Grounded in the real-world characteristics of web clickstream data, we explore visualization techniques for comparison of two clickstream datasets collected on different days or from users with different demographics. Through iterative design with web analysts, we designed MatrixWave, a matrix-based representation that allows analysts to get an overview of differences in traffic patterns and interactively explore paths through the website. We use color to encode differences and size to offer context over traffic volume. User feedback on MatrixWave is positive. Participants in a laboratory study were more accurate with MatrixWave than the conventional Sankey diagram.
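
At its core, the comparison operates on per-dataset counts of transitions between page states; a minimal sketch of the kind of cells a MatrixWave-style view encodes, on toy clickstream data:

    import numpy as np

    def transition_matrix(sequences, states):
        # Count step-wise transitions between page states.
        idx = {s: k for k, s in enumerate(states)}
        M = np.zeros((len(states), len(states)))
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                M[idx[a], idx[b]] += 1
        return M

    states = ["home", "search", "item", "cart"]
    day1 = [["home", "search", "item"], ["home", "item", "cart"]]
    day2 = [["home", "search", "item", "cart"]]
    delta = transition_matrix(day2, states) - transition_matrix(day1, states)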

[C6]

Fan Du, Nan Cao, Jian Zhao, Yu-Ru Lin. Trajectory Bundling for Animated Transitions. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 289-298, 2015.

Abstract: Animated transition has been a popular design choice when switching between different views or layouts, in which moving trajectories are created as cues for tracking objects as they shift location. Tracking moving objects, however, becomes difficult when their moving paths overlap or the number of tracking targets increases. In our work, we propose a new design to facilitate tracking moving objects in animated transitions. Instead of simply moving an object along a straight line, we create "bundled" moving trajectories for a group of objects that are close to one another and share similar moving directions. To study the effect of bundled trajectories, we untangle variations due to different aspects of tracking complexity in a comprehensive controlled user study. The results ascertain the effectiveness of using bundled trajectories, especially when the number of tracking targets grows and the object movement involves a high degree of occlusion. We discuss the implications of our new design and study.

[C5]

Jian Zhao, Liang Gou, Fei Wang, Michelle Zhou. PEARL: An Interactive Visual Analytic Tool for Understanding Personal Emotion Style Derived from Social Media. Proceedings of the IEEE Symposium on Visual Analytics Science and Technology, pp. 203-212, 2014.

Abstract: Hundreds of millions of people leave digital footprints on social media (e.g., Twitter and Facebook). Such data not only disclose a person's demographics and opinions, but also reveal one's emotional style. Emotional style captures a person's patterns of emotions over time, including their overall emotional volatility and resilience. Understanding one's emotional style can provide great benefits for both individuals and businesses alike, including the support of self-reflection and delivery of individualized customer care. We present PEARL, a timeline-based visual analytic tool that allows users to interactively discover and examine a person's emotional style derived from this person's social media text. Compared to other visual text analytic systems, our work offers three unique contributions. First, it supports multi-dimensional emotion analysis from social media text to automatically detect a person's expressed emotions at different time points and summarize those emotions to reveal the person's emotional style. Second, it effectively visualizes complex, multi-dimensional emotion analysis results to create a visual emotional profile of an individual, which helps users browse and interpret one's emotional style. Third, it supports rich visual interactions that allow users to interactively explore and validate emotion analysis results. We have evaluated our work extensively through a series of studies. The results demonstrate the effectiveness of our tool both in emotion analysis from social media and in support of interactive visualization of the emotion analysis results.

[C4]

Ji Wang, Jian Zhao, Sheng Guo, Chris North, Naren Ramakrishnan. ReCloud: Semantics-based Word Cloud Visualization of User Reviews. Proceedings of the Graphics Interface Conference, pp. 151-158, 2014.

Abstract: User reviews, like those found on Yelp and Amazon, have become an important reference for decision making in daily life, for example, in dining, shopping and entertainment. However, large amounts of available reviews make the reading process tedious. Existing word cloud visualizations attempt to provide an overview. However, their randomized layouts do not reveal content relationships to users. In this paper, we present ReCloud, a word cloud visualization of user reviews that arranges semantically related words as spatially proximal. We use a natural language processing technique called grammatical dependency parsing to create a semantic graph of review contents. Then, we apply a force-directed layout to the semantic graph, which generates a clustered layout of words by minimizing an energy model. Thus, ReCloud can provide users with more insight about the semantics and context of the review content. We also conducted an experiment to compare the efficiency of our method with two alternative review reading techniques: random layout word cloud and normal text-based reviews. The results showed that the proposed technique improves user performance and experience of understanding a large number of reviews.
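
As a sketch of the layout step only, assuming the semantic graph has already been built from dependency parses and that the networkx library is available, a weighted spring (force-directed) layout yields clustered word positions:

    import networkx as nx

    # Toy semantic graph: nodes are review words; edge weights reflect how
    # often two words co-occur in a grammatical dependency relation.
    G = nx.Graph()
    G.add_weighted_edges_from([("pizza", "delicious", 5),
                               ("service", "slow", 3),
                               ("pizza", "crust", 4)])
    # The spring layout pulls strongly related words together, giving the
    # clustered positions a ReCloud-style word cloud would draw from.
    pos = nx.spring_layout(G, weight="weight", seed=42)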

[C3]

Jian Zhao, Daniel Wigdor, Ravin Balakrishnan. TrailMap: Facilitating Information Seeking in a Multi-Scale Digital Map via Implicit Bookmarking. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 3009-3018, 2013.

Abstract: Web applications designed for map exploration in local neighborhoods have become increasingly popular and important in everyday life. During the information-seeking process, users often revisit previously viewed locations, repeat earlier searches, or need to memorize or manually mark areas of interest. To facilitate rapid returns to earlier views during map exploration, we propose a novel algorithm to automatically generate map bookmarks based on a user's interaction. TrailMap, a web application based on this algorithm, is developed, providing a fluid and effective neighborhood exploration experience. A one-week study is conducted to evaluate TrailMap in users' everyday web browsing activities. Results showed that TrailMap's implicit bookmarking mechanism is efficient for map exploration and the interactive and visual nature of the tool is intuitive to users.
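
TrailMap's bookmarking algorithm is driven by richer interaction signals than shown here; the deliberately simple dwell-time heuristic below only conveys the flavor of implicit bookmarking (all names are hypothetical):

    def implicit_bookmarks(view_log, min_dwell=8.0):
        # Any viewport the user dwelled on for at least `min_dwell`
        # seconds becomes a bookmark candidate, kept in visit order.
        seen, marks = set(), []
        for viewport, dwell in view_log:
            if dwell >= min_dwell and viewport not in seen:
                seen.add(viewport)
                marks.append(viewport)
        return marks

    marks = implicit_bookmarks([("home", 2.0), ("cafe district", 12.5)])
    # -> ["cafe district"]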

[S1]

Jian Zhao, Steven Drucker, Danyel Fisher, Donald Brinkman. TimeSlice: Interactive Faceted Browsing of Timeline Data. Proceedings of the International Working Conference on Advanced Visual Interfaces, pp. 433-436, 2012.

Abstract: Temporal events with multiple sets of metadata attributes, i.e., facets, are ubiquitous across different domains. The capabilities of efficiently viewing and comparing events data from various perspectives are critical for revealing relationships, making hypotheses, and discovering patterns. In this paper, we present TimeSlice, an interactive faceted visualization of temporal events, which allows users to easily compare and explore timelines with different attributes on a set of facets. By directly manipulating the filtering tree, a dynamic visual representation of queries and filters in the facet space, users can simultaneously browse the focused timelines and their contexts at different levels of detail, which supports efficient navigation of multi-dimensional events data. Also presented is an initial evaluation of TimeSlice with two datasets - famous deceased people and US daily flight delays.

[C2]

R. William Soukoreff, Jian Zhao, Xiangshi Ren. The Entropy of a Rapid Aimed Movement: Fitts' Index of Difficulty versus Shannon's Entropy. Proceedings of the 13th IFIP TC 13 International Conference on Human-Computer Interaction, Part 4, pp. 222-239, 2011.

Abstract: A thought experiment is proposed that reveals a difference between Fitts' index of difficulty and Shannon's entropy, in the quantification of the information content of a series of rapid aimed movements. This implies that the contemporary Shannon formulation of the index of difficulty is similar to, but not identical to, entropy. Preliminary work is reported toward developing a model that resolves the problem. Starting from first principles (information theory), a formulation for the entropy of a Fitts' law style rapid aimed movement is derived that is similar in form to the traditional formulation. Empirical data from Fitts' 1954 paper are analyzed, demonstrating that the new model fits empirical data as well as the current standard approach. The novel formulation is promising because it accurately describes human movement data, while also being derived from first principles (using information theory), thus providing insight into the underlying cause of Fitts' law.
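
For reference, the two standard quantities being contrasted can be written as follows (the paper's own derived entropy formulation is not reproduced here):

    \[ ID = \log_2\!\left(\frac{A}{W} + 1\right) \qquad \text{(Shannon formulation of Fitts' index of difficulty, for movement amplitude } A \text{ and target width } W\text{)} \]

    \[ H(X) = -\sum_i p_i \log_2 p_i \qquad \text{(Shannon's entropy of a discrete random variable)} \]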

[C1]

Jian Zhao, Fanny Chevalier, Ravin Balakrishnan. KronoMiner: Using Multi-Foci Navigation for the Visual Exploration of Time-Series Data. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 1737-1746, 2011.

Abstract: The need for pattern discovery in long time-series data led researchers to develop interactive visualization tools and analytical algorithms for gaining insight into the data. Most of the literature on time-series data visualization focuses on either a small number of tasks or a specific domain. We propose KronoMiner, a tool that embeds new interaction and visualization techniques as well as analytical capabilities for the visual exploration of time-series data. The interface design has been iteratively refined based on feedback from expert users. Qualitative evaluation with an expert user not involved in the design process indicates that our prototype is promising for further research.

Book Chapter

[B1]

Jian Zhao, Fanny Chevalier, Christopher Collins. Designing Tree Visualization Techniques for Discourse Analysis. LingVis: Visual Analytics for Linguistics, M. Butt, A. Hautli-Janisz, and V. Lyding (Editors), Chapter 3, Center for the Study of Language and Information, 2020.

Abstract: A discourse parser is a natural language processing system which can represent the organization of a document based on a rhetorical structure tree, one of the key data structures enabling applications such as text summarization, question answering, and dialogue generation. Computational linguists currently rely on manually exploring and comparing the discourse structures to get intuitions for improving parsing algorithms. In this paper, we revisit our earlier work on DAViewer, an interactive visualization system for assisting computational linguists to explore, compare, evaluate, and annotate the results of discourse parsers. We present an investigation of the rationales guiding design decisions for discourse analysis and compare three alternative representations of discourse parse trees. We report the results of an expert review of these design alternatives for the task of comparing discourse parsing algorithms.

Work-in-Progress and Others

[W18]

Ryan Yen, Jian Zhao, Daniel Vogel. Code Shaping: Iterative Code Editing with Free-form Sketching. Adjunct Proceedings of the ACM Symposium on User Interface Software and Technology (Poster), pp. 101:1-101:3, 2024.
 Jury Best Poster Honorable Mention

Abstract: We present an initial step towards building a system for programmers to edit code using free-form sketch annotations drawn directly onto editor and output windows. Using a working prototype system as a technical probe, an exploratory study (N = 6) examines how programmers sketch to annotate Python code to communicate edits for an AI model to perform. The results reveal personalized workflow strategies and how similar annotations vary in abstractness and intention across different scenarios and users.

[W17]

Ryan Yen, Yelizaveta Brus, Leyi Yan, Jimmy Lin, Jian Zhao. Scholarly Exploration via Conversations with Scholars-Papers Embedding. Proceedings of the IEEE Conference on Visualization and Visual Analytics (Poster), 2024.

Abstract: The rapid expansion of academic publications across various sub-domains necessitates advanced visual analytics systems to help researchers efficiently navigate and explore the academic landscape. Recent advancements in retrieval augmented generation enable users to engage with data through complex, context-driven question-answering capabilities. However, existing approaches fail to provide adequate user control over the retrieval and generation process and do not reconcile visualizations with question-answering mechanisms. To address these limitations, we propose a system that supports contextually aware, controllable, and interactive exploration of academic publications and scholars. By enabling bidirectional interaction between question-answering components and Scholets, the 2D projections of scholarly works' embeddings, our system enables users to textually and visually interact with large amounts of publications. We report the system design and demonstrate its utility through an exploratory study with graduate researchers.

[W16]

Ryan Yen, Nicole Sultanum, Jian Zhao. To Search or To Gen? Exploring the Synergy between Generative AI and Web Search in Programming. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, pp. 327:1-327:8, 2024.

Abstract: The convergence of generative AI and web search is reshaping problem-solving for programmers. However, the lack of understanding regarding their interplay in the information-seeking process often leads programmers to perceive them as alternatives rather than complementary tools. To analyze this interaction and explore their synergy, we conducted an interview study with eight experienced programmers. Drawing from the results and literature, we have identified three major challenges and proposed three decision-making stages, each with its own relevant factors. Additionally, we present a comprehensive process model that captures programmers' interaction patterns. This model encompasses decision-making stages, the information-foraging loop, and cognitive activities during system interaction, offering a holistic framework to comprehend and optimize the use of these convergent tools in programming.

[W15]

Jiawen Stefanie Zhu, Zibo Zhang, Jian Zhao. Facilitating Mixed-Methods Analysis with Computational Notebooks. Proceedings of the First Workshop on Human-Notebook Interactions, 2024.

Abstract: Data exploration is an important aspect of the workflow of mixed-methods researchers, who conduct both qualitative and quantitative analysis. However, few tools currently exist that adequately support both types of analysis simultaneously, forcing researchers to context-switch between different tools and increasing their mental burden when integrating the results. To address this gap, we propose a unified environment that facilitates mixed-methods analysis in a computational notebook-based setting. We conduct a scenario study with three HCI mixed-methods researchers to gather feedback on our design concept and to understand our users' needs and requirements.

[W14]

Yue Lyu, Pengcheng An, Huan Zhang, Keiko Katsuragawa, Jian Zhao. Designing AI-Enabled Games to Support Social-Emotional Learning for Children with Autism Spectrum Disorders. Proceedings of the Second Workshop on Child-Centred AI, 2024.

Abstract: Children with autism spectrum disorder (ASD) experience challenges in grasping social-emotional cues, which can result in difficulties in recognizing emotions and in understanding and responding to social interactions. Social-emotional intervention is an effective method to improve emotional understanding and facial expression recognition among individuals with ASD. Existing work emphasizes the importance of personalizing interventions to meet individual needs and motivate engagement for optimal outcomes in daily settings. We design a social-emotional game for children with ASD that generates personalized stories by leveraging recent advances in artificial intelligence. Via a co-design process with five domain experts, this work offers several design insights into developing future AI-enabled gamified systems for families with autistic children. We also propose a fine-tuned AI model and a dataset of social stories for different basic emotions.

[W13]

Negar Arabzadeh, Kiarash Golzadeh, Christopher Risi, Charles Clarke, Jian Zhao. KnowFIRES: A Knowledge-Graph Framework for Interpreting Retrieved Entities from Search. Advances in Information Retrieval (Proceedings of ECIR'24 (Demo)), pp. 182-188, 2024.

Abstract: Entity retrieval is essential in information access domains where people search for specific entities, such as individuals, organizations, and places. While entity retrieval is an active research topic in Information Retrieval, its explainability and interpretability need to be explored more extensively. KnowFIRES addresses this by offering a knowledge graph-based visual representation of entity retrieval results, focusing on contrasting different retrieval methods. KnowFIRES allows users to better understand these differences through the juxtaposition and superposition of retrieved sub-graphs.

[W12]

Catherine Thomas, Xuejun Du, Kai Wang, Jayant Rai, Kenichi Okamoto, Miles Li, Jian Zhao. A Novel Data Analysis Pipeline for Fiber-based in Vivo Calcium Imaging. Neuroscience Reports, 15(1), pp. S342-S343, 2023.

Abstract: Examining in vivo neural circuit dynamics in relation to behaviour is crucial to advances in understanding how the brain works. Two techniques that are often used to examine these dynamics are one-photon calcium imaging and optogenetics. Fiber-based micro-endoscopy provides a versatile, modular, and lightweight option for combining in vivo calcium imaging and optogenetics in freely behaving animals. One challenge with this technique is that the data collected from such an approach are often complex and dense. Extraction of meaningful conclusions from these data can be computationally challenging and often requires coding experience. While numerous powerful analysis pipelines exist for detection and extraction of one-photon calcium imaging data from head-mounted mini microscopes, few options are available for data using fiber-based imaging techniques. Further, available options for fiber-based imaging are not optimized, often requiring significant troubleshooting and providing limited results. Lastly, the existing pipelines cannot combine in vivo calcium imaging data with optogenetics and behavioural parameters collected in the same experimental system (hardware and software). As such, as a collaborative endeavour between behavioural neuroscientists, optical engineers, and computer science visual processing experts, we have developed a novel pipeline for extraction, examination, and visualization of calcium imaging data for fiber-based approaches. This pipeline offers a user-friendly, code-free interface with customizable features and parameters, capable of integrating imaging, optogenetics, and behavioural measures for holistic experimental visualization and analysis. This pipeline significantly expands the opportunities afforded to behavioural neuroscience researchers and shifts forward the possible research opportunities when examining circuit dynamics in freely behaving animals.

[W11]

Pengcheng An, Chaoyu Zhang, Haicheng Gao, Ziqi Zhou, Linghao Du, Che Yan, Yage Xiao, Jian Zhao. Affective Affordance of Message Balloon Animations: An Early Exploration of AniBalloons. Companion Publication of the ACM Conference on Computer-Supported Cooperative Work and Social Computing, pp. 138-143, 2023.

Abstract: We introduce the preliminary exploration of AniBalloons, a novel form of chat balloon animations aimed at enriching nonverbal affective expression in text-based communications. AniBalloons were designed using extracted motion patterns from affective animations and mapped to six commonly communicated emotions. An evaluation study with 40 participants assessed their effectiveness in conveying intended emotions and their perceived emotional properties. The results showed that 80% of the animations effectively conveyed the intended emotions. AniBalloons covered a broad range of emotional parameters, comparable to frequently used emojis, offering potential for a wide array of affective expression in daily communication. The findings suggest AniBalloons' promise for enhancing emotional expressiveness in text-based communication and provide early insights for future affective design.

[W10]

Pengcheng An, Chaoyu Zhang, Haicheng Gao, Ziqi Zhou, Zibo Zhang, Jian Zhao. Animating Chat Balloons to Convey Emotions: The Design Exploration of AniBalloons. Proceedings of the Graphics Interface Conference (Poster), 2023.

Abstract: Text message-based communication has limitations in conveying nonverbal emotional expressions, resulting in less sense of connectedness and increased likelihood of miscommunication. While emoticons may partially compensate for this limitation, we argue that chat balloon animations could be a new and unique channel to further complement affective cues in text messages. In this paper, we present the design of AniBalloons, a set of 30 chat-balloon animations conveying six types of emotions, and evaluate their affect recognizability and emotional properties. Our results show that animated chat balloons, as independent from the message content, are effective in communicating intended emotions and cover a variety of valence-arousal parameters for daily communication. Our results thereby suggest the potential of chat-balloon animations as a unique affective channel for text messages.

[W9]

Zejiang Shen, Jian Zhao, Melissa Dell, Yaoliang Yu, Weining Li. OLALA: Object-Level Active Learning Based Layout Annotation. Proceedings of the EMNLP 5th Workshop on NLP and Computational Social Science, 2022.

Abstract: Document images often have intricate layout structures, with numerous content regions (e.g., texts, figures, tables) densely arranged on each page. This makes the manual annotation of layout datasets expensive and inefficient. These characteristics also challenge existing active learning methods, as image-level scoring and selection suffer from the overexposure of common objects. Inspired by recent progress in semi-supervised learning and self-training, we propose an Object-Level Active Learning framework for efficient document layout Annotation, OLALA. In this framework, only the regions with the most ambiguous object predictions within an image are selected for annotators to label, optimizing the use of the annotation budget. For unselected predictions, a semi-automatic correction algorithm is proposed to identify certain errors based on prior knowledge of layout structures and rectify them with minor supervision. Additionally, we carefully design a perturbation-based object scoring function for document images. It governs the object selection process via evaluating prediction ambiguities, and considers both the positions and categories of predicted layout objects. Extensive experiments show that OLALA can significantly boost model performance and improve annotation efficiency, given the same labeling budget.
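
A minimal sketch of the object-level selection idea, using a simple margin-based ambiguity score as a stand-in for the paper's perturbation-based scoring function (names are illustrative):

    def select_ambiguous_objects(predictions, budget):
        # `predictions`: list of (object_id, class_probabilities) pairs.
        def ambiguity(probs):
            top2 = sorted(probs, reverse=True)[:2]
            return 1.0 - (top2[0] - top2[1])   # small margin => ambiguous
        ranked = sorted(predictions, key=lambda p: ambiguity(p[1]),
                        reverse=True)
        return [oid for oid, _ in ranked[:budget]]

    picks = select_ambiguous_objects([("a", [0.9, 0.1]),
                                      ("b", [0.55, 0.45])], budget=1)
    # -> ["b"], the prediction an annotator should label first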

[W8]

Zhaoyi Yang, Pengcheng An, Jinchen Yang, Samuel Strojny, Zihui Zhang, Dongsheng Sun, Jian Zhao. Designing Mobile EEG Neurofeedback Games for Children with Autism: Implications from Industry Practice. Proceedings of the ACM International Conference on Mobile Human-Computer Interaction (Industry Perspectives), pp. 23:1-23:6, 2021.

Abstract: Neurofeedback games are an effective and playful approach to enhance certain social and attentional capabilities in children with autism, an approach that is becoming increasingly accessible with commercialized mobile EEG modules. However, little industry-based experience has been shared regarding how to better design neurofeedback games to fine-tune their playability and user experiences for autistic children. In this paper, we review the experiences we gained from industry practice, in which a series of mobile EEG neurofeedback games were developed for preschool autistic children. We briefly describe our design and development in a one-year collaboration with a special education center involving a group of stakeholders: children with autism and their caregivers and parents. We then summarize four concrete implications we learned concerning the design of game characters, game narratives, and gameplay elements, which aim to support future work in creating better neurofeedback games for preschool children with autism.

[W7]

Brad Glasbergen, Michael Abebe, Khuzaima Daudjee, Daniel Vogel, Jian Zhao. Sentinel: Understanding Data Systems. Proceedings of the ACM SIGMOD Conference (Demo), pp. 2729-2732, 2020.
 Best Demo

Abstract: The complexity of modern data systems and applications greatly increases the challenge in understanding system behaviour and diagnosing performance problems. When these problems arise, system administrators are left with the difficult task of remedying them by relying on large debug log files, vast numbers of metrics, and system-specific tooling. We demonstrate the Sentinel system, which enables administrators to analyze systems and applications by building models of system execution and comparing them to derive key differences in behaviour. The resulting analyses are then presented as system reports to administrators and developers in an intuitive fashion. Users of Sentinel can locate, identify and take steps to resolve the reported performance issues. As Sentinel's models are constructed online by intercepting debug logging library calls, Sentinel's functionality incurs little overhead and works with all systems that use standard debug logging libraries.

[W6]

Chidansh Bhatt, Jian Zhao, Hideto Oda, Francine Chen, Matthew Lee. OPaPi: Optimized Parts Pick-up Routing for Efficient Manufacturing. Proceedings of the ACM SIGMOD Workshop on Human-In-the-Loop Data Analytics, pp. 5:1-5:8, 2019.

Abstract: Manufacturing environments require changes in work procedures and settings based on changes in product demand affecting the types of products for production. Resource re-organization and time needed for worker adaptation to such frequent changes can be expensive. For example, for each change, managers in a factory may be required to manually create a list of inventory items to be picked up by workers. Uncertainty in predicting the appropriate pick-up time due to differences in worker-determined routes may make it difficult for managers to generate a fixed schedule for delivery to the assembly line. To address these problems, we propose OPaPi, a human-centric system that improves the efficiency of manufacturing by optimizing parts pick-up routes and scheduling. OPaPi leverages frequent pattern mining and the traveling salesman problem solver to suggest rack placement for more efficient routes. The system further employs interactive visualization to incorporate an expert's domain knowledge and different manufacturing constraints for real-time adaptive decision making.
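
OPaPi pairs a real TSP solver with mined pick-frequency patterns; neither is reproduced here, but a brute-force route search over a handful of racks illustrates the routing sub-problem:

    import itertools, math

    def route_length(points, order):
        # Total walking distance for visiting racks in the given order.
        return sum(math.dist(points[a], points[b])
                   for a, b in zip(order, order[1:]))

    def best_route(points):
        # Exhaustive search starting from rack 0; fine for toy sizes only.
        rest = range(1, len(points))
        return min(((0,) + perm for perm in itertools.permutations(rest)),
                   key=lambda o: route_length(points, o))

    racks = [(0, 0), (3, 1), (1, 4), (5, 2)]   # hypothetical rack positions
    order = best_route(racks)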

[W4]

Matthew Cooper, Jian Zhao, Chidansh Bhatt, David Shamma. Using Recommendation to Explore Educational Video. Proceedings of the ACM International Conference on Multimedia Retrieval (Demo), 2018.

Abstract: Massive Open Online Course (MOOC) platforms have scaled online education to unprecedented enrollments, but remain limited by their rigid, predetermined curricula. Increasingly, professionals consume this content to augment or update specific skills rather than complete degree or certification programs. To better address the needs of this emergent user population, we describe a visual recommender system called MOOCex. The system recommends lecture videos across multiple courses and content platforms to provide a choice of perspectives on topics. The recommendation engine considers both video content and sequential inter-topic relationships mined from course syllabi. Furthermore, it allows for interactive visual exploration of the semantic space of recommendations within a learner's current context.

[W3]

Ji Wang, Jian Zhao, Sheng Guo, Chris North. Clustered Layout Word Cloud for User Generated Review. Yelp Dataset Challenge (Grand Prize Winner), 2013.

Abstract: User reviews, like those found on Yelp and Amazon, have become an important reference for decision making in daily life, for example, in dining, shopping and entertainment. However, large amounts of available reviews make the reading process tedious. Existing word cloud visualizations attempt to provide an overview. However, their randomized layouts do not reveal content relationships to users. In this paper, we present ReCloud, a word cloud visualization of user reviews that arranges semantically related words as spatially proximal. We use a natural language processing technique called grammatical dependency parsing to create a semantic graph of review contents. Then, we apply a force-directed layout to the semantic graph, which generates a clustered layout of words by minimizing an energy model. Thus, ReCloud can provide users with more insight about the semantics and context of the review content. We also conducted an experiment to compare the efficiency of our method with two alternative review reading techniques: random layout word cloud and normal text-based reviews. The results showed that the proposed technique improves user performance and experience of understanding a large number of reviews.

[W2]

Jian Zhao. A Particle Filter Based Approach of Visualizing Time-varying Volume. Proceedings of the IEEE Symposium on Large-Scale Data Analysis and Visualization (Poster), 2012.

Abstract: Extracting and presenting essential information of time-varying volumetric data is critical in many fields of science. This paper introduces a novel approach of identifying important aspects of the dataset under the particle filter framework from computer vision. Viewing time-varying volumes as dynamic voxels moving along time, an algorithm for computing the 3D voxel transition curves is derived. Based on the curves, which characterize the local temporal behavior of the data, this paper also introduces several post-processing techniques to visualize important features, such as curve clusters obtained by k-means and curve variations computed from curve gradients.
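
As a sketch of the clustering post-processing step, assuming scikit-learn is available and each voxel's transition curve has been flattened into one row of a matrix:

    import numpy as np
    from sklearn.cluster import KMeans

    # Rows are per-voxel intensity curves over time; clustering groups
    # voxels with similar temporal behavior (toy random data here).
    curves = np.random.default_rng(2).random((500, 30))   # 500 voxels, 30 steps
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(curves)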

[W1]

Jian Zhao, R. William Soukoreff, Ravin Balakrishnan. A Model of Multi-touch Manipulation. Proceedings of the 2nd Annual Grand Conference (Poster), 2011.

Abstract: As touch-sensitive devices become increasingly popular, fundamentally understanding human performance with multi-touch gestures is critical. However, there is currently no mathematical model for interpreting such gestures. In this paper, a novel model of multi-touch interaction is derived by combining the Mahalanobis distance metric and Fitts' law. The model describes the time required to complete an object manipulation task that includes translocation, rotation, and scaling. Empirical data is reported that validates the new model (R^2 > 0.9). A linear relationship between difficulty and elapsed time is revealed, indicating that the model can provide guidelines for interface designers to empirically compare gestures and devices.
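
The poster's combined formulation is not reproduced here; its two standard ingredients are:

    \[ d_M(\mathbf{x}) = \sqrt{(\mathbf{x} - \boldsymbol{\mu})^{\top} \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu})} \qquad \text{(Mahalanobis distance)} \]

    \[ MT = a + b \log_2\!\left(\frac{A}{W} + 1\right) \qquad \text{(Fitts' law, with empirically fitted constants } a \text{ and } b\text{)} \]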

Thesis

[T1]

Jian Zhao. Interactive Visual Data Exploration: A Multi-Focus Approach. Department of Computer Science, University of Toronto, 2015.

Abstract: Recently, the amount of digital information available in the world has been growing at a tremendous rate. This huge, heterogeneous, and complicated data that we are continuously generating could be an incredible resource for us to seek insights and make informed decisions. For this knowledge extraction to be efficient, visual exploration of data is demanded in addition to fully automatic methods, because visual exploration can integrate the creativity, flexibility, and general experience of the human user into the sense-making process through interaction and visualization techniques.

Due to the scale and complexity of data, robust conclusions are usually formed by coordinating many sub-regions in an information space, which leads to the approach of multi-focus visual exploration that allows browsing different data segments with multiple views and perspectives simultaneously. While prior research has proposed a myriad of information visualization techniques, there is still a lack of comprehensive understanding about how visual exploration can be facilitated by multi-focus interactive visualizations. This dissertation investigates issues and techniques of multi-focus visual exploration through five design studies, touching various types of data in a range of application domains.

The first two design studies address the exploration of numerical data values. KronoMiner presents a multi-purpose visual tool for exploring time-series based on a dynamic radial hierarchy; and the ChronoLenses system supports exploratory visual analysis of time-series by allowing users to progressively construct advanced analytical pipelines. The third design study focuses on the exploration of logical data structures, and presents DAViewer, which facilitates computational linguistics researchers in exploring and comparing rhetorical trees. The last two design studies consider the exploration of heterogeneous data attributes (or facets). TimeSlice facilitates the browsing of multi-faceted event timelines by organizing visual queries in a tree structure; and PivotSlice aids the mining of relationships in multi-attributed networks through a dynamic subdivision of data with customized semantics.

This dissertation ends with critical reflections and generalizations of the experiences obtained from the case studies. High-level design considerations, conceptual models, and visualization theories are distilled to inform researchers and practitioners in information visualization for devising effective multi-focus visual interfaces.

Patents

[P22]

Wei Zhou, Mona Loorak, Ghazaleh Saniee-Monfared, Sachi Mizobuchi, Pourang Irani, Jian Zhao, Wei Li. Methods, Devices, Media for Input/Output Space Mapping in Head-Based Human-Computer Interactions. US11797081B2, Filed in 2021, Granted in 2023.

[P21]

Takanori Fujiwara, Jian Zhao, Francine Chen. System and Method for Contrastive Network Analysis and Visualization. US11538552B2, Filed in 2020, Granted in 2022.

[P20]

Jian Zhao. System and Method for Summarizing and Steering Multi-User Collaborative Data Analysis. US10937213B2, Filed in 2019, Granted in 2021.

[P19]

[P18]

Hideto Oda, Chidansh Bhatt, Jian Zhao. Optimized Parts Pickup List and Routes for Efficient Manufacturing using Frequent Pattern Mining and Visualization. US20200226505A1, Filed in 2018, Granted in 2021.

[P17]

Patrick Chiu, Chelhwon Kim, Hajime Ueno, Yulius Tjahjadi, Anthony Dunnigan, Francine Chen, Jian Zhao, Bee-Yian Liew, Scott Carter. System for Searching Documents and People based on Detecting Documents and People around a Table. US10810457B2, Filed in 2018, Granted in 2020.

[P16]

Jian Zhao, Francine Chen, Patrick Chiu. A Visual Analysis Framework for Understanding Missing Links in Bipartite Networks. US11176460B2, Filed in 2018, Granted in 2021.

[P15]

John Wenskovitch, Jian Zhao, Matthew Cooper, Scott Carter. System and Method for Computational Notebook Interface. US10768904B2, Filed in 2018, Granted in 2020.

[P14]

Francine Chen, Jian Zhao, Yan-Ying Chen. System and Method for Generating Titles for Summarizing Conversational Documents. US20200026767A1, Filed in 2018, Abandoned.

[P13]

Jian Zhao, Yan-Ying Chen, Francine Chen. System and Method for Creating Visual Representation of Data based on Generated Glyphs. US10649618B2, Filed in 2018, Granted in 2020.

[P12]

Jian Zhao, Chidansh Bhatt, Matthew Cooper, Ayman Shamma. System and Method for Visualizing and Recommending Media Content Based on Sequential Context. US10776415B2, Filed in 2018, Granted in 2020.

[P11]

Jian Zhao, Siwei Fu. System and Method for Analyzing and Visualizing Team Conversational Data. US11086916B2, Filed in 2017, Granted in 2021.

[P10]

Jian Zhao, Francine Chen, Patrick Chiu. System for Visually Exploring Coordinated Relationships in Data. US10521445B2, Filed in 2017, Granted in 2019.

[P9]

Jian Zhao, Francine Chen, Patrick Chiu. System and Method for Visual Exploration of Sub-Network Patterns in Two-Mode Networks. US11068121B2, Filed in 2017, Granted in 2021.

[P8]

Jian Zhao, Francine Chen, Patrick Chiu. System and Method for Visually Exploring Search Results in Two-Mode Networks. US10521445B2, Filed in 2017, Granted in 2021.

[P7]

Francine Chen, Jian Zhao, Yan-Ying Chen. System and Method for User-Oriented Topic Selection and Browsing. US11080348B2, Filed in 2017, Granted in 2021.

[P6]

Michael Glueck, Azam Khan, Jian Zhao. Handoff Support in Asynchronous Analysis Tasks using Knowledge Transfer Graphs. US20180081885A1, 2017.

[P5]

Jian Zhao, Michael Glueck, Azam Khan, Simon Breslay. Techniques For Mixed-Initiative Visualization of Data. US11663235B2, Filed in 2017, Granted in 2023.

[P4]

Jian Zhao, Michael Glueck, Azam Khan. Node-Centric Analysis of Dynamic Networks. US10142198B2, Filed in 2017, Granted in 2018.

[P3]

Mira Dontcheva, Jian Zhao, Aaron Hertzmann, Allan Wilson, Zhicheng Liu. Providing Visualizations of Event Sequence Data. US9577897B2, Filed in 2015, Granted in 2017.

[P2]

Liang Gou, Fei Wang, Jian Zhao, Michelle Zhou. Personal Emotion State Monitoring from Social Media. US20150213002A1, Filed in 2014, Abandoned.

[P1]

Jian Zhao, Steven Drucker, Danyel Fisher, Donald Brinkman. Relational Rendering of Multi-Faceted Data. US8872849B2, Filed in 2011, Granted in 2014.

Other Cool Stuff

See my arXiv author page.