Annotations in Visualizations
I study how annotations, such as textual descriptions, highlights, and graphical marks, support interpretation, communication, and analysis in visualizations. My work has produced a taxonomy and design space that capture how annotations are structured, combined, and used across a wide range of chart types and tasks. Building on this, I developed a grammar-based extension that treats annotations as first-class elements in declarative visualization systems. In ongoing work, I am investigating how professional visualization designers use annotations in practice, uncovering common strategies and challenges. Together, these contributions aim to make annotations more integral, expressive, and reusable within visualization tools and workflows.
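As a rough illustration of what "annotations as first-class elements" can mean in a declarative grammar, the sketch below models a chart specification whose annotations compose alongside the rest of the spec rather than being bolted on afterward. All names here (`Chart`, `Annotation`, `Target`, `annotate`) are hypothetical stand-ins, not the API of my actual system.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: annotations as first-class elements of a chart grammar.
# Class and method names are illustrative, not from a real library.

@dataclass
class Target:
    """What an annotation attaches to: a data point, a region, or the chart."""
    kind: str                                      # e.g. "point", "region", "chart"
    selector: dict = field(default_factory=dict)   # e.g. {"year": 2022}

@dataclass
class Annotation:
    """A first-class annotation: a mark plus optional text, bound to a target."""
    target: Target
    mark: str                                      # e.g. "text", "highlight", "arrow"
    text: str = ""
    style: dict = field(default_factory=dict)

@dataclass
class Chart:
    """A minimal declarative chart spec; annotations compose like any layer."""
    mark: str
    encoding: dict
    annotations: list = field(default_factory=list)

    def annotate(self, annotation: Annotation) -> "Chart":
        # Return a new spec, keeping composition declarative and side-effect free.
        return Chart(self.mark, self.encoding, self.annotations + [annotation])

# Usage: attach a textual note to a specific data point.
chart = Chart(mark="line", encoding={"x": "year", "y": "sales"})
note = Annotation(Target("point", {"year": 2022}), mark="text",
                  text="Supply shortage")
annotated = chart.annotate(note)
```

Because `annotate` returns a new spec instead of mutating the chart, annotations can be added, removed, and reused independently of the base visualization, which is the property that makes them "first-class."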
Using Behavioral Nudges in Peer Review
I designed and developed a peer review dashboard aimed at improving both the quality of student peer assessments and students' engagement with the review process. The system lets students view their peers' work and provide structured feedback within a shared interface. Building on principles from behavioral science, we plan to incorporate targeted nudges, such as social comparisons, progress indicators, and reflective prompts, that encourage students to spend more time on peer reviews and to provide thoughtful, high-quality feedback. The goal is to foster more meaningful peer-to-peer learning interactions and to improve the overall effectiveness of peer assessment in educational settings. This work bridges human-computer interaction and learning science by exploring how subtle design interventions can positively influence review behavior.
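To make the social-comparison idea concrete, here is a minimal sketch of how such a nudge could be computed. It assumes review effort is proxied by word count, which is a simplification; the threshold, the proxy, and the messages are all illustrative choices, not the dashboard's actual logic.

```python
from statistics import median

# Hypothetical social-comparison nudge: compare a reviewer's effort (proxied
# here by word count) against the class median and phrase feedback accordingly.
# The 0.75 threshold and message wording are illustrative assumptions.

def social_comparison_nudge(my_word_count: int, peer_word_counts: list) -> str:
    """Return a nudge message comparing a reviewer's effort to the class median."""
    class_median = median(peer_word_counts)
    if my_word_count < 0.75 * class_median:
        return (f"Most reviewers wrote about {class_median:.0f} words; "
                f"you wrote {my_word_count}. Could you add a concrete suggestion?")
    return "Nice work! Your review is as detailed as your classmates'."

# Usage: a short review triggers the comparison; a detailed one gets praise.
short_msg = social_comparison_nudge(40, [120, 95, 150, 110])
long_msg = social_comparison_nudge(130, [120, 95, 150, 110])
```

Framing the message around peers' behavior, rather than an abstract rule, is what makes this a social-comparison nudge in the behavioral-science sense.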
Understanding the Role of Visualizations in Decision-Making with ML-Based Recommender Systems
We investigate how visualizations influence user trust and decision-making in machine learning–based recommender systems. Our work explores how different representations—such as bar charts, scatterplots, and descriptive accuracy summaries—affect users' ability to interpret recommendation quality, particularly in contexts involving class imbalance or uncertain predictions. Through a series of controlled user studies, we found that visual and descriptive cues can significantly shape users' trust in both the system and individual recommendations, supporting more informed decision-making. This research contributes to the design of transparent, user-centered ML systems by highlighting the role of visualization in communicating model performance and limitations.
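A small worked example helps show why a single accuracy summary can mislead users under class imbalance, the situation our studies examine. The data below is synthetic and for illustration only.

```python
# Why one accuracy number can mislead under class imbalance: a degenerate
# model that always predicts the majority class still looks highly accurate.
# Synthetic labels, for illustration only.

y_true = [0] * 95 + [1] * 5    # 95% majority class, 5% minority class
y_pred = [0] * 100             # degenerate model: always predict the majority

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Per-class recall exposes the failure the overall number hides:
# the model never identifies a single minority-class instance.
recall_minority = (sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
                   / sum(1 for t in y_true if t == 1))

print(f"accuracy = {accuracy:.2f}")                # 0.95 looks strong...
print(f"minority recall = {recall_minority:.2f}")  # ...but is 0.00
```

A visualization or descriptive summary that surfaces per-class behavior, rather than the headline accuracy alone, gives users a more honest basis for deciding whether to trust a recommendation.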