Bhada Yun

I'm a master's student at ETH Zürich studying Machine Intelligence and Visual and Interactive Computing. My research is supervised by Prof. Dr. Mennatallah El-Assady and Prof. Dr. April Yi Wang. Previously, I completed my bachelor's degree in Computer Science at the University of California, Berkeley.

My work broadly focuses on human-AI interaction, where I develop systems and empirically evaluate how the integration of AI affects stakeholders across various domains. I have ongoing projects exploring AI for creativity, learning, and self-actualization. I am cautiously optimistic about the future and want to understand how technology can support human agency and purpose, beyond just productivity.

I'm particularly interested in AI phenomenology: how humans perceive, make sense of, and relate to AI systems. As autonomous systems grow more inscrutable and exhibit capabilities that exceed human intuition, I believe mental models (i.e., the implicit concepts people hold about AI) will become one of the most critical human factors shaping interaction, and I aim to contribute to this research agenda.

  • Grind
    硏 /jʌn/ to polish
    stone ground until even
  • Research
    究 /ku/ to research
    a group investigating a cave
AI and My Values: User Perceptions of LLMs' Ability to Extract, Embody, and Explain Human Values from Casual Conversations
Published in CHI '26 · 🏅 Honorable Mention Award
Bhada Yun, Renn Su, April Yi Wang
20 people texted a chatbot for a month about their daily lives. The AI built profiles of their values, then explained its reasoning in a 2-hour interview. 13 participants left convinced the AI truly understood them.
Does My Chatbot Have an Agenda? Understanding Human and AI Agency in Human-Human-like Chatbot Interaction
Published in CHI '26 · 🏅 Honorable Mention Award
Bhada Yun, Evgenia Taranova, April Yi Wang
22 adults chatted with Day, our AI companion, for a month. Who decided when to greet, change topics, or say goodbye? Participants thought they were in control, but the AI was quietly steering depth and breadth.
From Junior to Senior: Allocating Agency and Navigating Professional Growth in Agentic AI–Mediated Software Engineering
Published in CHI '26 · 🏅 Honorable Mention Award
Dana Feng*, Bhada Yun*, April Yi Wang
Juniors code with AI from day one. Seniors learned the hard way, then adapted. We studied both through debugging tasks and interviews, comparing how they delegate to agentic tools like Cursor.
AI Phenomenology for Understanding Human-AI Experiences Across Eras
Accepted Workshop Paper at CHI '26
Bhada Yun, Evgenia Taranova, Dana Feng, Renn Su, April Yi Wang
We believe the question "how did it feel to interact with the AI?" is just as important as "how usable is the system?" We trace a philosophical lineage from Husserl through postphenomenology to Actor-Network Theory, offering a methodological throughline across three HAI papers.
Generative AI in Knowledge Work: Design Implications for Data Navigation and Decision-Making
Published in CHI '25 · 🏅 Honorable Mention Award
Bhada Yun*, Dana Feng*, Ace Chen, Afshin Nikzad, Niloufar Salehi
Product managers drown in scattered information across platforms, so we built Yodeai to help them synthesize it all. 16 PMs tested it for real decisions. They praised the adaptability and control, but hit limits: overreliance, isolation, and blind spots AI couldn't see.
Wrapped in Anansi's Web: Unweaving the Impacts of Generative-AI Personalization and VR Immersion in Oral Storytelling
Published in AH '25 (Augmented Humans)
Carrie Lau, Bhada Yun, Samuel Saruba, Efe Bozkir, Enkelejda Kasneci
We built a VR experience of Ghanaian Anansi folktales with AI-driven personalization to preserve oral traditions. 48 participants tried it. VR boosted their cultural interest as expected, but the AI personalization did something unexpected: it turned their focus inward, sparking self-reflection more than cultural learning.

You can find me on LinkedIn or reach me at bhayun@ethz.ch.

© ∞ Bhada Yun