The Design of Harmony
Published 03/20/2026 in Scholar Travel Stipend
By Abigail Leyva
As a UX designer at an AI startup, my daily challenge is to bridge the gap between complex machine intelligence and the messy, intuitive reality of human life. In my work, I seek inventive ways to help people help themselves lead productive and satisfying lives. This past month, I traveled through Japan in search of design principles that might inform how we build the next generation of AI. What I found was not simply aesthetic inspiration, but a fundamentally different relationship between guidance, safety, minimalism, and technology—one that offers important lessons for AI design.
Navigational Clarity and the Flow of Intent
Wayfinding usually demands constant cognitive effort from the traveler. In Japan, the built environment performs much of that labor for the user. Persistent arrows on station floors clearly indicate direction of travel, allowing crowds to move with a sense of collective harmony. This is a strong example of information foraging in practice—the theory that humans seek information in ways that minimize cognitive effort while maximizing perceived value (Pirolli & Card, 1999).
In AI design, we often present users with a “blank canvas,” such as an empty chat box, under the assumption that unlimited freedom equates to power. Research on information foraging suggests the opposite: users are most effective when systems provide clear cues about where to go next. Japan taught me that true empowerment comes from directional guidance. Good AI should not merely offer a search bar; it should provide the digital equivalent of “arrows on the ground” that guide users toward meaningful next steps.
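One way this "arrows on the ground" idea can translate to an AI interface is suggestion chips that replace the blank canvas with a few concrete next steps. The sketch below is purely illustrative—the `Suggestion` type and the topic-to-chip mapping are assumptions, not any real product's API.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Suggestion:
    label: str   # short text shown on the chip
    prompt: str  # full prompt sent if the user selects it

def next_step_suggestions(last_topic: Optional[str]) -> List[Suggestion]:
    """Offer 'arrows on the ground': a few concrete next steps
    instead of an empty chat box. The mapping here is illustrative."""
    if last_topic is None:
        # Cold start: offer starting points rather than a blank canvas.
        return [
            Suggestion("Summarize a document", "Summarize the key points of: "),
            Suggestion("Draft an email", "Help me draft an email about: "),
            Suggestion("Explain a concept", "Explain in simple terms: "),
        ]
    # Mid-conversation: point onward from what the user just explored.
    return [
        Suggestion("Go deeper", f"Tell me more about {last_topic}."),
        Suggestion("See an example", f"Give a concrete example of {last_topic}."),
        Suggestion("Plan next steps", f"What should I do next with {last_topic}?"),
    ]

for chip in next_step_suggestions(None):
    print(chip.label)
```

Keeping the list to three or four chips matters as much as offering them at all: too many options reintroduces the choice overload the arrows were meant to prevent.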

The Logic of the Gate: Structural Guardrails
Safety in Japanese infrastructure is not merely advisory; it is physically embedded. On train platforms, glass and steel gates separate passengers from the tracks, preventing accidents before human error can occur. These guardrails acknowledge an essential truth of design: humans are fallible, especially in complex or high-stakes environments.
In the AI domain, discussions of “safety” are often framed in ethical or policy terms, such as fairness, transparency, and accountability (Floridi et al., 2018; OECD, 2019). While these frameworks are essential, they frequently place the burden of safety on the user through warnings about hallucinations, bias, or misuse. Japan’s physical guardrails offer a more proactive model.

If AI is used in sensitive domains like medical research or legal analysis, safety should be structurally enforced through interface constraints—such as requiring verification at critical steps or restricting unsupported inferential leaps—rather than relying solely on user discretion. Designing for safety means preventing users from “falling onto the tracks” in the first place.
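As a sketch of what a structural (rather than advisory) guardrail might look like in code: the interface simply cannot release a high-stakes answer until a verification step has occurred. The domain names, exception, and gate function are all hypothetical, chosen only to illustrate the pattern.

```python
# A minimal sketch of a structural guardrail: high-stakes output is
# blocked by design, not merely accompanied by a warning. All names
# here are illustrative, not a real API.

HIGH_STAKES_DOMAINS = {"medical", "legal"}

class VerificationRequired(Exception):
    """Raised when an answer tries to skip the verification gate."""

def release_answer(answer: str, domain: str, verified: bool = False) -> str:
    """Act like a platform gate: in sensitive domains the interface
    refuses to pass the answer through until it has been reviewed."""
    if domain in HIGH_STAKES_DOMAINS and not verified:
        raise VerificationRequired(
            f"{domain!r} answers must be verified before release"
        )
    return answer

# Low-stakes output flows freely; high-stakes output hits the gate.
print(release_answer("Water boils at 100 C at sea level.", "general"))
try:
    release_answer("Take 200 mg daily.", "medical")
except VerificationRequired as exc:
    print("blocked:", exc)
```

The design choice is that the check lives in the release path itself, so no amount of user haste or inattention can route around it—the software equivalent of glass and steel between the platform and the tracks.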
Minimalism and the Power of “Enough”
In Western cultures, value is often equated with abundance—more features, more information, more output. This mindset has shaped much of modern technology, resulting in feature-dense applications that frequently overwhelm users and create decision paralysis (Iyengar & Lepper, 2000).
In Japan, I encountered a different philosophy. Restaurants characterized by minimalist interiors and close connections to nature emphasize presence over consumption. Portion sizes are intentionally sufficient rather than excessive. I left meals feeling satisfied and energized, not overextended.

This principle has direct implications for AI design. Large language models are often praised for producing lengthy, comprehensive responses, yet research in cognitive load theory suggests that excessive information can hinder learning and understanding (Sweller, 1988). In educational or knowledge-seeking contexts, the most effective response is often not the longest one, but the one that provides just enough information to spark curiosity and independent thought. Designing AI that fosters human capital means leaving space for the human to think.
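A minimal sketch of this "just enough" idea is progressive disclosure: return a short summary first and keep the full detail behind an explicit request. The naive sentence splitting and the field names below are illustrative assumptions, not a production summarizer.

```python
# Sketch of "enough" as progressive disclosure: the user sees a short
# answer first; the rest waits behind a "tell me more" affordance.

def just_enough(full_answer: str, max_sentences: int = 2) -> dict:
    """Split an answer into a brief summary plus on-demand detail.
    The period-based splitting is deliberately naive."""
    sentences = [s.strip() for s in full_answer.split(".") if s.strip()]
    summary = ". ".join(sentences[:max_sentences]) + "."
    return {
        "summary": summary,                           # shown immediately
        "has_more": len(sentences) > max_sentences,   # drives the affordance
        "detail": full_answer,                        # revealed only on request
    }

result = just_enough(
    "Cognitive load limits learning. Shorter answers leave room to think. "
    "Detail can always be requested. Exhaustive output often overwhelms."
)
print(result["summary"])
print("more available:", result["has_more"])
```

The point is not the string manipulation but the contract: the interface defaults to sufficiency and makes abundance opt-in, rather than the reverse.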
Immersive Personalization: From User to Co-Creator
At TeamLab Borderless in Tokyo, I experienced a compelling vision of personalized, immersive technology. In the Sketch Aquarium exhibit, visitors draw creatures on paper, scan them, and watch as their drawings become animated participants in a shared digital ecosystem. The boundary between creator, observer, and environment dissolves.
This experience reflects a broader shift in human-computer interaction away from static tools and toward adaptive, immersive environments (Norman & Verganti, 2014). AI is increasingly moving beyond task-based assistance toward systems that respond to users’ mental models, preferences, and creative input in real time. Rather than treating users as operators, these systems position them as co-creators. A similar effect was present at the Mori Art Museum’s Moon Underwater exhibit, where AI-driven simulations of bubbles, fog, and light interacted organically with natural forms. Here, AI was not deployed for productivity or efficiency, but to deepen aesthetic and emotional engagement. It demonstrated that AI can serve as a medium for expression and meaning, not merely optimization.

Anticipatory Design: The Spirit of Omotenashi
Perhaps the most profound lesson from Japan was the concept of omotenashi. Often translated as “hospitality,” the term more accurately describes the practice of anticipating needs before they are explicitly stated (Kondo, 2015). I experienced this in subtle but powerful ways: a hotel providing a bookmark for an open book, or a ramen shop offering cards to communicate one’s mood without speaking.
Contemporary AI systems are largely reactive, centered around explicit prompts. Yet research in anticipatory design suggests that systems which understand context and intent can reduce friction and improve user experience (Maes, 1994). Applying omotenashi to AI means designing systems that surface help, insights, or warnings at precisely the moment they are needed—before the user realizes they need them.
As the sole designer at a startup focused on service management, my goal is to build interfaces that do not merely wait for a problem to be reported. Instead, they should proactively identify potential bottlenecks and offer support before systems fail. True hospitality in technology is not about giving users everything, but about giving them exactly what they need, exactly when they need it.
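In a service-management context, that anticipatory stance might look like the sketch below: project a simple trend on queue depth and speak up before capacity is breached. The thresholds, the metric, and the linear projection are all assumptions chosen for illustration.

```python
# Sketch of anticipatory design for service management: rather than
# waiting for a ticket, watch a simple trend and surface help before
# the queue actually breaks. Numbers and names are made up.

from typing import List, Optional

def anticipate_bottleneck(queue_depths: List[int],
                          capacity: int = 50,
                          lookahead: int = 3) -> Optional[str]:
    """Fit a naive linear trend to recent queue depths and warn if the
    queue is projected to exceed capacity within `lookahead` intervals."""
    if len(queue_depths) < 2:
        return None  # not enough history to project a trend
    slope = (queue_depths[-1] - queue_depths[0]) / (len(queue_depths) - 1)
    projected = queue_depths[-1] + slope * lookahead
    if projected > capacity:
        return (f"Heads up: queue projected to reach ~{projected:.0f} "
                f"(capacity {capacity}) within {lookahead} intervals.")
    return None  # all quiet: say nothing, like a good host

print(anticipate_bottleneck([10, 18, 27, 35]))  # rising fast: warns early
print(anticipate_bottleneck([12, 11, 12, 11]))  # stable: stays silent
```

Notably, omotenashi cuts both ways: the system intervenes before the failure, but it also stays silent when nothing is wrong, so its warnings retain meaning.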

Conclusion
My journey through Japan reshaped my understanding of my role as a designer. Rather than building increasingly powerful machines, I see my responsibility as creating guided, safe, and human-centered experiences. By embracing the navigational clarity of Japan’s transit systems, the structural safety of its guardrails, the restraint of its minimalism, and the immersion of its art, we can design AI that truly helps people help themselves.
References
● Floridi, L., Cowls, J., Beltrametti, M., et al. (2018). AI4People—An ethical framework for a good AI society. Minds and Machines, 28(4), 689–707.
● Iyengar, S. S., & Lepper, M. R. (2000). When choice is demotivating. Journal of Personality and Social Psychology, 79(6), 995–1006.
● Kondo, M. (2015). Omotenashi: The Japanese art of hospitality. Japan Publishing Industry Foundation for Culture.
● Maes, P. (1994). Agents that reduce work and information overload. Communications of the ACM, 37(7), 30–40.
● Norman, D. A., & Verganti, R. (2014). Incremental and radical innovation: Design research vs. technology and meaning change. Design Issues, 30(1), 78–96.
● OECD. (2019). OECD principles on artificial intelligence.
● Pirolli, P., & Card, S. (1999). Information foraging. Psychological Review, 106(4), 643–675.
● Sweller, J. (1988). Cognitive load during problem solving. Cognitive Science, 12(2), 257–285.