Josh Lovejoy is the Lead UX for People + AI Research (PAIR), a Google initiative to conduct fundamental research, invent new technology, and create frameworks for design in order to drive a humanistic approach to artificial intelligence. A leader in the field, with work spanning design, technology, and social impact, he has spent his career at the intersection of product design and ethics, and he is on a mission to bring deep humanity to AI.
Josh came to Portland to participate in Particle Design's Empathy, Bots, and the Future of AI event during Design Week Portland this past April. We asked him a few questions about his work to accompany his talk, The Human-Centered Approach to AI.
Could you please tell us a little about the work you do leading UX for People + AI Research (PAIR) at Google?
My approach to design is to address the root cause of people’s needs, whether that’s in products, process, or in organizations. For a company-wide initiative like PAIR, that means thinking about "the user" across a spectrum. For the folks writing and shaping machine learning models directly, like research scientists and engineers, we want it to be easier to build, interpret, and evaluate those models. For those seeking to apply machine learning to augment their personal or professional expertise, like doctors, designers, and musicians, we want to support the development of flexible and augmentative tools. And for the consumer, we want to empower the development of technology that recognizes the uniqueness and agency of every individual.
What have been some of your favorite projects at Google?
I'm in love with the Magenta project, which is all about applying AI to augment the creative process of musicians and artists. The spirit of the work is so pure and relatable; we can all think of a time when we were "stuck," or conversely when we got wonderfully lost in the flow of collaboration. And because machine learning is fundamentally a pattern-finding tool, we've been having a ton of fun using it as a design material for self-reflection and for playful experimentation in the serendipity of unexplored spaces.
What led to your interest in human-centered design for AI?
As a product designer, I've needed to help people navigate from a mindset of expecting ML to figure out stuff on its own, to embracing just how much guidance and oversight it actually needs in order to return good results. The pull of AI future-tech sci-fi is so strong, but the reality is that "AI-first" product design—just like "mobile-first" and every-other-first that's preceded it—demands the same human-centered process we've had all along, just with a new tool in the toolkit. Because if you aren’t aligned with a human need, you’re just going to build a very powerful system to address a very small — or perhaps nonexistent — problem.
What applications of AI are you excited about?
I'm super excited about the opportunities for augmenting ambient sensory awareness. As a person living with pretty significantly impaired vision, the potential to "see" more of the world by augmenting my other senses is an extremely rich innovation space. But I have to underscore how profoundly important privacy and data ownership are in that story. As a design community, we have to evolve to a place where the default isn't automation or centralization, and where the goals of AI systems are powered by human uniqueness.
What is unique about designing UX for AI? What is universal?
What's unique—and universal at the same time—is how our most fundamental mandate continues to be user advocacy. With the UX of AI, we have to expand our understanding of the user journey to include the full lifecycle of "the backend." Whereas with traditional logic, design can encode expected outcomes in the form of mocks and specs, machine learning is an innately fuzzy science. Imperfection needs to be embraced in the design process, because the job of a machine learning algorithm is essentially to develop a "hunch" about something. Therefore the UX of an AI should start with the assumption that a human being will always have the final say. The burden of proof that any part of a system is automatable should hinge on the strength of agreement about what legitimately useful outcomes actually look like, and on how well that agreement holds up across a diversity of users, use cases, and environments of use.