Crystal Rutland, founder of Particle Design, hosted an event as part of Design Week Portland on April 18 called “Empathy, Bots & the Future of AI”. A long-time user experience designer, Rutland saw this year’s festival as an opportunity to step back from the minutiae of human-technology interaction and explore the larger implications of growing AI involvement in daily life.

The event approached the AI space from a variety of perspectives, interspersing expert talks with performances and installations from artists inspired by its possibilities. Other speakers besides Rutland included Anima Anandkumar, a machine learning pioneer who conducts research for Amazon AI and lectures at Caltech; Josh Lovejoy, an AI researcher and UX lead at Google’s PAIR initiative; and the futurist, inventor, and science-fiction author Brian David Johnson.

I spoke with Rutland shortly before the event about its driving ideas and intent, and in the process learned a bit more about the current challenges of integrating AI into society.

Q: How’d you decide on the subject of AI?

The UX design agency I run does a lot of work with big tech companies figuring out the human impact of emerging technologies.

AI has been a real topic of exploration for us, especially the way its emergence in the digital world affects the physical one. Humans are reaching a seamless relationship now, where we sometimes live as much in the digital world as in the real one. As that digital life is increasingly shaped by AI, how does that impact who we are as people?

Q: How about the format of the event?

We decided on a TED-like event, with short-form talks from people in the industry, interspersed with artist performances. That kind of structure can get you into a mindset that provokes thoughts you might not otherwise have had.

Q: How did you choose the other three presenters?

I wanted to address empathy as it relates to AI, so I'll be speaking on that. I brought in Brian because of his expertise in bots. And through her work with Amazon AI, and as a professor at Caltech, Anima is a preeminent scholar in machine learning, which is also a huge part of AI development.

Q: AI has been a recurring subject of science fiction and actual research for decades. Why is it suddenly so prominent in the real world?

I think we're finally at the stage where we have the processing speed and memory to bring some of these visions of AI capability to fruition. Processes that in the ’90s would have taken weeks or months can now literally be done in seconds, and that changes the whole game.

AI is all around us now, but the general public is just catching up. When I search on Google, AI is powering that; when I'm looking at Facebook or my newsfeed, AI algorithms are shaping what I read next, based on a combination of past behaviors and the goals of the companies behind them — which may or may not be aligned with my goals.
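To make that alignment point concrete, here is a toy sketch of how a feed-ranking score might blend a reader's interests with a platform's business goals. Everything in it is hypothetical: the weights, the field names, and the scoring function are invented for illustration, not drawn from any company's actual system.

    # A hypothetical feed-ranking caricature (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class Post:
        title: str
        relevance_to_user: float   # 0..1, estimated from past behavior
        revenue_potential: float   # 0..1, value to the platform

    def rank_feed(posts, user_weight=0.3, business_weight=0.7):
        # When business_weight dominates, the feed optimizes for the
        # platform's goals rather than the reader's: the misalignment
        # described above.
        def score(p):
            return (user_weight * p.relevance_to_user
                    + business_weight * p.revenue_potential)
        return sorted(posts, key=score, reverse=True)

    feed = rank_feed([
        Post("Calm, factual explainer", relevance_to_user=0.9, revenue_potential=0.2),
        Post("Outrage clickbait", relevance_to_user=0.3, revenue_potential=0.9),
    ])
    print([p.title for p in feed])  # the clickbait ranks first under these weights

Flip the weights and the ranking flips with it; that is the alignment problem in miniature.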

Q: Historically, popular depictions of AI have been pretty reductive: an object of fear (Terminator) or a foil for humanity (My Robot Buddy, or Data from Star Trek). How do you think this misses the mark?

When we talk about AI, we should really be talking about AIs, plural, and understand what we mean by intelligence.

AI that could extinguish humanity is a specific kind of intelligence, similar to human intelligence. Right now, humans are the only species that can take information from completely different areas and combine it in order to innovate and devise better solutions. We can look at how bees build hives and how beavers build dams, and create honeycomb-structured dams as a result (this is someone else's analogy). If we define intelligence as the ability to meet goals flexibly, then we’re actually talking about something called Artificial General Intelligence (AGI), or Super-Intelligence when a machine possesses it.

But if you're talking about driving a car from A to B, or beating a human at Go, that's Narrow AI, and that’s my focus. It presents opportunities, especially as the goals of Narrow AI become more ambitious — like autonomous driving. I’m interested in ways AI can have an ambitious, narrow goal tied to human morals and values, like driving from A to B without hitting someone.

Q: Does that suggest a way forward?

I think there are real, immediate things we could spend our time on in the Narrow AI space that feel not only more productive than AGI research but also more helpful to both pursuits. For example, some of the fears circulating in the industry right now (I'm not saying they aren't credible) can lead to a lack of government funding for AI.

Q: Where do you see AI having a positive impact on everyday user experience?

Advancements in medicine, for one. The amount of data we're creating is growing astronomically, and AI can process it in ways humans can’t. It can feed us insights into what's happening in the human body, and across disease populations, in unprecedented ways — and that can expand the quality and length of human life. I think that's pretty stunning.

From a basic human safety perspective, I also think we have a moral imperative to push autonomous driving as far and as quickly as we can.

Q: What questions are you hoping to address during the event?

The BIG question is: what does it mean for us, as a civilization, that our increasingly digital existence is being defined by artificial intelligences that are growing in their power to define how we think, feel, and act? And what is our role in designing how that power gets used?

Q: What topics do you think we’re not talking about enough?

Social media. I hear a lot more about problems than solutions.

Many companies operate on what some have called Surveillance Capitalism, where the business model is essentially to surveil your users, then tailor ads or content based on what you’ve gathered. I’m not Facebook's customer; I’m Facebook's user. Facebook's customers are the advertisers giving them revenue. Because of the Cambridge Analytica story, this is suddenly at the surface.

Q: How is that an AI issue?

Well, for instance, an AI could run algorithms that determine how often a particular author is accurate and factual; Reddit does this by letting users vote things up and down. AI could also look across a system and see that a "news outlet" has news articles that are X% accurate, and that’s useful for readers to know. We’ve got all this AI processing power and intelligence available to us — why not use it to help us understand the veracity of what we’re seeing?
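As a rough illustration of the veracity scoring Rutland describes, one could tally fact-check outcomes per author and roll them up per outlet to produce that "X% accurate" figure. This sketch assumes hypothetical data and a simplistic binary accurate/inaccurate label; real fact-checking is far messier.

    from collections import defaultdict

    # Hypothetical fact-check results: (outlet, author, article_was_accurate)
    checks = [
        ("Daily Bugle", "J. Jameson", False),
        ("Daily Bugle", "B. Urich", True),
        ("Daily Bugle", "B. Urich", True),
        ("The Ledger", "A. Vance", True),
    ]

    def accuracy_scores(records):
        # Fraction of fact-checked articles judged accurate, per key.
        hits, totals = defaultdict(int), defaultdict(int)
        for key, ok in records:
            totals[key] += 1
            hits[key] += ok
        return {k: hits[k] / totals[k] for k in totals}

    by_author = accuracy_scores((author, ok) for _, author, ok in checks)
    by_outlet = accuracy_scores((outlet, ok) for outlet, _, ok in checks)
    print(by_author)  # {'J. Jameson': 0.0, 'B. Urich': 1.0, 'A. Vance': 1.0}
    print(by_outlet)  # e.g. Daily Bugle comes out roughly 0.67 accurate: the "X%"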

Q: So you think the real topic is what tasks we should be unleashing the power of AI on?

Exactly! The goal of the AI needs to shift.

Until business models also shift to support those goals, we have a goal alignment problem. How can we expect Facebook to give us stuff that's accurate, calm, and measured, if the way they make money is by delivering clickbait?