The everyday ethics of AI

As artificial intelligence takes on bigger roles, ethical concerns mount, Juliette Powell tells UND’s Olafson Ethics Symposium

Juliette Powell, featured speaker at the 17th Annual Olafson Ethics Symposium at UND, speaks on Nov. 3 to an audience in Nistler Hall on ‘The AI Dilemma.’ Photo by Tom Dennis/UND Today.

Editor’s note: A video of the Olafson Ethics Symposium can be found at the end of this story.

Call it the Artificial Intelligence Dilemma, and recognize it as one of the most important tech challenges of the 21st century.

But don’t misunderstand, said Juliette Powell, featured speaker at the 17th Annual Olafson Ethics Symposium, an event hosted by UND’s Nistler College of Business & Public Administration.

“Because when I talk about dilemmas, it’s not that the robots are going to come and steal our jobs,” Powell told the audience. “It’s not that they’re going to become the robot overlords.”

Instead, the AI Dilemma is at once less exotic than those scenarios and more profound. “It’s about how will we deal with the technology that we use every single day of our lives,” she said – because AI already is giving us billboard ads that analyze us even as we stare at them, algorithms that predict criminal recidivism and inform sentencing, and speech-generating software so accurate that it’ll make even the person being “deepfaked” wonder, “Why on Earth did I ever say that?”

As the saying goes, with great power comes great responsibility, Powell noted. But now that much of humanity is carrying around smartphones that put vast computing power in our pockets, the saying is relevant to all of us, not just kings and queens.

“More and more, modern life is being driven by artificial intelligence,” she said. “So even if you’re not into technology, even if AI is not something that you’ve ever really thought about, this might be of interest if you’re interested in being part of the human race.”

Amy Henley, dean of the Nistler College of Business & Public Administration at UND, introduces Juliette Powell on Nov. 3 at the College’s 17th Annual Olafson Ethics Symposium. Photo by Tom Dennis/UND Today.

Author, analyst and commentator

Powell is an author and consultant at the intersection of technology, business and ethics, and she has advised organizations large and small about how to deal with AI-enabled technological innovation.

The winner of the 1989 Miss Canada pageant, Powell has worked on television as a host, business reporter and analyst. She has offered live commentary on Bloomberg, BNN, NBC, CNN, ABC and BBC, and made presentations at institutions such as The Economist, Harvard and MIT on topics that center on digital literacy and the responsible deployment of AI.

The annual Olafson Ethics Symposium is meant to give students and the business community the chance to explore the importance of both personal and professional ethics, said Amy Henley, dean of the Nistler College of Business & Public Administration. The event is funded through the support of Robert Olafson, a mathematics and business-administration graduate of UND, and reflects his dedication to ethical business practices and to the University. Additional support has been provided by SEI Investments Company.

This year’s Symposium was the first to be held in Nistler Hall, the Nistler College’s brand-new building, Henley noted. Additionally, Henley said, she was delighted to at last welcome Powell as the keynote speaker.

“We’ve been talking to Juliette for over two years now, through all of the COVID challenges,” Henley told the Barry Auditorium audience of about 200 people on Nov. 3. “We didn’t get her here as soon as we would have liked, so we’re thrilled to finally have her on campus.”

Powell said that for her, the visit was worth the wait. In addition to getting to see the “fantastic, fantastic” new Nistler Hall, “everyone that I’ve spoken with at the school so far has just made me feel so at home and welcome,” she said.

The emails from the college were filled with that sentiment, and when travel obstacles arose, “there was somebody always here to make me feel like it was going to be OK,” she said.

“And that is such a gift. I’ve spoken all over the world, and I’ve rarely encountered such a warm and hearty welcome.”

An author, analyst and commentator, Juliette Powell described the vexing problems that artificial intelligence is posing and will continue to pose to society. Photo by Tom Dennis/UND Today.

The four logics of power

In her talk, Powell recapped some of the most prominent proposals that governments are considering to regulate AI. And the best way to understand them, she said, is from the perspective of risks and benefits, not necessarily right and wrong.

First, consider the “four logics of power,” four approaches to decision-making that tend to vary depending on a person’s position in society. For example, corporate logic is the logic of markets and competitive advantage. It prioritizes profit, growth, expansion and new business, all on behalf of shareholder value, Powell said.

Engineering logic is the logic technologists use. It prioritizes efficiency and seamlessness, and values technology as a way to solve human problems.

Government logic is the view of authority. It prioritizes law and order, and values technology as a way to track, serve and protect people and institutions.

And last but not least, social justice logic prioritizes humanity. In this view, people are more important than profit or efficiency, Powell said. This view values people as a way to solve human and technological problems.

“The key here is not to focus on one specific logic, but to try to keep all of them in mind when you’re making decisions,” she said.

The EU’s AI Act

Notably, the European Union’s AI Act is an attempt to do just that.

The AI Act is a proposed European law on artificial intelligence. Though it has not yet taken effect, it’s the first such law to be proposed by a major regulator anywhere, and it’s being studied in detail around the world because so many tech companies do extensive business in the EU.

The law assigns applications of AI to four risk categories, Powell said. First, there’s “minimal risk” – benign applications that don’t hurt people. Think AI-enabled video games or spam filters, for example, and understand that the EU proposal allows unlimited use of those applications.

Then there are “limited risk” systems such as chatbots, in which – the AI Act declares – the user must be made aware that they’re interacting with a machine. That would satisfy the EU’s goal that users decide for themselves whether to continue the interaction or step back.

“High risk” systems can cause real harm – and not only physical harm, as can happen in self-driving cars. These systems also can hurt employment prospects (by sorting resumes, for example, or by tracking productivity on a warehouse floor). They can deny credit or loans or the ability to cross an international border. And they can influence criminal-justice outcomes through AI-enhanced investigation and sentencing programs.

According to the EU, “any producer of this type of technology will have to give not just justifications for the technology and its potential harms, but also business justifications as to why the world needs this type of technology,” Powell said.

“This is the first time in history, as far as I know, that companies are held accountable to their products to this extent of having to explain the business logic of their code.”

Then there is the fourth level: “unacceptable risk.” And under the AI Act, all systems that pose a clear threat to the safety, livelihoods and rights of people will be banned, plain and simple.

“Again, I’m not here to tell you what’s right and what’s wrong,” Powell said. “The question is, can we as a society decide – and not just for ourselves, but for our kids and future generations?”

That’s the AI Dilemma, and solving it will be a society-wide challenge, she said. “But that’s the exciting part, because we’re actually living in a time of history where we can decide the future. … For that to happen, it means we need to step up when the call is there; and I’m making the call to all of you tonight.”