Power and algorithms: Rune Nyrup educates the AI ethicists of tomorrow

Artificial intelligence can shape our reality if we don't understand how it works. Rune Nyrup researches how to create greater transparency – and teaches students about the power behind the technology.

Are We Being Manipulated When We Use Artificial Intelligence?

“There will always be a risk that the technology is deliberately used to manipulate.”

So concludes Rune Nyrup, associate professor at the Centre for Science Studies, Department of Mathematics at Aarhus University.

He teaches, among other things, ethics for computer science students, and notes that the risk applies whether you’re being recommended a route by Google Maps, shown a reel on Instagram, or receiving an answer from ChatGPT.

“There’s an unequal power dynamic between you and those who control the technology,” he explains:

“Those who design the systems choose to highlight what they most want you to focus on. And that’s problematic, regardless of their intentions.”

Precisely because there is so much power in being able to feed the algorithms behind many of the apps and websites we use daily, it’s important to understand the machinery behind them.

“Our research group looks at how we can make complex computer systems understandable, so we can avoid political and ethical manipulation,” he says.

Rune Nyrup teaches undergraduate courses in philosophy of science and ethics within computer science, IT product development, and data science at Aarhus University.

At the master's level, he teaches courses in philosophy of science and AI ethics as part of science studies. His master's courses are available as electives for all graduate students in the NAT and TECH faculties at AU.

He also supervises projects in philosophy of science and AI ethics – for example, bachelor's projects, master's theses, or individual elective courses.

The Goal: More Democratic Algorithms

When you're shopping online or scrolling through social media, you're often recommended similar products or posts:

“We think you’ll like this” or “You might enjoy this.”

Some platforms give you the option to see why you're being recommended something. It might say that people similar to you bought the same item. But there’s a deeper explanation, says Rune Nyrup.

The algorithm is designed to present you with whatever is most likely to lead to more purchases — or to keep you scrolling.

Typically, it’s a small number of wealthy companies that control the most advanced technology, and therefore also decide what you are shown.

“I hope that through our research, we can find new ways of thinking that help distribute power more evenly — and thus make it more democratic,” he says.

One way to “democratize” artificial intelligence is by developing explanation models that are more user-controlled — rather than controlled by those developing the systems behind the scenes.

Another way, Rune Nyrup adds, is through digital literacy and education.

“It requires that people, as citizens, develop a more critical awareness and understand that the explanations you’re given aren’t necessarily the whole truth,” he says.

Systems Are Always Biased

The overarching goal of the research is to make the systems behind artificial intelligence more transparent and controlled by a broader group of stakeholders.

Even in cases where we don’t believe the technology is being misused, the use of AI can still be problematic, Nyrup adds.

Maybe you trust that public institutions wouldn’t abuse artificial intelligence?

Maybe you assume that private companies are subject to enough regulation to prevent misuse?

“Systems always make ethical, value-based, and political decisions based on what some engineers have programmed them to do,” says Rune Nyrup.

In extreme cases, there’s Elon Musk’s AI chatbot Grok, which suddenly began spreading conspiracy theories about a genocide of white people in South Africa.

A more subtle example is Google's Gemini image generator, which had been tuned to promote ethnic diversity. On the surface, that may sound fine, but it resulted in images of ethnically diverse German soldiers from 1943, distorting historical fact.

“Behind the systems, there are people making value judgments about, for example, what constitutes proper diversity,” he emphasizes.

Distinguishing Between Manipulation and Benefit

The research group led by Rune Nyrup hopes to make it easier to understand how these complex computer systems actually function.

The goal is to formulate principled ethical criteria that can help distinguish between manipulative explanations and beneficial ones.

To do this, they are investigating foundational questions in both philosophy of science and political philosophy.

“For example, we explore what it means to give a good explanation, what counts as a democratic and legitimate distribution of power — and how these ideas interact in relation to artificial intelligence,” he says.