
Timing challenge for self-driving cars

CIVIS Highlights

9 March 2026

When we talk about traffic, questions such as who goes first at a roundabout might seem obvious to humans, who rely on eye contact, subtle motions, and timing. But what about self-driving cars? For them it's a complex puzzle, and specialists at Stockholm University's Department of Computer and Systems Sciences are exploring exactly this challenge.

In the AI in Motion research project, specialists have focused on how autonomous vehicles can learn to interpret and respond to human social behavior in traffic. Barry Brown, professor of human–computer interaction and the project’s principal investigator, explains:

To show our intentions in traffic, we sometimes drive slowly and sometimes quickly. This is a fundamental skill in driving a car. Computers, however, struggle to understand the kind of social interaction that takes place between people. So if we are going to have a self-driving system controlled by a computer, it has to be able to understand the social interactions on the road.

How do humans interpret machines?

In the project, researchers from Stockholm University collaborate with peers from Linköping University. Assistant Professor Hannah Pelikan is particularly interested in how people interpret and understand machines' behaviour in everyday environments. She has therefore analysed people's behaviour as they encountered self-driving vehicles:

I’ve filmed self-driving shuttle buses by riding them over a long period of time, observing how they move, and filming and documenting the interactions from different camera angles.

In the recordings, researchers can identify the moments when something goes wrong. Annotating what is actually happening enables them to understand why a robot bus halting suddenly is confusing for other road users:

“People are often under time pressure to get somewhere, and a vehicle suddenly stopping in front of them can be frustrating, even if this is the safest behaviour from a technical perspective. This ‘safe’ behaviour clashes with human drivers and pedestrians”, Hannah says.

Hannah Pelikan has filmed self-driving shuttle buses by riding them / Photo: Magnus Johansson, LiU

Filming self-driving systems in the United States

Barry uses the same method and has been in the United States to film self-driving systems currently in operation:

If you’ve ever driven in the US, for example in San Francisco, you’ll know the city has lots of hills and lots of four-way intersections. In Europe we more often have roundabouts. At a four-way intersection you have to yield, and it becomes a kind of subtle cooperative dance where the question is who arrived at the intersection first.

But self-driving cars struggle with this. When they are supposed to yield, they drive up to the intersection, stop, and wait. It takes them a very long time to decide whether to go or stay. And when they start moving again, it’s at the same time as another car, which results in them stopping again. For human drivers this can be confusing: What is the self-driving car doing now? Is it going or not?
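The start-stop loop Barry describes can be illustrated with a toy simulation. This is a deliberately simplified sketch, not the control logic of any real vehicle: two cars at a four-way stop each follow the same cautious rule, go only if the other car looks stationary, and stop the moment the other one moves. The function names and the policy itself are illustrative assumptions.

```python
# Toy model of two identical cautious policies at a four-way stop.
# Each car observes the other's last state and decides to go or stop.

def cautious_policy(other_is_moving: bool) -> str:
    """Go only when the other car appears fully stopped."""
    return "stop" if other_is_moving else "go"

def simulate(steps: int = 6):
    a_moving = b_moving = False  # both cars start stopped at the line
    history = []
    for _ in range(steps):
        # Both decide simultaneously, based on the other's previous state.
        a_next = cautious_policy(b_moving) == "go"
        b_next = cautious_policy(a_moving) == "go"
        a_moving, b_moving = a_next, b_next
        history.append((a_moving, b_moving))
    return history

print(simulate())
```

Because the two policies are symmetric, the cars start together, see each other moving, stop together, and repeat: exactly the confusing "is it going or not?" oscillation human drivers observe.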

Four-way intersections are a clear example, but the same problem arises in many other traffic situations where vehicles must yield. For humans, this usually happens smoothly, but for computers this type of interaction is very difficult to manage:

I think one of the challenges is timing. As humans, we want to get where we're going smoothly, so we're quite efficient and quick in how we act. We often anticipate other drivers' movements, perhaps by seeing that a car moves a little or slows down slightly, and interpret that as a signal of what it is going to do. Based on that, we make our own driving decisions. So, we react very quickly to what other cars do and try to predict their next move.

In human interaction, there is no “time-out mode”

Hannah argues that a central challenge is being able to read what people are doing and adapt accordingly:

This is a fundamental question we are trying to highlight, especially within human–robot interaction: how should we actually think about interaction? In human interaction, there is no ‘time-out mode.’ Stopping is not always the safe option. Simply showing uncertainty by stopping can in itself create new problems. That’s why it is often more important to be able to see and interpret what people are doing. Are they perhaps making space for the robot or the car to go? If so, the system should be able to accelerate quickly and take that opportunity.
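The idea Hannah describes can be sketched in a few lines: instead of treating stopping as the universally safe default, a system reads whether the other road user is making space and takes the opportunity promptly. Everything here, the function names and the speed thresholds, is an illustrative assumption, not a description of any deployed system.

```python
# Sketch: interpret a clear deceleration toward standstill as
# "making space", and respond by taking the gap instead of freezing.

def interpret_yield(speed_now: float, speed_before: float) -> bool:
    """Heuristic: the other party is yielding if they are clearly
    decelerating (assumed 0.5 m/s margin) and nearly stopped (< 1 m/s)."""
    slowing = speed_now < speed_before - 0.5
    nearly_stopped = speed_now < 1.0
    return slowing and nearly_stopped

def decide(speed_now: float, speed_before: float) -> str:
    # Accelerate and take the opportunity when space is being made,
    # rather than stopping to signal uncertainty.
    return "accelerate" if interpret_yield(speed_now, speed_before) else "hold"

print(decide(0.4, 3.0))  # other car braking to a crawl -> accelerate
print(decide(3.0, 3.0))  # other car keeping speed -> hold
```

The design point is the asymmetry: uncertainty does not map to "stop" but to "hold", and a recognised yield maps to prompt action, which is what keeps the interaction legible to humans.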

The insights Hannah and Barry capture by analysing the filmed material are then brought into interdisciplinary contexts. If researchers with technical backgrounds can understand these interactional patterns and develop algorithms that handle them, the knowledge can eventually be passed on to industry.

At the same time, Hannah emphasises that developing vehicles and robots is an enormous challenge, as moving among people requires an understanding of how people act. Bringing knowledge about human interaction into technical domains is a prerequisite for building technology that can coexist with people and move together with them.

More details are available in the original story, in Swedish.