Should AI First Do No Harm?
Medicine is one of those disciplines where ethics and moral philosophy are more than just academic interests; they have real-world implications.
I was once part of a surgical team whose patient died waiting for a liver transplant, because we deemed he wasn't enough of a priority to receive the liver that was available. I have also seen a severely underweight patient with bowel cancer die as a result of the very operation intended to save her life.
It's quite possible that if these patients had been seen in different hospitals, under different teams, in similar circumstances, different decisions would have been made, extending both their lives. Does that make one decision wrong and the other right?
These are difficult ethical decisions. They are decisions that happened, that medical teams around the country will be dealing with today, and will certainly face again in the future. Shouldn't we legislate to standardise them? National guidelines do exist, of course, but there are always shades of grey, scenarios that cannot be legislated for.
There is a well-known thought experiment in the field of ethics called the trolley problem. At the heart of the dilemma is a decision about the extent to which you would go to save lives. A runaway trolley (read: train) hurtles down a track towards a group of five people trapped in its path, some way ahead.
The trolley is on course to kill all five. You stand to one side of the track, next to a lever that can switch the points and reroute the trolley down a different branch of the track. If you pull the lever, the five people are saved. But there's a catch: another person is trapped on the alternate branch.
Would you pull the lever? Condemning one person to die, in order to save five others?
When surveyed, most people say they would pull the lever. When I mention this problem to people, I've noticed that the decision to act or not is often accompanied by a strong moral conviction: it's just the right thing to do.
This is where things get interesting. The dilemma can be modified, with additional details testing the strength of this moral conviction, moving us from moral certainty into increasing uncertainty.
I'm going to proceed assuming you would pull the lever, with some scenarios to test your resolve.
First off, the scenario is the same, but you are now told the man on the alternate track is married. Do you pull the lever?
Yes?
What if the scenario is the same, but you are now told the man on the alternate track is a father?
Still yes?
OK: the scenario is the same, but you are now told the man on the alternate track is your father.
Let's assume you are someone who would not pull the lever in the first scenario. We can add details in the opposite direction.
The scenario is the same, but you are now told the man on the alternate track is a violent criminal. Do you pull the lever?
No?
I won't go on... you get the idea.
In all these scenarios, we are searching for an approach we can apply in every situation: a rule that is morally consistent. This is not easy, even for those who study the problem. A 2009 survey of professional philosophers found that 68% would pull the lever (sacrificing one life to save five), 8% would not, and the rest either held another view or could not answer.
Zoe Fritz presents increasingly nuanced medical scenarios that challenge the current laws on organ donation. There are many other variants, in many other domains.
At the heart of all of these different scenarios is the central dilemma: should we save more lives, at the cost of fewer different lives? This is the trolley problem, and it's a problem that does not go away by being ignored. Our choice is action versus inaction, with significant consequences for both.
Hippocrates is often quoted as saying:
First, do no harm.
The implication being that he would not have pulled the lever.
Hippocrates would have struggled to imagine the world we live in today: a world in which not only humans but new technologies, powered by artificial intelligence, are capable of pulling the lever.
AI brings incredible new opportunities to help people and society. But it also presents us with new challenging trolley problems.
For example, imagine a national screening programme, developed to prevent premature deaths from breast cancer. Screening requires a clinical, radiological, and histological (biopsy) assessment. The programme is a success, with a measurable reduction in mortality.
But the screening programme has resourcing problems. We could do so much more, save more lives, but we just don't have enough people.
Now imagine that someone invents an AI that can replace the human in the radiological and histological assessments. Data shows the AI is just as good as a human; in fact, it is better, missing fewer cancers than people do. But there's a catch. Although it misses fewer cancers, the ones it does miss are not the same ones that humans miss. We save more lives, at the cost of fewer different lives. Another trolley problem.
The same can be said for autonomous cars. When self-driving vehicles become safer than human drivers, but the accidents that do occur are not the ones a human would cause, we face the same dilemma: another trolley problem. We have fewer, but different, AI-determined deaths.
From healthcare to transport, from the judiciary to the military, any application where AI replaces a human in a life-or-death decision can present a new trolley problem.
We, of course, do not have to implement these technologies; we don't have to go down this path. Society may not be ready for AI to make these life-or-death decisions. Mistakes made by machines may be a step too far for a sceptical public. Just because we can, doesn't mean we should. But these technologies are already in the wild, both inside and outside healthcare, and governments around the world appear to be racing to capture their share of the new AI economy.
As AI technologies permeate our public services, we need to make sure we understand the real consequences. Our assessments, both quantitative and qualitative, need to be better. The cost of action or inaction will be significant. This is not a question we can ignore. We need a public debate, and we need to make a conscious decision: do we let AI pull the lever? Or should AI first do no harm?