Stanford professors discuss ethics involving driverless cars, Stanford News
Stanford scholars, researchers discuss key ethical questions self-driving cars present
Self-driving technology presents vast ethical challenges and questions. Several professors and interdisciplinary groups at Stanford who are tackling these issues offer their perspectives on the topic.
By Alex Shashkevich
The self-driving car revolution reached a momentous milestone with the U.S. Department of Transportation’s release in September 2016 of its first guidebook of rules on autonomous vehicles.
Many members of the Stanford community are debating ethical issues that will arise when humans turn over the wheel to algorithms. (Image credit: AlealL / Getty Images)
Discussions about how the world will change with driverless cars on the roads and how to make that future as ethical and responsible as possible are intensifying.
Some of these conversations are taking place at Stanford. The topic of ethics and autonomous cars will be discussed during a free live taping of an episode of Philosophy Talk, a nationally syndicated radio show co-hosted by professors Ken Taylor and John Perry, on Wednesday, May 24, at the Cubberley Auditorium.
Stanford News Service talked to several Stanford scholars for their insights on the most important ethical questions and concerns when it comes to letting algorithms take the wheel.
Trolley problem debated
A common argument on behalf of autonomous cars is that they will decrease traffic accidents and thereby increase human welfare. Even if true, deep questions remain about how car companies or public policy will engineer for safety.
“Everyone is saying how driverless cars will take the problematic human out of the equation,” said Taylor, a professor of philosophy. “But we think of humans as moral decision-makers. Can artificial intelligence actually replace our capacities as moral agents?”
That question leads to the “trolley problem,” a popular thought experiment ethicists have mulled over for about fifty years, which can be applied to driverless cars and morality.
In the experiment, one imagines a runaway trolley speeding down a track which has five people tied to it. You can pull a lever to switch the trolley to another track, which has only one person tied to it. Would you sacrifice the one person to save the other five, or would you do nothing and let the trolley kill the five people?
Engineers of autonomous cars will now have to tackle this question and other, more complicated scenarios, said Taylor and Rob Reich, the director of Stanford’s McCoy Family Center for Ethics in Society.
“It won’t be just the choice between killing one or killing five,” said Reich, who is also a professor of political science. “Will these cars optimize for overall human welfare, or will the algorithms prioritize passenger safety or those on the road? Or imagine if automakers decide to put this decision into the consumers’ hands, and have them choose whose safety to prioritize. Things get a lot trickier.”
Minimizing risk
But Stephen Zoepf, executive director of the Center for Automotive Research at Stanford (CARS), along with several other Stanford scholars, including mechanical engineering Professor Chris Gerdes, argues that agonizing over the trolley problem isn’t helpful.
“It’s not productive,” Zoepf said. “People make all sorts of bad decisions. If there is a way to improve on that with driverless cars, why wouldn’t we?”
Zoepf said the more important ethical question is what level of risk society would be willing to incur with self-driving cars on the road. For the past several months, Zoepf and his CARS colleagues have been working on a project on the ethical programming of automated vehicles.
“We say, ‘let’s look at the tradeoffs inherent in safety and mobility,’” Zoepf said. “Should there be a designated right of way for automated vehicles, for example, or how fast should we allow automated vehicles to travel?”
Loss of jobs
Another ethical concern is the number of jobs that will be lost if self-driving vehicles become the norm, Taylor and Reich said.
More than 3.5 million truck drivers haul cargo on U.S. roads, according to the latest statistics from the American Trucking Associations, a trade association for the U.S. trucking industry.
“You can’t outsource driving,” Taylor said. “Technology has always destroyed jobs but created other jobs. But with the current technology revolution, things may look different.”
Technological developments can cause the loss of jobs. But tech companies and governments can and must take steps to prepare for those losses, said Margaret Levi, professor of political science and the director of the Center for Advanced Study in the Behavioral Sciences.
“We have to be ready for this job loss and know how to deal with it,” Levi said. “That’s part of the ethical responsibility of society. What do we do with people who are displaced? But it is not only the transformation in labor. It is also the transformation in transport, private and public. We must plan for that, too.”
Transparency and collaboration
Some scholars have also pointed out the need for greater transparency in the design of driverless cars.
“Should it be transparent how the algorithms of these cars are made?” Reich said. “The public interest is at stake, and transparency is an important consideration to inform public debate.”
But no matter their stance on a particular issue with self-driving cars, the scholars agree that there needs to be greater collaboration among disciplines in the development stage of this and other revolutionary technology.
“We need social scientists and ethicists on the design teams from the get-go,” Levi said. “That won’t resolve all the questions, but it would at least be a start to dealing with some of them.”
At Stanford, some of these collaborations are already taking place.
Jason Millar, an engineer and postdoctoral research fellow with the Center for Ethics in Society, is also working on the CARS ethical programming project. He is tackling how to translate knowledge developed in academic and philosophical circles into the daily design work of technology and artificial intelligence products.
“The idea is to address the concerns upfront, designing good technology that fits into people’s social worlds,” Millar said.