artificial intelligence

Would you ride in a driverless car? GM says its driverless cars could be in fleets by next year, but polls show many Americans remain skeptical of the technology. Are driverless cars safe? Who would be liable in an accident? Advocates say driverless cars could benefit the environment and improve everyday life.

We talk about our future as drivers…or riders. Our guests:

In July, Tesla and SpaceX CEO Elon Musk said artificial intelligence, or AI, is a “fundamental existential risk for human civilization.” Musk wasn’t alone in sharing those concerns, leading many people to ask what will happen when humans develop superintelligent AI. As AI continues to advance, it raises questions about the job sector (Will it eliminate jobs or create them?), the education system (Could robots eventually replace teachers?), human safety (Could AI systems outsmart us and lead to our demise?), and more.

This hour, our panel of experts helps us understand AI and its implications. In studio:

  • Henry Kautz, director of the Institute for Data Science at the University of Rochester
  • Dhireesha Kudithipudi, professor and chair of the graduate program in computer engineering at RIT
  • Matt Huenerfauth, professor of information sciences and technologies at RIT

Note: the recording of this program was corrupted, so we are unable to provide a replay of this show.

Are human beings about to unleash a scientific nightmare? Right now, researchers are working toward artificial superintelligence (ASI): a machine more intelligent than any human being. Imagine a machine one thousand times smarter than the smartest human. Would that machine continue to carry out its programmed tasks, or could it gain autonomy? And if it did, would we have something to fear? Author James Barrat argues that ASI is the greatest risk to our future, and this hour he explains why he wrote the book Our Final Invention.