
0070: We Don’t Get to Choose
Or do we?   http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0070-2018-09-30.mp3
Is there an existential risk to humanity from AI? If so, what do we do about it?

Ted interviews Jacob Ward, former editor of Popular Science, journalist at many outlets. Jake’s article about the book he’s writ...

Sane or insane?

We love the OpenAI Charter. This episode is an introduction to the document and gets pretty dark. Lots more to come on this topic!

http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0066-2018-04-01.mp3

There’s No Fire Alarm for Artificial General Intelligence by Eliezer Yudkowsky   http://traffic.libsyn.com/friendlyai/ConcerningAI-epis...

We discuss Intelligence Explosion Microeconomics by Eliezer Yudkowsky   http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0064-...

Ted gave a live talk a few weeks ago.

  http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0062-2018-03-04.mp3

Some believe civilization will collapse before the existential AI risk has a chance to play out. Are they right?

Timeline For Artificial Intelligence Risks Peter’s Superintelligence Year predictions (5% chance, 50%, 95%): 2032/2044/2059 You c...

SpectreAttack.com               http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0059-2018-01-14...

There are understandable reasons why accomplished leaders in AI disregard AI risks. We discuss what they might be. Wikipedia’s list of...

If the Universe Is Teeming With Aliens, Where is Everybody?                 http://traffic.libsyn.co...

Julia Hu, founder and CEO of Lark, an AI health coach, is our guest this episode. Her tech is really cool and clearly making a positive diff...

Ted had a fascinating conversation with Sean Lane, founder and CEO of Crosschx.

We often talk about how no one really knows when the singularity might happen (if it does), when human-level AI will exist (if ever), when...

Great voice memos from listeners led to interesting conversations.

We continue our mini series about paths to AGI. Sam Harris’s podcast about the nature of consciousness Robot or Not podcast See also:...

Rodney Brooks article: The Seven Deadly Sins of Predicting the Future of AI

Third in a series about the future of current narrow AIs.

Read After On by Rob Reid, before you listen or because you listen.

This is our 2nd episode thinking about possible paths to superintelligence focusing on one kind of narrow AI each show. This episode is abou...

For show notes, please see https://concerning.ai/2017/08/29/0048-ai-xprize-and-thrival-festival-special-mini-episode/

How might we get from today's narrow AIs to AGI? This episode focuses on tools.

Is all AI-involved science fiction the same?

We talked about the Nexus Trilogy of novels as a way to further our thinking about the wizard hat idea Tim Urban wrote about in his article...

Are we living our lives as if AI were an existential threat?

Listener Feedback this episode

Tim Urban's article at Wait But Why: Elon Musk's Neuralink and the Brain’s Magical Future

Mostly a listener feedback episode. Lots of great stuff here!

We need better language to talk about these difficult technical topics. See https://concerning.ai/2017/03/31/0039-we-need-more-sparrow-fable...

See https://concerning.ai/2017/03/17/0038-we-dont-want-to-die/

Listener Voicemail & Comments Eric’s voicemail Evan’s comment (Our interview with Evan: ep 0011: Evan Prodromou, AI practiti...

Main topic of this show: Unexpected Consequences of Self Driving Cars by Rodney Brooks

What should our values be? Could "Life is Precious" replace the Consumption Story?

Do we need to do philosophy on a deadline? Can AI help make us better humans?

Wind up your propeller hats! This one is a doozy. Hopefully someone can explain it to me (Ted).

In which we talk about Westworld, among other things.

Too time constrained for show notes this time. If you want to send us notes to be added here, please do! The best place to reach us is the Co...

It's been a while since we recorded. What have we been up to?

We recorded this episode on Nov 6, 2016, two days before the US election. Sorry it’s taken so long to get out. Also, no show notes due...

Nick Bostrom’s Superintelligence Fiction from Liu Cixin: The Three Body Problem The Dark Forest Death’s End We’re a lot more bea...

Korey’s comment: … one question you asked on ‘The Locality Principle’, was what other people are doing to avert a po...

http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0026-2016-09-18.mp3 These notes won’t be much use without listening to the e...

Some things we talked about: Companies developing narrow AI without giving a thought to AI safety, because just getting the thing to work...

No notes this time, just a speculative conversation about some possible implications of the idea that we could be living in a simulation. Su...

Are people better than robots?

This episode, we talk about this paper: A Few Useful Things to Know about Machine Learning

We want robot surgeons, bus and taxi drivers and investment advisors. Do you?