Ars on your lunch break: The fate we might be making for ourselves


Suck it, Skynet.

Today we’re presenting the second installment of my conversation with Naval Ravikant about existential risks. Naval is one of tech’s most successful angel investors and the founder of multiple startups—including seed-stage investment platform AngelList. Part one of our conversation ran yesterday. If you missed it, click right here. Otherwise, you can press play on the embedded audio player or pull up the transcript—both of which are below.

This interview first appeared in March as two back-to-back episodes of the After On Podcast (which offers a 50-episode archive of unhurried conversations with world-class thinkers, founders, and scientists). As I mentioned in yesterday’s article, my conversation with Naval led to a last-minute invite to give a related talk at April’s TED conference. TED posted that talk to their site this morning, and if you feel like watching it, it’s right here:

“How synthetic biology could wipe out humanity—and how we can stop it.”

My talk focuses on the dangers posed by abuses of synthetic biology. Naval and I will tackle that subject in our next two installments. Today, we focus on that time-honored Hollywood staple—super AI risk.
