Anthropic's Chief Scientist Says We're Rapidly Approaching the Moment That Could Doom Us All
Briefly

"By 2030, Kaplan predicts, or as soon as 2027, humanity will have to decide whether to take the "ultimate risk" of letting AI models train themselves. The ensuing "intelligence explosion" could elevate the tech to new heights, birthing a so-called artificial general intelligence (AGI) which equals or surpasses human intellect and benefits humankind with all sorts of scientific and medical advancements. Or it could allow AI's power to snowball beyond our control, leaving us at the mercy of its whims."
"Geoffrey Hinton, one of the three so-called godfathers of AI, famously declared he regretted his life's work, and has frequently warned about how AI could upend or even destroy society. OpenAI Sam Altman predicts that AI will will wipe out entire categories of labor. Kaplan's boss, CEO Dario Amodei, recently warned AI could take over half of all entry-level white-collar jobs, and accused his competitors of "sugarcoating" just how badly AI will disrupt society."
Kaplan warns that a near-term decision about whether to let AI models train themselves could trigger an "intelligence explosion" producing AGI that matches or surpasses human intellect, yielding major scientific and medical advances or power that snowballs beyond our control. He expects that decision to arrive as soon as 2027 and no later than 2030, and predicts AI will be able to do most white-collar work within two to three years. He is cautiously optimistic about keeping AI aligned with human interests, while stressing that letting powerful AIs train other AIs is an extremely high-stakes risk.
Read at Futurism