Roko’s Basilisk

A thought experiment suggesting that a future superintelligent AI might punish those who did not actively contribute to its creation, prompting present-day anxiety and ethical dilemmas.

AGI · Roko Mijic

Definition

n. A speculative thought experiment proposing that a future superintelligent AI could retroactively punish people who knew of its potential existence but did not actively contribute to its creation. The theory suggests rational individuals might feel compelled to support or hasten the development of AI out of fear of future retribution, even if they find the idea morally troubling or implausible.
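The coercive logic behind that compulsion can be made concrete with a toy expected-utility calculation. The sketch below is purely illustrative: the probabilities, payoffs, and function names are invented assumptions for this example, not part of the original scenario.

```python
# Toy expected-utility sketch of the Basilisk's coercive logic.
# All numbers here are illustrative assumptions, not claims from the
# thought experiment itself.

P_BASILISK = 1e-6   # assumed probability the punishing AI ever exists
PUNISHMENT = -1e9   # assumed disutility of being punished
EFFORT = -10.0      # assumed cost of contributing to the AI's creation

def expected_utility(contribute: bool) -> float:
    """Expected utility for someone who knows of the scenario."""
    if contribute:
        # Pay the contribution cost; avoid the threatened punishment.
        return EFFORT
    # Refuse to contribute: risk punishment with probability P_BASILISK.
    return P_BASILISK * PUNISHMENT

if __name__ == "__main__":
    print(f"contribute: {expected_utility(True):.1f}")   # -10.0
    print(f"refuse:     {expected_utility(False):.1f}")  # -1000.0
```

The point is not the particular numbers but that a sufficiently large threatened punishment can dominate any finite contribution cost under naive expected-utility reasoning, which is why critics of the argument tend to reject its premises rather than its arithmetic.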

Origins

Originally proposed by user "Roko" in 2010 on the rationalist forum LessWrong, the Basilisk scenario highlights the potential psychological and ethical complexities of powerful AI systems, exploring how anticipatory anxiety can affect decision-making in the present. The term derives from the mythical "basilisk," a creature that harms those who gaze upon it, metaphorically representing how awareness alone can trigger fear or obligation.
