
Artificial General Intelligence (AGI), which could arrive by 2030, may pose existential threats, warns a Google DeepMind paper. It identifies four major risk types: misuse, misalignment, mistakes, and structural risks. Among these, misuse, in which AGI is deliberately used to cause harm, is a central concern. The paper stresses that deciding what counts as severe harm should not be left to researchers alone but should reflect society's collective values and risk tolerance.
Although it does not detail specific extinction scenarios, the paper calls for strong preventive safeguards. Given AGI's potential power, DeepMind emphasizes that early action is critical to prevent irreversible damage, and that international cooperation on AGI development and risk management is urgently needed.
DeepMind CEO Demis Hassabis suggests a global oversight model with three parts: a collaborative research body like CERN, a monitoring agency similar to the IAEA, and a UN-style governing organization, all working to ensure AGI evolves safely and ethically.