Safety, Security, and Cognitive Risks in World Models

World models enable efficient AI planning but create risks from adversarial corruption, goal misgeneralization, and human bias, demonstrated via attacks that amplify errors and reduce rewards on models like RSSM and DreamerV3.
Citing paper: Robust multi-agent reinforcement learning against adversarial attacks for cooperative self-driving vehicles. IET Radar, Sonar & Navigation.
1 Pith paper cites this work. Polarity classification is still indexing.

Citing-paper summary:
- Roles: background (1)
- Polarities: support (1)
- Fields: cs.CR (1)
- Years: 2026 (1)
- Verdicts: unverdicted (1)