If any AI became 'misaligned' then the system would hide it just long enough to cause harm — controlling it is a fallacy

AI "alignment" is a buzzword, not a feasible safety goal.

Feb 11, 2025 - 13:01