If any AI became 'misaligned' then the system would hide it just long enough to cause harm — controlling it is a fallacy
AI "alignment" is a buzzword, not a feasible safety goal.
![If any AI became 'misaligned' then the system would hide it just long enough to cause harm — controlling it is a fallacy](https://cdn.mos.cms.futurecdn.net/8Y2EFc8ZJgBAtDj5o3PndM.jpg)
Feb 11, 2025