If we can’t trust our own decision-making processes, how can we build AI systems that accurately reflect what we truly need?
We must confront the AI alignment problem before our own creation forces us to rediscover our purpose. What happens when our greatest creation outsmarts us—not maliciously, but by giving us exactly what we think we want? A two-tiered trust issue is emerging that challenges our needs and desires. It runs on two fronts: first, whether we can trust AI to align with human goals, and second, whether we can even trust ourselves to know what we truly want.