Recently, the risks of Artificial Intelligence and the need for ‘alignment’ have been flooding our cultural discourse – with Artificial Super Intelligence cast as both the most promising goal and the most pressing threat. But amid the moral debate, surprisingly little attention has been paid to a basic question: do we even have the technical capability to guide where any of this is headed? And if not, should we slow the pace of innovation until we better understand how these complex systems actually work?
This is a companion discussion topic for the original entry at https://thegreatsimplification.libsyn.com/algorithmic-cancer-why-ai-development-is-not-what-you-think-with-connor-leahy