Today's "machine-learning" systems, trained by data, are so effective that we've invited them to see and hear for us, and to make decisions on our behalf.
But alarm bells are ringing. Recent years have seen an eruption of concern as the field of machine learning advances. Systems cull résumés until, years later, we discover that they have inherent gender biases. When the systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem.