Is AI really better than nothing? That is the question we find ourselves asking as two stories emerged this week about potentially concerning uses for AI. In one, the FDA announced it would be using AI to speed up the process of approving various products as safe for human consumption. On the same day, Meta announced it would be using AI to assess the potential risks its upcoming products pose to humans.
Meta plans to shift the task of assessing its products’ potential harms away from human reviewers, instead leaning more heavily on AI to automate the process. Internal documents seen by the publication note that Meta is aiming to have up to 90 percent of risk assessments fall on AI, NPR reports, and is considering using AI reviews even in areas such as youth risk and “integrity,” which covers violent content, misinformation and more.

The logic behind these moves is clear. When a large backlog builds up and human reviewers cannot keep pace, AI is a natural tool for speeding things along. The dystopian implications are also clear. If an AI decided it wanted to poison us, the FDA's tool could approve all sorts of dangerous products, and Meta's could sign off on launches with known risks to humans.
Ultimately, it is a question that came up in several of my conversations this week as well: where is the line between the decisions AI should be allowed to make on our behalf and the decisions that require human intervention before they are made?