The advent of what business strategists are fond of calling the “Cambrian explosion” in artificial intelligence (AI) has spawned a legion of concerns about downstream effects. It’s hardly an argument at this point to say AI can have unintended, even harmful consequences for social life. The many calls to make AI fair and accountable, transparent and ethical, attest to such impacts, or the threat thereof. Acting on these calls often requires description and explanation: what, upstream, caused downstream effects? But as Jonathan Roberge and Michael Castelle claim in the introduction to their edited volume, The Cultural Life of Machine Learning, the up-/downstream dynamic, when used as an explanatory frame, tends to uphold what is “largely (if unconsciously) a positivist project” (12), one for which the harms of AI ultimately require technical solutions. It does so, they explain, by eliding AI’s own “sociotechnical genesis” (3), the dense mesh of decisions,...
