
A contribution on artificial intelligence and ethics, comparing two authors who have written on the subject: Luciano Floridi and Katherine Hayles.


What kind of ethics does AI demand? And when we ask it, are we even having the same conversation? These two questions are often treated as one and the same. But in practice, they pull in opposite directions. To ask what AI ethics requires is to presume shared terms about what AI is, how it acts, and what ethics is for. To ask whether we’re having the same conversation is to question those presumptions entirely.

This tension marks the contrast between the work of Luciano Floridi and N. Katherine Hayles, whose approaches to AI could not be more distinct. Their differences are not simply differences of emphasis. They reflect fundamentally divergent frameworks for making sense of artificial cognition, responsibility, and meaning.


Floridi develops a philosophy of information that treats ethics as a design challenge. AI, on his account, is an artificial agent—perhaps not a moral agent in the classic (Kantian) sense, but certainly a participant in morally significant contexts. Ethics becomes a matter of infrastructural integrity: how we build transparency, accountability, and human values into technical systems. His aim is to make artificial agents ethically tractable, so they can operate responsibly within the infosphere we now inhabit. The clarity of his approach is part of its strength. He offers taxonomies and principles calibrated for governance—tools for engineers and legislators. The result is something like a moral operating system: elegant, procedural, and resolutely rationalist.


Hayles begins from a very different place: not with systems we build, but with systems that build us. For her, cognition is not a faculty to be simulated but a distributed process already unfolding across human and nonhuman assemblages. AI operates as a recursive actor in networks of meaning-making. In this view, ethics cannot be reduced to rules or machine-readable values, because cognition itself is no longer an interior property of a bounded subject. Ethics, then, arises from entanglement. Where Floridi’s ethics is something we apply to a system, Hayles offers not a theory of ethical action in the instrumental sense but a theory of epistemological formation. This distinction is fundamental.

It may be tempting to say that both thinkers are indispensable and leave it there. I hope you won’t. That move risks flattening the conflict.

They do not represent two versions of the same problem. They represent two different senses of what counts as a problem.

Floridi’s work is usable. His strength lies in implementation. Hayles is harder to apply, and that difficulty is a strength. Her work demands we reconsider assumptions: about AI, about thought, and about the conditions of our implication in cognitive systems. In the end, we face two questions that are not the same.

What kind of ethics does AI demand?

And what kind of conversation makes that question intelligible in the first place?


Published June 12, 2025

Owen Matson, Ph.D. / Designing AI-Integrated EdTech Platforms at the Intersection of Teaching, Learning Science, and Systems Thinking

drmatsoned@gmail.com