Artificial Intelligence technology is reconfiguring the balance of power between capital and labor, accelerating the widening of social inequality we have witnessed over the past two decades. The proverbial valley is getting wider and deeper. Social inequality of this magnitude stresses political cohesion, as it gives a few people great influence over governments and leaves many more with none. Governments that care about inequality and cohesion need to reinstate many of the substantive redistribution programs that vanished in the last century. In this respect, AI ethics can arguably become more robust by foregrounding issues of wealth redistribution.
This is not simply about the past. Rather, we have to take into account the subtle social changes that have occurred, and are occurring, in politics and technology. Due to current geopolitical undercurrents, such as the strengthening of alliances that challenge the post-Cold War ruleset, or the selective decoupling of leading powers from the world market, the nature of governance is shifting day by day. Some institutions, such as the World Trade Organization, seem poised to lose influence, while secretariats such as the African Continental Free Trade Area are on the rise. This reconfiguration of power, influence, and clout gives new players new opportunities to shape the global discussion about AI regulation.
At the same time, states necessarily remain relevant actors for AI governance. Indeed, regulating at the supra-national level, as in the African Union or European Union, may foster perceptions that governance occurs well beyond the influence of ordinary people. Although popular support is not the sole consideration for regulatory initiatives, little is gained by the impression that these initiatives are indifferent to citizens' views. Efforts toward inclusivity in the global governance of AI must exercise judgment about where to apply political pressure, with whom to apply it, and which institutions will endure in the long term.
In contrast to supra-national bodies, there are emerging ideas for data unions, which aspire to negotiate payment terms for user-generated data on behalf of a user community. Similar ideas are being proposed by activists in the Global South who wish to push for community data sovereignty as part of the broader effort to decolonize digital society. Although these campaigns are often undertaken with great care and under firm conditions, there is still value in ensuring that neither inadvertently reinforces the commodification of data, even as the groups undertaking economic exchange over data may change. It is the commodification of data that incentivizes the kind of surveillance capitalism Shoshana Zuboff has written about.
On this topic, the affordances of platforms can introduce new political issues and act as conduits for creating coalitions and movements, as Zuboff describes. But they also reveal and amplify existing social tensions, such as xenophobia. In approaching the regulation of platforms we must be careful not to scapegoat them for all social ills. Platforms are frequently and unfairly blamed for long-standing failures of government social policy. This is not to say that platforms are innocent. Rather, some of what appears on platforms expresses wider social phenomena, many set in motion well before platforms came on the scene.
In a global race to develop AI and similar advanced technologies, there is little room for informed researchers promoting due caution and regulatory oversight. Shareholders do not look kindly on people who wish to put guardrails on the activities that create superprofits. Under conditions where there is a fiduciary responsibility to maximize profit and returns to shareholders, voluntary self-regulation based on AI ethics becomes a kind of ornamentation. So there is absolutely a need for due caution about any agenda that seeks to decentralize regulatory authority, as this is the crowbar used to pry away the market-shaping powers of governments, many of which are legitimate representative democracies.
Because marketplaces contain a variety of firms and interests, they offer another vector for governance. Consider how the insurance sector might respond to large language models. With AI applications like ChatGPT rife with hallucinations, misleading content, and harms from racial, gender, class, and geographic bias, it is likely only a matter of time before one of these systems causes a series of errors in an organization. There are plausible scenarios where errors in software could result in injury or loss of life, for example. With corporate liability bringing damages, the insurance sector may introduce provisions that curtail the use of AI in certain professions or for certain kinds of tasks.
Given the wider social situations I have covered, we must have reasonable expectations of the field of AI ethics. Can AI ethics stop exploitation, for example? Ethical frameworks do not enforce themselves. This point is worth keeping in mind given the rapid multiplication of AI ethical frameworks. While the proliferation of these frameworks is heartening (it shows that researchers and engineers do care about the consequences of their products), few of them centrally address class divides and the transfer of wealth from the poor to the rich. Nor do many directly seek to curtail shareholders' rights or prioritize full employment over automation.
This brings us to the question of alignment, a topic that is rather empty if it simply means that outcomes match the intents and purposes of an AI's design team. A more impactful question is whether these technicians align AI to promote human rights or a private property regime. A more muscular AI ethics would take stances on all the issues I have raised.
An adequate global digital compact must begin by centrally addressing power and profit, and must entrench the public interest as the overriding principle in all circumstances. This is the starting point for discussion.