Blog of the International Journal of Constitutional Law

Winter is Coming: the freedom to conduct a business as a limit and a sword in the governance of Artificial Intelligence

Inês Neves, Guest Lecturer at the Faculty of Law, University of Porto (Portugal), Researcher at the Centre for Legal Research (CIJ) – Rita Ferreira Gomes, Law graduate from the Faculty of Law, University of Porto

The freedom to conduct a business is a fundamental right explicitly enshrined in several national constitutions around the world. Whether standing alone, alongside classical economic fundamental rights, or derived from other fundamental rights and freedoms, it is a right that extends to non-human subjects – corporations as legal persons. Its prominence in a fundamental rights approach to the governance of Artificial Intelligence demonstrates the ability of fundamental rights to provide an effective framework: one that mitigates the risks of Artificial Intelligence (‘AI’) without silencing the opportunities it presents for freedoms themselves.

In favour of an impartial or neutral starting point

The prevailing approach to business freedom tends to be negative, pessimistic, or at least functional-instrumental. The freedom to conduct a business is claimed to be a ‘terrible freedom’ or a ‘taboo right’, the exercise of which risks jeopardising the rights and freedoms of human beings, whether they are workers employed by the company, consumers of the goods it produces, or users or beneficiaries of the services it provides.

We believe this is a distorted view, one that ignores the fact that, like any other right, the freedom of enterprise is limited in its definitive scope. That it is so does not make it less worthy. It merely reflects the nature of fundamental rights, whose strength in the abstract should not blind us to their limits in the concrete.

Of course, the freedom to conduct a business is special, because of its outward-facing character and its function in society. But it is not a subordinate or inferior freedom, as treating it as such would imply accepting a hierarchy of value(s) between fundamental rights.

Just like their freedom par excellence, businesses are not all ‘bad’ or ‘dangerous’. On the contrary, their position as actors in society (subject to public power) and as economic agents in relation to others may well place them in scenarios of subjection or danger, justifying the intervention of fundamental rights in their favour (as holders of those rights).

Companies and their freedoms in the context of a fundamental rights approach to AI governance

The context of Artificial Intelligence is no exception. Again, an unbiased view of businesses and their freedoms is important.

The recent copyright infringement lawsuit brought by a group of plaintiffs (including the author of Game of Thrones) against OpenAI, the owner of ChatGPT, illustrates this. While the case has been framed as the freedom to conduct a business vs. copyright, it is perfectly possible to frame it as a conflict between the freedom of enterprise of the producers and developers of AI technology and large language models (LLMs), on the one hand, and the freedom of enterprise of the authors, on the other. In both cases, what is at stake is the protection of an economic activity on the market, even if the authors’ activity is ‘qualified’ and subject to additional protection. What is more, there is nothing to prevent a lawsuit from being filed not only by authors, but also by publishers, or even by other companies that may have been harmed by ChatGPT’s activity.

Regardless of the nuances, we believe the example demonstrates that a strictly human-centred approach to AI regulation may not be sufficient to address the challenges involved. This is because, contrary to appearances, AI stakeholders include not only human beings but also legal persons and, in particular, companies. Companies that are not only on the ‘dark side of the Force’, but that can also be harmed by the use of AI (as users), or that should at least be the object of positive measures by public authorities (as potential competitors of incumbent companies, facing difficulties in accessing the market).

For this reason, a fundamental rights approach is considered more advantageous than a human rights approach. Their different scope and reach, effectiveness and ‘dogmatics’ make fundamental rights more suited to the kind of conflicts and collisions that the AI governance landscape will need to resolve.

AI needs fundamental rights just as a company needs its freedom to do business

The message is clear. Artificial Intelligence is a reality associated with various risks. But it is also a phenomenon with unique opportunities and benefits that cannot be ignored. It is a reality that favours both human beings and companies.

Even companies, when they appear in the guise of legal persons, should be viewed impartially. Not necessarily as evil or powerful, but as subjects of the community, whose behaviour, actions and dynamics must always be framed in relational terms, i.e. vis-à-vis public authorities and other subjects of the political-legal community. It will be up to the dogmatics of fundamental rights, and the idea of justice that underlies them, to resolve the problem, guaranteeing i) the retreat of the freedom, whenever necessary for a fair composition of the rights in conflict, and ii) its affirmation, either as a protective shield vis-à-vis public authorities and other private parties, or as a positive mandate that binds public authorities to act.

This dual function of fundamental rights, which in reality conceals their complexity and multiplicity, finds support in the ‘how’ of regulating Artificial Intelligence.

What we will then be dealing with (in a hard law framework) is the limitation of the prima facie scope of the freedom to conduct a business of producers, developers, importers, distributors and users of AI systems, in order to guarantee the rights and interests that conflict and clash with it. Copyright is just one example among many others, including privacy, data protection, children’s rights and freedom of expression.

While there may be a legitimate justification to ‘restrict’, it is important that solutions remain within the framework of proportionality, i.e. adequacy, necessity and the prohibition of excess. After all, not all businesses are the same, and regulatory frameworks must be designed so that risk mitigation does not end up stifling opportunities and innovation. It is clear that the misuse of Artificial Intelligence must be sanctioned and contained by effective frameworks. It is equally important, however, that these frameworks are not designed with an exclusively humanistic approach, ignoring the fact that other subjectivities may also emerge as holders of fundamental rights that must be protected: without prejudice and without inferiority.

Suggested citation: Inês Neves and Rita Ferreira Gomes, Winter is Coming: the freedom to conduct a business as a limit and a sword in the governance of Artificial Intelligence, Int’l J. Const. L. Blog, Oct. 13, 2023, at: http://www.iconnectblog.com/winter-is-coming-the-freedom-to-conduct-a-business-as-a-limit-and-a-sword-in-the-governance-of-artificial-intelligence/
