Justice and artificial intelligence: a possible collaboration?

by Gregorio Torchia

For years, we have been exploring possible practical applications of algorithms and artificial intelligence systems in the world of justice, for the most varied purposes: from the simple search for precedents in similar cases all the way to the drafting of a judgment or decree ... by a robot judge!

Although the drive to innovate has generally been positive, one of the main problems with these technologies is that they are often affected by confirmation bias (a phenomenon in which the premises set at the outset effectively become filters built into the platform) and/or by the prejudices of those who developed them. The negative effect, already observed in some foreign experiences, is that the data processing is "steered" and yields the results that were desired.

In the United Kingdom, for example, the HART software, currently in use in some judicial offices, has helped to support and speed up the work, but it has also been accused of discriminatory treatment of certain categories of people, who found themselves penalized, without proper justification, in the assignment of the so-called "risk level".

It is evident that any mathematical system that analyzes data is agnostic about the value of the underlying data. Algorithmic decision-making systems, however, can only take in and process the data they are given, distortions included, so they end up "absorbing" any prejudices of those who created the system, which then become the basis and premise of the results they produce.

An algorithm can be used to predict recidivism rates among offenders, but if its inputs are biased against people of specific ethnicities, it will overestimate the risk of recidivism for those ethnicities and underestimate it for others. Similarly, a predictive language algorithm can estimate how likely it is that certain words appear together, such as "Paris" with "France" or "Seoul" with "South Korea"; but associating "man" with "doctor" and "woman" with "housewife" means taking for granted a prejudice that casts men as graduates and women as housewives, so that the search results are skewed simply because the data fed into the system reflect that prejudice.
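To make the mechanism concrete, here is a minimal sketch in Python; the figures, group labels and "recording rates" are hypothetical and invented purely for illustration. It shows how a naive scoring model that learns from records collected unevenly across two groups ends up assigning very different risk levels to groups whose real behaviour is, by construction, identical.

```python
import random

random.seed(0)
TRUE_RECIDIVISM_RATE = 0.20  # identical for both groups by construction

def generate_records(group, n, recording_rate):
    """Simulate historical records: a re-offence only enters the data
    if it is actually detected, and detection depends on how heavily
    the group is policed (recording_rate)."""
    records = []
    for _ in range(n):
        reoffended = random.random() < TRUE_RECIDIVISM_RATE
        recorded = reoffended and random.random() < recording_rate
        records.append((group, recorded))
    return records

# Hypothetical scenario: group "A" is observed 90% of the time,
# group "B" only 30% of the time.
data = generate_records("A", 10_000, 0.9) + generate_records("B", 10_000, 0.3)

def learned_risk(group):
    """A naive 'model' that simply learns the recorded re-offence rate
    per group -- it has no way of seeing the uneven data collection."""
    outcomes = [recorded for g, recorded in data if g == group]
    return sum(outcomes) / len(outcomes)

for g in ("A", "B"):
    print(f"group {g}: learned risk {learned_risk(g):.1%} "
          f"(true rate {TRUE_RECIDIVISM_RATE:.0%})")
# Typical output: group A around 18%, group B around 6% -- a threefold gap
# that reflects how the data were gathered, not how the groups behave.
```

The gap the model "learns" comes entirely from how the data were collected, not from any difference in behaviour: exactly the kind of distortion described above.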

If we add to this a general tendency to trust the neutrality and accuracy of computers far more than that of humans, we immediately realize how automated decision-making systems powered by Artificial Intelligence can instead create a second-class citizenship, to the detriment of that part of the population that will find it hardest to assert its rights.

While awareness of the discriminatory effects described above seems to have grown in recent years, it is precisely the general and uncritical tendency to trust systems based on Artificial Intelligence more than human action that carries less obvious but no less alarming dangers. Delegating technical skills to automated mechanisms becomes progressively more dangerous as those mechanisms spread: the resulting shift of competence from people to machines, machines that are even able to correct and refine themselves, risks creating a scenario that undermines the very foundations of society.

Basically, the problem of automation "prejudices" becomes more acute in the application of specific and complex sector regulations and procedures: think of the use of drones to detect unauthorized building works, or of facial-recognition cameras to monitor social behavior.

In these areas, if companies increasingly rely on automated systems, considering them better equipped than humans to master a wide range of complicated rules, then discriminatory settings, combined with the self-management capacity of AI-based systems, will genuinely put people's freedom at risk. A new marker of social difference will emerge: the unequal degree of access to the configuration and management of systems that will be ever more pervasive in our daily lives, able not only to monitor our conduct and our choices but also to steer them. Think of a system supporting political propaganda that processes data (collected through Google, Twitter, Facebook, Instagram, etc.) to build profiles of a country's inhabitants, and so comes to know the values they believe in, their desires and fears, their religious convictions, and so on.

Big Brother has really arrived this time! Are we ready to hand over the management of our choices and our fundamental freedoms to instruments over which we will have less and less control?
