According to Stefano Quintarelli, entrepreneur and venture capitalist, it is the one already making a difference in how businesses develop. And it has nothing to do with generative AI.
“The problem is we who use the knife, not the knife.” The issue, says Stefano Quintarelli, is not artificial intelligence itself but the people who develop and use it. Quintarelli is a serial entrepreneur and a leading name in the digital world in Italy and beyond: founder of I.net, the first Italian internet provider, he played a primary role in the creation and development of the Italian internet ecosystem. A member of parliament from 2013 to 2018, he created Spid, Italy’s single digital identity system, and served on the European Commission’s High-Level Expert Group on Artificial Intelligence, one of the advisory bodies that prepared the ground for the recent AI Act.
AI, a new way of making software
“Artificial intelligence is a name that describes a way of making software,” explains Quintarelli, anticipating his speech at the next annual meeting of the Global Alliance for Banking on Values (GABV), the summit of the global network of ethical finance, hosted in Italy for the first time by the Banca Etica Group between Padua and Milan from 26 to 29 February 2024.
“In particular, it is a new way of making software that allows us to tackle problems different from those that could previously only be addressed algorithmically. It is true that banks use a lot of technology and a lot of data analysis, but always with traditional statistical systems – the entrepreneur continues –. Using artificial intelligence, and therefore a new way of making software, allows us to tackle new problems and create new applications.
In the future, data analysis will be much more pervasive; today it is limited because it is expensive, but things will change. There will be new forms of interaction with customers through artificial intelligence, and document management will become easier. Many processes will be made more efficient, both towards users and within the bank”.
Speaking of ethics: many say they are convinced that artificial intelligence must have one. Others, however, think that assigning indispensable values to a technology, however powerful, risks doing more harm than good. And in any case, even the most astute in the sector wonder whether the cornerstones laid down in the AI Act are sufficient, or whether this is just the beginning of an inevitable new (and lucrative) regulatory season.
“I believe there is a basic misunderstanding on this issue, because artificial intelligence, which is a technology, is attributed characteristics it does not have; it is anthropomorphised – says Quintarelli –. Artificial intelligence is neither ethical nor unethical, just like a knife. It is the use we make of it that may or may not be ethical. If we turn the knife into an instrument for killing, it is not the instrument that is unethical, it is the person performing the action.”
For the former president of the steering committee of the Agency for Digital Italy, ethics must reside in those who produce the tools, conceive them, use them and deploy them, not in the tool itself. In other words, “we are the ones who have to be ethical. As for the AI Act, it sets limits that are useful, but we must avoid excessive euphoria and the idea that AI has miraculous qualities”.
Long-termism? The vast majority of professionals do not deal with it
The European Union working group, set up in 2018 and concluded in 2021, produced as its main results the ethical guidelines for the creation of artificial intelligence systems and the policy recommendations for the Commission, which later fed into the text approved by the European institutions.
Yet the most powerful technologies of the moment, from those of OpenAI and Microsoft to those of Google and Meta, seem to repeat the pattern we have witnessed over the last thirty years: the most powerful platforms are once again in the hands of a handful of private companies, mostly American, which share what they want, if and when they want. And they often seem to be led by Silicon Valley multibillionaires taken with so-called long-termism, that is, a concern for the planet’s population in a remote future that often risks deepening the injustices of today’s world.
“I dispute the idea that a significant part of them is inspired by long-termism – replies Quintarelli, criticising a certain journalistic narrative –. I don’t think that’s the case; it’s just that a few of them are very present in the media and therefore get more coverage. Piero Angela once told me a story about his first job as a journalist. He was sent to cover the launch of a ship, and when he came back with the piece his editor told him: ‘No, you don’t understand. The piece is relevant if the ship sinks, if the launch goes badly, not if it goes well.’ Here, in short, is why the picture mostly painted is one of catastrophic scenarios associated with long-termism”. In other words: those who follow this line are few (though rich and powerful, it’s true). But “the vast majority of professionals do not consider these things. In our group of experts in Brussels, made up of 52 people, we had written three pages on ethical guidelines and these hypothetical themes.
Then the person we had elected as chairman, the moderator of the meeting, asked: ‘But who believes in this eventuality?’. We had written three pages on these topics, and of all those present only one said he believed it, and two said: ‘No, I don’t believe it, but if by chance it happens, at least we wrote something’.
The dilemma we faced was: if we deal with it, we ennoble it; if we don’t, we are criticised by those who, out of interest or genuine conviction, believe in these things. In the end, the three pages became a footnote in which we wrote that some believe these things but that, in fact, it is ‘science fiction’. Later we changed the word ‘science fiction’ to ‘unrealistic’.” In other words, the expert explains, this rhetoric kicks the ball into the stands: it produces anxiety about the future and, as many argue, shifts attention away from the most urgent AI governance problems we currently have on our hands.
The (real) problems to face
What are these problems? Quintarelli lists a few: “The use of artificial intelligence for anti-competitive purposes, the creation of monopoly positions, the exploitation of people’s work, discrimination. Claims that humanity risks extinction because of AI are nonsense. The only hypothetical risk of this kind lies not in fake images but in the use of AI in armaments: if we build autonomous weapons, especially given that we know AI makes mistakes, we can certainly expect disastrous outcomes.
But again, the problem is not AI but the use we make of it. Having cleared the field of harmful outcomes from super-intelligences that subjugate humanity, I believe there are concrete, current and prospective issues that produce real problems. If I say that AI becomes like a human, I am saying that it learns and creates like a human and, therefore, that paying royalties when it uses other people’s texts makes no sense.
So these grand visions of hypothetical great themes and great problems mask important, concrete, current problems: precisely, the exploitation of people’s work and the creation of monopoly rents”.
The other AI, the “boring” one
The point is actually essential. Artificial intelligence, if you think about it, has only come to the attention of the general public over the last year, mostly thanks to its abilities with text, images and video: the recreational side, lower-level work tasks, and programming. But there is also a hidden AI at work in platform systems, surveillance systems and production systems. What are the risks and advantages of these less visible but perhaps more significant solutions?
“Let’s start by saying that these are statistical systems; statistical means that 1+1 does not always equal 2. They are statistical, non-deterministic systems and, as such, they sometimes produce incorrect results. They allow us to automate categories of problems that require perception, classification and prediction, and that previously could not be handled in algorithmic form.
The point is this: being statistical systems, they make mistakes every now and then, and we need to know this – Quintarelli continues –. The risk is that we make an idealised use of them, thinking that, since they are computers, they are infallible; that we can use them in a sort of neo-physiognomy, solving problems that, in reality, are not scientific. Using these tools to establish whether a person is telling the truth, or whether a person is prone to criminal recidivism, is pseudo-science.”
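Quintarelli’s distinction between statistical and deterministic software can be made concrete with a few lines of code. The toy classifier below is purely illustrative (the labels, probabilities and function names are invented for this sketch, not taken from any real system): it returns a probability distribution rather than a certain answer, so the same input occasionally yields a wrong prediction.

```python
import random

def classify(score):
    # Hypothetical toy classifier: given a quality score, it returns a
    # probability distribution over labels, not a single certain answer.
    # The probabilities here are invented purely for illustration.
    p_good = min(0.95, max(0.05, score))
    return {"good": p_good, "bad": 1.0 - p_good}

def predict(score, rng):
    probs = classify(score)
    labels, weights = zip(*probs.items())
    # Sampling from the distribution: the same input can produce
    # different outputs on different runs -- "1+1 is not always 2".
    return rng.choices(labels, weights=weights)[0]

rng = random.Random(0)  # fixed seed so the sketch is reproducible
outcomes = [predict(0.9, rng) for _ in range(1000)]

# With p_good = 0.9 the system is right most of the time, yet roughly
# one prediction in ten is wrong even though the input never changed:
# a statistical system errs "every now and then".
error_rate = outcomes.count("bad") / len(outcomes)
```

A deterministic program given the same input a thousand times would answer the same way a thousand times; a statistical one carries an irreducible error rate, which is exactly why applying such systems to high-stakes judgments about people is so risky.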
In short, it is good to be aware of the strong limitations of these solutions and, for example, not to apply them where there is a significant risk that those limits will cause damage, particularly to people. “The risks lie in uncritical adoption, driven by ignorance and by excessive expectations fuelled by a dominant narrative. The problem is not so much the tool as the idea we have of it and how we go about using it; therefore, education is needed among decision makers,” says Quintarelli.
And what we use today is certainly not the only type of artificial intelligence in circulation: “I like to talk about what I call ‘boring’ artificial intelligence – concludes Quintarelli –. AI has hit the headlines for its ability to generate text, images, video and so on. But it is not the only AI that exists. If I think about research in seed production, the sowing work on large plantations, the harvesting of tomatoes, sorting good-quality from low-quality tomatoes, the logistics, their transport, the pricing that puts them on the shelf, right down to the person looking for a recipe, I know there are many artificial intelligence systems behind it all. AI is already with us every time we eat a tomato; we just don’t see it.
This is boring AI, but it is also the AI that produces the most significant effects on companies’ bottom lines, the one that should be adopted quickly. The generative part of AI is interesting, yes, but it still makes a lot of mistakes, while this part of AI is already well consolidated.”
Source: Wired