AI concerns and related quotes
| Concerns | Quotes |
|---|---|
| Privacy and security | “It is clear that you have to understand that if you give Google a document to have it translated, Google will use it for its purposes because it is its business model, in this sense you should be very careful what you share. This is related to the issue of privacy protection, which in Europe is very much focused on the individual, whereas I am much more concerned with the analysis of collective flows. For example: during the first lockdown, Facebook gave anonymised data on the geolocation of the movement that took place that night in Italy (the famous trains that went from the North to the South). It is true that they did not say that Marianna or Ginetta made the trip, but they gave the idea of the mass movement and the next day the government changed a law. It was not an action with zero impact! It is a subject on which one has to be very careful” [Interview #1] “There are AIs that are exploited to defraud man … there is software with which I could now present myself to you with the face of another person and with his or her own voice. You would never notice that I’m not myself; so, clearly, it’s also a safety factor” [Interview #6] “Ensuring privacy and security isn’t just a technological requirement, it is our duty to our employees and customers” [Interview #45] |
| Equity and fairness | “A question we are often asked is whether the machine has a bias in its choice of CVs. The answer is: absolutely yes, because, I repeat, it is we who give the machine samples and if these have a bias, the machine automatically replicates them … However, the recruiter also has the bias, because he is subject to fatigue, and this here creates another type of bias, in my opinion, more dangerous, and therefore the machine in this sense, not being subject to fatigue gives an evaluation already in the first instance, regardless of the time in which it evaluates the resumes. Obviously, the recruiter intervenes on this and then makes a human assessment.” [Interview #3] “At European level this problem is being addressed and there is a whole extremely large table on the introduction at European level of regulations for responsible use of AI” [Interview #9] “In theory, the machine has no preconceptions. But the machine responds to criteria that we give it, which are preconceptions” [Interview #10] “If AI use historical data, it may reproduce historical bias” [Interview #13] “If we do not realise that we are not talking to a human, this is worrying. If … we do not have the ‘antibodies’ to react, the effect can be dramatic” [Interview #14] |
| Explainability | “Our ability to understand and trust AI tools decision-making is critical. Without explainability, we’re navigating with a black box” [Interview #4] “It’s about understanding the ‘why’ behind [AI tools] decisions. If we cannot interpret the rationale, we risk compromising the integrity of our strategic decisions” [Interview #33] “AI algorithms need to be transparent. When you have an algorithm based on deep learning … it’s like having a black box … you don’t understand what it’s doing. So, it is very important to introduce AI with awareness, taking care of the ethical and moral aspects. We are in the process of starting our own ethics committee for AI” [Interview #37] |
| Liability | “Big algorithms can make big mistakes” [Interview #10] “One of my biggest concerns is determining who is responsible for damage caused by an AI-powered device or service. For example, in the case of an accident involving a self-driving car, should the damage be covered by the car owner, the car manufacturer or the software programmer?” [Interview #19] “Autonomy is one of the keys, and this is one of the pitfalls of AI in general: traceability. Can we trace and trace back all the decisions made by an algorithm? Not always. It’s especially difficult with neural networks” [Interview #36] “There is a need for clear laws that make it possible to understand if, how and when to assign responsibilities when it comes to AI tools” [Interview #46] |
| Overestimation | “Another great fear is the excess of expectation that is generated. Since some AI algorithms are available out-of-the-box, it seems that they are easy to implement … So you have to go to customers and explain to them that first of all the result is not obvious, an experiment must be carried out to evaluate the effectiveness of the models before adopting it; and that an AI is as prone to errors as a human being” [Interview #3] “AI is quite powerful, but one of the key concerns I have, or I see, is [that it] can be a little bit too powerful for people and organisations if they don’t know how to use it. It’s like when you just took your driver’s license and suddenly ended up in a Lamborghini. It might be a nice car, an expensive car, but you might not be able to use it properly” [Interview #30] |
| Reduction of human ability | “One of the potential problems is that people get used to using these tools and stop thinking … you think the machine has already done everything […] It’s true that calculator increases things for you, but if you don’t learn to do the calculations yourself, you’re missing a piece of the development” [Interview #1] “My main concern is that some people still think that AI can replace a human” [Interview #13] “What am I worried about? Mainly the shutting down of brains; we rely more and more on these tools without asking ourselves questions about the results they give us” [Interview #22] “The risk is to think that AI is the substitute for complex thinking, in other words, that is a big risk in the sense: leaning on the mere final data of an AI analysis. But if you only look at the final data that the algorithm returned to you, you run the risk of not understanding the complexity of human reasoning and human effort that went into achieving that simplicity of management. It is absurd to think that AI means turning off the brain” [Interview #26] |