What is a bug? I do not mean the little animal, but a “computer bug”: an error somewhere in a computer system that causes incorrect or unexpected results.
It is a popular term among those who work with computers in general, as well as among those who merely experience the error without knowing anything about bugs.
One important step in building a traditional computer program is called debugging: looking for the little creatures that compromise your software code
and, if possible, removing them. Sometimes bugs keep your eyes open all night.
Bugs are alive and well in the age of Artificial Intelligence: no matter how smart systems become, bugs are still there. The interesting part of the story is that we do not recognize their presence, and apparently no one, or very few, seems to care in this new hyper-agile society.
I am referring to much more sophisticated versions of bugs that can affect an AI algorithm and compromise its output, and consequently the level of trust we can place in it and in the decisions it recommends to its users.
In AI we have a double problem. On one side we have “classic bugs”: the nice and funny errors in the software program that implements the AI algorithm, or in the mathematical model behind it. On the other side we have what we can call “AI bugs”. What are “AI bugs”? Simple: they are all the kinds of bias that can influence the behavior of an AI algorithm.
Of course, AI bugs are much more difficult to find: they are not errors that produce an obvious failure; apparently everything goes ahead fine, right up to the final decision.
We trust the algorithm to decide, for example, who gets a loan from a bank because it judges them a good risk, or who gets their job interview first because the fantastic AI algorithm
that learned to classify people’s profiles put the best candidates for the job at the top of the list, or who gets a new, less expensive health insurance plan because the AI algorithm
gave them a longer life expectancy.
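To make the idea concrete, here is a minimal, entirely hypothetical sketch of such an “AI bug”. The data, names, and numbers are invented for illustration: a toy model learns approval frequencies from historical loan decisions that were biased against one neighborhood, and it silently reproduces that bias for two otherwise identical applicants. Note that the code runs without raising a single error.

```python
# Hypothetical sketch of an "AI bug": no exception is ever raised,
# yet the model quietly reproduces a bias hidden in its training data.
from collections import defaultdict

# Invented historical loan decisions: (income_band, neighborhood, approved).
# Same income bands, but applicants from neighborhood "B" were approved less often.
history = [
    ("high", "A", True), ("high", "A", True), ("high", "A", True),
    ("high", "B", True), ("high", "B", False), ("high", "B", False),
    ("low",  "A", True), ("low",  "A", False),
    ("low",  "B", False), ("low",  "B", False),
]

def train(records):
    """Learn the approval frequency for each (income, neighborhood) pair."""
    counts = defaultdict(lambda: [0, 0])  # key -> [approvals, total]
    for income, hood, approved in records:
        counts[(income, hood)][0] += approved
        counts[(income, hood)][1] += 1
    return {key: approvals / total for key, (approvals, total) in counts.items()}

model = train(history)

# Two applicants identical in every respect except the neighborhood:
score_a = model[("high", "A")]  # 1.0  -> loan granted
score_b = model[("high", "B")]  # 0.33 -> loan likely denied
print(score_a, score_b)
```

The point of the sketch is that every line behaves exactly as written: the “bug” lives in the data, not in the code, which is why no debugger will ever catch it.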
Of course we could go on endlessly, and the key word behind all this is “trust”: we are starting to trust machines, and now more and more AI, to support us in a number of decisions that can affect, directly or indirectly, our lives and our society in general.
But who cares about “AI bugs”, or in general about the biases an AI algorithm can have? When I give lectures to AI students or talk to people who would
like to learn about AI, I close my presentations with some thoughts and examples of “AI bugs”.
One example comes from researchers at Shanghai Jiao Tong University, who published the work “Automated Inference on Criminality using Face Images”
(https://arxiv.org/pdf/1611.04135v1.pdf). One conclusion reported in the work is this: “In other words, the faces of general law-biding public have a greater degree of resemblance compared with the faces of criminals, or criminals have a higher degree of dissimilarity in facial appearance than normal people”.
More recently, in response to the criticism, the same team published “Responses to Critiques on Machine Learning of Criminality Perceptions” (https://arxiv.org/abs/1611.04135).
A question we can ask ourselves is whether this kind of activity makes sense at all.
Initiatives worth mentioning in this space include, for instance, AI Now, led by Kate Crawford and Meredith Whittaker, which is working across disciplines to understand AI’s social impacts (https://artificialintelligencenow.com/), or the broader Partnership on AI promoted by a number of big IT players (https://www.partnershiponai.org/).
Nevertheless, the real problem, the existence of “AI bugs” and how to cope with them, will remain there, like a bug. The only antidote, I think, comes from ourselves: from our care
about the problem and our willingness to demand transparency, also from AI.