The Theology of Artificial Intelligence
Artificial Intelligence systems are increasingly present in our lives: when we search for information on the internet, check Twitter, travel from A to B, get film recommendations, and even in our healthcare. They recommend, and sometimes dictate, where we should go, what we should buy, and at times even what we should think. Yet despite interacting with them continuously, we are usually not aware of how they work or what the consequences of using them are. In our minds, they are just black boxes.
Those black boxes can be largely beneficial: helping us navigate a city we don't know, giving us a better credit score, and even helping doctors prescribe the right medication. It is only in certain cases that these systems show their weaknesses: racism, sexism, pushing people into chronic poverty and hunger, and even condemning you for something the system predicted you might do in the future.
Artificial Intelligence systems can carry many kinds of bias, ranging from the data itself to the decisions made when designing the models. Some are intentional (e.g. correcting for historical bias in the data), and others are not. More relevant, however, is that there is currently little room for humans to question and discuss the results. Sometimes that is because we are fine with the results and do not think about how they might affect people different from us. Other times, it is because there is no process for challenging the algorithms. And in the cases where we can contact someone behind the system, that person has no knowledge of how or why the algorithm works.
Although multiple new regulations have been proposed to address these issues, their practical application is far from ready. If challenging an algorithm's output means reaching a court after two or three long years of battling through lower institutions, you will simply not do it. And so, potentially flawed and racist algorithms will remain in place until there is enough public pressure on social media that the company finally changes that exact model, probably without thinking of revisiting the others it may have.
Being able to request more information about AI systems, and to challenge them, needs to be broadly enabled. Companies and institutions should have those processes in place, not hide behind technicalities, trade secrets, or algorithms so complex that, simply put, they cannot be explained. When something can negatively affect our lives, humans should be involved: in the design phase of the algorithm, discussing the different values at stake and the effects the system might have on them, avoiding asking the wrong questions, and focusing on the algorithm's fairness. And then it is of the utmost importance that we can challenge those systems, because they are not perfect. They are not god.
People usually assume that anything based on mathematics is correct. It is accurate and should not be challenged, because 1 + 1 is 2 and we experience that every day. This attribution extends to almost everything built on mathematics, and AI systems are among them. “How does Google Maps work?” “It's magic. There is a complex mathematical algorithm behind it, and look, it has given me two correct options.” We hardly ever consider that the routes could be wrong, that the machine we rely on has failed. And when it does fail, it becomes our problem: disbelief, thinking that it is we who have done something wrong. It is not possible that a god-like creature has failed in its mission.
Changing this perception can be difficult, but it is something we must start doing as a society, and providing ways to think about these systems, to discuss them, and finally to challenge a decision is a good start.
Finally, it is important that the designers of these systems keep in mind that they are not creating a god-like creature. It will be flawed, and it is only through a collaborative process with the broader society that they will enhance its usefulness and mitigate collateral damage. The system must respect freedom of choice and human rights, and we, as designers, must carry the responsibility of working toward the kind of world we want to live in.
Image from John Towner