Written by Roel Meijvis
The pandemic has accelerated digital transformation and, in doing so, has pushed the long-standing process of automation within organizations into a new phase. However, the pandemic has also shown us our own limits. This has brought us to a time when a blind faith in progress is not only naive but also destructive. Growth must therefore be understood not only quantitatively but also qualitatively, in the sense that it not only does as little harm as possible, but also contributes to people and the environment. Can automation help here? Can automation be such a ‘Force for Good’?
According to Angela Salmeron and Neil Ward-Dutton, authors of the IDC white paper “Automation as a Force for Good – 6 Steps to Transform Theory into Practice”, it clearly can. In fact, in their paper they not only show how automation can contribute to people, the environment and society, but also actively call for immediate action in the form of a manifesto. The pandemic, which they claim marks a tipping point in history, offers us the momentum to build a better world. Although they emphasize that it is we who hold this better future in our hands, they show that technology and automation are its building blocks.
According to them, automation can be a ‘Force for Good’ for employee, organization and society alike: (i) for the employee, because it enables them to focus on more complex and challenging work; (ii) for the organization, because this, together with its good intentions, helps it retain employees who are committed to the organization and its goals; and (iii) for society (and the environment), because it benefits from sustainable organizations that care about it and offer meaningful work. And these are just a few of the many examples cited.
Automation can thus be such a force for good, ‘provided it is deeply rooted in universal values and principles’, according to Salmeron & Ward-Dutton. And therein lies the crux. Before asking whether automation can be a ‘Force for Good’, we may first have to ask whether human beings are, or can be, and if so how. An important supplementary question is whether it is at all evident what those universal values and principles are, because it is these that automation is supposed to serve. And precisely that question, “what is good?”, is not technological in nature but one of the primal questions of philosophy. It goes back to Aristotle, who put the question of the good life at the heart of his work and thereby fired the starting shot for the tradition of thought that we still know today as ethics.
Do we have technology or does technology have us?
The title of the aforementioned report echoes, consciously or unconsciously, the title of another text setting out a vision of the future, namely A Force for Good by the Dalai Lama. Interesting in this context is how he concludes the preface to this book: ‘As you read this, keep in mind that as human beings equipped with tremendous intelligence and the ability to develop a warm heart, we can each become a force for good.’ The quote allows us to give more substance to the concept of a ‘Force for Good’, namely as a force made up of two components: intelligence and (the potential for) ‘a warm heart’.
Today’s technology is incredibly intelligent, but even the most advanced AI seems to lack anything like a warm heart (for now). Not for nothing, therefore, do Salmeron & Ward-Dutton emphasize several times in their paper that whether automation processes contribute to a better world is entirely in our hands; it is up to us to deploy and use technological intelligence in a responsible and moral manner. This seems obvious; after all, it is we ourselves who decide on this technology and write its code and programs. But is this really the case? Do we have technology, or does technology have us?
A good example of the latter was given by Henry David Thoreau as early as 1854 (!) in his classic Walden: “We are in great haste to construct a magnetic telegraph from Maine to Texas; but Maine and Texas, it may be, have nothing important to communicate.” The observation translates readily to our own day and age: think of the way we rush to the Apple Store when a new iPhone becomes available. We want it because it is new, because it is better, not because we need it as a means to a self-imposed end.
Technology comes with a stubborn injunction, namely that it must stay up to date, a goal that springs from technology itself and sustains itself. We humans who must carry out this update – from the technicians and programmers who make it possible to the leadership of an organization that must decide whether to modernize – have thus become the tools of technology, rather than the other way around. A science-fiction-like dystopian vision of the future, in which humans have become slaves to artificial intelligence, is thus suddenly closer than imagined.
The question of the good life
Fortunately, if we are optimistic, this purely instrumental, technocratic and economic worldview is slowly crumbling, and within organizations too, more and more attention is being paid to ‘the warm heart’: to (fellow) humanity and to concern for the environment. This is necessary, because only then can we break the circularity of technology’s self-perpetuation, a process that must keep growing without ever asking where to, for what, or to what end.
These questions come together in the primal question of the good life, the question of ethics. What is a meaningful life? What is of value? What kind of world do we want to live in? These are questions that each person must ask in order then to be able to voice them within the various communities (society, organization, family, etc.) in which he or she participates. And given the need to break with many of the self-destructive forces at work in our society, it is more important than ever to engage with these questions.
But the question of the good life is not only an individual matter; it is also a collective one. Rather than a (definitive) answer, it is a conversation about the good life, a conversation that must continue because these values, universal or otherwise, must be given form anew in every age. (At the same time it is, even more than a conversation or a nice flipchart at the end of a seminar, the result of our actions in the here and now, through which we always already express what we think is good.)
If this does not happen, technological or technocratic thinking will continue to prevail. Values and goals will then no longer be formulated from ‘the warm heart’ and the conversation about what is good, but from a heartless intelligence in which automation processes simply succeed one another, purely because a new technology happens to be available. In that case, automation will be a force for self-preservation rather than a force for good.
If it is to be the latter, then we must keep our goals, values and principles firmly in mind in order to bravely resist this momentum of technology. Only then can we say whether a certain technology contributes to the good, whether it helps us to be more human and to lead a more meaningful existence. Only then can we judge whether an automation process that is more efficient but costs people their jobs is actually ‘better’. And only then do we ‘have technology’ instead of the other way around, and can we both ‘develop our warm hearts’ and be truly intelligent.
The white paper Roel refers to can be downloaded via this link to UiPath’s website.