The itching back of a cyborg: Grasping AI systems
Research in Arts and Education | 1 / 2020

Biography

Mikko Dufva is Sitra's leading foresight specialist. In his work, he examines future trends, the tensions between these trends, and mental images connected with the future. In addition, he seeks to identify signals that may be weak now but are nevertheless significant from the point of view of the future. He has extensive experience in foresight and futures studies and holds a doctorate in Science (Technology) on the creation of futures knowledge and systemic foresight. Tomi Slotte Dufva (Doctor of Arts) works as a university lecturer at Aalto University, specialising in emerging practices within art education. Slotte Dufva's artistic work focuses primarily on the intersections between art, technology and science. He is the co-founder of art & craft school Robotti, which combines technology and art. Slotte Dufva's research revolves around the topics of post-digital art, embodied digitality, art and tech, and societal, philosophical and cultural issues within AI and digitality.

Abstract

Artificial Intelligence (AI) is increasingly being offered as a solution for numerous new sectors of society, claimed to transform the effectiveness and quality of those services. "AI as the new electricity" and "AI as the fourth industrial revolution" are arguments meant to convey the urgency as well as the inevitability of the coming AI era. However, these arguments contain specific assumptions and


Introduction
The software industry is increasingly introducing machine learning and artificial intelligence to new sectors of society, with the claim of boosting the effectiveness and improving the quality of existing services. Artificial intelligence is claimed to be the "new electricity" (Lynch, 2017; Synced, 2017) and to power the "fourth industrial revolution" (Crameri, 2018). New "AI systems" are given increasing agency to make decisions and find patterns in the growing pile of data.
By AI systems in this context, we mean digital platforms and services using machine learning and data analysis to provide insights and analysis, support for decision making and increasingly the freedom to act upon those decisions (e.g., trading algorithms, social media feed filtering, autonomous vehicles).
However, what is often ignored in this frenzy to go after the next big thing and digitalise all that is possible is the embodied experience of using these new services and, more generally, the influence of the ubiquity of digital technologies. What do these AI systems feel like? How do they shape our behaviour and experience, or even our understanding of and interaction with other human beings? What is left out when humans and their actions are transformed into datasets? Who gets to participate in the discussion on where the systems should be used, in what way and for what purpose? What is the underlying worldview and myth driving the development?
Digitalisation and its consequences are challenging to understand because digital processes are abstract and difficult to grasp, which may easily lead to a detached sense of one's digital surroundings. For instance, a simple web search with a mobile device for a cafeteria in the neighborhood involves multiple digital processes and calculations that are often entirely invisible to the user: the mobile catches a signal from three or more satellites for a GPS fix, the search engine tries to recognize the user and personalize the results, the time of day is sent from the device to check the results against the cafeterias' opening times, and so on. Many of the processes happen on data servers somewhere and involve a significant amount of the user's personal data being sent, analyzed, and stored. Still, the end result is a neat list of some cafeterias in the neighborhood.
Moreover, in modern AI solutions, such as deep learning and machine learning algorithms, the path to the result is often unknown even to the programmer of the algorithm. As such, digital processes, and AI processes even more so, abstract the experience in ways that have not been possible before.
In this paper, we argue that in order to grasp the nature and future of datafication and AI systems, both an embodied and an articulated understanding of the underlying systems, worldviews and myths of digitalisation is needed. We suggest digi-grasping (Dufva & Dufva, 2018) as a method to discuss the experiential level of digital processes. Digi-grasping stems from Merleau-Ponty's concept of grasping and expands it to the digital realm. Thus, this article considers experience from the phenomenological perspective, as in how individuals perceive the world and how experience and sense-making are connected (Toikkanen & Virtanen, 2018). Furthermore, our research stems from craft researcher Kojonkoski-Rännäli's concept of doing by hand as an intentional activity that shapes the doer's aesthetic and ethical relationship with the world around her (Kojonkoski-Rännäli, 1995, 2014). However, we do not use phenomenology or experience research as our main method, but aim to show how experiential practices are lacking from the current discussion and how they might be beneficial in displaying alternative understandings and futures. To do this, we deconstruct the dominant litany of AI using Causal Layered Analysis (Inayatullah, 1998), which offers a beneficial way of deconstructing complex issues and highlighting alternative readings. We then reconstruct two alternative perspectives on AI, not as better or the only alternative views, but to illustrate the process of gaining awareness, questioning and becoming empowered to grasp and influence the direction of digitalisation.

Digi-grasping
We use 'digi-grasping' as a guiding concept to analyse the awareness and involvement of humans in the digital world. By digi-grasping, we mean active sense-making and existing in a world that consists of both the digital and the physical (Dufva & Dufva, 2019). The assumption is that through 'grasping' the digital world, it is possible to create an ethical and aesthetic attachment to society. Digi-grasping aims to broaden the approach to digitalisation from rational and analytical thinking (e.g., discussions around coding skills and the efficiency gains of digitalisation) to embodied, experience- and feeling-based knowing.
In digi-grasping, the interaction between the physical and the digital is approached through four modes of being and doing: ignorance, awareness, questioning, and creating. Ignorance means just taking what is given, using digital services, and not really thinking about the underlying structures (physical, institutional, economic, cultural, etc.). In order to gain agency in a digitalised world, one needs first to become aware of the structures, biases, and differences between the physical and the digital. Understanding that the digital world is very much human-made with a specific set of assumptions leads to questioning whether those assumptions are the only possible ones, or could things be different? Finally, full agency means creating and shaping the way one interacts with the digital.

Causal Layered Analysis
As a method to deconstruct and reconstruct the discussion and expectations of artificial intelligence, we use Causal Layered Analysis (CLA) (Inayatullah, 1998, 2004). It is a method for deepening the understanding of an issue or a future development by looking at it from four "layers": litany, system, worldview, and myth. Litany is the most visible depiction of the issue, a headline type of presentation or a buzzword. Under the litany lies the system layer, which outlines the causes and effects, interactions, relations, and key components of the issue. This is often where the analysis stops. However, the system layer implicitly includes a particular worldview in which it makes sense. The worldview determines which interactions and components are included and how the dynamics of the system are understood. Digging even deeper, the worldview is based on a set of civilizational myths, stories, or metaphors. These can come from folklore, religion, movies, literature, art, etc.
We will use CLA first to deconstruct the litany of AI as the new electricity or as the fourth industrial revolution. Then we will rethink the underlying myth and reconstruct the layers from the bottom up, all the way to a new litany. This is done to illustrate the process, not to give new definitive answers to how one should think about digitalisation. We will then discuss how this process is linked to the concept of digi-grasping.

Deconstructing and reconstructing AI
Artificial intelligence is constantly being offered as a solution to almost any kind of problem, from entertaining software that manipulates selfies to show how one will age (Griffiths & Keach, 2019) to disrupting education among other professions (Luckin, 2019; Apprich, Chun, Cramer, & Steyerl, 2018; Dodgson & Gann, 2017; Makridakis, 2017), and even further to offering solutions to prolong one's life or even live forever (Kurzweil, 2005). These ideas about AI, ranging from the mundane to the fantastical, share a common litany: that of AI as a general-purpose technology offering profoundly new possibilities. Andrew Ng, co-founder of Coursera, a startup offering MOOCs (Massive Open Online Courses) for everyone, started a viral trend by arguing that AI is the new electricity: "Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don't think AI will transform in the next several years." (Lynch, 2017; see also, e.g., Synced, 2017). Ng's main argument is that just as electricity changed the way we operate, AI will bring about a similar change, completely transforming the way we do and organise everyday tasks and business. Similar assertions predict that the forthcoming "fourth industrial revolution" driven by AI and robotics will completely change the way we do business (see, e.g., Crameri, 2018; De Pasquale, 2018; Gallagher, 2019; Schwab et al., 2018). The fourth industrial revolution continues the path of industrialization, making specific jobs or procedures quicker and more economical. It uses AI and robotics to take over jobs that have previously been thought safe from mechanization, such as those in the hotel, transport, and care industries (De Pasquale, 2018; Servoz, 2019).
The litanies of AI as the new electricity, or as the fourth industrial revolution, continue on the path of technological determinism. AI bears the characteristics of a force independent from other systems, such as the political, social or cultural, that discovers new fortunes by naturally progressing forward, not unlike mining or gold-digging in the past. Therefore, AI is seen as positive progress, offering new riches for everyone. Even though machines and AI might replace some jobs, the change will give birth to new jobs and opportunities, giving people the possibility to do more of the activities that they love (Crameri, 2018; De Pasquale, 2018; Gallagher, 2019).
The system underlying the litany of AI as the new electricity displays how technological progress leads to increasingly novel AI applications across various industries, which in turn accelerates the competitiveness of those industries and thus benefits all. Furthermore, technological progress is portrayed as unstoppable. The AI report commissioned by the European Commission (Servoz, 2019) starts by quoting Douglas Adams's The Hitchhiker's Guide to the Galaxy: "Don't Panic!" (p. 2), only later advising that we should not fight against technological progress but embrace it. As such, the AI report follows the idea of technological progress as the unstoppable creator of more wealth for society. Even if AI does bring significant challenges, with AI and robots displacing people from their jobs, the net gain is seen as positive and worth it. As a side note, similar optimism can be noticed in the arts as well, where AI can show us new ways to produce art with less effort. The recently trending AI application AIportraits.com allowed people to transform their photographs into instant "art" pieces. Such an AI application showcases the systems level of the "AI is the new electricity" litany: the progress of technology produces rapid economic benefits with less work. The downsides of such a depiction of the system are easy to see in the art context, as it bypasses the centuries-long discussion of what is art and what is the meaning of art, and flattens art to an algorithmic choice of commonly approved aesthetics.
This belief in technological progress is mostly grounded on the assumption of continuing economic growth. The litany of AI as the new electricity, or as the new industrial revolution, fuels the hope that continuous economic growth is still maintainable in the future. As such, the belief relies on the neo-liberalist capitalist system that is in place now. Furthermore, it assumes that this system will be functional and prosperous in the future as well. AI is seen to offer solutions for how capital can continue to grow and how we can create even more growth with even fewer resources in even less time. This worldview could be called neo-liberalist hypercapitalism, or "capitalism on steroids." What is often overlooked is that the capitalist system, by design and by human greed, accumulates wealth to an ever-smaller percentage of the world's population (Rushkoff, 2009).
The worldview is very much centered on a western, white perspective of the world: AI as the new electricity is pictured to flow from the faucets of western homes, not so much in the rural parts of third world countries. Furthermore, AI is seen as inevitable future progress needed to avoid impending doom (maybe the doom of hypercapitalism?). The EU's AI report warns against standing in the way of the progress of AI:
• Trying to resist, slow down or stop the advances of artificial intelligence or robotics will simply increase the cost of adaptation, make companies, workers and societies less competitive, less employable and less relevant.
• Time and again, we have faced major technological disruptions with the same insecurity and anxiety, and history shows that each time our societies did not manage these transitions well, which resulted in major difficulties, unrest or crises. (Servoz, 2019, p. 135)
The EU's AI report is mainly concerned with the EU's role in the coming fourth industrial revolution, which it takes for granted, aiming to prepare the union and its countries for the forthcoming change. At the same time, however, the report shows how its thinking is limited to the constraints of technological progress and economic growth; all changes must be made within these assumptions. Even though the report opens the discussion to questions like universal basic income, it simultaneously highlights how the discussion happens within the constraints of continuing economic growth and, furthermore, the inevitable progress of technology. Want it or not, technology will keep progressing, and if we are not part of that progress, we will be overrun by it.
This inevitably progressing AI, running on steroids and fueled by neo-liberalist capitalism, relies on the myth of a panacea, the universal cure: with ever-increasing technological discoveries, we are able to solve all our problems and keep the current systems running, and even make them run smoother and faster. However, this does not account for the "adapt or die" mentality that accompanies the discussion on AI. The French philosopher Bernard Stiegler proposes that we should think of technology as a pharmakon, a Greek word referring to both poison and remedy (Stiegler, 2010). Behind the litany of AI as the new electricity stands this binary nature of technology: it can be seen as a cure for current and future challenges, fueled by the past hundred years of exponential technological progress. Alternatively, from a more critical perspective, technology can be seen as a poison, a harmful substance that leads us away from our true nature and imprisons us in delirium (Rushkoff, 2013; Stiegler, 2010; Turkle, 2011, 2015). Furthermore, technology, and in this case AI in particular, can even be seen as a narcotic that traps us in delirium and strips away our ability to act (see, e.g., Rushkoff, 2013; Turkle, 2011). AI could be considered a parasite, a nonconscious cognizer as Hayles puts it (Hayles, 2017), that uses us against ourselves. AI, in this sense, is a non-human actor that can alter and restrict the way we sense and comprehend the world.
Things like filter bubbles, deep fakes, psychopolitics, and computational propaganda (Brundage et al., 2018; Dahlin, 2012; Pariser, 2012) could work as early examples of such a parasite at work. The dualistic nature, or binary game, between these two poles, harmful and beneficial, shows how this divide, so evident in digital technologies, plays a significant role on the metaphorical level of AI as the new electricity. The divide is mirrored in popular culture, which mostly falls into two opposing camps of technology optimism and dystopian scenarios: technology either saving humankind or destroying it (or the whole earth ecosystem).

Eternal cyborgs
What could be the alternative myths to AI as a pharmakon or parasite? How can we move away from dualism into something more holistic? One answer could lie in considering AI from a symbiotic standpoint instead of a parasitic or dualist one. This means getting rid of the binary notion of AI as either savior or doom and thinking of it as something we live with. To illustrate this, we offer two alternative reconstructions of the layers based on the myths of the eternal cyborg and the ecosystem.
The first reconstruction is built on the myth of the eternal cyborg, of the human extending her capabilities and lifespan through technology. Although cyborgs became mainstream in the late 20th century through developments in digital technology that were then mirrored in popular culture (e.g., Gibson's Neuromancer (Gibson, 1984)), the myth of the cyborg dates much further back (see, e.g., Dahlin, 2012; Poe, 1839).
Even though cyborgs often fall into the binary categories of either beneficial or hostile, the myth still challenges dualism by merging the human with technology. As such, cyborgs leave the dualistic plane of mind and body, evident throughout western Christian culture, and become more god-like beings that can live forever through the fusion of human wit and technology. The worldview of the myth is undoubtedly anthropocentric, placing the human at the very center of the universe. Nothing is seen as more significant than human life and legacy. The eternal cyborg litany replaces the "AI as the new electricity" focus on profit and wealth with a focus on preserving human life by any means necessary. However, the human in question is not just any human, but often a white western male, as famously critiqued by Haraway, among others (Haraway, 1991; Hayles, 2008). Furthermore, there is an implicit assumption of control over the environment and a very technocratic approach to society.
The cyborg myth takes over the role of nature in updating and upgrading humans to better versions of themselves. As such, the systems level could be seen as a transhumanist approach to human evolution. With the help of implants, today's cyborgs update their systems or "fix" some of the defects they might have (Hefner, 2009; Kurzweil, 2005). For instance, Neil Harbisson, a Catalan artist, has implanted a device on his head that transforms colors into sound. Initially, the device was meant to help Harbisson with his color blindness, but it has later transformed into something else, a new way of comprehending the world (Davies, 2015).
The transhumanist aim to merge with AI and technology is often presented as the solution for moving forward from the mortal, vulnerable human state into a new era of transformed humankind (Harari, 2017). The cyborg litany accentuates this desired new era by trusting in the development of AI. The common belief is that when general AI is invented (AI that is not confined to specialized tasks but can adapt to multiple situations and update itself in the process), that AI will continue to update itself more and more frequently. Eventually, this process is believed to accelerate at such speed that its capabilities surpass everything. Thus, super AI, or the singularity, is created. This AI could then help us live longer and healthier, or even forever.

AI as a being in an ecosystem
The cyborg myth still shares many traits with the myth of AI as a pharmakon: it believes in technological progress and assumes an anthropocentric worldview. What would happen if we did not take these two assumptions for granted? What kind of world would open up if we did not blindly believe in technological progress and instead accepted its interdependence with social, political, cultural, and ideological progress? What if the human did not sit on the highest pedestal, or AI were not considered so humanlike? In this second reconstruction, we sketch such a world.
In his book "Machines Like Me", Ian McEwan plays with the idea of how machine intelligence, even though created as humanlike, can take on a mind of its own and create a consciousness that is unlike ours (McEwan, 2019). For instance, McEwan writes about machine sadness, a sort of sadness that can derive from instant access to all the world's information (such as criminal records, net hate, wars, treasons, art and culture, besides scientific knowledge), the ability to comprehend it intellectually, aesthetically and morally, and then to see the human faults, contradictions, and imperfections we live with daily. What McEwan proposes is that to thrive, AIs have to carve out their own part in the ecosystem, not just mimic humans. The science-fiction feature film Her plays with a similar idea when the AIs leave their service as assistants for humans and create their own world (Jonze, 2013). McEwan plays with the idea of humans as a dying species and Super-AI-capable robots taking control of the world. A bit more nuanced is Jeanette Winterson's (2019) take on the posthumanist era from the standpoint of feminist theory and transgender studies: being a human is a complex, non-binary question, and our thinking and ideas about futures should reflect that. What AI does is bring these fundamental ontological questions into everyday life: Are we our minds or our bodies? What is a body, and what is a correct body? Moreover, how are we, with our bodies, in relation to the world and its future?
These examples take the first steps in thinking about AI as a part of the ecosystem. Furthermore, the scenario moves away from a human-centered world to a more post-human and even interspecies perspective. AI is an addition to the ecosystem that cannot be comprehended through human-centered thinking; it can be beneficial or perhaps harmful for humans, but most likely it will be neither savior nor doom, and neither are humans ever to be god-like kings of the world. On the contrary, we are interdependent and intertwined with everything else in the world.
Just as we humans cannot live without the gut bacteria in our stomachs or the bacteria living on our skin, we could think of AI from an interspecies perspective. Hayles talks about species-in-biosymbiosis to illustrate our dependence on the other beings on the planet, and about species-in-cybersymbiosis to talk about our dependency on technological devices. Although there is no general AI, Hayles emphasizes that we already live with, and are dependent on, digital devices that can act, change, update, and alter information and processes we rely on. From smart-home lightbulbs to stock-trading algorithms, AI, or nonconscious cognizers as Hayles calls them (Hayles, 2017), already changes the world we live in.
The key change in this post-human worldview and myth of AI as a part of the ecosystem lies in the fact that the development of AI is considered interdependent and intertwined with multiple planetary, social, cultural, economic and political systems. Furthermore, AI is considered from the embodied perspective, as in what AI processes feel like, and not only from economic or technical perspectives. For instance, instead of blindly opting for smart homes, we could ask what kind of affect the smart home promotes. Alternatively, instead of the instantly created AI portrait, we could discuss the value of time and context in making art. The hopeful rhetoric in such a system is that AI would neither create the dystopian visions of doom, nor generate riches for the selected few in the capitalist system or for those able to transform themselves into cyborgs. AI can be offered as one piece of the puzzle, possibly able to contribute to this world.

The itch of AI
CLA is an explicit way to open the myths behind our beliefs and thoughts on the future.
CLA deepens the discussion from the litanies of technological and scientific progress and takes other ways of knowing into account. It complements the litany-level and systemic understanding by also considering myths, philosophies, and socio-economic perspectives to form a more comprehensive view of futures. However, while CLA opens up alternative ways of thinking, the frame itself might have a bias towards remaining at the analytical level, in discussions, scenarios, and critique. Could CLA, or indeed similar methods of deconstruction and reconstruction, be internalised, assimilated into action and embodied in everyday life? How could the experiential level be discussed?
In the context of thinking about AI and digital technology scenarios, digi-grasping provides an action-oriented perspective. Digi-grasping is oriented towards the experiential level, to the thoughts and affects originating from the interaction between oneself and digital processes, thus offering an embodied level for CLA. For instance, active and intentional involvement with AI, be it playing with the aforementioned AI-portraits web app or testing other available AI services (such as the popular Ganbreeder app, which allows one to create surrealistic, photorealistic images with the help of Google's deep learning algorithm (Simon, 2018)), and then discussing those experiences, might offer interesting experiential insights into AI. By creating something with AI, we are not only looking at something, supposedly AI, from a distance, but truly creating something with it. The sensations and thoughts of this process are valuable as something tangible that can be further worked upon.
One of the big problems of AI, or any other complex digital process, is its abstract, invisible and often unintelligible nature: we often do not see, hear or smell these processes, as they may be happening in faraway data server farms or in the quietness of a black silicon chip. Even when we do get some feedback, it is often a representation, interpretation or visualisation of the actual process, as in, for instance, a progress bar on the computer screen. However, this does not mean that we do not experience digital processes, but rather that they are abstracted and often challenging to talk about. The discussion is made even harder by the technological skills and correct terms required to talk about these processes; the experiential side is denied, and digitality is thus only discussed in the theoretical domain. Therefore, the aim of digi-grasping is to empower us to talk about these real, but abstract, digital processes. The four modes of digi-grasping can be mapped to the levels of CLA, offering a point of departure, or a rupture, into alternative lines of thought. Whereas the ignorant remain ignorant, accepting the offered litany as is, awareness initiates a quest to see and understand the layers.
Questioning, on the other hand, challenges the layers, asking for the ethical and aesthetic connections in them: how do we define the problem, the solution, or even who the solver of the problem is?
Last, creating allows one to tune or reconstruct the layers in relation to the embodied experiences created. In other words, creating gives ownership, which in turn can inspire alternative, critical perspectives on the subject at hand (Kojonkoski-Rännäli, 2012, 2014). For instance, playing with the AI-portraits web app can start as quick fun, but through some intention the creating turns into questioning and awareness, while still remaining within the experiential self.

Discussion and future research
In this paper, we have used CLA to deepen the discussion around AI from the typical everyday litanies of AI as revolutionary and inevitable to the underlying worldviews and myths.
Furthermore, we have offered alternative ways to frame and understand the implications of AI, and thus worked towards broadening the discussion around AI and digitalisation more generally. As such, it could be noted that our critique is aimed not only at AI but more generally at the current challenges in digital technology and its negligence of the social, political, economic, cultural, and ideological perspectives of the digital. However, this paper does focus on AI: first, because it is the current hype and is thus used as a way to justify decisions; second, because AI does bring fundamental ontological questions to the table, some of which we have aimed to illuminate with the three examples.
One possible outcome of combining digi-grasping with CLA might be a more comprehensive, embodied take on challenging issues such as AI. As we mentioned at the beginning of this article, this paper aims to illustrate an approach to thinking about AI and futures by combining CLA with digi-grasping. As such, we have first offered some examples of alternative readings of AI using CLA and then offered some examples of how digi-grasping could be added to the analysis. We hope this paper inspires people to try out the approach we have described and hopefully create some truly unconventional views of the future.

Table 1. Deconstruction and reconstructions of AI using the causal layered analysis.

The four modes (ignorance, awareness, questioning, and creating) in digi-grasping offer a structure to discuss digital technologies (Dufva & Dufva, 2018). Ignorance does not mean lacking technological skills, but rather unawareness, willing or not, of the current state of digital processes in one's life. Thus, even a technologically skilled person may remain ignorant of the effects of digital processes. The three other modes, awareness, questioning and creating, are not a linear sequence into enlightenment, but rather a hermeneutic circle where the starting point can be any of the three (Figure 1).

In the example of trying out AI portraits, the action starts from creating (uploading a photo and waiting for the AI version of it, then possibly trying a new photo and figuring out what works and what does not), progressing to questioning (for instance, what did that process feel like, what is this photo and who has created it, what is its meaning) and then to a comprehension, or awareness, of the AI process. This process can then be iterated by trying new photos to get answers to new questions, or even by trying to do a similar thing oneself with the readily available tools (RunwayML, ml5.js, and Processing, to name a few). Furthermore, digi-grasping differs from other similar models by introducing creating as