What it is like to be a Polar Bear: how AI can be used in human/animal relations during the current extinction event.
Dr Sam Leach (Adjunct Research Fellow, University of South Australia; member of the RMIT AEGIS research group).
This paper discusses an installation work that I presented at Sullivan+Strumpf gallery, Sydney, in 2022. Viewers were invited to test themselves using AI to determine how closely they resembled a polar bear. A companion work used AI to give viewers a chance to see the world from a polar bear’s perspective by classifying objects in the field of view into categories that might be meaningful to a polar bear: food or mate. The work was accompanied by paintings based on compositions generated with AI. The model was trained using images of myself in my studio, augmented with images of friends and family and some stock images of polar bears. The training was deliberately poor and arbitrary, with the result that the algorithm detects resemblance to a polar bear in an unpredictable and arbitrary manner. In practice, this results in viewers attempting to mimic polar bears, trying different expressions and poses that increase their resemblance score. The motif of the polar bear was suggested by AI image generation, but this species’ status as a charismatic avatar of climate change seemed like an apt choice given the environmental impact of AI.
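The scoring step of a work like this can be sketched in a few lines. The sketch below is a hedged illustration, not the installation's actual code: it assumes some image model has already reduced a camera frame and a polar bear reference to feature vectors (the `viewer` and `bear_prototype` arrays are toy stand-ins for such embeddings), and shows how a cosine-similarity comparison could be rescaled into a 0–100 "resemblance" score.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def resemblance_score(viewer_features: np.ndarray,
                      bear_features: np.ndarray) -> float:
    """Map similarity onto a 0-100 'polar bear resemblance' score."""
    sim = cosine_similarity(viewer_features, bear_features)
    return round(50.0 * (sim + 1.0), 1)  # rescale [-1, 1] -> [0, 100]

# Toy stand-ins for embeddings a trained image model would produce.
viewer = np.array([0.9, 0.2, 0.4])
bear_prototype = np.array([1.0, 0.1, 0.5])
print(resemblance_score(viewer, bear_prototype))
```

With a deliberately poor and arbitrary training set, the embeddings themselves are unreliable, so a score like this moves unpredictably as viewers change expression and pose, which is the behaviour the work exploits.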
To put this work in context, I will outline some of the major ethical issues being considered in AI today, and some of the specific ways that AI is impacting non-human animals. The work ‘Polar Bear Test’ is intended to foreground the considerations and interests of non-human animals in the development and deployment of AI.
AI systems are already widespread and deploying rapidly. DALL-E and prompt-driven image generation have recently captured public attention and imagination, but these moments of public interface represent a tiny amount of the activity underway in the AI field. The technology is developing quickly and easily outpaces not only attempts at regulation but also efforts to critically assess its political and cultural implications[i]. Significant work has been done to identify and address prejudice, bias and oppressive applications of this technology; however, almost all of this work has focused on the relationship between humans and these systems. Work has been done on the impact of AI on the environment and climate change, for example Kate Crawford’s Atlas of AI[ii] and Crawford and Joler’s Anatomy of an AI System (2018). However, while AI massively affects non-human animals, the ethical and environmental impacts on them are rarely discussed and are almost absent from ethical guidelines in AI development. In a 2021 meta-analysis of AI ethics, Hagendorff found just one paper that considered non-humans[iii].
1. Ethical Issues with AI:
1.1 Prejudice and bias
Well-documented incidents of bias and unfairness to minorities, women and people of colour have been seen in AI systems used for hiring, medical applications and facial recognition, among others[iv][v][vi]. These biases reflect the data on which the algorithms were trained, as well as the perspectives of the engineers who design and deploy the models.
There are examples of AI being used to overcome human bias, especially in HR and medicine, and there are massive efforts to address and improve the ethics of AI in relation to these problems[vii]. However, the same cannot be said for biased decision making between humans and non-humans, and especially for species bias in relation to farmed vs wild vs companion animals[viii], which I will return to below.
1.2 Goal and value misalignment, trust and explainability
Goal misalignment describes the situation where AIs or artificial agents act in a way that does not reflect the intended goal of the programmer, leading to unintended side effects (e.g. discriminating against people of colour in hiring processes) or reward hacking. Nick Bostrom’s paperclip apocalypse is frequently invoked as a concern here: in this thought experiment, an AI tasked with maximising paperclip production converts the world’s resources, including humans, into paperclips[ix]. A real-world example involves self-driving cars with a goal to get from A to B: in 2018 a jaywalking pedestrian was killed because the AI did not have a category for an out-of-place pedestrian.[x]
Steps to address these problems have resulted in the so-called Asilomar Principles, developed and signed by around 5,700 leading AI researchers[xi]. However, these principles focus only on humans: “(10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation. (11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.”[xii]
Trust in AI systems is still somewhat lower than trust in human decision making, though this depends on the context[xiii]. “There are domains that use black box prediction models since these domains are either well studied and users trust the existing models or because no direct consequences threaten in case the system makes mistakes, for example a bad song suggestion on a music streaming service has relatively low consequence while missile detection systems are well understood” (Doshi-Velez & Kim, 2017).
The notion of consequences here is again strictly anthropocentric, meaning that trust may attach to systems which deliver potentially catastrophic outcomes to non-humans despite being poorly understood or having low levels of explainability.
1.3 Environmental impact of the technology
Digitisation is often presented as a clean solution, and there is scope for AI to improve efficiency in resource use and allocation. However, as Crawford, among others, points out, the environmental impact of placing computing technology in a vast array of items and objects is significant. Crawford describes this as the “mineralogical layer” of AI and quotes David Abraham: “99.8% of earth removed in rare earth mining is discarded as waste... dumped back into the hills and streams”.
Data centres are estimated to consume around 200 TWh of electricity annually[xiv], about the same amount of energy as Argentina or Thailand, reflecting the vast energy requirements of huge data centres and the intensive operations required to train and run large AI models[xv]. Training GPT-3, the model behind DALL-E, is estimated to have emitted up to 552 tonnes of CO2[xvi][xvii], similar to the amount of carbon released by a SpaceX launch; coincidentally, both are owned by the same man[xviii].
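The 552-tonne figure can be sanity-checked with simple arithmetic. Patterson et al.[xvi] estimate roughly 1,287 MWh to train GPT-3; combined with an assumed grid carbon intensity of about 0.429 kg CO2e per kWh (a value consistent with their published estimate, not a figure from this paper), the emissions follow directly:

```python
# Back-of-envelope check of the GPT-3 training emissions figure.
# Assumed inputs, taken from Patterson et al. (2021)'s estimates:
energy_mwh = 1287        # estimated energy to train GPT-3, in MWh
grid_intensity = 0.429   # assumed grid carbon intensity, kg CO2e per kWh

energy_kwh = energy_mwh * 1000
emissions_tonnes = energy_kwh * grid_intensity / 1000
print(f"{emissions_tonnes:.0f} tonnes CO2e")  # roughly 552
```

The same two-factor calculation (energy consumed times carbon intensity of the supplying grid) underlies most published training-emissions estimates, which is why figures vary so widely with the location of the data centre.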
2. How Non-human Animals are Impacted by AI:
2.1 Use of animals in development of AI
2.1.1 Animals are used in AI research and development: AI models have historically been intertwined with research into the brains and cognition of non-human animals. Large numbers of animals have been killed or subjected to vivisection to develop the insights that led to contemporary AI systems. AI researchers continue to search for ways to mimic the learning processes of non-human animals, particularly “innate mechanisms” or “behavior programs”[xix]. Even if new and ongoing research does not necessarily involve dissecting animal brains, animals are still expected to be subject to experimental procedures to further AI research.[xx]
2.1.2 Animals are a benchmark for AI performance: the success of certain AI models is determined with reference to the abilities of animals. In these cases the studies are typically specialised, goal-directed tasks focusing on solving a problem or performing a task motivated by a food reward. Bossert and Hagendorff[xxi] point out that this approach typifies animal intelligence as a process that can be decoded and replicated by a disembodied processor, a new manifestation of the old Cartesian beast-machine.
The logo of the Animal AI Olympics, a competition to test AI, depicts animals composed of cogs and machine parts[xxii].
2.2 AI is being used directly on animals (speciesism)
Speciesism in the deployment and discussion of AI is rife. The difference in the treatment of animals depending on whether they are wild animals, pets or farm animals is marked.[xxiii]
2.2.1 Pets: Many AI applications have been developed for pets, involving feeding, training and efforts to develop inter-species translators.[xxiv]
2.2.2 Wild Animals: One of the more publicised interactions between AI and animals is the detection and monitoring of endangered species, including measures to limit poaching. Activities such as taking censuses of whales and polar bears do have potential benefits, though some question whether, like humans, non-human animals might have a right to privacy. AI-controlled drones are being used to hunt animals, especially invasive species such as possums in New Zealand[xxv], but also for recreational hunting (https://www.outdoorlife.com/hunting/spartan-forge-hunting-app/).
2.2.3 Farm animals: By far the largest number of animals interacting with AI are farm animals in industrial agriculture, especially factory farming[xxvi]. AI models are used to monitor the health and productivity of animals, indicating when food, medicine or death should be administered. They also interact directly with animals: clipping marks onto them, making noises to modify their behaviour, and delivering electric shocks when animals move outside designated areas.[xxvii]
AI is also being used in the development of targeted pesticides and control mechanisms. This may ultimately reduce pesticide use, but again the goal is entirely anthropocentric, so side effects and the overall impact on non-humans are not a factor.
2.3 Animals are indirectly impacted by AI
In addition to the environmental impacts discussed previously, animals are indirectly impacted by AI in various ways.
2.3.1 Automation, such as self-driving cars, automated household cleaning and garden maintenance
As discussed, value and goal alignment problems persist: automated objects such as self-driving cars, house-cleaning robots and gardening devices have no facility for considering the rights or needs of individual animals as they undertake their activities. A car might avoid a collision with an animal to prevent injury to the occupants or damage to the vehicle, but not to prevent the death of the animal itself.[xxviii] The automation of farming also allows further detachment and distancing from the realities of farming[xxix].
2.3.2 Datasets and recommendations: Most of the large datasets contain images of animal cruelty, and algorithmic recommendation systems can promote videos of animals being treated cruelly. Conversely, image datasets of farmed animals such as hens and pigs typically show free-range, healthy animals and very rarely the confinement to which most of these animals are subject. Indeed, legislation is specifically designed to limit the availability of such images.[xxx]
Conclusion:
Animals are frequently referenced as tools, a source of training data and a benchmark in the development of AI. However, a pressing concern in the interactions between AI and non-human animals is the near-total disregard in AI for the rights and concerns of non-human animals. This omission extends to the field of AI ethics, where a metastudy showed that only a small number of papers address this area. Artists have a role to play in considering this gap.
My work Polar Bear Test aims to have people identify with a non-human animal in the context of a familiar application of AI, specifically an application known to be prone to bias, poor training and unethical deployment.
[i] Kane, A. (2021). Regulating AI: Considerations that Apply Across Domains. In Robotics, AI, and Humanity (pp. 251-259). Springer, Cham.
[ii] Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence . Yale University Press.
[iii] Hagendorff, T. (2021). Blind spots in AI ethics. AI and Ethics, 1-17.
[iv] Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica.
[v] Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.
[vi]Misty, A. Microsoft Creates AI Bot – Internet Immediately Turns it Racist, 2016. https://socialhax.com/2016/03/24/microsoft-creates-ai-bot-internet-immediately-turns-racist/ (accessed 17 October 2022).
[vii] Lin, Y. T., Hung, T. W., & Huang, L. T. L. (2021). Engineering equity: How AI can help reduce the harm of implicit bias. Philosophy & Technology, 34(1), 65-90.
[viii] Hagendorff, T., Bossert, L., Fai, T. Y., & Singer, P. (2022). Speciesist bias in AI--How AI applications perpetuate discrimination and unfair outcomes against animals. arXiv preprint arXiv:2202.10848.
[ix] Bostrom, N. (2003). Ethical issues in advanced artificial intelligence. Science fiction and philosophy: from time travel to superintelligence, 277, 284.
[x] Christian, B. (2021). The alignment problem: How can machines learn human values?. Atlantic Books.
[xi] https://futureoflife.org/ai-principles/, accessed on 11 April 2021
[xii] Ziesche, S. (2021). AI Ethics and Value Alignment for Nonhuman Animals. Philosophies, 6, 31. https://doi.org/10.3390/philosophies6020031
[xiii] Kern, C., Gerdon, F., Bach, R. L., Keusch, F., & Kreuter, F. (2022). Humans versus machines: Who is perceived to decide fairer? Experimental evidence on attitudes toward automated decision-making. Patterns, 3(10), 100591.
[xiv] https://www.energycouncil.com.au/analysis/big-data-a-big-energy-challenge/#:~:text=Data%20centres%20consume%20around%20an,cent%20of%20global%20electricity%20demand. (accessed 10/11/22)
[xv] Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence . Yale University Press.
[xvi] Patterson, D., Gonzalez, J., Le, Q., Liang, C., Munguia, L. M., Rothchild, D., ... & Dean, J. (2021). Carbon emissions and large neural network training. arXiv preprint arXiv:2104.10350.
[xvii] Anthony, L. F. W., Kanding, B., & Selvan, R. (2020). Carbontracker: Tracking and predicting the carbon footprint of training deep learning models. arXiv preprint arXiv:2007.03051.
[xviii] https://www.theguardian.com/science/2021/jul/19/billionaires-space-tourism-environment-emissions. (accessed 18/11/22)
[xix]Zador, A.M. (2019) A critique of pure learning and what artificial neural networks can learn from animal brains, Nature Communications 10 3770.
[xx] Bossert, L., & Hagendorff, T. (2021). Animals and AI. The role of animals in AI research and application–An overview and ethical evaluation. Technology in Society, 67, 101678.
[xxi]Bossert, L., & Hagendorff, T. (2021). Animals and AI. The role of animals in AI research and application–An overview and ethical evaluation. Technology in Society, 67, 101678.p.5
[xxii] Animal Olympics Logo https://github.com/beyretb/AnimalAI-Olympics (accessed 17/11/22)
[xxiii] Hagendorff, T., Bossert, L., Fai, T. Y., & Singer, P. (2022). Speciesist bias in AI--How AI applications perpetuate discrimination and unfair outcomes against animals. arXiv preprint arXiv:2202.10848.
[xxiv] https://www.lifewire.com/ai-could-help-you-understand-animal-speech-5221922
[xxv] https://www.theguardian.com/environment/2020/mar/14/poison-laden-drones-to-patrol-new-zealand-wilderness-hunt-pests-aoe (accessed 18/11/22)
[xxvi] Carpio, F., Jukan, A., Sanchez, A. I. M., Amla, N., & Kemper, N. (2017, November). Beyond production indicators: A novel smart farming application and system for animal welfare. In Proceedings of the Fourth International Conference on Animal-Computer Interaction (pp. 1-11).
[xxvii] Singer, P., & Tse, Y. F. (2022). AI ethics: the case for including animals. AI and Ethics, 1-13.
[xxviii] Singer, P., & Tse, Y. F. (2022). AI ethics: the case for including animals. AI and Ethics, 1-13.
[xxix] Hagendorff, T., Bossert, L., Fai, T. Y., & Singer, P. (2022). Speciesist bias in AI--How AI applications perpetuate discrimination and unfair outcomes against animals. arXiv preprint arXiv:2202.10848.