Cities 1.5

Dark Machines: AI, Climate Action, and the Future of Our Cities

University of Toronto Press Season 5 Episode 2

We live in the age of technology…in the blink of an eye, the Internet and social media have created new opportunities, jobs, and possibilities for connection. But they have also fuelled polarization, persecution, and real-world violence. Artificial intelligence, or AI, promises to turbocharge this revolution. But many questions remain unanswered by the advocates of these new technologies. Can we afford to let AI use infinite amounts of energy? Is it possible to create planetary responsible AI, or is that just a pipe dream? And if the need arises, how can we resist these dark machines?


Image credit: This image was AI-generated and does not depict real events.


Featured guest:

Victor Galaz is an academic and author whose expertise lies at the intersection of governance, climate, and technology. He is an Associate Professor in Political Science at the Stockholm Resilience Centre and a Program Director at the Beijer Institute of Ecological Economics. His most recent book is Dark Machines: How Artificial Intelligence, Digitalization and Automation is Changing our Living Planet, and he is also co-founder of the Biosphere Code.


Links:

AI and the Future of Cities - Fortune 

The workers already replaced by artificial intelligence - BBC

AI voice cloning tools imitating political leaders threaten elections - The Independent

New AI Now Paper Highlights Risks of Commercial AI Used In Military Contexts - AI Now Institute

A.I. has a discrimination problem - CNBC

Generative AI’s environmental impact - MIT 

The ‘AI divide’ between the Global North and the Global South - World Economic Forum


If you want to learn more about the Journal of City Climate Policy and Economy, please visit our website: https://jccpe.utpjournals.press/

Cities 1.5 is produced by the University of Toronto Press and supported by C40 Cities and the C40 Centre for City Climate Policy and Economy. You can sign up for the Centre newsletter here: https://thecentre.substack.com/

Our executive producers are Calli Elipoulos and Peggy Whitfield.

Produced by Jess Schmidt: https://jessdoespodcasting.com/

Edited by Morgane Chambrin: https://www.morganechambrin.com/

Music is by Lorna Gilfedder: https://origamipodcastservices.com/

[Cities 1.5 theme music]

David 00:01

I am David Miller and you’re listening to Cities 1.5, a podcast exploring how cities are leading global change through local climate action. [Cities 1.5 theme music fades out]

[whimsical music] We are entering a crucial decision-making era. Will we take urgent action to avert the worst impacts of climate breakdown, or will we continue making the same catastrophically bad choices that have already been over a century in the making? We’re also entering an era of new technologies. AI, machine learning, algorithms and more are already shaping our world. They’re primarily owned by rich Global North tech giants, and we’ve been sold a promise that these technologies will make life better for all, and that they will revolutionize the way we protect both people and planet. Many cities are already trying to harness this tech to speed up sustainability actions and improve the wellbeing of residents. [music ends]

[fast rhythmic music] But there are already concerning signs that all may not be as it seems, and maybe that shouldn’t surprise us, since this wouldn’t be the first time that outcomes depend largely on who controls the technology. Workers are already being replaced with ChatGPT. Fake images of extreme weather events are being generated and shared with the public. Politicians and public figures are having their voices and images cloned. Governments are terminating staff using AI without human oversight. Militaries are using technology to surveil civilian populations. In some instances, AI decision making has been found to discriminate against marginalized groups, and this could have life-changing or even life-threatening outcomes. AI requires vast amounts of energy and resources, exponentially ballooning a global bill we couldn’t afford even before this tech was on the scene. Disinformation is on the verge of being turbocharged, with generative AI producing images, voice, and text, making it impossible for the average person to tell the difference between what is real and what is fake. In fact, part of this episode was written using AI. Can you tell which? I’ll reveal all at the end. [music ends]

** The bolded text below was written by AI **

[soft rhythmic music] We often hear that artificial intelligence, machine learning and automation will help us combat the climate crisis, optimizing energy grids, advancing climate modeling, and making industries more efficient. But what if the reality is far more complicated? What if rather than slowing down climate change, these technologies are actually accelerating it? Are we handing over too much power to systems we don’t fully understand? Are we building a world where technology deepens inequality, accelerates extraction and worsens climate risks? What can we do to steer these powerful tools towards sustainability rather than catastrophe? And crucially, is AI an ally in the fight against the climate crisis or a hidden threat? [music ends]

**AI script ends here**

[fast rhythmic music] To find the answers to these questions, I’m speaking to one of the world’s leading experts in governance, climate, and emerging technologies, Victor Galaz, in a special standalone interview. As well as being an academic, Victor is also the author of the recently published Dark Machines: How Artificial Intelligence, Digitalization and Automation is Changing our Living Planet. His book speaks to many of the questions that need answering if we’re to ensure that these new technologies are a net positive for the planet. [music ends]

 

Victor Galaz 04:30

 

[phone rings] [whooshing] My name is Victor Galaz, Associate Professor in Political Science, and I’m calling in from Stockholm, Sweden. [handset clicks]

 

David 04:38

 

Victor, thank you so much, first of all for your work, but secondly for joining us today on Cities 1.5.

 

Victor Galaz 04:45

 

My pleasure. Thank you very much.

 

David 04:47

 

Could you just say a bit about your background and who you are so that our listeners get a sense of how we got here?

 

Victor Galaz 04:54

 

Of course. So, I’m a political scientist. I’m based at the Stockholm Resilience Centre, Stockholm University. I’m also a program director at the Beijer Institute of Ecological Economics based at the Royal Swedish Academy of Sciences. My research deals with essentially global risks, globally connected risks, and I’ve worked on many different aspects of it, but mostly at the interface between technology, people, nature, and climate. So, I’ve studied how we respond to pandemics, I’ve studied the risks involved with geoengineering technologies, I’ve studied the connection between the financial sector and climate tipping points, and in the last five or six years I’ve been interested in artificial intelligence and what it means in terms of both opportunities and risks.

 

David 05:46

 

You know, for most of us, artificial intelligence began with ChatGPT. You know, my friends are using ChatGPT to compose reference letters and things like this, but there’s a much bigger backstory to how we got to this point. Can you talk about your perspective on the influence of AI, how long it has mattered, and why?

 

Victor Galaz 06:14

 

I guess, first of all, we need to define what artificial intelligence is, right? And there are so many different definitions of it. But a very simple way to look at it: they are essentially machines that are able to do things that we would normally associate with human intelligence, like visualize things, write things, analyze things, et cetera. And I think—I mean, the importance of AI became very clear to me around 10 years ago, and at that time we talked about algorithms or algorithmic systems. So, things that help steer technologies, things that define what we see online in social media, et cetera, and that, to me, was sort of the beginning of trying to understand how technology shapes sustainability in the long term. Now, in the last years, what we’ve seen, and why people have become so excited, is the potential of these technologies through what we now call generative AI. So, what you call ChatGPT is just one of a class of AI systems normally called generative AI, able to produce things we couldn’t produce before in this way: synthetic text, music, code, images, et cetera. But that is, really, just the surface of a much deeper, I would say, technological change. So much of what we have nowadays, what we normally think of as hardware or physical artifacts, like a car or a speaker or a mobile phone, is infused with some sort of artificial intelligence that makes it operate the way it does. So, to me, there’s no way we can discuss sustainability, climate action or anything related to the SDGs without really getting a grip on these new technologies.

 

David 08:04

 

I want to come to the title of your book and talk about dark machines and your definition of what they are, but let’s just talk about algorithms for one more minute. You’ve been paying attention to this for at least a decade. Should we worry about algorithms? Are they a good thing? A bad thing? Are they neutral?

 

Victor Galaz 08:25

 

I would never say that they’re neutral. If you ask a computer scientist, they will probably say that they are neutral, right? I mean, it depends on how you use them. But every technology is used in a real-world context. A real-world context that is normally highly socially unequal, where you have diverse economic incentives that drive certain types of innovations and certain types of applications. So even though the design of a simple piece of computer code, as you would have in an algorithm, might seem neutral to you, as soon as you start to apply it in the real world, it will have impacts. Some of them will be impacts that we don’t mind having, I mean, they might look neutral to us, and other times it will lead to discriminatory outcomes, or it could create harms. It could lead to the need for more computing power and energy, water, hardware, et cetera. So, to me, saying that algorithms or technologies are neutral ignores the fact that these are used in the real world, and what that means.

 

David 09:33

 

[whimsical music] Can you give an example of something you’ve seen with the algorithms that might cause concern and suggest that whoever’s investing in these algorithms, you know, dictates the outcomes to some extent?

 

Victor Galaz 09:46

 

This is why I talk about dark machines in my book and not just AI, because I think that the topic is bigger than just deep learning, neural network types of systems. So, let’s talk about consensus algorithms. People probably don’t know what that is, but if you talk about blockchain or cryptocurrencies, people know what that is, right? That is not an AI system. It’s a simple consensus algorithm, but by using it and applying it you consume a lot of energy. So, I mean, that’s one of the implications of using that kind of system. Then you talk about OpenAI or ChatGPT, so these are large language models, and large language models have very interesting capabilities. I mean, they do incredible things and then they also do stupid things. But what we’re seeing out there in reality is a [inaudible 10:37] to create the biggest possible large language model, like billions of parameters, and this requires an incredible amount of computing power and energy and resources, right? So that race towards bigger and bigger models, we can now see it in the carbon emissions from these companies. For a very long time, when you looked at the data on digitalization, emissions managed to basically stay flat, even though we processed more and more information globally. [music ends]

Thanks to this new type of system, generative AI, now we’re seeing carbon emissions from this sector increase suddenly, and they could increase quite rapidly. So this is about creating a technology that might have some social benefits but that concentrates economic benefits in the hands of very few tech companies, because you need so much computing power and skill, while all the risks and harms are externalized: the climate impacts, the kind of hard labor that is behind labeling all this data, or biased algorithms that make decisions that harm people of color or women, et cetera. So, I mean, that to me is just another example of an interesting technology that has downstream effects that can be quite harmful.
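To make Victor’s point about consensus algorithms concrete, here is a minimal sketch (illustrative only, not from the episode; the block data and difficulty values are made up) of why proof-of-work consensus burns energy: miners must brute-force a hash below a target, and each added bit of difficulty roughly doubles the work.

```python
# Toy proof-of-work loop: find a nonce whose SHA-256 digest falls below
# a target. The expected number of hash attempts (and so the energy
# spent) roughly doubles with each extra bit of difficulty.
import hashlib

def mine(block_data: str, difficulty_bits: int) -> int:
    """Return the number of hash attempts needed to find a valid nonce."""
    target = 2 ** (256 - difficulty_bits)  # smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce + 1  # total attempts made
        nonce += 1

for bits in (8, 12, 16):
    attempts = mine("example block", bits)
    print(f"difficulty {bits} bits: {attempts:,} hash attempts")
```

Real blockchains run puzzles vastly harder than this, repeated across millions of competing machines, which is where the energy bill Victor describes comes from.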

 

David 12:02

 

Sort of vaguely analogous to the fossil fuel industry, which gets very rich while the rest of us suffer from the pollution and the impacts of climate change it causes. What economists call externalities.

 

Victor Galaz 12:17

 

I wouldn’t even say ‘vaguely,’ David, I would say it’s obvious. So, another example: releasing these large language models out there in the wild, which, again, I mean, some people find quite exciting, and they’re quite useful for some things, but they are creating harms downstream. And I study misinformation online, so academics are there to try to clean up the mess that the big tech companies are creating through these systems, right? And we do it in our free time [chuckles] or in the limited research time that we have, and with limited resources. So, I think it is very, very parallel, actually, to what the fossil fuel industry has done for decades.

 

David 12:55

 

You also alluded to some social harms, not just the environmental challenges of the increased energy consumption. Can you just give a quick example of something you’ve seen?

 

Victor Galaz 13:07

 

The best-mapped examples, and this is not my own research, come from the last five to 10 years, mainly from the US and the UK: cases where decision makers, let’s say in policing or the allocation of social welfare resources, use predictive AI modeling to shift part of the decision-making from humans to machines. Because it’s more effective, right? You want to remove human labor from it, and you want to make it more efficient, so you use AI systems instead. There are so many studies that show that these systems tend to have a bias because of the data they’re trained on and how they are designed. And that bias is systematically, I mean, against, again, women or people of color, and that’s just the result of crap data in, crap decisions out. So those kinds of biases are quite well mapped.
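To make the “crap data in, crap decisions out” mechanism concrete, here is a toy sketch (the groups, counts, and decisions are entirely hypothetical, not from any study Victor cites): a naive model trained on historically biased decisions simply reproduces the bias.

```python
# A "model" that learns the majority past decision for each case.
# Because the historical data systematically denied qualified group-B
# applicants, the trained model inherits exactly that bias.
from collections import defaultdict

# Hypothetical historical records: (group, qualified, past_decision).
history = (
    [("A", True, "approve")] * 80
    + [("A", False, "deny")] * 20
    + [("B", True, "deny")] * 50   # qualified, yet mostly denied
    + [("B", True, "approve")] * 30
    + [("B", False, "deny")] * 20
)

# "Training": count past decisions per (group, qualified) pair.
counts = defaultdict(lambda: defaultdict(int))
for group, qualified, decision in history:
    counts[(group, qualified)][decision] += 1

def predict(group: str, qualified: bool) -> str:
    """Predict the majority historical decision for this kind of case."""
    options = counts[(group, qualified)]
    return max(options, key=options.get)

# Two equally qualified applicants get different outcomes.
print(predict("A", True))  # approve
print(predict("B", True))  # deny -- the historical bias is learned
```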

 

David 14:03

 

Can you talk about this idea of dark machines and your concepts of how we’re at risk from these dark machines?

 

Victor Galaz 14:14

 

So, the reason why I talk about dark machines, I think one of the main reasons is that it’s very seldom, again, in the real world that you interact with just an AI system. Whenever you use a cell phone, it’s a combination of different systems. Some of them might be AI, others are simpler algorithms, some are just the digital interface. Yeah, exactly. So, you’re showing me your cell phone now. Or in [inaudible 14:38] some people might add blockchain to it. So, I thought the concept of dark machines is more fair: it’s a combination of different digital systems in interplay. That’s one thing. And then, of course, why would you call them dark? You could just call them, I don’t know, complex machines. But I wanted to make a point that there’s something about how these systems are set up that makes them dark. So first of all, systems that are based on deep learning, or so-called neural networks, are opaque. We know they’re able to do many things, but it’s difficult to gain insight into why they reach certain types of decisions, so they’re opaque. The other thing is that they can be opaque because of company secrecy, so companies are not open about how their systems operate. And the last reason why I chose to call them dark machines is that there’s an interesting literature on something called dark patterns. These are user interfaces and technologies that we use, like in webpages or digital services, that are designed to nudge us into using these services in ways that maximize the profits of companies instead of our wellbeing, right? So, I wanted to bring that upfront very clearly, in contrast to other conversations that focus more on AI for good or AI for the planet, which I felt was quite naive.

 

David 16:07

 

So, it’s dark in the sense that we can’t see inside the core of what’s going on here?

 

Victor Galaz 16:14

 

It’s extremely challenging. It’s both for technical reasons, but also because they’re just hidden behind what companies are willing to share.

 

David 16:24

 

[fast rhythmic music] You give an interesting example in your book of information gathering. I think it’s Facebook who used to have like or not like—

 

Victor Galaz 16:33

 

Mm-hmm.

 

David 16:34

 

—and now has different emoticons, and that’s all designed to gather information and sell it to people who want to sell products. I just found that to be quite an interesting example of how manipulative, perhaps, some of these companies can be.

 

Victor Galaz 16:51

 

Yeah. And I think—I mean, it’s just how the digital space expands into every aspect of our lives, right? So, in the book I talk about these technologies going broad, into all sectors, and then also going deep, trying to collect data about our deepest emotions. And I mean, that is an extremely controversial field of research called emotion AI, right? Trying to collect data about people’s emotions and build predictive models of that. So, in my book, I talk about the decision to go from a simple like/dislike emotion button in social media to a much more nuanced and very carefully designed, I should say, way to collect data about like, love, angry, sad. And I mean, to us it’s kind of neat, because then you can really express what you feel whenever a friend posts an update, but it’s super useful data for a company that’s interested in knowing what drives us. [music ends]

 

David 17:55

 

That’s interesting to me, because I’d understood that in medical ethics you have to have the informed consent of the people being experimented on, but this is kind of like a big experiment without any informed consent at all.

 

Victor Galaz 18:07

 

[sighs] I mean, it depends on how you define informed consent. I guess a company would say, “Well, I mean, you abide by the terms of use, right? Whenever you download our app.” But of course, most people will probably not be familiar with how that data is collected, emotion data, and how it is used or sold.

 

David 18:26

 

Because it’s dark?

 

Victor Galaz 18:28

 

Because it’s dark, yes. One of my main arguments is that the idea that the converging forces of a growing climate and planetary crisis and technological development through artificial intelligence and automation will evolve to act synergistically for the benefit of people and the planet is not only naive, but also dangerous. On the contrary, I elaborate how AI and associated technologies may lead to accelerated discrimination, automated inequality, and augmented diffusion of misinformation whilst simultaneously amplifying risks to people and the planet.

 

David 19:06

 

Can you talk a bit about that statement? It’s pretty powerful. Why do you feel the technologies could be such a threat, and why raise it now?

 

Victor Galaz 19:16

 

So, I think one of the most common questions I get nowadays is, “Is AI good or bad for climate action?” And I think, of course, it’s going to be both, or the standard answer would be, “It depends on how you use it, right? I mean, you can use it for good and you can use it for bad.” And this is a narrative that companies like, governments like, some UN agencies like. What I’m asking in the book, and what I’m questioning, are the much tougher questions that relate to the political economy of AI and all things digital. Who’s in the driver’s seat in developing these technologies? We know it’s the Global North; it’s the major big tech companies. What are the incentives of these companies or agents? What kind of innovations are we likely to see from those in the driver’s seat at the moment? And, of course, those are questions that in the end define what the innovation landscape looks like for AI. So, what are the results of this? I know this sounds vague, but I think the result is that we will see a couple of exciting, interesting, nice-to-have local AI innovations, right? Some of them will help us monitor biodiversity better in some locality, some of them will help us optimize traffic flows in the city. We might increase energy efficiency in a couple of buildings or data servers. But at the same time, and this is where my argument comes in, at a much bigger scale we will see other things. We will see the use of AI services and cloud services by the fossil fuel industry, which will make them more profitable, and they will be able to extract more fossil fuel. The AI industry itself will consume more energy and more resources, water and space. You will have a race to create bigger and bigger large language models or generative AI models that consume this energy. We will see, and we’re seeing that already to some extent, amplification of misinformation, because it becomes much easier to mass produce synthetic content.

[whimsical music] And, as I mentioned before, as soon as you start to use predictive AI systems to help guide decisions in, I don’t know, the allocation of health resources, social welfare, or predictive policing, we know for a fact that these create harms for minorities and women, et cetera. So, I mean, that to me is the main argument. We will see interesting things. As a scientist, I see so many applications for science, but the whole incentive structure, what’s driving AI innovation, is something very different from what we need.

 

David 21:58

 

What’s driving it ultimately is the financial success of the big tech companies who can afford to invest in it.

 

Victor Galaz 22:05

 

You’ve put that much more clearly than I did, David.

 

David 22:07

 

I’ve had some practice.

 

Victor Galaz 22:09

 

And, I mean, we—and we’ve seen this before, right? I mean, it’s not unique to the AI industries, it’s just how economies work. You know, companies will innovate in spaces where they see they can grow, right? And this is where they see that they can grow at the moment. [music continues then ends]

 

David 22:31

 

[soft rhythmic music] This episode of Cities 1.5 is produced by University of Toronto Press, with generous support from C40 Cities. Want more access to current research on how city leaders are approaching climate action? We also publish the Journal of City Climate Policy and Economy. Our mission is to publish timely evidence-based research that contributes to the urban climate agenda and supports governmental policy towards an equitable and resilient world. The journal serves as a platform for dynamic content that highlights ambitious near-term climate action, with a particular focus on human-centered solutions to today’s most pressing climate challenges. To read the latest issue, visit jccpe.utpjournals.press or click on our link in the show notes. [music ends]

You spoke about potential negative impacts. Let’s talk about that in—a bit more and maybe with some examples. Are there potential negative impacts you could see on employment or poverty or inequality or, you know, other societal issues we already have in our economic system?

 

Victor Galaz 23:48

 

The challenge here is that, I mean, this latest boom of AI and these big investments are quite recent, I would say, so I think it is a little bit too early to say. But there are a couple of things that I do flag in the book, and I think, from my point of view, I have a very short case study on digital farming and food production that illustrates some of the issues. Some countries and some tech companies have become more interested in food production, and, of course, the narrative is that we need to increase food production because we have a growing population and a limited amount of resources. So, you digitalize things, you start to develop predictive AI modeling, et cetera. Now, again, the question is, “Who can afford to use these technologies? Who’s going to be left out of the potential?” We know, also from studies, that to be able to use these technologies in a way that actually makes sense economically, you need to do that on very large farms, and also preferably in monocultures. That’s where it makes sense to automate a lot of the labor. So, where do we end up? We end up with larger and larger farms, monocultures, and automating away small-scale family farms and maybe also migrant workers. So that’s the logic. You’re aiming to do something good but, in the end, the nature of the technology creates all these possible risks, ecological and social. And that’s just for digital farming, so what happens when you think about forestry, digital forestry, or coastal management? What happens in cities when you begin to apply these technologies? Who’s going to benefit from that and who’s going to be left out? I think these are all questions that we don’t really have answers to at the moment, but if you look at the evidence and some of the case studies, I see many red flags popping up here and there that we don’t really talk about, because we’re so optimistic about what these technologies can do.

 

David 25:55

 

Well, you can sort of guess in the farming example that small landowners aren’t going to win that economic battle. So, you can kind of see that existing trends towards the centralization of ownership of arable land in large public and privately held corporations will be exacerbated. That would be a reasonable guess, for me at least.

 

Victor Galaz 26:22

 

I think in the end it depends on how you design your AI innovation policies. Because I know for a fact that there are many other types of experiments and uses of AI that are designed to work from the bottom up, right? So, you co-create AI systems in collaboration with local farmers. You use open science, open platforms, open data, and you support what they want to do with their farms, right? And that way they become empowered through technology. Those projects exist as well, right? But what’s the scale of those projects compared to the bigger mergers between big ag and big tech, to some extent with support from governments? I mean, the scales are going to be quite different in terms of implementation.

 

David 27:10

 

It’s a fascinating example because it speaks, really, to the politics: who decides, who invests, and where the investment comes from. You know, one of the things that leaps to my mind in that context is that this seems to also reinforce the imbalance between the Global North and the Global South, or the global majority, however you want to describe the people who live in countries that don’t have access to the majority of capital. Is there that sort of imbalance potentially happening as well?

 

Victor Galaz 27:42

 

For sure. For sure, that is happening. Coming from science, and collaborating in the sustainability domain with people in this space, there are a lot of good things happening too, right? I mean, there are many ways that you can use these technologies to empower people, to help communities adapt to climate change in better ways. It’s just that we’re struggling to catch up in terms of resources, both personnel and compute. I mean, everyone in my community talks about the need for compute, and that is quite costly at the moment. We’re behind all the time, and I think the tech sector is so big and so powerful that it’s driving how AI innovation is being done at the moment in many countries.

 

David 28:29

 

[soft rhythmic music] The tech sector is also a little bit notorious at the moment, at least in the United States and in the general discussion, for how it deals with online misinformation. We’ve delved into this in previous episodes of Cities 1.5, in particular with respect to climate action; mis- and disinformation campaigns are rife. How do dark machines affect that, and do you have any examples of how they are already supporting the spread of climate disinformation? And maybe any thoughts about how we can use the good side, the positive potential, of AI to help people, particularly urban residents, safeguard themselves against these kinds of disinformation attacks? We see it very much in Los Angeles at the moment around the wildfires. [music ends]

 

Victor Galaz 29:24

 

First of all, I think whenever we talk about the spread of misinformation and disinformation, we should be very clear that this is never just a purely technological question, right? It’s a social question, in the deepest sense. It’s a political question that interacts with technology, right? So, you can never fix this through technology, but technology does play a role. For example, there is evidence, and quite some discussion, of large social media companies offering advertising space online to the fossil fuel industries or other lobby organizations that spread misleading claims about climate change. So, whenever you typed into the bar in your search engine, you’d get sponsored results at the top that were incorrect. I think some of these platforms have mitigated that lately, but, I mean, that did exist, right? Another thing, if we talk about social media platforms, is something called recommender systems. Whenever you use a social media platform, it will recommend people to follow, like you should follow X or Y or Z, right? And to some extent that makes it easier for you to curate your feed. Another aspect of recommender systems is that they define what you see in your newsfeed. The newsfeed is very seldom just a chronological flow of information. It’s somehow curated. And we know for a fact that some of these systems prioritize information that creates a lot of reactions, right? And that tends to be information that provokes emotional responses, angry comments, et cetera, and that boosts mis- and disinformation.
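A minimal sketch of the engagement-driven ranking Victor describes (the posts, reaction counts, and weights below are invented for illustration; real recommender systems are far more complex and, as he notes, opaque):

```python
# Rank posts by predicted engagement. Reactions that provoke responses
# (anger, shares) are weighted more heavily than passive likes, so an
# inflammatory but misleading post floats to the top of the feed.
posts = [
    {"id": 1, "text": "Local park cleanup this weekend",     "likes": 40, "angry": 1,  "shares": 3},
    {"id": 2, "text": "Misleading claim about climate data",  "likes": 15, "angry": 90, "shares": 60},
    {"id": 3, "text": "New bike lanes open downtown",         "likes": 55, "angry": 2,  "shares": 8},
]

def engagement_score(post: dict) -> float:
    # Hypothetical weights: the objective is engagement, not accuracy.
    return post["likes"] * 1.0 + post["angry"] * 2.5 + post["shares"] * 3.0

feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(f'{engagement_score(post):7.1f}  {post["text"]}')
# The misleading post ranks first, because it generates the most reactions.
```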

 

David 31:08

 

But it’s sort of dark to us, how they do it?

 

Victor Galaz 31:10

 

A little. Yeah. I mean, it is dark. We don’t fully know how these things work, right? There was a big discussion a couple of years ago about social bots, inauthentic automated accounts mimicking human users. We’ve done some studies on that, and they do occur in climate conversations and when it comes to disinformation. What I want to add to this, and this is the big thing, is, of course, generative AI. You talk about ChatGPT, or the ability to mass produce synthetic content that is very sophisticated. People are not able to distinguish between authentic and synthetic content anymore. It’s just extremely, extremely difficult. So that, to me, has become a growing risk in this space. We don’t see big impacts on public perception or opinion from generative AI at the moment, but to me it is clearly a growing risk. Especially since you can design this content to target specific users, right? Or the ideological preferences that you might have, or your age or….

 

David 32:28

 

Or your emotional preferences.

 

Victor Galaz 32:30

 

Emotional preferences. Generative AI can become an extremely powerful misinformation weapon, right? But you can use gen AI to combat it too, right? I mean, there’s no way we will be able to mitigate mis- and disinformation without using AI. That’s just impossible.

 

David 32:48

 

One of the things you talk about in your book is a way to sort of push back, which you call planetary responsible AI and algorithmic resistance, which, by the way, Victor, is a phrase I never thought I would see, let alone say out loud. Can you explain what you mean by that, and why we need to think about these kinds of ideas if we want dark machines to actually support the changes we need to protect people and the planet?

 

Victor Galaz 33:21

 

It’s a mouthful. I agree with you. But you can think about it in two parts, right? One is planetary responsible AI. So, what is that? I think anyone who’s in the space of artificial intelligence knows what responsible AI is: it’s transparent, it holds people accountable, and it has a very strong ethical and social element, right? I mean, that debate is old. What I wanted to do in that debate was to add the climate and environmental dimension to it. That’s why I called it planetary responsible AI: to consider not only the social dimensions, but also the environmental and climatic dimensions of AI. Because AI is a very material technology. I mean, it has a very tangible material and climate footprint. So, that was the first aspect. And you can use AI for quite good things, right? Even in the city domain. I have a colleague in New York at The New School, Timon McPhearson; they have a project where they use AI to try to help cities prepare for a changing climate. Instead of using conventional physics models to do climate adaptation mapping of New York City, they augment them using deep learning. And by doing that, you can simply speed up computation much more, and you can create many more scenarios that can help cities adapt to climate change. That project is called ClimateIQ. So that’s a good use of AI that I believe is planetary responsible. That’s the first part. Now, you asked about algorithmic resistance. This is not my term; there are actually a lot of people working in this space. What I wanted to add was a little bit more of the environmental angle. Whenever you go to any of these AI for Good conferences, you will see all of these good projects. They’re all good projects, right? Projects to help monitor biodiversity or, say, water or whatever. But the thing we need to ask ourselves, everyone in this space, and I’m sure everyone listening thinks about that too.

[soft rhythmic music] Like, will this particular solution help redirect resources, or shift political influence, or lead to changes in norms that are really transformative? Right? That really lead to system change? And I think that many of these AI for Good initiatives actually do not do that. Because if you want to do that, you need to question the underlying business-as-usual structures, right? And we know from history and from research that sometimes that takes social resistance. So, that’s what I mean by algorithmic resistance. You need to question and challenge dominating narratives. Those solutions will not come from arenas dominated by big tech.
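For readers curious about the ClimateIQ-style idea Victor mentions above, augmenting slow physics models with deep learning, here is a generic sketch (not ClimateIQ’s actual code; the “physics model” is a stand-in function): train a small neural network surrogate on a limited number of expensive simulation runs, then use it to evaluate many more scenarios cheaply.

```python
# A learned surrogate for an expensive simulation: fit once on a few
# hundred slow runs, then predict new scenarios almost instantly.
import numpy as np
from sklearn.neural_network import MLPRegressor

def slow_physics_model(x: np.ndarray) -> np.ndarray:
    # Stand-in for an expensive simulation (e.g., flood depth vs. rainfall).
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 2, size=(200, 1))        # a few hundred "slow" runs
y_train = slow_physics_model(x_train).ravel()

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
surrogate.fit(x_train, y_train)

# The surrogate now approximates the physics model at a fraction of the cost.
x_new = np.linspace(0, 2, 5).reshape(-1, 1)
print("surrogate:", np.round(surrogate.predict(x_new), 2))
print("physics:  ", np.round(slow_physics_model(x_new).ravel(), 2))
```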

 

David 36:23

 

For example, we might need to question the fundamental underpinnings of our economic system, which I would loosely call neoliberal, and start thinking about concepts like ecological economics. [music ends]

 

Victor Galaz 36:37

 

Just think about the expansion, again, of data servers around the world. They take space, they use water, they use energy, so you see local communities, all the way from Chile and Uruguay to the Netherlands, protesting against the expansion of this type of infrastructure. That, to me, is a sort of algorithmic resistance: questioning, “Do we really need to build another data center, and who’s benefiting and who’s losing?” There’s another beautiful example that I have in my book. It’s an art-science project called Synthetic Messenger, right? This was an artist called Tega Brain, who created a network of social bots, something called a botnet. And that botnet just went out onto the internet, found articles that talked about climate change, and then automatically clicked on every ad on those articles, millions of times. And what that does is create revenue, or send a signal to the online magazine that the article is creating a profit for them, so that it would boost more climate reporting. I mean, I don’t know whether it works at a big scale, but I think it’s kind of an intriguing guerilla type of tactic that is very different from what we would normally see in the AI space.

 

David 37:57

 

Well, that’s interesting and a bit entertaining. One last question for you. In the context of the conversation we’ve just had about dark machines and generative AI and algorithms, if you could make just one systems change to better protect people and planet, what would it be?

 

Victor Galaz 38:17

 

Do you remember the discussion that we had in the Nineties and the beginning of the 2000s about something called the Tobin tax?

 

David 38:23

 

[whimsical music] Yes.

 

Victor Galaz 38:23

 

Do you remember the Tobin tax?

 

David 38:24

 

Absolutely.

 

Victor Galaz 38:25

 

So, like, a small tax on financial transactions that would create the resources to address climate action and battle social inequality globally. I propose something [chuckles] in my book that is slightly similar to that: a Tobin tax on compute, for AI. You would tax companies depending on how much compute they use, how much water they use, and how much carbon they emit, and redirect those funds into projects that promote planetary responsible AI. To really change the innovation landscape, and to change the incentives that governments and companies have at the moment, so that we really use the power of AI for something good. So not just talk about it, but provide resources for it.
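A toy rendering of the proposal (all rates and usage figures below are hypothetical; the book does not specify a formula): the levy simply scales with compute, water, and carbon, so bigger resource footprints pay more.

```python
# Toy "Tobin tax on compute": a levy proportional to resource use.
# Every rate here is a made-up placeholder, not a policy recommendation.
RATE_PER_GPU_HOUR = 0.02     # dollars per GPU-hour of compute
RATE_PER_M3_WATER = 1.50     # dollars per cubic metre of cooling water
RATE_PER_TONNE_CO2 = 80.00   # dollars per tonne of carbon emitted

def compute_levy(gpu_hours: float, water_m3: float, co2_tonnes: float) -> float:
    """Levy that grows with compute, water, and carbon use; proceeds would
    be redirected to planetary responsible AI projects."""
    return (gpu_hours * RATE_PER_GPU_HOUR
            + water_m3 * RATE_PER_M3_WATER
            + co2_tonnes * RATE_PER_TONNE_CO2)

# Hypothetical training run for a large model:
print(f"levy: ${compute_levy(gpu_hours=2_000_000, water_m3=10_000, co2_tonnes=500):,.0f}")
```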

 

David 39:13

 

Brilliant. Victor, thanks so much for being with us today. It’s been a terrific conversation. More importantly, it’s a superb book, and I wish you continued success with your really important research, which is absolutely critical if we are going to succeed in protecting people and the planet.

 

Victor Galaz 39:30

 

Thank you, David. [music continues then ends]

 

David 39:41

 

[fast rhythmic music] It’s difficult to predict what the future holds, but the decisions we make now will have ramifications that last for decades. The advent of social media offered us many new and positive opportunities, but it has also increased societal polarization, damaged the mental health of young people, and become a breeding ground for disinformation, threatening our democratic processes. We must ensure that these emerging technologies do not repeat the mistakes of the past. This is a difficult task in a global neoliberal economy based on profit and endless growth, because tech giants will always go where the money is, and that’s overwhelmingly in the hands of big business and the fossil fuel industry. It’s crucial to find ways to resist this outcome. Actions like coalition building, grassroots movements, and legislation can be combined with tools such as the Biosphere Code, planetary responsible AI, and algorithmic resistance. Only then do we have a chance to ensure that these technologies are used for the benefit of people and planet.

 

Oh! And as for the AI-generated text, it’s highlighted in bold in the episode transcript on our website. Cities 1.5 is and always will be a people-first podcast, because that’s one of the ways that we ourselves, as a small podcast production team, contribute to the resistance that’s helping save the world from climate breakdown. [music continues then stops]

 

[Cities 1.5 theme music] On the next episode of Cities 1.5, I have the pleasure of speaking to experts in fields that you may not think are connected, but which are both greatly impacted by climate-caused events like the recent wildfires in Los Angeles, California. Kate Stein wears many hats, including director of the Climate-Resilient Insurance Strategy Project, as well as being a global leader in climate risk, resilience, and insurance. [inaudible 41:59] is a senior subject matter expert at Revolver, specializing in mis- and disinformation. Tune in next Tuesday to find out how global climate events are causing rippling impacts across both of these industries. You won’t want to miss it.

 

This has been Cities 1.5, leading global change through local climate action. I’m David Miller. I was the mayor of Toronto, Canada, and I know firsthand the role cities can play in solving the climate crisis. Currently, I’m the editor-in-chief of the Journal of City Climate Policy and Economy, published by the University of Toronto Press in collaboration with the C40 Centre for City Climate Policy and Economy, where I’m also the managing director. C40’s mission is to help its member cities halve their emissions within a decade while improving equity, building resilience, and creating the conditions for everyone everywhere to thrive.

 

Cities 1.5 is produced by University of Toronto Press in association with the Journal of City Climate Policy and Economy and C40 Cities. This podcast is produced by Jessica Schmidt and edited by Morgane Chambrin. Our executive producers are Peggy Whitfield and Calli Elipoulos. Our music is by Lorna Gilfedder. The fight for an empowered world is closer than you think. To learn more, visit the show’s website linked in the episode notes. See you next time. [Cities 1.5 theme music continues then ends]
