Aliens on Our Own Planet - redux
12/7/23 – Revisiting the artificial intelligences all around us
Hello everyone:
I’ve written a new introduction to an essay from a year ago, and reworked the essay in various places. The topic – corporations as a form of AI – only grows in relevance. This week at COP28, for example, there are 2,456 lobbyists and representatives from petrostates and fossil fuel companies hellbent on slowing or reversing attempts to phase out fossil fuels, all because the companies they serve have a mission that does not necessarily take into account human or planetary well-being. They are a two-dimensional nightmare in a three-dimensional world.
Which brings me to the quote of the week, from Sultan Al Jaber, the president of COP28 and CEO of the Abu Dhabi National Oil Company:
“There is no science out there, or no scenario out there, that says that the phaseout of fossil fuel is what’s going to achieve 1.5C.”
That’s either a lie, a deep misunderstanding of the science, an epic misstatement, or an article of faith among those whose lives are built around the sale of oil and who cannot imagine the end of that way of life.
As always, please remember to scroll past the end of the essay to read some curated Anthropocene news.
Now on to this week’s writing:
I haven’t written here about the grotesque violence by Hamas in Israel and by Israel in Gaza in part because the story largely falls outside my Anthropocene purview. It’s an unimaginable tragedy taking place within the human theater, with thousands of bodies slumped across the scorched earth of the stage, nearly every one of them victims of someone else’s hubris, bias, fear, rage, and desperation.
Certainly, though, like all of our theaters this one exists in the real world. Half a billion birds still migrate over Israel and Gaza twice a year, spring and fall, softly winging over a history of ecological disruption begotten in the days long before Genesis was written.
It’s been easier to frame Russia’s effort to erase Ukraine as an Anthropocene conflict, given that Russia is a shaky petrostate looking to control the global supply of grain (much of which Ukraine grows). And the ecological destruction wrought in Ukraine is a brutal microcosm of what ails the world in this age of smash-and-grab capitalism.
But then I heard an NPR interview with an Israeli investigative journalist who just published an article, “A Mass Assassination Factory: Inside Israel’s Calculated Bombing of Gaza,” which details (among other things) how the Israeli military relies on an AI system to generate thousands of bombing targets in Gaza. The targeting algorithm is deliberately permissive, associative, and murderous, because military leadership has designed it that way. Their consensus has been that dozens or even hundreds of civilian deaths are acceptable in the targeting of a single Hamas leader. Most of the bombs are falling on apartment blocks and private residences where Hamas soldiers might live. Collateral damage is planned. To be clear, the targeting is not sloppy:
“Nothing happens by accident,” said another source. “When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed — that it was a price worth paying in order to hit [another] target. We are not Hamas. These are not random rockets. Everything is intentional. We know exactly how much collateral damage there is in every home.”
And to be clear, I’m not writing to condemn Israelis for what their military does, just as I don’t conflate Palestinians with atrocities committed by Hamas. I’m writing about a disturbing use of AI. To do so, I’m citing an Israeli article sourced from multiple members of the Israeli military intelligence community, and whose facts were all run by Israeli censors before publishing. My interest for this essay is in how AI will shape the world we’re still building haphazardly atop the real one.
A permissive forest management algorithm which allows broad collateral damage of old growth trees, for example, will care as much for biodiversity as this one does for Palestinian 3-year-olds.
The AI program has a name: Habsora, or “The Gospel.”
What could go wrong when we assign our worst impulses to a machine, and then hold that machine up as a truth to believe in?
In all the chatting about AI cognition, philosophizing about AI futures, and hand-wringing about AI taking our jobs, I don’t hear much about its likely ecological consequences. We’re fascinated and nervous about AIs replacing us, but apparently less worried that they’ll accelerate the Anthropocene, lengthening and darkening our shadow. Anything which amplifies our abilities may amplify our harms.
Of course, AI programs will be used to limit our harm as well. To the extent that they’re used to, say, improve agricultural output and reduce chemical use, increase energy efficiency, design fusion power, search for endangered species, or synthesize tasty non-animal meat, these high-functioning algorithms will reduce our footprint. But as the suffering in Gaza indicates, everything depends on who invents the purpose for, writes the rules for, and designs the consequences of, an AI.
So, while Russian destruction in Ukraine is an old story of cultural and ecological erasure, the Israeli use of AI to accelerate destruction raises a new concern for the 21st century. There will be many, many kinds of AI in our lives very soon, and some of them will be directed, as The Gospel has been, to intensify our ability to disrupt. This is largely because the companies building them are profit-driven, more competitive than collaborative, and focused on exerting power within the human theater.
Looking ahead, you don’t have to imagine a world in which non-human entities have been empowered to accelerate consumerism through targeted ads, design wasteful and unrepairable products, and disrupt democratic norms in order to reduce regulations. That world is here already.
But let’s back up, way up, and look at this from another angle.
In my Antarctic notebooks, I often circled back around to a rhetorical question: Why do we behave like aliens on our own planet? In other words, why have we built and maintained a world that squats atop the community of life rather than being woven into it? Why are the rules of civilization so callous toward other species?
The question was prompted in part by the experience of living on the lunar expanse of an ice continent, where every weird aspect of modern human life stood out in stark silhouette. Simply being in Antarctica is weird, not least because we’re so comfortable living in a place that does not naturally support life. No trees, bushes, grass, or land animals. The good green Earth is gone, and we hardly notice. We have work to do.
But mostly my question about our modern alien nature was triggered by the Antarctic amplification of the planet-eating logistics of ordinary Anthropocene life. The view from Antarctica of, say, single-use plastics, or toilet paper from a tree harvested 20,000 miles away, or oil and gas extracted and processed in Saudi Arabia is disheartening, to say the least. I didn’t need to be at an end of the Earth to contemplate the end of the Earth, but it was particularly unsettling to stand on the ice and recognize the true scale and cost of the supply chain that began in the deformed and deforested warm world.
Why do we behave like aliens on our own planet? There are many ways to answer this, but most of them relate to the power and control of large corporations in one way or another: profit motive, ecological amnesia, loss of empathy for other species, the myth of human supremacy, the celebration of selfishness, the desire to enslave, etc. But I’ve never synthesized all of these answers as elegantly as the science-fiction writer Charles Stross:
We are now living in a global state that has been structured for the benefit of non-human entities with non-human goals.
But, Stross notes, the aliens didn’t come from away. They were born here and then took over:
Corporations do not share our priorities. They are hive organisms constructed out of teeming workers who join or leave the collective: those who participate within it subordinate their goals to that of the collective, which pursues the three corporate objectives of growth, profitability, and pain avoidance.
I found Stross in James Bridle’s new book, Ways of Being: Animals, Plants, Machines: The Search for a Planetary Intelligence. As lucid as Stross is here, it’s Bridle’s writing and thinking that thrills me. It’s worth quoting Bridle at length, because I hope his insight will strike you in the same way it struck me:
I am sometimes asked when ‘real’ AI will arrive – meaning the era of super-intelligent machines, capable of transcending human abilities and superseding us. When this happens, I often answer: it’s already here. It’s corporations. This usually gets an uncertain half-laugh, so I explain further. We tend to imagine AI as embodied in something like a robot, or a computer, but it can really be instantiated as anything.
Imagine a system with clearly defined goals, sensors and effectors for reading and interacting with the world, the ability to recognize pleasure and pain as attractors and things to avoid, the resources to carry out its will, and the legal and social standing to see that its needs are catered for, even respected. That’s a description of an AI – it’s also a description of a modern corporation.
For this ‘corporate AI’, pleasure is growth and profitability, and pain is lawsuits and drops in shareholder value. Corporate speech is protected, corporate personhood recognized, and corporate desires are given freedom, legitimacy and sometimes violent force by international trade laws, state regulation – or lack thereof – and the norms and expectations of capitalist society… Crucially, [corporations] lack empathy, or loyalty, and they are hard – although not impossible – to kill.
Remember that the word “corporation” evolved from the Latin corpus, “body,” through corporare, “to form into one body,” to our modern idea of an incorporated business to which we have now granted rights and legal protections like those offered to actual human bodies. But unlike nearly all 8 billion human bodies, large corporations have massive reserves of money and power they can wield to serve their purpose and protect their interests. And weirdly, though corporations have been at the center of the planetary destruction that accelerated in the 20th century, many of us spend our lives serving their purposes more than we serve our own.
Bridle makes some other brilliant notes on corporate AI that I’ll summarize briefly:
The AI we tend to imagine – a powerful, self-serving, brutal machine – is modeled on corporate AI.
The classic worst-case-scenario depiction of AI is described by the paperclip hypothesis, which imagines an AI tasked with one simple job – maximize paperclip production – becoming all-powerful and single-minded as it devours the world to serve its mundane paperclip-making mission.
We think of AI in such limited ways in part because of science fiction and Hollywood storytelling, but mostly because at this stage of history artificial intelligence is being developed by incredibly wealthy companies which model AI on their own needs and interests rather than on ours.
These ‘masters’ of the new computational AIs – leaders at Google, Microsoft, etc. – are often outspoken in their concern that AI may pose an existential threat to humans. Their concern is notable for its lack of irony, and most likely based in self-interest rather than social altruism. As Bridle notes, “Perhaps they fear artificial intelligence because it threatens to do to them what they have been doing to the rest of us for some time.”
What we’re not told in all these frightening AI scenarios is that an unlimited variety of artificial intelligences are possible, including ecological, empathetic, and socially motivated ones. With AI (as with all our tools), we get what we build, not what we hope we’ve built.
I think what excites me about these ideas – reading Bridle’s long quote above felt like a minor revelation – is that they change how I see the world. The goal of this Field Guide, after all, is about clearly seeing the Anthropocene world as it is, in all its beauty, loss, and grief.
For me, seeing corporations as AI clarifies the Why and the How of their unnecessarily harmful existence. More importantly, it suggests a powerful leverage point for making a better future. The more we change the algorithm that shapes the paperclip-obsessed corporation – by limiting corporate personhood and its free speech, and by increasing its obligations to all species – the more we improve the fate of the Earth.
As Bridle explained, we have to recognize that AI is already here. As annoying as it is to admit, AI in its various forms will be a fundamental part of the planetary landscape for all the futures we can currently imagine. Or rather, for all the futures being imagined for us by the people who keep making advanced tools that both lift and crush ordinary human lives.
Looking ahead, then, the question is what kind of corporate AIs, and thus what kind of future, will be imagined into being. Judging by the insatiable half-witted appetites of many corporations, we have a lot of room for improvement and experiment.
I’m a bit late to this party, since some folks have been talking about corporate AI for years (read here, here, and here) and explaining that these instantiated entities were born centuries ago with the birth of the powerful companies that first controlled global trade, including the slave trade. But we really need to be talking about them now as we enter the bottleneck of consequences for climate and biology on this planet.
There are libraries to be filled with books on the history and future of corporations and AI, so I’ll merely explore a few thoughts of my own. I’m particularly interested in how the chasm between corporate AI and ecology disrupts our individual relationship with the real world.
To get to that, I’ll ask a few simple questions.
How has a world “structured for the benefit of non-human entities with non-human goals” changed our sense of time?
I’ll start my answer by quoting Sumana Roy, author of the new book How I Became a Tree:
When I look back at the reasons for my disaffection with being human, and my desire to become a tree, I can see that at root lay the feeling that I was being bulldozed by time.
Bulldozed by time… I think we can all relate to time-based anxiety (even though you and I may not be the people on Earth most harassed by the bulldozer). Remember that our sense of time is more finely structured than at any point in human history, down to the micro-intervals we call nanoseconds. We tick down time on our wrist, in our pocket, in bedroom alarms, on the living room wall, in town centers, but most of all at work. The modern interval-based sense of time first existed to enhance scientific measurement, but soon became the foundation of industrial productivity. (Michael Pollan writes convincingly in This is Your Mind on Plants that coffee had a lot to do with this too.) It does little for human quality of life by itself.
For a million years, we thrived and played and loved without the litany of seconds, minutes, hours, or even weeks. Corporate AIs seeking their peculiar efficiencies cannot exist without an autocratic, single-minded sense of time, but people can.
What about our sense of place?
The dominance of “non-human entities” has increasingly meant that we are nodes in their systems, and that those systems largely define our lives. Many of us work for a corporate AI or in jobs that feed their profit-driven needs. They define our real and virtual landscapes, whether as agribusiness reducing millions of lush forested acres into soybeans or as tech companies creating online social spaces that are really hyperactive data-scraping honey traps.
The goods and services we require to live are available only where corporations make them available, not where they originate or where we live. Mainly, though, they have disrupted our sense of place by erasing the sanctity of place. I live in Maine, but my home and life are filled by materials siphoned and scraped from other continents. That’s the Anthropocene, in which the dismantling of natural systems became a cultural norm.
What is a forest or fish to a corporation?
Easy answer, right? A tree or clam is a resource, a source of profit to be harvested for maximum utility to the harvester. But to call a blueberry or a cod a mere resource we must first erase their own intelligence and value. (Indigenous societies have long thought differently, because the value of relationship to fellow species is greater than any self-interest.) The corporation’s relation to actual forms of life is that of slave trader or slaveholder to slave: possession, utility, income. This isn’t a logger or fisherman feeding their family or village. A corporation is an abstract entity which has no interest in forest or fish, only what value they can bring.
Similarly, fossil fuel companies are interested in the fossil-fueled industrialization of society because we then depend on those companies for our continued existence. They have actively ignored the consequences and actively fought against their responsibility for those consequences because they undermine the companies’ artificial purpose in life: to make petroleum paperclips.
I’ve focused my questions on time, space, and other species because I’m trying to tap into what remains of our shared memory of the first 99% of human history, when the struggle to live was confined to actual people and the community of life around us. We lived in direct relationship rather than through abstract corporate intermediaries – middlemen who are not men at all – whose pleasure principle is to profit from both people and the land while minimizing their responsibility to either.
Fighting back against corporate AI is a topic that deserves far more space, but I’ll note for my purposes here that in the Anthropocene it will require a civilizational embrace of ecology and of the intelligence of indigenous peoples. We need to rebuild our relationship with the natural world to the point where we see ourselves in other species and them in us.
Maybe we all need to become trees. A little animism and anthropomorphism would go a long way toward the necessary blooming of democracy that will defend our own rights and expand those of ecosystems and species.
Right now, though, what we really need is good governance. Those corporations that do good societal work, that donate to good causes, that contribute to the community at the expense of their bottom line, and that re-frame their purpose and their supply chain to reduce harm to the natural world are bending the algorithm, but they're not breaking it. To turn corporations into decent human analogues requires policy and law to lay down guardrails on behavior and limits on profit. Collateral damage must be minimized and punished.
It may seem otherwise, but corporations are not required by law to maximize profit. It’s just a bad habit, one that we can break.
In simple terms, corporations are intelligent self-serving entities which have amassed their own sources of power. The best countermeasure is to amass our own source of power. Protests and pitchforks have some effect, but the best solution is to concentrate our power in the hands of policy-makers who will write and enforce better regulation. Corporations recognize that their greatest threat is at the policy level, which is why they expend so much energy in the halls of political power. It’s why there are 2,456 fossil fuel lobbyists and representatives this week at COP28.
We should certainly get back into the business of revoking corporate charters, also known as the “corporate death penalty,” which was common in U.S. history until the early 20th century. This reminds me of a classic joke attributed to the journalist Bill Moyers:
I’ll believe that corporations are people when Texas executes one of them.
One way or another, we must either limit the power of corporate personhood or make them better people. This is particularly true at the environmental level, as James Bridle explains:
If we are to address the wholesale despoliation of the planet, and our growing helplessness in the face of vast computational power, then we must find ways to reconcile our technological prowess and sense of human uniqueness with an earthy sensibility and an attentiveness to the interconnectiveness of all things. We must learn to live with the world, rather than seek to dominate it. In short, we must discover an ecology of technology.
I love that phrase, “an ecology of technology,” though it might be more accurate to say we need technology to serve ecology.
Lest I forget to circle back, it’s important to remember that AI-directed bombs are being dropped intentionally today and tonight on civilians, on families and neighbors of people who may or may not be connected to Hamas’ horrific massacre of Israelis on October 7th. The horrors of war are, I think, a little more horrible when we treat the death of innocents as necessary, when we assign responsibility to machines we built, and when we call it Gospel.
Likewise, if we insist on building machines whose cognition will far surpass our own, the only real lasting contribution we can give them is a set of ethics. If those ethics, as they relate to the glories of both the community of life and the human community, aren’t more kind and inclusive, democratic and ecological, then all we’re building is a harder problem to solve.
The good news is that if we build the new AIs the right way, then we can imbue them with better ethics than the corporations which have preceded them.
Thanks for sticking with me.
In other Anthropocene news:
From Science, massive redwood trees torched by an unusually intense forest fire in 2020 are showing signs of life, relying on energy stored decades ago and buds that formed as long as 1000 years ago. It’s a beautiful story of resilience, but “it’s unclear whether the trees could withstand the regular infernos that might occur under a warmer climate regime.”
From Bradley Stevens at Ecologist@Large, an essay on the toxic problem of cigarette butts in the environment. It’s fascinating, and useful. You can listen to his podcast of the essay too.
From the BBC, a reality check regarding the recent trans-Atlantic flight of a commercial airliner powered only by “fat and sugar,” or biofuel. There simply isn’t enough land available to grow a significant fraction of the crops necessary to make the enormous amount of aviation fuel consumed every year. This is the same greenwashing conundrum we face with ethanol, in which wide swathes of the planet are turned into biological deserts in order to power an unnecessarily busy civilization. In other words, “The science would suggest that there really is no such thing as sustainable aviation.”
From Nautilus, the interactive Navigator map from ProtectedSeas is an important and astonishingly comprehensive new tool for assessing the quantity and quality of marine protected areas around the globe. Until now, knowledge of MPAs and their site-specific regulations has been fragmented. The team at ProtectedSeas spent eight years assembling and standardizing information on 21,000 sites to provide “the most comprehensive resource of regulatory information of marine protected areas in the world” which “offers a one-of-a-kind marine monitoring solution to enhance awareness of and compliance with ocean protections globally.” Among the revelations is that while it’s widely claimed that 8% of the ocean has been protected, the Navigator shows that full protections are in place for only 3.4%. We have a lot to do to reach 30% by 2030.
From the Times, an excellent summary of the financing problem slowing the renewable energy revolution in Africa and elsewhere. Simply put, banks and investors are treating renewable energy projects like any other investment rather than as critical steps to stopping runaway climate chaos. Selfishly, the banks are profit-focused but risk-averse, despite the much greater risk of a failed energy transition. There’s a lot of pressure at COP28, though, to change their attitude.
Also from the Times, a new strategy to increase forest cover (and its myriad health benefits) in NYC, especially in communities of color, is challenged by budget and other concerns.
From Time, “The Dirty Secret of Alternative Plastics,” which discusses how biodegradable, expensive, or occasionally toxic some of these plastics can be.
From CivilEats, “Can Agriculture Kick its Plastic Addiction?” Plastics make agriculture easier, more efficient, and more effective, but they contaminate the soil and the food we grow. Agricultural plastic is a small share of the plastic produced globally, “but it carries the highest risks.”
Thanks for re-visiting these ongoing - seemingly intractable - aspects of human nature. I’m now an old man, and am no longer surprised by the technologies and atrocities invented/employed/ignored by humans consumed by greed for wealth, power, and a sense of security. As always, it is disheartening, but it’s incumbent on those who oppose these actions to speak out when possible, and to seek and then practice alternative life paths.
I just read some ending lines from the Palestinian poet Refaat Alareer that resonate in these - as in all - times of horror, a poem with the title, “If I must die.” It is written with the hope that a child who survives,
“... sees the kite, my kite you made, flying up above
and thinks for a moment an angel is there
bringing back love
If I must die
let it bring hope
let it be a tale.”
According to today’s news reports, Mr. Alareer and members of his family were recently killed, in their home, in Palestine.
Jason, your eloquence is deeply appreciated, and can create ripples far beyond what we can see in the moment.
I learned about two new things from this essay: “The Gospel” and “Ways of Being.” Both are food for thought, with intermingled horror, sadness, anger, joy, wonder, and discovery (with of course the horror and anger skewed heavily in the direction of a smart-bomb AI). Just like life, I guess. This was a quietly courageous piece.