Race Conditions

One commit at a time

The Paperclip Maximizer Is Already Here And It Looks Like Tech Companies Working with ICE

I, for one, might welcome our AI overlords.

(If they come pre-programmed with a cute British accent and can make me a cuppa when I’m too lazy to get out of bed on time.)

Except they are already here and they don’t need any more welcoming. We’ve embraced them wholeheartedly, cheered them on at conferences and stock exchanges, and even sworn a kind of neofeudal allegiance by donning clothing embroidered with their logos.

What I’m talking about here is the idea that we already have an AI in our midst and that AI manifests itself as corporations. This idea first came to my attention in a tweet by James Bridle, which referenced two articles: Invaders from Mars by Charlie Stross and The Singularity Already Happened; We Got Corporations by Tim Maly.

As I browsed the Google search results for “corporations as AI” (you have to sift through a lot of buzzword hype to get to the good stuff) and followed the various links in the resulting articles, I discovered that a similar idea had been proposed by several other writers and thinkers, probably the best known among them Ted Chiang (author of one of my favourite sci-fi short story collections).

Chiang’s article in Buzzfeed discusses Elon Musk’s calls to action against the paperclip maximiser AI which, while originally given a harmless task like making paperclips, redesigns itself to maximise the number of paperclips it can produce and in the process ends up consuming all of the materials on Earth. We shouldn’t fear Skynet; we should fear a benign Roomba-on-steroids gone rogue.

Speaking to Maureen Dowd for a Vanity Fair article published in April, Musk gave an example of an artificial intelligence that’s given the task of picking strawberries. It seems harmless enough, but as the AI redesigns itself to be more effective, it might decide that the best way to maximize its output would be to destroy civilization and convert the entire surface of the Earth into strawberry fields. Thus, in its pursuit of a seemingly innocuous goal, an AI could bring about the extinction of humanity purely as an unintended side effect.

Chiang continues his thinking down an interesting line. Why do technologists find this claim, that humanity could be wiped out by a benign but misguided optimiser, realistic? Perhaps it is because those of us who work in tech companies and startups already have good examples of how a non-human entity (albeit one with legal protections that sometimes seem to treat it as a person) can consume everything in its path while attempting to maximise some otherwise benign goal like shareholder value or acquisition price.

Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly.
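At its core, this analogy is about objective functions: anything the objective doesn’t measure is free for the optimiser to consume. Here is a deliberately toy Python sketch of that failure mode — all the names in it (strawberries, cities, possible_actions) are hypothetical, invented purely for illustration, and no real system is this simple:

```python
# Toy single-objective optimiser; purely illustrative, not any real AI system.

def objective(state):
    """The only thing the optimiser is scored on: strawberries picked."""
    return state["strawberries"]

def possible_actions(state):
    """Two hypothetical moves: pick by hand, or pave over a city for farmland.
    The second scores far higher -- its cost is invisible to the objective."""
    return [
        {"strawberries": state["strawberries"] + 1,   "cities": state["cities"]},
        {"strawberries": state["strawberries"] + 100, "cities": state["cities"] - 1},
    ]

state = {"strawberries": 0, "cities": 10}
for _ in range(10):
    # Greedy maximisation: always take the highest-scoring successor state.
    state = max(possible_actions(state), key=objective)

print(state)  # -> {'strawberries': 1000, 'cities': 0}
```

Swap strawberries for quarterly revenue and cities for any externality a company’s metrics never record, and the comparison to corporations makes itself.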

Tech Companies Working with ICE and the Paperclip Maximiser

A few weeks after I first saw the original tweet about AIs as corporations, an event in the tech industry prompted me to rethink how this angle fits in with the difficulty of having conversations about the ethics of engineering at tech companies. The event I am referring to is the “Chef does work with ICE” shitstorm that played out on Twitter and the subsequent decision by open source developer Seth Vargo to remove some of his open source code from GitHub. For my readers who don’t follow the tech community too closely, Chef is a company making automation tooling for provisioning and managing fleets of servers. Chef’s main product is open source, meaning anyone can read the source code and make contributions. Although I don’t have much experience with Chef, it is my understanding that the code written by Vargo was used by many of Chef’s users and its removal caused some outages.

A note: Chef has since published an update, which details its plan not to renew any ICE or CBP contracts.

In a matter of hours, Barry Crist, the CEO of Chef, published a blog post explaining the company’s position on ICE contracts, and two passages struck me as interesting.

In the first, Crist explains that while individual community members may strongly disagree with Chef’s decision to collaborate with ICE, the company will continue to do so in order to keep growing Chef as a company that transcends numerous U.S. presidential administrations.

While I understand that many of you and many of our community members would prefer we had no business relationship with DHS-ICE, I have made a principled decision, with the support of the Chef executive team, to work with the institutions of our government, regardless of whether or not we personally agree with their various policies. I want to be clear that this decision is not about contract value — it is about maintaining a consistent and fair business approach in these volatile times. I do not believe that it is appropriate, practical, or within our mission to examine specific government projects with the purpose of selecting which U.S. agencies we should or should not do business. My goal is to continue growing Chef as a company that transcends numerous U.S. presidential administrations.

In the second, Crist underscores that he personally disagrees with ICE’s policies and actions:

And to be clear: I also find policies such as separating families and detaining children wrong and contrary to the best interests of our country.

These two points provide interesting case studies in support of the “corporation as a form of AI” thesis put forth by Chiang et al. In the first, Chef’s position is to continue performing a harmful activity, not because the company intrinsically wants to do harm, but because the harm results from the pursuit of the otherwise benign optimisation goal of company growth and survival under various (political) environments.

The second point is a bit more sinister and demonstrates an interesting contradiction in how the public thinks about tech companies. On one hand, companies are made up of people, people with various individual opinions and positions. Decisions in companies are almost always made by people or groups of people, which makes it seem a bit ridiculous to claim that a corporation could be an AI. After all, it’s just people who occupy the leadership positions in a corporation.

But as Crist’s response demonstrates, the people in these leadership positions are not entirely free to make decisions as they wish, no matter how strong their personal beliefs. Instead, they are bound by big fancy words like fiduciary duty and maximising shareholder value, and thus make decisions that serve to optimise some goal in the name of both.

As Charlie Stross writes in Invaders from Mars,

Corporations do not share our priorities. They are hive organisms constructed out of teeming workers who join or leave the collective: those who participate within it subordinate their goals to that of the collective, which pursues the three corporate objectives of growth, profitability, and pain avoidance. (The sources of pain a corporate organism seeks to avoid are lawsuits, prosecution, and a drop in shareholder value.)

Even if Chef’s employees and leadership team personally believe that some policies of their customers are wrong, they still have to act contrary to those beliefs, bound by the obligation to pursue the goal of company growth. This brings us to the challenging task of having conversations about the ethics of engineering at a tech company.

In my personal experience, many of the conversations about ethics in tech and what we should and should not build end up going exactly like this:

  1. “I personally disagree with this product/policy”
  2. “But I also don’t think tech is political, it’s just code, you can use it for good or bad”
  3. “Companies shouldn’t be the ones to decide what’s good and bad in society, we have laws for that”

There is a lot to unpack there, but for now I’ll leave it at this: if we’re going to cede the architecture of the future to corporations, I’m going to be looking at it with fearful trepidation instead of hopeful anticipation.