5 Filters

Remarkable conversation with an AI. What do people think?

Yes, I think that’s bang-on Evvy. I read an article somewhere that a bunch of common prescription decisions are going to be delegated to pharmacists to free up time for GPs. Some or other govt muppet said (I paraphrase) “…women won’t be needing to wait ridiculous amounts of time merely to get contraception”. They can blurt out what they want in front of a queue at Boots**, along with the methadone zombies and teenagers after Clearasil. Yay! Progress!

** or just press the right part of the touchscreen, coming soon… EVERYWHERE. Even Burger King don’t want staff capable of pressing the right key on a till anymore. So order your Whopper (or is it a Royale?) by prodding the part of the screen showing a photo of a big blob of stuff in a bun, a fizzy drink and fries. The latter two aren’t part of the deal though, just for illustrative purposes. I’m ashamed that I even know this, but the very cheapest Travelodges tend to be miles away from places that serve actual food.

2 Likes

The most obvious thing about AI is that it’s not human (and is obviously corporate).

I, on the other hand, am human and I have a real brain. Would you rather interact with me than with some dickhead bot that some spotty little kid has come up with?

Again, this seems an obvious question.

1 Like

A key problem with the robots is that you have to adjust your own behaviour to fit their expectations.

  • Plonking one item at a time into your bag, so that scale 1 minus item Z continues to equal scale 2 plus item Z.

  • The ‘Pay Here’ machines in public car parks now demand full car registration details, to ensure that the minor gift of your ticket, with 30 minutes left on it, to an incoming driver as you leave does not rob the council of 50p.

  • That motorway services ‘Costsalot Coffee’ sign promises a sit-down and a pick-you-up after two hours of foot to the metal but all that it really means is there’s a souped-up slot machine in the corner of W H Smiths. Go drink it in the car park, peasant.

3 Likes

Dumb and dumber

A lawyer faces sanctions after he used ChatGPT to write a brief riddled with fake citations

Steven Schwartz was “unaware of the possibility that [ChatGPT’s] content could be false.”

2 Likes

This made me chuckle, for several reasons. Peasant…it’s accurate.

One of my boys used to work at one and the stories…I wouldn’t do anything except pee. I’d struggle with that if I was female.

My solution has been to plan ahead and find a decent pub just off the motorway. The extra 30 mins journey time has never yet failed to pay off. These places need our support too.

2 Likes

Tech leaders warn of AI ‘threat of extinction’.

It’s good that this debate is happening, I guess, but it should be happening at top political and public levels, rather than between two factions of the global AI boy racers club.
It’s like a debate between Top Gear’s Jeremy Clarkson and Two-Jags Prescott on the environment.

The lesser evils (the Prescotts) think it’s going too fast because, they acknowledge, the AI products could do a lot of harm. They call for a slowdown to put in good safety measures.

But even the lesser evils express no concern for the deep-rooted, global societal effects as jobs are junked en masse and the public are left behind a wall of sludge as inferior but cheap AI is slotted in to keep their pesky human problems at bay. Every major production system could be worsened by the capitalistic race to the bottom, including those that deliver basic services like utilities, food, and democratic rights.
Being ‘more safe’ from sudden AI-caused catastrophes won’t ameliorate this risk, which is more like a certainty - because it’s one of the main purposes.
ED

As generative AI developments in recent months have reached a fever pitch, some top leaders in the field have issued a grim extinction warning. But others caution against alarmism and want to take a measured mitigation approach.


Shane Snider, senior writer

2 Likes

The race to the bottom sums it up perfectly, Evvy.

The linked author isn’t picking up on one of the elephants in the room: politicians have no concept whatsoever of long-term anymore… if they ever did.

I agree that regulation might be useful in setting up ‘guardrails’… but suspect that what will happen is that trusted partners (take a bow the usual suspects: Google, Microsoft etc) will be given carte blanche. While Open Source projects do at least offer opportunities for inspecting what the algorithms do, the databases that all of these projects are using are poisoned already (Wikipedia natch).

A key danger at the moment is the AI projects that create images and videos. A recent photo of ‘a burning Moscow apartment block’, used to big up the Ukrainian drone attacks, was merely a stolen image from (I forget: Idaho, Iowa, somewhere like that). The flak chucker/BBC VeryIffy programmes may not pick these up… or not very quickly anyway. Twitter did though, using crowd-sourcing. Annoyingly I can’t find the tweet now (deleted?).

Fama, malum qua non aliud velocius ullum.

Virgil’s Aeneid: “Rumour, than whom no other evil thing is faster” (unless the internet has lied to me… I don’t have a copy handy).

I’m going to go and see this next week, looks like predictive programming to me:

Massively indebted countries will have extreme things forced upon them seems to be the subtext. Not really a coincidence, is it…

2 Likes

I was just about to email this article to RG… :frowning_face:

So I’ll post it here instead. Hope you’re well RG, and enjoying being a baby again.

Anyway. I’ve posted articles by Chiang on AI here in the past. I find him to be one of the clearest thinkers on this subject.

Cheers


Illustration by Vivek Thakker

Annals of Artificial Intelligence

ChatGPT Is a Blurry JPEG of the Web

OpenAI’s chatbot offers paraphrases, whereas Google offers quotes. Which do we prefer?

By Ted Chiang

February 9, 2023


In 2013, workers at a German construction company noticed something odd about their Xerox photocopier: when they made a copy of the floor plan of a house, the copy differed from the original in a subtle but significant way. In the original floor plan, each of the house’s three rooms was accompanied by a rectangle specifying its area: the rooms were 14.13, 21.11, and 17.42 square metres, respectively. However, in the photocopy, all three rooms were labelled as being 14.13 square metres in size. The company contacted the computer scientist David Kriesel to investigate this seemingly inconceivable result. They needed a computer scientist because a modern Xerox photocopier doesn’t use the physical xerographic process popularized in the nineteen-sixties. Instead, it scans the document digitally, and then prints the resulting image file. Combine that with the fact that virtually every digital image file is compressed to save space, and a solution to the mystery begins to suggest itself.

Compressing a file requires two steps: first, the encoding, during which the file is converted into a more compact format, and then the decoding, whereby the process is reversed. If the restored file is identical to the original, then the compression process is described as lossless: no information has been discarded. By contrast, if the restored file is only an approximation of the original, the compression is described as lossy: some information has been discarded and is now unrecoverable. Lossless compression is what’s typically used for text files and computer programs, because those are domains in which even a single incorrect character has the potential to be disastrous. Lossy compression is often used for photos, audio, and video in situations in which absolute accuracy isn’t essential. Most of the time, we don’t notice if a picture, song, or movie isn’t perfectly reproduced. The loss in fidelity becomes more perceptible only as files are squeezed very tightly. In those cases, we notice what are known as compression artifacts: the fuzziness of the smallest jpeg and mpeg images, or the tinny sound of low-bit-rate MP3s.
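
A toy sketch of that distinction, in Python (not part of the article; zlib is in the standard library, and the “lossy” scheme here is just coarse rounding, invented for illustration):

```python
# Lossless vs. lossy, in miniature. zlib round-trips the bytes exactly;
# the "lossy" step throws away information that can never be recovered.
import zlib

text = b"The rooms were 14.13, 21.11, and 17.42 square metres."

# Lossless: decompression returns the original, bit for bit.
packed = zlib.compress(text)
assert zlib.decompress(packed) == text

# Lossy: store each measurement rounded to the nearest metre.
# Smaller, but the exact originals are gone for good.
measurements = [14.13, 21.11, 17.42]
approximation = [round(m) for m in measurements]
print(approximation)  # [14, 21, 17]
```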

Xerox photocopiers use a lossy compression format known as jbig2, designed for use with black-and-white images. To save space, the copier identifies similar-looking regions in the image and stores a single copy for all of them; when the file is decompressed, it uses that copy repeatedly to reconstruct the image. It turned out that the photocopier had judged the labels specifying the area of the rooms to be similar enough that it needed to store only one of them—14.13—and it reused that one for all three rooms when printing the floor plan.
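
For illustration only, here is a crude sketch of that substitution behaviour: one stored template is reused for any region judged “similar enough”, and an over-generous similarity threshold reproduces the 14.13 bug. The character-difference test below stands in for jbig2’s real pattern matching.

```python
# Store one template per group of "similar" labels; reuse it on decompression.
def differing_chars(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

def compress_labels(labels, threshold=2):
    templates, indices = [], []
    for label in labels:
        for i, t in enumerate(templates):
            if differing_chars(label, t) <= threshold:
                indices.append(i)            # "similar enough": reuse stored template
                break
        else:
            templates.append(label)          # genuinely new: store it
            indices.append(len(templates) - 1)
    return templates, indices

def decompress_labels(templates, indices):
    return [templates[i] for i in indices]

templates, idx = compress_labels(["14.13", "21.11", "17.42"], threshold=3)
print(decompress_labels(templates, idx))
# With too generous a threshold, all three rooms come back as "14.13".
```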

The fact that Xerox photocopiers use a lossy compression format instead of a lossless one isn’t, in itself, a problem. The problem is that the photocopiers were degrading the image in a subtle way, in which the compression artifacts weren’t immediately recognizable. If the photocopier simply produced blurry printouts, everyone would know that they weren’t accurate reproductions of the originals. What led to problems was the fact that the photocopier was producing numbers that were readable but incorrect; it made the copies seem accurate when they weren’t. (In 2014, Xerox released a patch to correct this issue.)

I think that this incident with the Xerox photocopier is worth bearing in mind today, as we consider OpenAI’s ChatGPT and other similar programs, which A.I. researchers call large language models. The resemblance between a photocopier and a large language model might not be immediately apparent—but consider the following scenario. Imagine that you’re about to lose your access to the Internet forever. In preparation, you plan to create a compressed copy of all the text on the Web, so that you can store it on a private server. Unfortunately, your private server has only one per cent of the space needed; you can’t use a lossless compression algorithm if you want everything to fit. Instead, you write a lossy algorithm that identifies statistical regularities in the text and stores them in a specialized file format. Because you have virtually unlimited computational power to throw at this task, your algorithm can identify extraordinarily nuanced statistical regularities, and this allows you to achieve the desired compression ratio of a hundred to one.

Now, losing your Internet access isn’t quite so terrible; you’ve got all the information on the Web stored on your server. The only catch is that, because the text has been so highly compressed, you can’t look for information by searching for an exact quote; you’ll never get an exact match, because the words aren’t what’s being stored. To solve this problem, you create an interface that accepts queries in the form of questions and responds with answers that convey the gist of what you have on your server.

What I’ve described sounds a lot like ChatGPT, or most any other large language model. Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry jpeg, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.

This analogy to lossy compression is not just a way to understand ChatGPT’s facility at repackaging information found on the Web by using different words. It’s also a way to understand the “hallucinations,” or nonsensical answers to factual questions, to which large language models such as ChatGPT are all too prone. These hallucinations are compression artifacts, but—like the incorrect labels generated by the Xerox photocopier—they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world. When we think about them this way, such hallucinations are anything but surprising; if a compression algorithm is designed to reconstruct text after ninety-nine per cent of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated.

This analogy makes even more sense when we remember that a common technique used by lossy compression algorithms is interpolation—that is, estimating what’s missing by looking at what’s on either side of the gap. When an image program is displaying a photo and has to reconstruct a pixel that was lost during the compression process, it looks at the nearby pixels and calculates the average. This is what ChatGPT does when it’s prompted to describe, say, losing a sock in the dryer using the style of the Declaration of Independence: it is taking two points in “lexical space” and generating the text that would occupy the location between them. (“When in the Course of human events, it becomes necessary for one to separate his garments from their mates, in order to maintain the cleanliness and order thereof. . . .”) ChatGPT is so good at this form of interpolation that people find it entertaining: they’ve discovered a “blur” tool for paragraphs instead of photos, and are having a blast playing with it.
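
A minimal sketch of the interpolation idea, using a made-up one-dimensional row of pixel values with None marking a sample lost during compression:

```python
# Estimate a missing value as the average of the values on either side of the gap.
def fill_missing(row):
    filled = list(row)
    for i, value in enumerate(row):
        if value is None:                            # lost during compression
            left = row[i - 1] if i > 0 else row[i + 1]
            right = row[i + 1] if i < len(row) - 1 else row[i - 1]
            filled[i] = (left + right) // 2          # estimate from the gap's edges
    return filled

print(fill_missing([100, 110, None, 130, 140]))      # -> [100, 110, 120, 130, 140]
```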

Given that large language models like ChatGPT are often extolled as the cutting edge of artificial intelligence, it may sound dismissive—or at least deflating—to describe them as lossy text-compression algorithms. I do think that this perspective offers a useful corrective to the tendency to anthropomorphize large language models, but there is another aspect to the compression analogy that is worth considering. Since 2006, an A.I. researcher named Marcus Hutter has offered a cash reward—known as the Prize for Compressing Human Knowledge, or the Hutter Prize—to anyone who can losslessly compress a specific one-gigabyte snapshot of Wikipedia smaller than the previous prize-winner did. You have probably encountered files compressed using the zip file format. The zip format reduces Hutter’s one-gigabyte file to about three hundred megabytes; the most recent prize-winner has managed to reduce it to a hundred and fifteen megabytes. This isn’t just an exercise in smooshing. Hutter believes that better text compression will be instrumental in the creation of human-level artificial intelligence, in part because the greatest degree of compression can be achieved by understanding the text.

To grasp the proposed relationship between compression and understanding, imagine that you have a text file containing a million examples of addition, subtraction, multiplication, and division. Although any compression algorithm could reduce the size of this file, the way to achieve the greatest compression ratio would probably be to derive the principles of arithmetic and then write the code for a calculator program. Using a calculator, you could perfectly reconstruct not just the million examples in the file but any other example of arithmetic that you might encounter in the future. The same logic applies to the problem of compressing a slice of Wikipedia. If a compression program knows that force equals mass times acceleration, it can discard a lot of words when compressing the pages about physics because it will be able to reconstruct them. Likewise, the more the program knows about supply and demand, the more words it can discard when compressing the pages about economics, and so forth.
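
As a rough, hypothetical illustration of compression-by-understanding: a file of arithmetic examples shrinks much further if you store only the questions and regenerate the answers with a “calculator” at decompression time. The generated examples below are invented for the sketch.

```python
# Compare generic lossless compression with "understanding" the arithmetic.
import random, zlib

random.seed(0)
examples = [f"{a} + {b} = {a + b}" for a, b in
            ((random.randint(0, 999), random.randint(0, 999)) for _ in range(10_000))]
full_text = "\n".join(examples).encode()

# Generic lossless compression keeps every character of every answer.
generic = len(zlib.compress(full_text))

# "Understanding" arithmetic: keep only the questions, recompute the answers.
questions_only = "\n".join(line.split(" = ")[0] for line in examples).encode()
with_calculator = len(zlib.compress(questions_only))  # plus a tiny calculator program

reconstructed = [f"{q} = {eval(q)}" for q in questions_only.decode().split("\n")]
assert reconstructed == examples                      # perfect reconstruction

print(generic, with_calculator)                       # the second number is smaller
```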

Large language models identify statistical regularities in text. Any analysis of the text of the Web will reveal that phrases like “supply is low” often appear in close proximity to phrases like “prices rise.” A chatbot that incorporates this correlation might, when asked a question about the effect of supply shortages, respond with an answer about prices increasing. If a large language model has compiled a vast number of correlations between economic terms—so many that it can offer plausible responses to a wide variety of questions—should we say that it actually understands economic theory? Models like ChatGPT aren’t eligible for the Hutter Prize for a variety of reasons, one of which is that they don’t reconstruct the original text precisely—i.e., they don’t perform lossless compression. But is it possible that their lossy compression nonetheless indicates real understanding of the sort that A.I. researchers are interested in?
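
A toy version of that correlation-based answering, with a made-up three-sentence “corpus”; real models work over tokens and vastly richer statistics, not whole phrases:

```python
# Count how often cause-phrases and effect-phrases co-occur, then answer with
# whichever effect has co-occurred most often with the phrase in the question.
from collections import Counter

corpus = [
    "when supply is low prices rise",
    "supply is low so prices rise sharply",
    "when supply is high prices fall",
]

causes = ["supply is low", "supply is high"]
effects = ["prices rise", "prices fall"]
cooccurrence = Counter()
for sentence in corpus:
    for cause in causes:
        for effect in effects:
            if cause in sentence and effect in sentence:
                cooccurrence[(cause, effect)] += 1

def answer(question_phrase):
    best = max(effects, key=lambda e: cooccurrence[(question_phrase, e)])
    return f"{question_phrase}, so {best}"

print(answer("supply is low"))   # -> "supply is low, so prices rise"
```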

Let’s go back to the example of arithmetic. If you ask GPT-3 (the large-language model that ChatGPT was built from) to add or subtract a pair of numbers, it almost always responds with the correct answer when the numbers have only two digits. But its accuracy worsens significantly with larger numbers, falling to ten per cent when the numbers have five digits. Most of the correct answers that GPT-3 gives are not found on the Web—there aren’t many Web pages that contain the text “245 + 821,” for example—so it’s not engaged in simple memorization. But, despite ingesting a vast amount of information, it hasn’t been able to derive the principles of arithmetic, either. A close examination of GPT-3’s incorrect answers suggests that it doesn’t carry the “1” when performing arithmetic. The Web certainly contains explanations of carrying the “1,” but GPT-3 isn’t able to incorporate those explanations. GPT-3’s statistical analysis of examples of arithmetic enables it to produce a superficial approximation of the real thing, but no more than that.
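
A sketch of what that failure pattern looks like: adding digit by digit but discarding the carry. This is an illustration of the described error, not a claim about GPT-3’s internals.

```python
# Add column by column, keeping only the ones digit of each column sum.
def add_without_carry(a: int, b: int) -> int:
    result, place = 0, 1
    while a or b:
        digit = (a % 10 + b % 10) % 10   # keep the ones digit, drop any carry
        result += digit * place
        a, b, place = a // 10, b // 10, place * 10
    return result

print(38 + 47)                       # 85, the correct sum
print(add_without_carry(38, 47))     # 75: the carry from 8 + 7 is lost
print(add_without_carry(245, 821))   # 66 instead of 1066: the carried "1" never arrives
```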

Given GPT-3’s failure at a subject taught in elementary school, how can we explain the fact that it sometimes appears to perform well at writing college-level essays? Even though large language models often hallucinate, when they’re lucid they sound like they actually understand subjects like economic theory. Perhaps arithmetic is a special case, one for which large language models are poorly suited. Is it possible that, in areas outside addition and subtraction, statistical regularities in text actually do correspond to genuine knowledge of the real world?

I think there’s a simpler explanation. Imagine what it would look like if ChatGPT were a lossless algorithm. If that were the case, it would always answer questions by providing a verbatim quote from a relevant Web page. We would probably regard the software as only a slight improvement over a conventional search engine, and be less impressed by it. The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.

A lot of uses have been proposed for large language models. Thinking about them as blurry jpegs offers a way to evaluate what they might or might not be well suited for. Let’s consider a few scenarios.

Can large language models take the place of traditional search engines? For us to have confidence in them, we would need to know that they haven’t been fed propaganda and conspiracy theories—we’d need to know that the jpeg is capturing the right sections of the Web. But, even if a large language model includes only the information we want, there’s still the matter of blurriness. There’s a type of blurriness that is acceptable, which is the re-stating of information in different words. Then there’s the blurriness of outright fabrication, which we consider unacceptable when we’re looking for facts. It’s not clear that it’s technically possible to retain the acceptable kind of blurriness while eliminating the unacceptable kind, but I expect that we’ll find out in the near future.

Even if it is possible to restrict large language models from engaging in fabrication, should we use them to generate Web content? This would make sense only if our goal is to repackage information that’s already available on the Web. Some companies exist to do just that—we usually call them content mills. Perhaps the blurriness of large language models will be useful to them, as a way of avoiding copyright infringement. Generally speaking, though, I’d say that anything that’s good for content mills is not good for people searching for information. The rise of this type of repackaging is what makes it harder for us to find what we’re looking for online right now; the more that text generated by large language models gets published on the Web, the more the Web becomes a blurrier version of itself.

There is very little information available about OpenAI’s forthcoming successor to ChatGPT, GPT-4. But I’m going to make a prediction: when assembling the vast amount of text used to train GPT-4, the people at OpenAI will have made every effort to exclude material generated by ChatGPT or any other large language model. If this turns out to be the case, it will serve as unintentional confirmation that the analogy between large language models and lossy compression is useful. Repeatedly resaving a jpeg creates more compression artifacts, because more information is lost every time. It’s the digital equivalent of repeatedly making photocopies of photocopies in the old days. The image quality only gets worse.
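
A small experiment along those lines, assuming the Pillow library is installed and a local file named photo.jpg exists (both are assumptions, not anything from the article):

```python
# Repeatedly re-save a JPEG at low quality and watch artifacts accumulate.
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")
for generation in range(25):
    img.save("copy.jpg", quality=30)   # each lossy save discards a little more detail
    img = Image.open("copy.jpg")       # ...and the next copy starts from that loss

img.save("after_25_generations.jpg", quality=30)
# Comparing photo.jpg with after_25_generations.jpg shows the generational loss.
```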

Indeed, a useful criterion for gauging a large language model’s quality might be the willingness of a company to use the text that it generates as training material for a new model. If the output of ChatGPT isn’t good enough for GPT-4, we might take that as an indicator that it’s not good enough for us, either. Conversely, if a model starts generating text so good that it can be used to train new models, then that should give us confidence in the quality of that text. (I suspect that such an outcome would require a major breakthrough in the techniques used to build these models.) If and when we start seeing models producing output that’s as good as their input, then the analogy of lossy compression will no longer be applicable.

Can large language models help humans with the creation of original writing? To answer that, we need to be specific about what we mean by that question. There is a genre of art known as Xerox art, or photocopy art, in which artists use the distinctive properties of photocopiers as creative tools. Something along those lines is surely possible with the photocopier that is ChatGPT, so, in that sense, the answer is yes. But I don’t think that anyone would claim that photocopiers have become an essential tool in the creation of art; the vast majority of artists don’t use them in their creative process, and no one argues that they’re putting themselves at a disadvantage with that choice.

So let’s assume that we’re not talking about a new genre of writing that’s analogous to Xerox art. Given that stipulation, can the text generated by large language models be a useful starting point for writers to build off when writing something original, whether it’s fiction or nonfiction? Will letting a large language model handle the boilerplate allow writers to focus their attention on the really creative parts?

Obviously, no one can speak for all writers, but let me make the argument that starting with a blurry copy of unoriginal work isn’t a good way to create original work. If you’re a writer, you will write a lot of unoriginal work before you write something original. And the time and effort expended on that unoriginal work isn’t wasted; on the contrary, I would suggest that it is precisely what enables you to eventually create something original. The hours spent choosing the right word and rearranging sentences to better follow one another are what teach you how meaning is conveyed by prose. Having students write essays isn’t merely a way to test their grasp of the material; it gives them experience in articulating their thoughts. If students never have to write essays that we have all read before, they will never gain the skills needed to write something that we have never read.

And it’s not the case that, once you have ceased to be a student, you can safely use the template that a large language model provides. The struggle to express your thoughts doesn’t disappear once you graduate—it can take place every time you start drafting a new piece. Sometimes it’s only in the process of writing that you discover your original ideas. Some might say that the output of large language models doesn’t look all that different from a human writer’s first draft, but, again, I think this is a superficial resemblance. Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an A.I.

There’s nothing magical or mystical about writing, but it involves more than placing an existing document on an unreliable photocopier and pressing the Print button. It’s possible that, in the future, we will build an A.I. that is capable of writing good prose based on nothing but its own experience of the world. The day we achieve that will be momentous indeed—but that day lies far beyond our prediction horizon. In the meantime, it’s reasonable to ask, What use is there in having something that rephrases the Web? If we were losing our access to the Internet forever and had to store a copy on a private server with limited space, a large language model like ChatGPT might be a good solution, assuming that it could be kept from fabricating. But we aren’t losing our access to the Internet. So just how much use is a blurry jpeg, when you still have the original? :diamonds:

3 Likes

And another more recent piece by Chiang on the dangerous implications of AI and neoliberal capitalism

Illustration by Berke Yazicioglu

Annals of Artificial Intelligence

Will A.I. Become the New McKinsey?

As it’s currently imagined, the technology promises to concentrate wealth and disempower workers. Is an alternative possible?

By Ted Chiang

May 4, 2023


When we talk about artificial intelligence, we rely on metaphor, as we always do when dealing with something new and unfamiliar. Metaphors are, by their nature, imperfect, but we still need to choose them carefully, because bad ones can lead us astray. For example, it’s become very common to compare powerful A.I.s to genies in fairy tales. The metaphor is meant to highlight the difficulty of making powerful entities obey your commands; the computer scientist Stuart Russell has cited the parable of King Midas, who demanded that everything he touched turn into gold, to illustrate the dangers of an A.I. doing what you tell it to do instead of what you want it to do. There are multiple problems with this metaphor, but one of them is that it derives the wrong lessons from the tale to which it refers. The point of the Midas parable is that greed will destroy you, and that the pursuit of wealth will cost you everything that is truly important. If your reading of the parable is that, when you are granted a wish by the gods, you should phrase your wish very, very carefully, then you have missed the point.

So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey—a consulting firm that works with ninety per cent of the Fortune 100—and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America.

A former McKinsey employee has described the company as “capital’s willing executioners”: if you want something done but don’t want to get your hands dirty, McKinsey will do it for you. That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.

The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey? The question is worth considering across different meanings of the term “A.I.” If you think of A.I. as a broad set of technologies being marketed to companies to help them cut their costs, the question becomes: how do we keep those technologies from working as “capital’s willing executioners”? Alternatively, if you imagine A.I. as a semi-autonomous software program that solves problems that humans ask it to solve, the question is then: how do we prevent that software from assisting corporations in ways that make people’s lives worse? Suppose you’ve built a semi-autonomous A.I. that’s entirely obedient to humans—one that repeatedly checks to make sure it hasn’t misinterpreted the instructions it has received. This is the dream of many A.I. researchers. Yet such software could easily still cause as much harm as McKinsey has.

Note that you cannot simply say that you will build A.I. that only offers pro-social solutions to the problems you ask it to solve. That’s the equivalent of saying that you can defuse the threat of McKinsey by starting a consulting firm that only offers such solutions. The reality is that Fortune 100 companies will hire McKinsey instead of your pro-social firm, because McKinsey’s solutions will increase shareholder value more than your firm’s solutions will. It will always be possible to build A.I. that pursues shareholder value above all else, and most companies will prefer to use that A.I. instead of one constrained by your principles.

Is there a way for A.I. to do something other than sharpen the knife blade of capitalism? Just to be clear, when I refer to capitalism, I’m not talking about the exchange of goods or services for prices determined by a market, which is a property of many economic systems. When I refer to capitalism, I’m talking about a specific relationship between capital and labor, in which private individuals who have money are able to profit off the effort of others. So, in the context of this discussion, whenever I criticize capitalism, I’m not criticizing the idea of selling things; I’m criticizing the idea that people who have lots of money get to wield power over people who actually work. And, more specifically, I’m criticizing the ever-growing concentration of wealth among an ever-smaller number of people, which may or may not be an intrinsic property of capitalism but which absolutely characterizes capitalism as it is practiced today.

As it is currently deployed, A.I. often amounts to an effort to analyze a task that human beings perform and figure out a way to replace the human being. Coincidentally, this is exactly the type of problem that management wants solved. As a result, A.I. assists capital at the expense of labor. There isn’t really anything like a labor-consulting firm that furthers the interests of workers. Is it possible for A.I. to take on that role? Can A.I. do anything to assist workers instead of management?

Some might say that it’s not the job of A.I. to oppose capitalism. That may be true, but it’s not the job of A.I. to strengthen capitalism, either. Yet that is what it currently does. If we cannot come up with ways for A.I. to reduce the concentration of wealth, then I’d say it’s hard to argue that A.I. is a neutral technology, let alone a beneficial one.

Many people think that A.I. will create more unemployment, and bring up universal basic income, or U.B.I., as a solution to that problem. In general, I like the idea of universal basic income; however, over time, I’ve become skeptical about the way that people who work in A.I. suggest U.B.I. as a response to A.I.-driven unemployment. It would be different if we already had universal basic income, but we don’t, so expressing support for it seems like a way for the people developing A.I. to pass the buck to the government. In effect, they are intensifying the problems that capitalism creates with the expectation that, when those problems become bad enough, the government will have no choice but to step in. As a strategy for making the world a better place, this seems dubious.

You may remember that, in the run-up to the 2016 election, the actress Susan Sarandon—who was a fervent supporter of Bernie Sanders—said that voting for Donald Trump would be better than voting for Hillary Clinton because it would bring about the revolution more quickly. I don’t know how deeply Sarandon had thought this through, but the Slovenian philosopher Slavoj Žižek said the same thing, and I’m pretty sure he had given a lot of thought to the matter. He argued that Trump’s election would be such a shock to the system that it would bring about change.

What Žižek advocated for is an example of an idea in political philosophy known as accelerationism. There are a lot of different versions of accelerationism, but the common thread uniting left-wing accelerationists is the notion that the only way to make things better is to make things worse. Accelerationism says that it’s futile to try to oppose or reform capitalism; instead, we have to exacerbate capitalism’s worst tendencies until the entire system breaks down. The only way to move beyond capitalism is to stomp on the gas pedal of neoliberalism until the engine explodes.

I suppose this is one way to bring about a better world, but, if it’s the approach that the A.I. industry is adopting, I want to make sure everyone is clear about what they’re working toward. By building A.I. to do jobs previously performed by people, A.I. researchers are increasing the concentration of wealth to such extreme levels that the only way to avoid societal collapse is for the government to step in. Intentionally or not, this is very similar to voting for Trump with the goal of bringing about a better world. And the rise of Trump illustrates the risks of pursuing accelerationism as a strategy: things can get very bad, and stay very bad for a long time, before they get better. In fact, you have no idea of how long it will take for things to get better; all you can be sure of is that there will be significant pain and suffering in the short and medium term.

I’m not very convinced by claims that A.I. poses a danger to humanity because it might develop goals of its own and prevent us from turning it off. However, I do think that A.I. is dangerous inasmuch as it increases the power of capitalism. The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value. Capitalism is the machine that will do whatever it takes to prevent us from turning it off, and the most successful weapon in its arsenal has been its campaign to prevent us from considering any alternatives.

People who criticize new technologies are sometimes called Luddites, but it’s helpful to clarify what the Luddites actually wanted. The main thing they were protesting was the fact that their wages were falling at the same time that factory owners’ profits were increasing, along with food prices. They were also protesting unsafe working conditions, the use of child labor, and the sale of shoddy goods that discredited the entire textile industry. The Luddites did not indiscriminately destroy machines; if a machine’s owner paid his workers well, they left it alone. The Luddites were not anti-technology; what they wanted was economic justice. They destroyed machinery as a way to get factory owners’ attention. The fact that the word “Luddite” is now used as an insult, a way of calling someone irrational and ignorant, is a result of a smear campaign by the forces of capital.

Whenever anyone accuses anyone else of being a Luddite, it’s worth asking, is the person being accused actually against technology? Or are they in favor of economic justice? And is the person making the accusation actually in favor of improving people’s lives? Or are they just trying to increase the private accumulation of capital?

Today, we find ourselves in a situation in which technology has become conflated with capitalism, which has in turn become conflated with the very notion of progress. If you try to criticize capitalism, you are accused of opposing both technology and progress. But what does progress even mean, if it doesn’t include better lives for people who work? What is the point of greater efficiency, if the money being saved isn’t going anywhere except into shareholders’ bank accounts? We should all strive to be Luddites, because we should all be more concerned with economic justice than with increasing the private accumulation of capital. We need to be able to criticize harmful uses of technology—and those include uses that benefit shareholders over workers—without being described as opponents of technology.

Imagine an idealized future, a hundred years from now, in which no one is forced to work at any job they dislike, and everyone can spend their time on whatever they find most personally fulfilling. Obviously it’s hard to see how we’d get there from here. But now consider two possible scenarios for the next few decades. In one, management and the forces of capital are even more powerful than they are now. In the other, labor is more powerful than it is now. Which one of these seems more likely to get us closer to that idealized future? And, as it’s currently deployed, which one is A.I. pushing us toward?

Of course, there is the argument that new technology improves our standard of living in the long term, which makes up for the unemployment that it creates in the short term. This argument carried weight for much of the post-Industrial Revolution period, but it has lost its force in the past half century. In the United States, per-capita G.D.P. has almost doubled since 1980, while the median household income has lagged far behind. That period covers the information-technology revolution. This means that the economic value created by the personal computer and the Internet has mostly served to increase the wealth of the top one per cent of the top one per cent, instead of raising the standard of living for U.S. citizens as a whole.

Of course, we all have the Internet now, and the Internet is amazing. But real-estate prices, college tuition, and health-care costs have all risen faster than inflation. In 1980, it was common to support a family on a single income; now it’s rare. So, how much progress have we really made in the past forty years? Sure, shopping online is fast and easy, and streaming movies at home is cool, but I think a lot of people would willingly trade those conveniences for the ability to own their own homes, send their kids to college without running up lifelong debt, and go to the hospital without falling into bankruptcy. It’s not technology’s fault that the median income hasn’t kept pace with per-capita G.D.P.; it’s mostly the fault of Ronald Reagan and Milton Friedman. But some responsibility also falls on the management policies of C.E.O.s like Jack Welch, who ran General Electric between 1981 and 2001, as well as on consulting firms like McKinsey. I’m not blaming the personal computer for the rise in wealth inequality—I’m just saying that the claim that better technology will necessarily improve people’s standard of living is no longer credible.

The fact that personal computers didn’t raise the median income is particularly relevant when thinking about the possible benefits of A.I. It’s often suggested that researchers should focus on ways that A.I. can increase individual workers’ productivity rather than replace them; this is referred to as the augmentation path, as opposed to the automation path. That’s a worthy goal, but, by itself, it won’t improve people’s economic fortunes. The productivity software that ran on personal computers was a perfect example of augmentation rather than automation: word-processing programs replaced typewriters rather than typists, and spreadsheet programs replaced paper spreadsheets rather than accountants. But the increased personal productivity brought about by the personal computer wasn’t matched by an increased standard of living.

The only way that technology can boost the standard of living is if there are economic policies in place to distribute the benefits of technology appropriately. We haven’t had those policies for the past forty years, and, unless we get them, there is no reason to think that forthcoming advances in A.I. will raise the median income, even if we’re able to devise ways for it to augment individual workers. A.I. will certainly reduce labor costs and increase profits for corporations, but that is entirely different from improving our standard of living.

It would be convenient if we could assume that a utopian future is right around the corner and develop technology for use in that future. But the fact that a given technology would be helpful in a utopia does not imply that it’s helpful now. In a utopia where there’s a machine that converts toxic waste into food, generating toxic waste wouldn’t be a problem, but, in the here and now, no one could claim that generating toxic waste is harmless. Accelerationists might argue that generating more toxic waste will motivate the invention of a waste-to-food converter, but how convincing is that? We evaluate the environmental impact of technologies in the context of the mitigations that are currently available, not in the context of hypothetical future mitigations. By the same token, we can’t evaluate A.I. by imagining how helpful it will be in a world with U.B.I.; we have to evaluate it in light of the existing imbalance between capital and labor, and, in that context, A.I. is a threat because of the way it assists capital.

A former partner at McKinsey defended the company’s actions by saying, “We don’t do policy. We do execution.” But this is a pretty thin excuse; harmful policy decisions are more likely to be made when consulting firms—or new technologies—offer ways to implement them. The version of A.I. that’s currently being developed makes it easier for companies to lay people off. So is there any way to develop a kind of A.I. that makes it harder?

In his book “How to Be an Anticapitalist in the 21st Century,” the sociologist Erik Olin Wright offers a taxonomy of strategies for responding to the harms of capitalism. Two of the strategies he mentions are smashing capitalism and dismantling capitalism, which probably fall outside the scope of this discussion. The ones that are more relevant here are taming capitalism and resisting capitalism. Roughly speaking, taming capitalism means government regulation, and resisting capitalism means grassroots activism and labor unions. Are there ways for A.I. to strengthen those things? Is there a way for A.I. to empower labor unions or worker-owned coöperatives?

In 1976, the workers at the Lucas Aerospace Corporation in Birmingham, England, were facing layoffs because of cuts in defense spending. In response, the shop stewards produced a document known as the Lucas Plan, which described a hundred and fifty “socially useful products,” ranging from dialysis machines to wind turbines and hybrid engines for cars, that the workforce could build with its existing skills and equipment rather than being laid off. The management at Lucas Aerospace rejected the proposal, but it remains a notable modern example of workers trying to steer capitalism in a more human direction. Surely something similar must be possible with modern computing technology.

Does capitalism have to be as harmful as it currently is? Maybe not. The three decades following the Second World War are sometimes known as the golden age of capitalism. This period was partially the result of better government policies, but the government didn’t create the golden age on its own: corporate culture was different during this era. In General Electric’s annual report from 1953, the company bragged about how much it paid in taxes and how much it was spending on payroll. It explicitly said that “maximizing employment security is a prime company goal.” The founder of Johnson & Johnson said that the company’s responsibility to its employees was higher than its responsibility to its shareholders. Corporations then had a radically different conception of their role in society compared with corporations today.

Is there a way to get back to those values? It seems unlikely, but remember that the golden age of capitalism came after the enormous wealth inequality of the Gilded Age. Right now we’re living in a second Gilded Age, in which wealth inequality is about the same as it was back in 1913, so it’s not impossible that we could go from where we are now to a second golden age. Of course, in between the first Gilded Age and the golden age we had the Great Depression and two World Wars. An accelerationist might say that those events were necessary to bring about the golden age, but I think most of us would prefer to skip over those steps. The task before us is to imagine ways for technology to move us toward a golden age without bringing about another Great Depression first.

We all live in a capitalist system, so we are all participants in capitalism whether we like it or not. And it’s reasonable to wonder if there’s anything you as an individual can do. If you work as a food scientist at Frito-Lay and your job is to invent new flavors of potato chip, I’m not going to say that you have an ethical obligation to quit because you’re assisting the engine of consumerism. You’re using your training as a food scientist to provide customers with a pleasant experience; that’s a perfectly reasonable way to make a living.

But many of the people who work in A.I. regard it as more important than inventing new flavors of potato chip. They say it’s a world-changing technology. If that’s the case, then they have a duty to find ways for A.I. to make the world better without first making it worse. Can A.I. ameliorate the inequities of our world other than by pushing us to the brink of societal collapse? If A.I. is as powerful a tool as its proponents claim, they should be able to find other uses for it besides intensifying the ruthlessness of capital.

If there is any lesson that we should take from stories about genies granting wishes, it’s that the desire to get something without effort is the real problem. Think about the story of “The Sorcerer’s Apprentice,” in which the apprentice casts a spell to make broomsticks carry water but is unable to make them stop. The lesson of that story is not that magic is impossible to control: at the end of the story, the sorcerer comes back and immediately fixes the mess the apprentice made. The lesson is that you can’t get out of doing the hard work. The apprentice wanted to avoid his chores, and looking for a shortcut was what got him into trouble.

The tendency to think of A.I. as a magical problem solver is indicative of a desire to avoid the hard work that building a better world requires. That hard work will involve things like addressing wealth inequality and taming capitalism. For technologists, the hardest work of all—the task that they most want to avoid—will be questioning the assumption that more technology is always better, and the belief that they can continue with business as usual and everything will simply work itself out. No one enjoys thinking about their complicity in the injustices of the world, but it is imperative that the people who are building world-shaking technologies engage in this kind of critical self-examination. It’s their willingness to look unflinchingly at their own role in the system that will determine whether A.I. leads to a better world or a worse one. :diamonds:


Ted Chiang is an award-winning author of science fiction. In 2016, the title story from his first collection, “Stories of Your Life and Others,” was adapted into the film “Arrival.” He lives in Bellevue, Washington, where he works as a freelance technical writer.

2 Likes

Hi @admin, one or two elements of TC’s article don’t sync with my experience:

  1. The Genie or McKinsey AI - maybe, but there are lots of other potential AIs depending on their development.
    At the moment they seem to be brought into play whenever we find we have too much data for humans to read, absorb and sort - didn’t the MHRA invest in one to deal with the massive number of side effects they thought would follow the jab?
    One-to-one political manipulation of voters is not possible in most general elections, but combine social media and AI - Cambridge Analytica - and voilà!
    Spooks and police must have too much data to deal with, so AI could be used to search and sort - hence data gathering has increased now that they have a route into managing it! This doesn’t bode well for the public at large, and excessive use will bring in 1984 to a far greater degree than we have at present.
    Military demand for AI must be massive at every level.

  2. I think we have moved away from capitalism to autocratic militarism or globalism, so economic and social issues are no longer of vital relevance in decision making.

  3. Whitney Webb recently made a very good observation about UBI - it sounds good until you take into account that UBI is intended to remove all other forms of social support - including free health care and pensions! The devil is in the detail as always.

  4. The promises made at the start about the benefits to everyone from robotics were never intended to be fulfilled, as has proved to be the case. Instead of individuals or local communities having ownership of robots to improve their daily lives, ownership stopped at corporate and selective state operations. So it will be with AI, with or without robotic elements, imo.

cheers

2 Likes

Hi @CJ1

Yes, you’re right. The field of machine learning and AI is very broad, with lots of application areas. TC is talking mainly about the new breed of generative AI represented by the latest chatbots. His main point seems to be that there is a lot of anthropomorphising going on around what is ultimately a collection of statistical routines. I found his analogy with lossy compression and interpolation very interesting. It’s not the full story, but it is an interesting analogy. It’s a nice counterpoint to those who are claiming that AI is already conscious.

I agree. And I think TC is arguing that the way AI is being used will accelerate that trend.

WW is outstanding. You’re right - the devil is in the detail.

Agree with your point 4. With the happy thought that, again, unregulated AI could make the whole thing much worse.

Cheers for your thoughts

1 Like

A random quote from an excellent article. I haven’t read the second one yet but on first skim noticed the reference to Slavoj Žižek. He’s a funny guy, shoots from the hip rather than thinking things through as thoroughly as Ted Chiang seems to imply. But I might come back to that.

The quoted sentence is very prescient. I have noticed during sessions with a competent psychotherapist (and I am currently pursuing a complaint against a very much NOT competent one) that one says things that one had never thought of before. Or rather, they surface from the unconscious, with or without some prompting from the therapist. When I mentioned this to a very young but very sharp therapist, she suggested I research “the rubber duck effect”.

In brief: even talking to a rubber duck (or reading our draft essay aloud to one) helps to uncover defects we hadn’t previously spotted. It can be as simple as rearranging two clauses in the same sentence so that they read better. Give it a go.

2 Likes

Having now read the (second Ted Chiang) article I see that he has used Slavoj Žižek to provide a bridge to a (decent) summary of (left) accelerationism. (There is a right accelerationism too; UKColumn may even be an example.)

Folks like Nick Land are probably much more central to an understanding of l/accelerationism, but I use the term ‘understanding’ advisedly: he is very opaque in a lot of his writing.

This article includes a video which I haven’t watched but seems a decent summary of where Žižek stands. (Before leaving the topic I have to add that Žižek is a very readable film critic and I recommend his series A Pervert’s Guide to Cinema. Lots of Lynch and Hitchcock. Very likely on Yuchoob.)

Anyhow, here’s a decent, simple, article about the threats to employment posed by AI, which is a key theme of Chiang’s (second) article.

2 Likes

Google fakes it?

I don’t know how serious the shortfall here is in technical terms, but I do find the seemingly brazen bad faith it reveals significant.
ED

Google’s best Gemini AI demo video was fabricated

Google takes heat for a misleading AI demo video that hyped up its GPT-4 competitor.

Benj Edwards

4 Likes

Interesting but not surprising I guess. Marketing is a form of sanctioned lying after all.

It did remind me of the conversation with Whitney Webb that you posted here a couple months back where she was predicting that someone would fake an artificial general intelligence as the next step in this surreal tech landscape…

Cheers

2 Likes

As Aly says, none of us really expect truth in advertising, but it is insidious for Google to make these claims simply because tons of people, including investors, will believe that this level of ‘intelligence’ is close to being fulfilled. That props up the agenda that (1) AI is beneficial and will solve all our problems and (2) the State (or Global Gov as appropriate) had better look after the largely unemployed population with UBI. For a generation or so. The well-behaved, sterilised, docile population, that is.

Maybe four years ago I sampled a Microsoft tool which was ostensibly designed to help people with vision problems get a sense of their surroundings using the phone camera. The AI would describe the scene, e.g. “a large auditorium with a man aged about forty writing on a whiteboard” = you are in a lecture theatre. It was fairly basic and easy to fool. Doing the same with real-time moving images may be possible one day, in simplified Built Back Better environments. (Imagine how much Teams footage has been analysed by their servers since then.)

… but why?

Yeah, sure, it might help some blind people to feel more a part of their environment but goggles that stimulate your optic nerve are far better. One of my nephews uses them.

Google are self-evidently working on a RoboCop. Perfect discrimination is not the real aim. The desired outcome on the real project plan is to be sufficiently accurate, with appropriate warnings posted on the borders of elite gated compounds, to encourage judicious pizza deliveries only. (Attribution: Snow Crash, Neal Stephenson)

3 Likes

Google just launched the most powerful AI on the planet. Gemini.
But after the launch Google’s value dropped by 4.4% - which is some going for a $1.73 trillion outfit (about $75bn I make it).
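
A quick back-of-envelope check of that figure:

```python
# Rough check: a 4.4% fall on a ~$1.73 trillion market cap.
drop = 0.044 * 1.73e12
print(f"${drop / 1e9:.0f}bn")   # ~$76bn, in the same ballpark as the ~$75bn above
```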

Some amusing commentary in this 18m video
https://twitter.com/IntConfused/status/1760961175867773335

People quickly picked up on some flaws in its output, in particular images like these (from the above link):

Gemini’s image of a pope

Gemini’s image of Vikings

Gemini’s images of Nazis

As is explained in the vid, here is Jen Gennai, founder of Google’s Responsible Innovation Group, AI and governance team (or something very similar), to explain.
In that capacity, she “ensured Google met its AI principles, our company’s ethical charter for the deployment of fair, inclusive and ethical advanced technologies.”
She said she took a “principled, risk-based, inclusive approach when conducting ethical and algorithmic impact assessments of products prior to launch, to ensure that they didn’t cause unintended or harmful consequences to the billions of Google’s users.”

Lol.
Right-wingers are alarmed at the ‘erasure’ of white people, and put it down to ‘Marxism’, bless them (or maybe not), from the left-wing er… billionaires.
Kind of missing the point of the weaponization of selected minority groups.

But amusing.

Unlike this:

US Used AI to Help Find Middle East Targets for Airstrikes

https://www.bloomberg.com/news/articles/2024-02-26/us-says-it-used-ai-to-help-find-targets-it-hit-in-iraq-syria-and-yemen

Matt Taibbi comments:

“If AI Thinks George Washington is a Black Woman, Why Are We Letting it Pick Bomb Targets?”

Article is paywalled.

3 Likes

Of all people, Andrew Tate has figured out how to get white people on Google’s AI.

Ask it to show pictures of people who like fried chicken…

2 Likes

I did wonder if the Gemini stuff was spoofed. I don’t seem to be able to access it, which is most upsetting having had Beta access to the previous version. Which, admittedly, I got Bard of pretty soon, yuk yuk. The images of Nazis were especially funny. When I’m marched off to Extremist jail or a reeducation centre for posting, to my three-and-a-half followers, a hate-filled meme of a cat vomiting at the sight of an MP, I do hope they will send a properly representative snatch squad of varying hues and with multicoloured flag badges etc.

3 Likes

I hear there is an example of Hasbara on Google’s AI. Ask it what is Israehell (but spell it the correct way). Then ask what is Palestine.