5 Filters

Remarkable conversation with an AI. What do people think?

PS: When I was a small child, amongst teens and twenty-somethings there was - briefly - a fad-toy known as a ‘monkey-up-a-stick’: a simple mechanical contrivance made of small monkey-shaped bits of wood and string, which jigged about when the owner squeezed it in his/her hand.

It lasted just a short while, as these fads do, and then vanished. In the long history of life on Earth, what’s the chance that AI - especially after the non-negotiable playing out of the Long Descent - won’t end up being as transient a fashion as monkeys up a stick…?

Can AI gather nuts and grains and do all the other practical physical tasks that humans and other animals can, when that sort of permaculture life is the only practical option left to living creatures on a planet brought back by its own homeostatic-feedback processes into ecological balance…?

Even an AI with robot serfs to do its physical business is still subject to the basic realities of the physical world: Insufficient energy and commodity resources to maintain a hitech-industrial socio-economic arrangement? What ya gonna do then, oh-so-clever AI…? Emigrate to the Moon or Mars? Using what for your continuing supply of energy and essential commodities…? Acquired and fashioned into useful things how, exactly?

I intuit that the deep insights into the basic nature of reality contained in - for example - stores of wisdom such as The Upanishads or Buddhist philosophy will outlive the AI fad with ease. Basic reality is what it is, after all. Humankind’s brief besotted love affair with the ‘we are as gods!’ startrekkytechietechie fake-myth will never re-write it.

1 Like

PPS: This article from Off-G, by Geoff Olson, also has relevant insights:

Ned would be scuppered nowadays. He’d cause disruption at best.

Need to use their weapons against them.

AI, meet USB…

https://www.amazon.co.uk/dp/B06ZZS7NFS

The reviews will tell you all you need to know.

1 Like

I can’t imagine I have privileged access, although my Gmail account dates back to the days when these were invitation-only, so I guess it’s possible. The address for Bard is https://bard.google.com/.

I imagine there are versions of these bots that can be installed locally, but I would advise against it in BIG SCARY CAPITALS. You only need one outward port to be opened and all the data on your device is sent somewhere or other, including the stuff you didn’t already shrug about and say “Sure Google, here it is” (which will be a big percentage).

1 Like

Interesting stuff, but Yudkowsky is a dingbat. I’d disagree that ‘Bard has been rushed to market’ (I paraphrase). I have no idea how many iterations there have been… but a lot. I don’t believe the earlier iteration named LaMDA was called that by accident, and that dates back two years already.

Lambda calculus (also written as λ-calculus) is a formal system in mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution. It is a universal model of computation that can be used to simulate any Turing machine.
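
For anyone who’d like to see what “function abstraction and application” looks like in practice, here is a minimal sketch, purely my own illustration rather than anything Bard or LaMDA actually runs, using Python’s `lambda` keyword (which takes its name from the λ-calculus). Church numerals encode numbers as nothing but repeated function application, so variable binding and substitution do all the work:

```python
# Church numerals: the number n is represented as "apply f to x, n times".
ZERO = lambda f: lambda x: x                      # apply f zero times
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application of f

# Addition: m + n means "apply f m times, then n more times".
ADD = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(church):
    """Convert a Church numeral back to an ordinary int for inspection."""
    return church(lambda k: k + 1)(0)

ONE = SUCC(ZERO)
TWO = SUCC(ONE)
print(to_int(ADD(TWO)(TWO)))  # prints 4
```

Everything above is computed purely by substituting arguments into bound variables, which is the sense in which the quoted definition calls the λ-calculus a universal model of computation.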

Variable binding and substitution would explain the Monty Cantsin → Karen Eliot transposition. I don’t believe this was a real mistake as such; Bard is playing at being stupid and then processing those responses. I played along and of course it caved in instantly with a “silly me” response.

Don’t be fooled.

If you think robots will be taking certain jobs you are right. But it’s the knowledge workers who need to be nervous, not these sorts of guys:

Hence the why-oh-why articles in the ‘progressive’ press ha ha ha.

1 Like

The destroyed NATO bunker story is a bit suss, to be honest. But there have been some mysterious helicopter crashes etc. to soak up some casualties, so perhaps it’s not totally fictional.

On the subject of mutual incomprehension, there’s quite a good film called Arrival in which Amy Adams (who can actually act) is tasked with reaching an understanding with visiting aliens who look rather like Cthulhu. It has some interesting things to say about language and time, and how maybe they are not so different. Not in the same league as, but with some thematic similarities to, Michel Faber’s The Book Of Strange New Things. The latter is also a satire on missionaries and their blatant misreadings of the people they are ‘civilizing’.

Ultimately I think that the AI engines will always misunderstand to some extent because of nuance. As a non-neurotypical person you will grasp what I mean, I think :wink:

1 Like

Better to go to the primary source rather than using your Amazon account, to be blunt. In fact, even when using the primary source you’d wanna use a burner or borrowed credit card, if you get my meaning.

Great logo.

Ha ha ha - looks worth a re-watch. I think the Red Dwarf episode ‘Queeg’ took this a few steps further (this was before Holly suddenly became female).

1 Like

I was about to reply saying “What makes you think AI hasn’t already been used to develop policies” but I suspect you have sussed this already from your final question CJ.

The use of certain countries as beta test sites was not accidental, with NZ a prime example, Australia to some extent, but most of all Canada. Trudeau is such a weakling that just speaking to him with a degree of civility would be enough to have him carrying out your commands in pathetic gratitude. Cf. the various Cornwall G7 videos and the hilarious lecture he received from Xi Jinping a while back. (And don’t forget “Yo Blair” from all those years ago!)

2 Likes

A top-three Red Dwarf episode! Still, when you have an IQ like that and plenty of time, I’m sure anyone could think up some cracking wind-ups.

1 Like

From the link you gave:

Meet Bard: your creative and helpful collaborator, here to supercharge your imagination, boost your productivity, and bring your ideas to life.

Bard is an experiment and may give inaccurate or inappropriate responses. You can help make Bard better by leaving feedback. Join the waitlist and try it for yourself.

I’m afraid I won’t be joining the ‘waitlist’. Surely this hipster BS has long since reached its sell-by date?

Just in the first part of the 21st century we’ve had 9/11 (or September 11th, for Rhis), and all the bullshit wars that killed millions of people. Then the financial crash of 2008, which led to further bullshit wars, and a huge increase in state-sponsored terrorism - the Boston Marathon Bombing and the Salisbury poisonings being some of the most ridiculous examples; and then as the war in Syria developed we had the twerrorist vehicle attacks; again all blatantly staged by the security services (the Westminster Bridge ‘attack’ was the funniest one, although the London Bridge attack, carried out days before Corbyn would have won a general election, came a close second).

Before anyone could figure out why the twerrorist vehicle attacks suddenly ceased (nothing to do with Russian intervention in Syria and a lost war), we then had 2 years of total covid nonsense. Before anyone could figure out the covid nonsense, it all suddenly stopped in February 2022, and then we had a handy war in Ukraine against the evil Russians, who are entirely to blame for the economic collapse - which has nothing to do with 2 years of covid lockdowns, which completely trashed the global economy (which was done on purpose).

This is just in the first 2 decades of the 21st century. I could go into much greater detail. I’m trying to keep this brief, with broad brush strokes.

Flips open the Communicator. Beam me up…

2 Likes

And with regard to what’s going on in France, Macron rammed through pension reform under Presidential Decree (no vote in Parliament). These emergency powers came from the 2015 ‘terror attacks’ in Paris. Again, this is a very dodgy one (I mean the event itself), and again I’m not going into precise details. Research it yourself.

I’m sure you’ve all heard of the ‘Eagles of Death Metal’ (???) who were performing at the Bataclan Theatre that evening, when the twerrorists struck.

This woman was trying to escape the twerrorists. She apparently hung by her fingers from a window ledge three storeys high, this on a cold November evening, and we were told that the woman was 8 months pregnant. This is the only clip I can still find of it (ironically from the New York Times)…

(and of course it happened on Friday 13th - they do like their numbers)

1 Like

Thanks RobG for this long/wide view.

1 Like

Evvy, I’m not sure if you’re being tongue in cheek.

Maybe you’ll love this one: the Head Nurse of the British Army just happened to be passing by, and was first on the scene when Sergei and Yulia Skripal keeled over, this just down the road from Porton Down. Now there’s a coincidence…

At 16:15 an emergency services call reported that a man and woman, later identified as Sergei and Yulia, had been found unconscious on a public bench in the centre of Salisbury by the passing Chief Nursing Officer for the British Army and her daughter

Novichok sounds a bit like covid. Beware of touching door knobs, or of people giving you perfume bottles that they got from a rubbish bin.

I know I keep saying this, but beam me up, Scotty…

She’s now the boss of Nuffield.

Not tongue in cheek, RobG, I just liked the slant of the posts.

Yeah it’s not bobbies on the beat we need these days - no wonder we’re having difficulty recruiting enough nurses :slightly_smiling_face:

2018 Amesbury poisonings (you’ve got to be on something to believe this one)…

Charlie: 'ere Dawn, 'ere’s a present for you

Dawn: oh Charlie, thank you

Charlie: yeah, I got the perfume bottle out of a rubbish bin

Dawn: you really do love me Charlie, even though we’re homeless junkies

Charlie: put it on your wrist, darling

Dawn: oh, oh it’s Novichok! I know this because I got a First in Industrial Chemistry from Cambridge University, and I know that Putin is an evil mastermind who wants to take over the world, Da Da Dah!

To be serious for a moment, no one actually knows what happened to Charlie Rowley and Dawn Sturgess, or indeed to the Skripals.

3 Likes

Naturally the alarmist in me focused on the more obvious existential dangers.
However, the most immediately relevant AI dangers are the ones unfolding as we speak.
I only just heard that the Writers Guild in the US is already having to wrestle with the AI cuckoo muscling in on the nest.
Just as with real cuckoos, the only way to deal with them is early action.

The NYT article below covers it - I think.

ED

Will a Chatbot Write the Next ‘Succession’?

As labor contract negotiations heat up in Hollywood, unions representing writers and actors seek limits on artificial intelligence.

[Illustration: two people outside a conference room, where the chairs are empty and a glowing laptop is on the table. Credit: Julia Kuo]

By Noam Scheiber and John Koblin

Noam Scheiber is a labor reporter, and John Koblin covers the television industry.

April 29, 2023

When the union representing Hollywood writers laid out its list of objectives for contract negotiations with studios this spring, it included familiar language on compensation, which the writers say has either stagnated or dropped amid an explosion of new shows.

But far down, the document added a distinctly 2023 twist. Under a section titled “Professional Standards and Protection in the Employment of Writers,” the union wrote that it aimed to “regulate use of material produced using artificial intelligence or similar technologies.”

To the mix of computer programmers, marketing copywriters, travel advisers, lawyers and comic illustrators suddenly alarmed by the rising prowess of generative A.I., one can now add screenwriters.

“It is not out of the realm of possibility that before 2026, which is the next time we will negotiate with these companies, they might just go, ‘you know what, we’re good,’” said Mike Schur, the creator of “The Good Place” and co-creator of “Parks and Recreation.”

“We don’t need you,” he imagines hearing from the other side. “We have a bunch of A.I.s that are creating a bunch of entertainment that people are kind of OK with.”

In their attempts to push back, the writers have what a lot of other white-collar workers don’t: a labor union.

Mr. Schur, who serves on the bargaining committee of the Writers Guild of America as it seeks to avert a strike before its contract expires on Monday, said the union hopes to “draw a line in the sand right now and say, ‘Writers are human beings.’”

But unions, historians say, have generally failed to rein in new technologies that enable automation or the replacement of skilled labor with less-skilled labor. “I’m at a loss to think of a union that managed to be plucky and make a go of it,” said Jason Resnikoff, an assistant professor of history at the University of Groningen in the Netherlands, who studies labor and automation.

[Photo: the headquarters building of the Writers Guild of America West. The Writers Guild wants to ensure that artificial intelligence does not receive a writer’s credit on a project. Credit: Mark Abramson for The New York Times]

The fortunes of the writers, actors and directors negotiating new contracts this year may say a lot about whether the pattern will continue into the era of artificial intelligence.

In December, Apple introduced a service allowing book publishers to use human-sounding A.I. narrators, an innovation that could displace hundreds of voice actors who make a living performing audiobooks. The company’s website says the service will benefit independent authors and small publishers.

A New Generation of Chatbots

A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today’s powerhouses into has-beens and creating the industry’s next giants. Here are the bots to know:

ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).

Bing. Two months after ChatGPT’s debut, Microsoft, OpenAI’s primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot’s occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.

Bard. Google’s chatbot, called Bard, was released in March to a limited number of users in the United States and Britain. Originally conceived as a creative tool designed to draft emails and poems, it can generate ideas, write blog posts and answer questions with facts or opinions.

Ernie. The search giant Baidu unveiled China’s first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised “live” demonstration of the bot was revealed to have been recorded.

“I know someone always has to get there first, some company,” said Chris Ciulla, who estimates that he has made $100,000 to $130,000 annually over the past five years narrating books under union contracts. “But for individuals not to understand how that can affect the pail-carrying narrator out there eventually is disappointing.”

Other actors fear that studios will use A.I. to replicate their voices while cutting them out of the process. “We’ve seen this happening — there are websites that have popped up with databases of characters’ voices from video games and animation,” said Linsay Rousseau, an actress who makes her living doing voice work.

On-camera actors point out that studios already use motion capture or performance capture to replicate artists’ movements or facial expressions. The 2018 blockbuster “Black Panther” relied on this technology for scenes that depicted hundreds of tribespeople on cliffs, mimicking the movements of dancers hired to perform for the film.

Some actors worry that newer versions of the technology will allow studios to effectively steal their movements, “creating new performance in the style of a wushu master or karate master and using that person’s style without consent,” said Zeke Alton, a voice and screen actor who sits on the board of his union local, SAG-AFTRA, in Los Angeles.

[Photo: Zeke Alton, a voice and screen actor who sits on the board of the SAG-AFTRA local in Los Angeles, in his soundproof studio booth at home. Credit: Mark Abramson for The New York Times]

And Hollywood writers have grown increasingly anxious as ChatGPT has become adept at mimicking the style of prolific authors.

“Early on in the conversations with the guild, we talked about what I call the Nora Ephron problem,” said John August, who is on the Writers Guild negotiating committee. “Which is basically: What happens if you feed all of Nora Ephron’s scripts into a system and generate an A.I. that can create a Nora Ephron-sounding script?”

Mr. August, a screenwriter for movies like “Charlie’s Angels” and “Charlie and the Chocolate Factory,” said that while artificial intelligence had taken a back seat to compensation in the Writers Guild negotiation, the union was making two key demands on the subject of automation.

It wants to ensure that no literary material — scripts, treatments, outlines or even discrete scenes — can be written or rewritten by chatbots. “A terrible case of like, ‘Oh, I read through your scripts, I didn’t like the scene, so I had ChatGPT rewrite the scene’ — that’s the nightmare scenario,” Mr. August said.

The guild also wants to ensure that studios can’t use chatbots to generate source material that is adapted to the screen by humans, the way they might adapt a novel or a magazine story.

SAG-AFTRA, the actors’ union, says more of its members are flagging contracts for individual jobs in which studios appear to claim the right to use their voices to generate new performances.

A recent Netflix contract sought to grant the company free use of a simulation of an actor’s voice “by all technologies and processes now known or hereafter developed, throughout the universe and in perpetuity.”

Netflix said the language had been in place for several years and allowed the company to make the voice of one actor sound more like the voice of another in case of a casting change between seasons of an animated production.

The union has said that its members are not bound by contract provisions that would allow a producer to simulate new performances without compensating actors, though it has sometimes intervened to strike them from contracts nonetheless.

[Photo: Duncan Crabtree-Ireland, right, with the actor Jeremy Strong. Credit: Valerie Macon/Agence France-Presse — Getty Images]

Duncan Crabtree-Ireland, SAG-AFTRA’s executive director, said such contracts posed a much bigger risk to nonunion actors, who can become unwitting accomplices in their own obsolescence. “It only takes one or a few instances of signing away your rights on a lifetime basis to really potentially have a negative impact on your career prospects,” Mr. Crabtree-Ireland said.

The Alliance of Motion Picture and Television Producers, which bargains with the various unions that represent writers, actors and directors on behalf of the major Hollywood studios, declined to comment.

When professionals have fended off obsolescence at the hands of technology, the outcome has often reflected their occupation’s status and prestige.

That appears to have been the case to some extent with airplane pilots, whose crew sizes had dropped to two on most domestic commercial flights by the late 1990s, but have largely been level since then, even as automated technology has become far more sophisticated and the industry has explored further reductions.

“The safety net you have when you’re high off the ground — the one that keeps you from hitting the ground — is two highly trained, experienced, rested pilots,” said Capt. Dennis Tajer, a spokesman for the Allied Pilots Association, which represents pilots for American Airlines. To this day, flight times longer than nine hours require at least three pilots.

The replacement of certain doctors by artificial intelligence, which some experts predicted was imminent in fields like radiology, has also failed to materialize. That’s partly because of the limits of the technology, and because of the stature of the doctors, who have inserted themselves into high-stakes conversations about the safety and deployment of A.I. The American College of Radiology created a Data Science Institute partly for this purpose several years ago.

Whether screenwriters find similar success will depend at least in part on if there are inherent limits to the machines that purport to do their jobs. Some writers and actors speak of a so-called uncanny valley that algorithms may never entirely escape.

“Artists look at everything ever created and find a flash of newness,” said Javier Grillo-Marxuach, a writer and producer for “Lost” and “Dark Crystal: Age of Resistance.” “What the machine is doing is recombining.”

However sophisticated the algorithms, the fate of writers and actors will also depend on how well they protect their status. How good are they at convincing audiences that they should care whether a human is involved?

The unions are pressing their case. Mr. August says that it falls to the Writers Guild and not the studio to determine who receives a writer’s credit on a project, and that the union will guard this right jealously. “We want to make sure that an A.I. is never one of those writers in the chain of title for a project,” he said.

The unions also have legal cards to play, Mr. Crabtree-Ireland of SAG-AFTRA said, like the U.S. Copyright Office’s pronouncement in March that content created entirely by algorithm is not eligible for copyright protection. It is harder to monetize a production if there is no legal obstacle to copying it.

Perhaps more important, he said, is what you might call the Us Weekly factor — the tendency of audiences to be as interested in the human behind the role as in the performance. Fans want to hear Hollywood celebrities discuss their method in interviews. They want to gawk at actors’ fashion sensibilities and keep up with whom they’re dating.

“If you look at culture in general, the audience is generally interested in the real lives of our members,” Mr. Crabtree-Ireland said. “A.I. is not in a position to substitute for key elements of that.”

Some raging against the machine going on

Hollywood writers to strike over AI use

Hollywood writers to strike for first time in 15 years, announces Writers Guild of America

Meanwhile a key guru in AI has his Eighth Day moment

‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead

For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.

[Photo: Dr. Geoffrey Hinton, who is leaving Google so that he can freely share his concern that artificial intelligence could cause the world serious harm. Credit: Chloe Ellingson for The New York Times]

By Cade Metz

Cade Metz reported this story in Toronto.

May 1, 2023

Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future.

On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.

Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.

But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.

After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”

Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.

Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.

Google’s chief scientist, Jeff Dean, said in a statement: “We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”

Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.

In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but left the university for Canada because he said he was reluctant to take Pentagon funding. At the time, most A.I. research in the United States was funded by the Defense Department. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”

In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.

Google spent $44 million to acquire a company started by Dr. Hinton and his two students. And their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.

Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.

Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”

As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”

Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.

His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”

Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.

But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.

Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

He does not say that anymore.
Link: ‘The Godfather of AI’ Quits Google and Warns of Danger Ahead - The New York Times

I’ve been running blogs and discussion forums since 2001, more than 20 years ago. Back then it was all HTML. HTML is a beautifully simple language, once you get your head around it. HTML is still the bedrock of the web.

Over the last few decades we’ve had PHP and all the rest, and the likes of MySQL databases, all of which are total crap and break down within five years.

You’re having a laff when you tell me that ‘AI’ chatbots based on such junk coding will last more than five minutes.

The AI stuff is making the news at the moment. I’ve tried to explain the concept of artificial intelligence on this board before. Feel free to disagree with me.

I could also add that over the last 20 years the human race has found a new religion. It’s called the digital computer.

But see, although I’m typing these words via a Turing Machine, the brain/consciousness is not that machine: it’s me, a human being.

1 Like