5 Filters

Prof Norman Fenton: AI is expensive garbage whose objective is to censor the internet

(We’ve not quite got the ‘category’ for this; global authoritarianism pathway, great reset, the takeover of academia by the One World zombies or some such)

Mathematician, statistics and computing expert Prof Fenton is known (along with colleague Prof Martin Neil) for eviscerating official vaccine data during covid. They soon became pariahs, suddenly unable to get anything published, even in the more permissive preprint world.

I’d missed that he had become such an open rebel since he retired a year or two ago, probably due to poor health, which I did know about.

At the top of academia, he is well connected (and partly ejected, it seems), and so he’s been able to taste some of the high winds for himself.

Why AI ‘misinformation’ algorithms and research is mostly expensive garbage


Thanks ED. I think that Prof Fenton is quite right to highlight the issues that he does. Although it niggles me (as it always does) that he blames the “far left”. Do these words have no meaning anymore? I’m sure the Socialist Workers Party and the Morning Star will be thrilled to know the influence they have over public discourse and censorship. Especially in America, over the NYT and the DNC.

Someone should tell them.

Anyway. Leaving his illiteracy of the political spectrum to one side, his AI concerns are very real. This has been an ongoing problem in AI and machine learning for decades, and it may well explode in the near future. It’s the reason why black people are more likely to be classified by AI as being at higher risk of recidivism, why women often fail to get a correct medical diagnosis from AI, and why people from ethnic minority or working-class backgrounds have their CVs ignored by AI scanners.

Biases in the training data will inevitably show up (in very complex ways) in the weights of the models. I remember the fiasco a few years back when Microsoft’s chat bot started spouting racist invective and had to be taken offline. Presumably to have a nice lie down somewhere in a quiet room with Alf Garnett :smile:
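
To make that concrete, here’s a toy sketch (all data and feature names are made up, nothing here comes from any real hiring system) of how a bias baked into historical labels resurfaces in a model’s learned weights, even when the protected attribute itself is withheld from training:

```python
# Toy sketch (hypothetical data and feature names): bias in historical
# labels resurfaces in the learned weights, even though the protected
# attribute itself is never shown to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)               # protected attribute (0 or 1)
postcode = group + rng.normal(0.0, 0.3, n)  # innocuous-looking proxy feature
skill = rng.normal(0.0, 1.0, n)             # genuinely relevant feature

# Historical "hired" decisions carry a penalty against group 1.
logits = skill - 1.5 * group
hired = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

# Train WITHOUT the protected attribute -- only skill and postcode.
X = np.column_stack([skill, postcode])
model = LogisticRegression().fit(X, hired)

print("weights [skill, postcode]:", model.coef_[0])
# The postcode weight comes out strongly negative: the model has
# reconstructed the historical bias from the correlated proxy.
```

The model never sees `group`, yet the proxy feature picks up a strongly negative weight; a correlation in the data is all it needs.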

It’s funny how this is such a negative when we’re discussing the new Chinese LLMs (“try asking it about Tiananmen Square” wink wink), but when it comes to our own AIs we don’t really want to see it. Of course the problems are equally bad.

The huge issue now is that I see stories over and over of people who are outsourcing their thinking to AI. Students getting essays written for them. Students disbelieving teachers when the teacher tries to point out that an AI is incorrect.

We’re steaming into some very strange ports these days amigo…


Hi @admin

The old labels don’t map onto medical/science politics these days (where the traditional left are far less awake than the right). Though the traditional meaning is the same, it’s extra baggage to keep qualifying the terms. It is niggling, as you say; more pointedly so when it’s not an American! Indeed, you would have thought that a smart guy like Fenton would have picked this up. His failure to see it may partly reflect the polarisation of the company around him.

I see I overstated Fenton’s claim; he was referring specifically to AI targeting ‘misinformation’. Nevertheless, this spreads out into a lot of the AI applications that are of any genuine educational use. If there is no political controversy, you would be just as well off looking it up on Wikipedia.

Your example of the disbelieving students is a little frightening, and your illustration of how prejudices built into AI make it go off the rails from the start is very apt. It’s a regenerative process. Orwell in “1984” farsightedly remarked that no-one could say for sure how the allegedly all-knowing machine began. But I bet he did!
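
For what it’s worth, here’s a back-of-the-envelope sketch of that regenerative process (toy numbers only, not a model of any real pipeline): each generation is trained on the previous generation’s skewed output, and the drift compounds:

```python
# Toy feedback-loop sketch (made-up numbers): each generation trains on
# the previous generation's skewed output, so the skew compounds.
import numpy as np

rng = np.random.default_rng(1)
true_rate = 0.5   # real-world rate at which some claim is true
bias = 0.05       # small skew the filter introduces each round
est = true_rate

for gen in range(6):
    # The current model writes/filters a corpus that under-represents
    # the claim; the next model is then trained on that corpus.
    corpus = rng.random(100_000) < (est - bias)
    est = corpus.mean()
    print(f"generation {gen}: apparent rate = {est:.3f}")

# The apparent rate drifts by roughly `bias` each generation, further
# and further from 0.5 -- and nothing in the later data records how
# the drift started.
```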

Cheers
