5 Filters

Path of least resistance?

I’ve been intermittently following this academic’s blog since 2019 as she has some pertinent things to say about toxicity in academia (and by extension any ‘liberal’ organisation). Here she opines upon ChatGPT and student essays. Will academia (or the TurnItIn software at any rate) come to terms with AI or take the path of least resistance and train students to use AI in ‘informed’ ways? (Thus detoxing plagiarism…)

I have a sneaking feeling the answer is the latter, but that if the tradition of the viva (oral examination, sometimes quite fierce) survives then a degree of quality control will be preserved at postgrad level.

When I discussed this with a friend recently, he mentioned that the UCU is still observing a marking ban (possibly not nationwide, just at selected universities) and that he knew of a Head of Department (in Psychology) who was marking 200+ undergraduate dissertations single-handedly to get around the ban.

Irina’s blog is at Writing – Irina Dumitrescu


AI is totally unregulated and as such is a step away from creating a world unfit for humanity.
The article fails to point out that by the time any student gets his degree most quality jobs will already be filled by AI.
By accepting that “AI is here to stay”, the writer, alongside the writer in Times Higher Education who says “Students today need to be prepared for a future in which writing with AI is already becoming essential”, is dooming all of us to a life of penury and servitude to the AI “owners”, until these too become AIs.

When the only jobs left are those requiring empathy and heart to care for humanity, how long will it be before AIs see no need for humanity at all?!

We can assume that AI robotic military weaponry has already been created (no doubt trialled on the Israel-Palestine border!) - once we lose control of these “terminators”, it’s goodnight humanity! **

At the moment there seems to be a split in the establishment as to whether regulation is needed; see Shane Snider in the post by @Evvy_dense here:

This 15-minute YT video with Whitney Webb covers a lot of this stuff:

As does James Corbett, who thinks the only regulation that will work is to ban AI access to the internet and other communication systems, here (audio and text):

cheers

ps ** picked this up from TLN this morning:


A very good point.

I recently had some experience of Cognitive Behavioural Therapy. What a total sham. The therapist may have been inexperienced, but I tend to think it’s the ‘technique’ itself which is mechanistic and formulaic, and which uses dodgy metrics to ‘prove’ its efficacy. No trace of empathy was displayed, there was hardly any open questioning, I felt judged, and my scepticism about the process was deemed hostile. The ‘clinician’, as they like to style themselves, was also consistently five or more minutes late. In (I think) the third session she made one of her very few attempts to contextualise a behaviour pattern. I replied that Gotama Buddha was teaching this 2600+ years ago (not adding “but thanks for being so patronising”). The fourth session turned out to be the last one; I was fired as a client. I was only upset by this to the extent that she beat me to it.

Tripe. Utter rubbish. Slogans and acronyms at best. I actually think ELIZA would have been better.

Computer-based CBT is already a thing, so I guess these shoddy services really are under threat, and based on my sample of one, that’s a good thing too.
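
For anyone who hasn’t met ELIZA: the point about formulaic dialogue being easy to automate is not an exaggeration. Here’s a minimal toy sketch in Python of an ELIZA-style responder (my own illustration, nothing like Weizenbaum’s original 1966 script) - a few regex rules that simply reflect the user’s words back as a question:

```python
import re

# Toy ELIZA-style rules: pattern -> response template.
# Enough to show how mechanical this kind of "conversation" is.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def respond(text: str) -> str:
    """Return a canned reflection for the first rule that matches."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

if __name__ == "__main__":
    print(respond("I feel judged by the whole process"))
    # -> Why do you feel judged by the whole process?
```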
