I continue to be negative about generative AI assistants, insofar as every time someone has written a piece of any length "co-written by an AI", it has had all sorts of errors which, when I point them out, the author hadn't even noticed were there

Most irritating is that MULTIPLE TIMES people have written drafts about my work that simply injected made-up history. If they had just published them, that history would be cyclically referenced in the future as if it were fact

That kind of thing just happens all the time now

in reply to Christine Lemmer-Webber

Study after study also shows that AI assistants erode the development of critical thinking skills and knowledge *retention*. People, finding information isn't the biggest missing skillset in our population, it's CRITICAL THINKING, so this is fucked up

AI assistants also introduce more errors, at a higher volume, and they're harder to spot too

microsoft.com/en-us/research/u…
slejournal.springeropen.com/ar…
resources.uplevelteam.com/gen-…
techrepublic.com/article/ai-ge…
arxiv.org/abs/2211.03622
pmc.ncbi.nlm.nih.gov/articles/…

in reply to Christine Lemmer-Webber

i think even among people who have higher than average awareness of how these work, a lot of people are still unprepared for something that produces unreliable information in a way that so strongly resembles real information. like if someone's mental model was previously to associate decently structured writing with "the author at least has *some* understanding of the actual subject matter", then they are already at a disadvantage when parsing llm output imo
in reply to Christine Lemmer-Webber

@hacks4pancakes Hear hear.

I’d been using perplexity.ai for a few months — it was recommended in an article I read for actually supplying straightforward answers to queries instead of a list of related articles or topics, and it usually did so in satisfying ways — but I had a few results from it that were blatantly wrong even with a basic knowledge of the topic. That eroded any confidence about how correct it was about subjects I don’t know the answers to, which is of course a lot of what one would query.

That was when I deleted it.

in reply to Christine Lemmer-Webber

as somebody who actively takes coding risks, to the point where I have to be mindful of things breaking and my code not being easily understandable by outsiders, it seems perverse to see a (growing?) body of people accepting increased chaos and brittleness in their processes for instant convenience. En masse, it looks like a dangerous moral hazard experiment. A shame ML hadn't been explored by more people prior to AI. Its adoption is akin to caffeine, amphetamine or opiates as an industrial strategy.
in reply to Christine Lemmer-Webber

#xai #llms
in reply to Christine Lemmer-Webber

Same. Cue me showing my boss that it takes me MORE time to generate the thing they want with an assistant *and then make it correct* than it would take me to do it myself...then reminding them again every time the salespeople come back...


in reply to Hieronymous Smash

@heironymous In my previous job I had to let my team experience this for themselves; the CEO was hawkish & didn't believe me. Cue two months of AI tools testing later & everyone is sheepish & quiet about it.
One of the reasons ppl think it's faster is that they don't know the work - e.g. copywriting takes about 1-1.5hrs to write one page of text; most people can barely type that fast 😝
in reply to Christine Lemmer-Webber

There is a little story from German Wikipedia which shows the pattern (though AI was not a thing then): there was a politician who had about 15 or so first names (because of ancestors from nobility). Somebody added a further name to his Wikipedia page. Journalists used it, citing Wikipedia. The Wikipedia author then cited those journals to verify the fake name with a credible source.
in reply to Christine Lemmer-Webber

Excellent observations and entirely consistent with an article by Nic Coppings recently in Washington Technology:

"In our quest for speed and scalability, we’ve replaced empathy, trust, and authentic connection with CRM automation and AI tools.

The Microsoft “goldfish study” revealed that humans now have an average attention span of 8 seconds.

Whether meeting clients or building professional relationships, people want to be seen, heard, and valued."

washingtontechnology.com/opini…

in reply to Christine Lemmer-Webber

AI is just the latest addition to the hustle bro's toolbelt. They're obsessed with using AI for everything, and they'll mass-produce AI-generated blog posts about topics they know nothing about in hopes of getting a few clicks and some ad dollars. Their entire agenda is to make as much money as possible with as little work as possible, and AI allows them to do that.
in reply to Christine Lemmer-Webber

Doing one's own research is a very important skill that everyone should have, and generative AI is eroding it. Some part of me just wishes genAI could be uncreated and erased from collective consciousness. But the more rational part is horrified to realize that we'll have to somehow live with it from now on, like it or not.
in reply to Christine Lemmer-Webber

Work is trying it and I had a couple of grunt jobs I put through it. It did a fairly decent job of summarizing the topics I was looking for and had links to the relevant documentation. I don't know how well it will work for complex analysis, but doing 'where do I find information about [topic, error message or program]' was accurate enough for me and saved a couple of hours of scanning & reading documents across several network drives and SharePoint sites.