
AI bots threaten the digital canoeing world

I've noticed recently that several cable TV shows about science, future science, and even history now display a small circle with the letters "AI" over scenes showing re-enactments or predicted outcomes. Maybe the entire entertainment industry (including social media) should adopt this idea, as it would remove any doubts about authenticity.

I agree. It is particularly disturbing when inaccurate AI-generated images of the past are presented even though correct or original images are readily available. The frequently shared one shown below is especially annoying to me.

Benson



[attached image]
 
I have found AI to be useful when researching legislation and legal history, and also for some scientific explanations or basic technical instructions. But it's important to remember that AI tends to hallucinate (just as badly as humans), so everything has to be read with a critical eye - especially anything that might be political in nature. As for the bots, they currently have tells. My fear is that they'll get so good that the tells are no longer apparent.
 
Bingo, that’s the critical thing. Critical eye, critical thinking and reasoning skills. We’re preaching to the choir here because we’re all of an age where we learned before there was AI; my kids are growing up in a digital era where they aren’t necessarily learning to differentiate between Britannica and Wikipedia. They can’t spot trash when they see it. Their teachers? In many cases I’m old enough to be their parent too, so my confidence in their abilities is… not high.
 
I admit I haven't used dedicated AI interfaces like ChatGPT, but I've noticed the AI summaries that come with web searches these days are either patently wrong or of such low quality often enough that I scroll right past them instead of bothering to read them.
 
I haven't messed with GPT, but I've played around a little with the free version of Grammarly just to see what changes it might suggest (I always want to write better). The tech is developing rapidly, but it's not quite there yet. Many of the suggestions it made would change the context, and a few were simply wrong (admittedly, that may be because it misinterpreted what I was trying to relate, since I wasn't concise enough to begin with).

At the same time, I've seen some truly stunning AI photos and believe that computers will eventually acquire the ability to pander to our tastes enough that they may take the place of human writers & photographers someday. Maybe leaving small grammatical errors within TRs could be a HQ (Human of Questionable Intelligence) tell at some point in the future.
 
I use ChatGPT quite a bit, and it's almost completely replaced Google as my first source when looking for information. I'm good at searching Google to find the information I need, but AI is much better (most of the time). The more information available about a topic, the better it does, and while it can certainly get things wrong, I'm very impressed overall. It's not uncommon that I still need to search Google for the final answer, but ChatGPT usually gets me 95% of the way there, so my Google searches are more targeted.

Most recently it's been giving me book recommendations. After some back and forth it seems to have figured out my tastes pretty well. It will give me half a dozen recommendations, which I read and then give feedback on, before giving me more.

I was switching road bike components from one bike frame to another, and the rear derailleur would not mount correctly. I told ChatGPT what was happening and how the dimensions of the hanger differed. It explained what the problem was and told me there was an adapter available for my rear derailleur. It gave me a part number, which was completely wrong, but it was enough information that a quick Google search turned up exactly what I needed. I could have found all of that from Google alone, but it would have taken much longer.

A few months ago I was looking for a used minivan, mostly to use for traveling. I asked ChatGPT for a list of suitable vans and to list the specs I was most interested in (cargo length, cargo height, ride height, etc.). It was able to quickly produce charts comparing not only different brands but also changes within the same model from year to year (what do I gain/lose by getting a 2020 instead of a 2017?), as well as the differences between trim levels. It turned into a very long conversation over the course of a month or so as we drilled down to what was best for me and my budget. In the end I purchased a Ford Transit Connect that hadn't even been on my radar until ChatGPT suggested it as an option.

I also tried giving it wiring diagrams and asking it to explain system operation on specific vehicle systems (what I do for a living), and it failed dramatically. What it initially spit out looked very impressive, but on close attention it was getting small but important details wrong. I'd tell it something was wrong and ask it to double-check, and it would often find the correction, only to make the same mistake again later. I asked it why it was getting things wrong, and it admitted to using general knowledge (such as common relay pin numbers) rather than the specific information from the diagram and vehicle I gave it. It promised not to make those assumptions again, but it went right back to its old habits a few days later when I picked up the conversation again. These mistakes were obvious to me, but they could have been disastrous if someone just learning was given the same answers.

For anyone who's curious, I recommend playing with it to see how it does. Ask it questions about subjects you're already familiar with so you can judge the quality of its content. That will also show you the best way to phrase questions to get the right information.

Alan
 
Indeed, I do like the idea of easier background research, a task I find to be a chore. I recently read 'How to Think About AI', in which the author points out that AI will only get better, so writing it off based on current capabilities is a fool's errand.

I still don't like how AI products that don't seem ready for primetime are getting rammed down my throat by big tech. And for some frightening reading, check out the book 'AI Snake Oil', which details some absolutely scary applications of AI (medical diagnoses, legal rulings) trained on really biased datasets.

But I recognize I'll probably have to get on board at some point if I want to stay employed. I'm just putting off spending the time to get up to speed on using it in smart and effective ways. Hopefully in a couple of years it will be better and I'll be ready to learn.
 
My experiences are much like Alan’s.

ChatGPT gives me more focused and specific answers than Google does. I may have to sort through dozens of entries in Google, whereas ChatGPT answers directly. Of course, I have to read more or check sources if it is something critical, but I'd have to do that anyway.

I can also tell ChatGPT that an answer is wrong and it will try to correct itself, which may or may not be successful.

It has evolved markedly from a couple of years ago.

Wikipedia is still better at "encyclopedia"-type answers, such as a general inquiry on the "Magdalen Islands."

Obviously there will be many more opportunities for mischief and worse. I don’t know how anyone raised with AI will ever be able to distinguish between correct and incorrect answers.
 