Can an AI ChatBot Make Stuff Up Just to Get Your Business?



Jim Koepke
02-19-2024, 3:53 PM
Saw this in my Morning Update >

[attached image]

Talk about a Strange New World.

jtk

Brian Tymchak
02-19-2024, 4:27 PM
To answer your question - yes, it can, if the rules for an AI chatbot are not sufficiently restrictive, or if the AI language model learned to be more aggressive than accurate. In this case, the chatbot may not be AI. I'm willing to bet on poor programming vs anything more nefarious.

Jim Koepke
02-19-2024, 4:51 PM
I'm willing to bet on poor programming vs anything more nefarious.

The first rule of programming: GIGO = Garbage In - Garbage Out.

jtk

Jimmy Harris
02-19-2024, 5:31 PM
The best way to describe an AI chatbot is that it's a very complex autocomplete. They're not like traditional software, which is rigidly programmed to complete a specific task. Instead, they're just fed tons of information and allowed to draw from any of it to complete any possible task. As such, they're inherently unpredictable. It's not necessarily bad programming so much as it is a bad application of programming.
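If anyone wants to see what "very complex autocomplete" means in miniature, here's a toy sketch in Python (nothing like a real LLM internally, and the little "policy" text is invented purely for illustration). It just learns which word tends to follow which, then generates fluent-sounding continuations:

# Toy illustration of "complex autocomplete": a bigram model that predicts the
# next word purely from frequencies seen in its (made-up) training text. Real
# LLMs use neural networks over vast corpora, but the core idea -- predict a
# plausible continuation, with no notion of truth -- is the same.
from collections import defaultdict, Counter
import random

corpus = (
    "you can get a refund within 30 days . "
    "you can get a voucher within 90 days . "
    "you can contact support any time ."
).split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(prompt_word, length=8):
    """Generate a plausible-sounding continuation, one likely word at a time."""
    word, out = prompt_word, [prompt_word]
    for _ in range(length):
        choices = following.get(word)
        if not choices:
            break
        # Sample the next word proportionally to how often it was seen.
        words, counts = zip(*choices.items())
        word = random.choices(words, weights=counts)[0]
        out.append(word)
    return " ".join(out)

print(complete("you"))  # e.g. "you can get a refund within 90 days" -- fluent, possibly wrong

Run it a few times and it will happily blend "refund", "voucher", "30 days" and "90 days" into statements nobody ever wrote, which is a hallucination in miniature.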

Edward Weber
02-19-2024, 5:39 PM
Somewhere in the dataset it learned from, something either said "Yes, you can get a refund in certain situations" or simply didn't specify that you couldn't.

Brian Elfert
02-20-2024, 10:26 AM
Most chatbots are not AI. They are just programmed with certain responses to what the user types. The programming is often very good and can look like a human. You'll notice that most can only do really basic tasks. They can often do things like tell you when an order is scheduled to be delivered. I don't need a chatbot for that, as any halfway modern ecommerce site has an order tracking function. The times I really need customer service are exactly the times a chatbot can't help me. One time an Amazon delivery was damaged during shipping, so they cancelled the order. After weeks I still had not been refunded for the cancelled order, and a chatbot couldn't help with that.
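For comparison, a scripted (non-AI) chatbot really does boil down to something like the sketch below, with every response typed in by a programmer ahead of time (the keywords and canned replies are made up, just to illustrate the point):

# Minimal sketch of a scripted (non-AI) chatbot: keyword matching against
# canned responses. It can only handle what it was explicitly programmed for.
CANNED_RESPONSES = {
    "track order": "Your order is scheduled for delivery on Thursday.",
    "store hours": "We are open 9am-5pm, Monday through Friday.",
    "refund": "Please contact a human agent for refund requests.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, response in CANNED_RESPONSES.items():
        if keyword in text:
            return response
    return "Sorry, I didn't understand that. Let me connect you to an agent."

print(reply("Where is my order? Can you track order 12345?"))
print(reply("My delivery was damaged and I was never refunded."))  # matches "refund" -> canned deflection to a human

Anything outside the canned list gets the generic fallback, and even a matching keyword like "refund" only gets you a scripted deflection, which is about how my Amazon situation went.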

When ChatGPT first came out it had only been loaded with data up until 2021. I don't know if it has been updated since. If AI were truly intelligent, it would be able to update itself with the latest data.

Derek Meyer
02-20-2024, 2:27 PM
Air Canada recently got into some trouble because their AI Chatbot made up its own policy instead of using the company's. Air Canada was forced to honor this policy to the customer.

https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refund-policy-invented-by-airlines-chatbot/

Brian Elfert
02-20-2024, 4:14 PM
Air Canada recently got into some trouble because their AI Chatbot made up its own policy instead of using the company's. Air Canada was forced to honor this policy to the customer.

https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refund-policy-invented-by-airlines-chatbot/

The interesting thing is the article states Air Canada has spent more money on their chatbot than they have saved in other customer service costs. The airline says the investment in the chatbot is a long term investment.

Jim Koepke
02-20-2024, 8:05 PM
The interesting thing is the article states Air Canada has spent more money on their chatbot than they have saved in other customer service costs. The airline says the investment in the chatbot is a long term investment.

My last employer spent five million on an employee payroll time-tracking system that was going to save them $50,000 a year. They quickly took down the announcement when employees pointed out how dumb it sounded. The confusion from everyone having to constantly change their passwords cost more than that in support. But that was the tech department's cost; it was supposed to save money for the payroll department.

There was also confusion caused by the remote log-in stations having clocks that were not in sync with the master clock. People clocking in at remote locations would see the local clock saying they were on time while the master clock said they were late, or sometimes it nicked them for clocking out early. But somehow it saved the payroll department money.

Sometimes it makes me think that some people were promoted to management because they couldn't do any of the other work that needed doing.

jtk

Frederick Skelly
02-21-2024, 8:34 AM
But somehow it saved the payroll department money.


I’m going off topic here because the payroll dept’s well-intentioned change made me think of it… This is what happens when people only look at one side of a problem instead of the whole picture. You have to look at the end-to-end impacts on all parts of a change. For example, think about the push for electric cars. Reducing emissions is a good thing and we should find more ways to do that. But advocates also need to consider the state of our power grid, the potential impact on other ecological matters (strip mining, etc.), the impact of additional weight-per-car on road and bridge maintenance, how to build up a charging infrastructure, and so on.

My point being that decisions have consequences and everyone (usually) gets a better result when we think it all the way through, from the start.

Jim Koepke
02-21-2024, 4:30 PM
My point being that decisions have consequences and everyone (usually) gets a better result when we think it all the way through, from the start.

Very True.

jtk

Pat Germain
02-21-2024, 5:28 PM
ChatGPT is AI. Most of the AI images we see online came from ChatGPT. Its ability comes from scraping the Internet for images and text information. It mashes all that together into something both impressive and ridiculous at the same time.

My friend's son was attending college and told us about two classmates who turned in papers written by ChatGPT. They were found out because the papers contained text like, "As an Artificial Intelligence Application, I can't determine which theory is most accurate." So yeah. Those students didn't even bother to read the papers they submitted. They were both expelled.

Maurice Mcmurry
02-21-2024, 6:01 PM
I took down a lot of my web presence after getting some phony business inquiry emails that included things the alleged client wanted me to download. The emails were strangely connected to my "about me" on forums as well as the latest things that I had googled. A Canadian YouTuber has some insights on this.


https://youtu.be/ifoacdMGAQE?si=1aIIpEJy-1uAaRqQ

mike stenson
02-22-2024, 8:30 AM
ChatGPT is AI. Most of the AI images we see online came from ChatGPT. Its ability comes from scraping the Internet for images and text information. It mashes all that together into something both impressive and ridiculous at the same time.

My friend's son was attending college and told us about two classmates who turned in papers written by ChatGPT. They were found out because the papers contained text like, "As an Artificial Intelligence Application, I can't determine which theory is most accurate." So yeah. Those students didn't even bother to read the papers they submitted. They were both expelled.

The plagiarism bots now also look for signs that text was AI generated. Bots patrolling for bots.
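The crudest version of that kind of check is just looking for the boilerplate phrases chatbots use to describe themselves, like the one those students left in. A toy sketch (real detectors are statistical and far more sophisticated; the phrase list here is only an example):

# Toy heuristic: flag text containing boilerplate phrases that AI chatbots
# often emit about themselves. Real AI-text detectors are statistical and far
# more sophisticated; this only catches the laziest cases.
TELLTALE_PHRASES = [
    "as an ai language model",
    "as an artificial intelligence application",
    "i cannot browse the internet",
]

def looks_ai_generated(text: str) -> bool:
    lowered = text.lower()
    return any(phrase in lowered for phrase in TELLTALE_PHRASES)

essay = "As an Artificial Intelligence Application, I can't determine which theory is most accurate."
print(looks_ai_generated(essay))  # True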

Jim Becker
02-22-2024, 10:12 AM
If other humans can "make stuff up" to get your "business", why wouldn't an AI be able to do that? I help manage multiple forum sites and there is a constant battle with AI-created content being used to try to influence folks. It gets shut down quickly (by humans in many cases, although forum software is trying to get better at it) but stuff still sneaks through. The same is true in all kinds of media, both online and not.

Jim Koepke
02-25-2024, 3:10 AM
If other humans can "make stuff up" to get your "business", why wouldn't an AI be able to do that?

That made me chuckle. One of my brothers used to work in customer service long before AI. Much of his work was rectifying situations where a salesperson had included extra items on a customer's installation order to boost their commissions. Often customers wouldn't notice.

jtk

Jim Becker
02-25-2024, 9:52 AM
One of the reasons I made that statement in that way, Jim, is because AIs learn from humans. That's what's different about AIs from "more traditional computing". (now there's an interesting thing to be able to say!) Before AI, computers only did what their human masters told them to do in an explicit way. With AIs, there is certainly basic behavior and (hopefully) some hard-coded rules, but they suck up information and "learn" from it. And outside of rote scientific/laws-of-the-universe things, pretty much everything else is generated by human behavior. So back to the premise being discussed here: if humans are "creative" in their efforts to convince other humans to do stuff, it's likely that AIs, which learn from human behavior, will emulate them. This is one of the challenges that's going to need to be addressed by the creators of AI: making sure there are uniform rules about behavior...things like Asimov wrote about in SciFi ages ago!
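A concrete (and much simplified) sketch of the kind of hard-coded rule I mean: the bot is only allowed to quote vetted policy text, and everything else gets handed to a human. The policy entries below are invented purely for illustration, not any real airline's system:

# Simplified sketch of a hard-coded guardrail: the bot only ever quotes vetted
# policy text and defers anything else to a human, rather than generating its
# own answer. Production systems layer retrieval and verification on top of a
# model, but the principle -- restrict what the bot may assert -- is the same.
APPROVED_POLICY = {
    "bereavement fare": "Bereavement fares must be requested before travel; refunds cannot be applied retroactively.",
    "baggage allowance": "One checked bag up to 23 kg is included on international flights.",
}

def answer(question: str) -> str:
    q = question.lower()
    for topic, policy_text in APPROVED_POLICY.items():
        if topic in q:
            return policy_text          # quote the vetted policy verbatim
    return "I can't answer that; let me connect you with a human agent."

print(answer("Can I get a bereavement fare refund after my trip?"))
print(answer("Will you match a competitor's price?"))  # out of scope -> defer to a human

The point is that the restriction sits outside the model, rather than trusting the model to police itself.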

Brian Tymchak
02-25-2024, 11:10 AM
One of the reasons I made that statement in that way, Jim, is because AIs learn from humans. That's what's different about AIs from "more traditional computing". (now there's an interesting thing to be able to say!) Before AI, computers only did what their human masters told them to do in an explicit way. With AIs, there is certainly basic behavior and (hopefully) some hard-coded rules, but they suck up information and "learn" from it. And outside of rote scientific/laws-of-the-universe things, pretty much everything else is generated by human behavior. So back to the premise being discussed here: if humans are "creative" in their efforts to convince other humans to do stuff, it's likely that AIs, which learn from human behavior, will emulate them. This is one of the challenges that's going to need to be addressed by the creators of AI: making sure there are uniform rules about behavior...things like Asimov wrote about in SciFi ages ago!

The current AI language models are learning from human-produced content, with some models using social media as a source. :eek: :eek: As we all know, humans produce factual and unfactual information, some deliberately unfactual. The old saying "garbage in, garbage out" applies at a whole new level with AI. When generative AI becomes mainstream, with AI engines potentially contributing to the knowledge base at a rate far exceeding the human rate, there is a possibility that the knowledge base becomes heavily polluted with unfactual content before it can be vetted, creating a snowball effect of mis/disinformation. I see this as the real danger of AI, and guardrails need to be put in place quickly to control it.
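That snowball effect is easy to see even in a crude back-of-the-envelope simulation (all the numbers below are invented purely to illustrate the feedback loop, not estimates of anything real):

# Toy simulation of the "snowball" worry: each generation, AI-written items
# (some of them wrong) are added to the pool faster than vetted human items,
# so the share of bad information in the pool grows. Purely illustrative.
import random

random.seed(0)
pool = [True] * 1000          # True = accurate item, False = inaccurate

for generation in range(5):
    human_items = [random.random() > 0.05 for _ in range(100)]   # humans: ~5% wrong
    # AI items are generated from the existing pool, so they inherit its error
    # rate and add a little extra of their own.
    error_rate = pool.count(False) / len(pool) + 0.05
    ai_items = [random.random() > error_rate for _ in range(1000)]  # AI writes 10x faster
    pool += human_items + ai_items
    bad = 100 * pool.count(False) / len(pool)
    print(f"generation {generation}: {bad:.1f}% of the pool is inaccurate")

Because each generation of AI content inherits the pool's current error rate and adds to it faster than vetted content arrives, the share of bad information keeps creeping upward.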

Jim Koepke
02-25-2024, 12:22 PM
One of the reasons I made that statement in that way, Jim, is because AIs learn from humans. That's what's different about AIs from "more traditional computing". (now there's an interesting thing to be able to say!) Before AI, computers only did what their human masters told them to do in an explicit way. With AIs, there is certainly basic behavior and (hopefully) some hard-coded rules, but they suck up information and "learn" from it. And outside of rote scientific/laws-of-the-universe things, pretty much everything else is generated by human behavior. So back to the premise being discussed here: if humans are "creative" in their efforts to convince other humans to do stuff, it's likely that AIs, which learn from human behavior, will emulate them. This is one of the challenges that's going to need to be addressed by the creators of AI: making sure there are uniform rules about behavior...things like Asimov wrote about in SciFi ages ago!

Made me think of a few lines often seen in the early computer labs on college campuses:

Oh this darn computer
I wish that I could sell it
It doesn't do what I want it to
Only what I tell it

- Author Unknown

jtk