
View Full Version : Tried ChatGPT yet?



Stephen Tashiro
07-29-2023, 1:19 PM
I've been using ChatGPT and I'm impressed. I find a few minutes on ChatGPT accomplishes what I'd get in half an hour of web browsing on practical topics - like what species of juniper grow into trees, which have berries that birds like, etc. - things I could find out by looking at several web sites and cross-referencing the information. ChatGPT doesn't do a better job than I do, but it does the same work faster.

I also use it to review advanced mathematics. It deals with technical definitions and general outlines of mathematical procedures quite well. However, it makes elementary blunders in doing specific calculations. ChatGPT will admit its errors when you point out contradictions. However, it also says that the ChatGPT program does not learn from the chats - it only "knows" about the data used to train it. So don't waste your time expecting to teach it something it will remember. It keeps track of what is said within a chat, but this doesn't carry over to new chats.

Online, there are lots of posts about how ChatGPT handles (or fails at) logical puzzles or controversial political and philosophical questions. To me, that's like posting about how well a trim router breaks up concrete. Like all tools, I find ChatGPT is only useful for certain tasks.

Edward Weber
07-29-2023, 2:55 PM
It can be helpful to a point.
The only way it can provide accurate answers is if there is accurate data for it to draw upon.
The data that ChatGPT and others use is not vetted for accuracy. You just get the most common answers to your query from whatever data is available, regardless of accuracy.
The general consensus answers are not always correct.
It also further erodes the entire concept of critical thinking.

Stephen Tashiro
07-29-2023, 4:16 PM
The general consensus answers are not always correct.
It also further erodes the entire concept of critical thinking.

I agree that there's no gain in accuracy, but in that respect it doesn't differ from quick web research that I'd do myself. There's no guarantee of accuracy in either case. As to critical thinking, I'm apt to think critically about ChatGPT's answers because I know it's a program. That's not true of my attitude toward other "authoritative" websites - sawmillcreek included, of course.

Clifford McGuire
07-29-2023, 5:10 PM
Well, let's ask it:

As an AI language model, I (ChatGPT) strive to provide accurate information based on the data I was trained on. My responses are generated by predicting the most likely answer given the input I receive. However, it's essential to understand that I am not infallible and can make errors or provide inaccurate information.

It's essential to use the information I provide as a starting point for further research and verification, especially for critical decisions or tasks where high accuracy is required. If you are unsure about the information I provide, I recommend consulting reputable sources or domain experts for confirmation.

Patty Hann
07-30-2023, 1:52 AM
Well, let's ask it:

As an AI language model, I (ChatGPT) strive to provide accurate information based on the data I was trained on. My responses are generated by predicting the most likely answer given the input I receive. However, it's essential to understand that I am not infallible and can make errors or provide inaccurate information.

It's essential to use the information I provide as a starting point for further research and verification, especially for critical decisions or tasks where high accuracy is required. If you are unsure about the information I provide, I recommend consulting reputable sources or domain experts for confirmation.

This situation is somewhat similar to using Wikipedia as a source.
Wikipedia isn't always accurate (I've had to correct entries dealing with Literature), but it is an excellent starting point.
You learn terminology, applications, and some of the issues or points of contention: you can use all of it to start your "real research".

Example: you've always wanted to learn furniture making and you've heard of "joinery", so you Wiki it.
Some of what you read may be debatable, some may be flat-out wrong, but now you have learned some terminology, uses, pros/cons of the types of joinery.
And you use it all as a jumping-off point for more learning, now that you have an idea of the right questions to ask.

For some people the search is a bother and they want answers right away. Well, the maxim "Act in haste, repent at leisure" endures for a reason.
I like engaging in "the chase"... I learn an awful lot that way, maybe not usable right away, but eventually it pays off.

So I'm thinking the accuracy of ChatGPT is the same way. Use it as a starting point, knowing it probably has not a few mistakes/errors.
Then...Tally-ho!... Onward and upward!... "The game is afoot!" :)




Steve Demuth
07-30-2023, 10:44 AM
It's pretty shaky as a way to get reliable information about anything that isn't common knowledge. Like Patty Hann says, you can get terminology and general drift, but relying on any detail that it produces is pretty risky. That goes for the version embedded in Bing/Edge as well - if you use it to find links to web pages, you might be ok, but to actually answer questions, it's not very reliable. Ditto Google's version, Bard. On the other hand, if you want a shortcut to get a page or two of "good enough" text about a topic for which you're willing to provide the outline, and scan the results for obvious stupidities, tools like ChatGPT can probably save you a lot of time.

I've been working some with specialized versions of large language models, specifically those trained and tuned to answer medical questions (search for Med-Palm II to learn a bit about Google's). They still suffer from too much confabulation to be quite ready for prime time, but they are incredibly impressive nonetheless. I expect LLMs coupled to explicit domain models in medicine, law, engineering, and the like to emerge in the relatively near future that really are the top experts in their respective fields, and highly useful. General knowledge will be harder to nail down to adequate accuracy and reliability, because there is just so much room for ambiguity in what humanity collectively knows.

Edward Weber
07-30-2023, 10:48 AM
This situation is somewhat similar to using Wikipedia as a source.
Wikipedia isn't always accurate (I've had to correct entries dealing with Literature), but it is an excellent starting point.
You learn terminology, applications, and some of the issues or points of contention: you can use all of it to start your "real research".

Example: you've always wanted to learn furniture making and you've heard of "joinery", so you Wiki it.
Some of what you read may be debatable, some may be flat-out wrong, but now you have learned some terminology, uses, pros/cons of the types of joinery.
And you use it all as a jumping-off point for more learning, now that you have an idea of the right questions to ask.

For some people the search is a bother and they want answers right away. Well, the maxim "Act in haste, repent at leisure" endures for a reason.
I like engaging in "the chase"... I learn an awful lot that way, maybe not usable right away, but eventually it pays off.

So I'm thinking the accuracy of ChatGPT is the same way. Use it as a starting point, knowing it probably has not a few mistakes/errors.
Then...Tally-ho!... Onward and upward!... "The game is afoot!" :)




While I agree with the theory of using Wikipedia and ChatGPT as a starting-off point - in that sense, it's a great tool.
In reality, that's not what's happening; people are using it as a credible source of information.
It's like learning woodworking solely from YouTube videos.
You may learn something, but what you learn is up for debate, because, as I said earlier, the information your answer is drawn from is not vetted for accuracy.
Also, "learning" from or using the chatbots for answers provides no depth of knowledge; you had to do nothing to arrive at the answer. No critical thinking, no problem solving, nothing. All you get is the answer, not any real knowledge.

There seems to be little point in using a tool, relying on its performance, all the while knowing that it may not work as it should.

Question,
Why do you give me incorrect answers sometimes?
Answer,
I apologize if there have been instances where you feel I have provided incorrect answers. As an AI assistant, I strive to provide accurate and reliable information based on the data and knowledge available to me. However, there can be limitations or gaps in my understanding, and occasionally, mistakes may occur. If you believe I have provided an incorrect answer, please let me know, and I will do my best to rectify it and provide the correct information.

Michael Weber
07-30-2023, 12:54 PM
Out of curiosity mostly, I just completed an Arduino project using ChatGPT to write virtually all the code. I ended up with an apparently working project. Getting there was frustrating. I can say that only by being highly specific and detailed in one's requests will one get remotely what is expected. Otherwise, it's a matter of (my) garbage in, garbage out. I cannot imagine using it alone for any critical coding task.

Stephen Tashiro
07-30-2023, 1:05 PM
In reality, that's not what's happening; people are using it as a credible source of information.


I haven't noticed people citing ChatGPT as a credible source in online discussions, but I don't look at forums about politics or literature etc. Of course - how often do people reveal sources when they discuss anything? People who do quote Wikipedia or ChatGPT deserve credit for honesty.



It's like learning woodworking solely from YouTube videos.


There's a difference between learning a trade and learning a task. I find YouTube videos an excellent tool for learning specific practical tasks. For example, they aren't likely to make me an appliance repair professional, but they were a great help in teaching me how to take apart my clothes dryer. It's the type of education where people are likely to attempt the task or something like it. It isn't a situation where a person just hears something and then repeats it as fact.

Patty Hann
07-30-2023, 1:42 PM
They are all "tools"...apprenticeships/classes, YouTube, Wikipedia, forums, Instructables, ChatGPT/AI. Some are better than others. Some folks prefer one sort to another.
And there's always the misuse of a "tool"...analogous to using a table knife as a screwdriver (had a dense B-I-L who did that), wherein someone will complain the "tool" didn't work right.
Do your homework...read up on the tool and learn its strengths and weaknesses; then there will be no (or very few) unpleasant surprises.

Edward Weber
07-30-2023, 1:56 PM
This whole, table knife isn't a screwdriver, thing has me confused. Since when?

In all seriousness, you're correct of course.
In the news, we've already seen lawyers getting into trouble, call centers putting people's lives in danger, and on and on.
TOO many people assume the "AI" actually thinks; it does not.
All of the scare and panic we hear today about how it's going to take your job is way overblown.
The tool still has, and always will have, flaws - hopefully fewer of them as time goes by.
You're still going to have to do some actual thinking of your own. I know that scares some people.

Jeff Roltgen
07-31-2023, 1:05 PM
You're still going to have to do some actual thinking of your own. I know that scares some people.

^^This.
For me, that does not scare me; it is a relief. What's scary is the seemingly general listing of the population into non-thinking, as there's less effort involved.
It is artificial intelligence, not intellect. Sadly, most people don't seem to realize there is a profound difference between the two terms.

Interesting that some feel they're gaining familiarity by diving into a subject this way, even though what they absorb could be misinformation. When one is learning a subject for the first time, those first bits of information, right or wrong, root very deeply. Human data storage in the brain does not erase/replace as cleanly as a microchip. It is concerning that inaccuracy is accepted for the sake of expediency, so I'm not on board with that justification.

The crux:
How much time does one spend deconstructing the AI conclusion for accuracy? Seems we're not gaining anything more than the ability to throw words around, sounding authoritative, as we spit out inaccuracies about said topic. Now we, personally, are much like the AI itself. No? Generally informed, but not ready to be trusted to act upon said knowledge, at least not until we've made corrections to the inaccuracies we've ingested. But now, how do we go about that? How much time does that take? No real gain here AFAICS.

But hey, you could get instruction from a really bad human carpenter and still get a house built, so...

Edward Weber
07-31-2023, 1:26 PM
Interesting that some feel they're gaining familiarity by diving into a subject this way, even though what they absorb could be misinformation. When one is learning a subject for the first time, those first bits of information, right or wrong, root very deeply. Human data storage in the brain does not erase/replace as cleanly as a microchip. It is concerning that inaccuracy is accepted for the sake of expediency, so I'm not on board with that justification.

I agree,

To simplify, bad habits are hard to un-learn.

Patty Hann
07-31-2023, 2:04 PM
^^This.
When one is learning a subject for the first time, those first bits of information, right or wrong, root very deeply. Human data storage in the brain does not erase/replace as cleanly as a microchip.

All the info I pick up on something from all the sources I listed is just so much "data".
I make/take notes. Invariably I have questions... I always have questions.
I compare data/notes from all those sources and see how much overlap there is.
But even overlap doesn't necessarily mean that those common points are actually right/accurate.
Then I take all that info and start asking questions... here, other forums, Rockler, Woodcraft, my instructors in my WW class (through the local JC), more reading... also asking the experienced woodworkers who help out in the class (some have been doing it for 40 years).
Nothing gets lodged in my brain as a habit until I actually start putting all that info into practice.

Edward Weber
07-31-2023, 3:06 PM
All the info I pick up on something from all the sources I listed is just so much "data".
I make/take notes. Invariably I have questions... I always have questions.
I compare data/notes from all those sources and see how much overlap there is.
But even overlap doesn't necessarily mean that those common points are actually right/accurate.
Then I take all that info and start asking questions... here, other forums, Rockler, Woodcraft, my instructors in my WW class (through the local JC), more reading... also asking the experienced woodworkers who help out in the class (some have been doing it for 40 years).
Nothing gets lodged in my brain as a habit until I actually start putting all that info into practice.

You know you're wasting your time, all you need to do is watch the first YT video that comes up. It's on the internet, it must be true. :rolleyes:

I have been working with tools for over 40 years and I still ask questions and double check things before I simply take someone's word at face value.

Ernie Hobbs
07-31-2023, 10:47 PM
In my job in sales and marketing, I use ChatGPT quite a bit - one of the most useful functions for me is helping me write emails or marketing messaging. I plug in the main concept and what I'm trying to communicate, and the AI helps me craft better writing. It isn't perfect, but it gets me mostly there before I have to tweak the final product. In addition, it is useful for cleaning up large amounts of data or spreadsheets that need curation - I just paste in the columns and it will do what I ask it to. Again, it isn't creating from scratch. I still have to come up with the concepts, but the tool is great for cutting out a lot of the busy work that would take me hours to do myself.

Steve Demuth
07-31-2023, 11:46 PM
In the news, we've already seen lawyers getting into trouble, call centers putting people's lives in danger, and on and on.
TOO many people assume the "AI" actually thinks; it does not.
All of the scare and panic we hear today about how it's going to take your job is way overblown.
The tool still has, and always will have, flaws - hopefully fewer of them as time goes by.
You're still going to have to do some actual thinking of your own. I know that scares some people.

ChatGPT has plenty of limitations, to be sure. But it's too easy to just say "it doesn't think" and therefore it represents no threat. There are things happening inside Large Language Models that are not easily explainable, and look a lot like thinking or intelligence, and the more advanced models (beyond what is in ChatGPT) are very impressive in this regard. A friend of mine is senior VP for research at Microsoft, and responsible for bringing GPT 4.0 to medical problems, and he - who is far more knowledgeable in AI and computer science than any of us commenting here - is convinced that advanced LLMs are reasoning - thinking - and doing so quite impressively. I can tell you that I have seen evidence both from Microsoft and Google that is rather mind blowing.

I don't think any of them are going to eliminate entire job categories in the next year or two, but they are going to get radically better, fairly quickly, and substantially change entire job categories, and quite probably, reduce demand for pure human services. When AI can reliably read a CT-scan better than an experienced radiologist, and write a comprehensive report on the same - and they are nearly to that point - the job of radiologist will be very different from what today's experts trained for.

I've also seen work at Google DeepMind, where they are coupling their AlphaZero self-play reinforcement learning to LLMs. AlphaZero can teach itself to outthink humans on almost any well defined, axiomatic reasoning task (and has done so for playing Chess and Go, and for predicting the structure of proteins from their amino acid sequence, e.g.). Couple it with language capabilities and you may well see a machine outthinking almost any person.

Edwin Santos
08-01-2023, 12:40 AM
ChatGPT has plenty of limitations, to be sure. But it's too easy to just say "it doesn't think" and therefore it represents no threat. There are things happening inside Large Language Models that are not easily explainable, and look a lot like thinking or intelligence, and the more advanced models (beyond what is in ChatGPT) are very impressive in this regard. A friend of mine is senior VP for research at Microsoft, and responsible for bringing GPT 4.0 to medical problems, and he - who is far more knowledgeable in AI and computer science than any of us commenting here - is convinced that advanced LLMs are reasoning - thinking - and doing so quite impressively. I can tell you that I have seen evidence both from Microsoft and Google that is rather mind blowing.

I don't think any of them are going to eliminate entire job categories in the next year or two, but they are going to get radically better, fairly quickly, and substantially change entire job categories, and quite probably, reduce demand for pure human services. When AI can reliably read a CT-scan better than an experienced radiologist, and write a comprehensive report on the same - and they are nearly to that point - the job of radiologist will be very different from what today's experts trained for.

I've also seen work at Google DeepMind, where they are coupling their AlphaZero self-play reinforcement learning to LLMs. AlphaZero can teach itself to outthink humans on almost any well defined, axiomatic reasoning task (and has done so for playing Chess and Go, and for predicting the structure of proteins from their amino acid sequence, e.g.). Couple it with language capabilities and you may well see a machine outthinking almost any person.

I'm no expert, but what you're saying tracks with the fact that a lot of brilliant people and powerful companies are hyperfocused on AI, machine learning and Large Language Models right now. Considering that this technology is still in its infancy, I would not be inclined to declare any of it (including ChatGPT) to be merely a glorified version of Google or Wikipedia, and certainly not overblown, at least not yet. OpenAI will continue to refine and improve these early versions of ChatGPT. The article I read on AlphaZero was remarkable.

For the majority of us, the implications of most forms of emerging technology are difficult to comprehend until they are widely adopted in daily life. Think about the internet and the smartphone as two examples.

Johnny Simms
08-01-2023, 2:06 PM
But what happens when ChatGPT's source of information becomes dominated by articles created by ChatGPT?

Edward Weber
08-01-2023, 3:09 PM
ChatGPT has plenty of limitations, to be sure. But it's too easy to just say "it doesn't think" and therefore it represents no threat. There are things happening inside Large Language Models that are not easily explainable, and look a lot like thinking or intelligence, and the more advanced models (beyond what is in ChatGPT) are very impressive in this regard. A friend of mine is senior VP for research at Microsoft, and responsible for bringing GPT 4.0 to medical problems, and he - who is far more knowledgeable in AI and computer science than any of us commenting here - is convinced that advanced LLMs are reasoning - thinking - and doing so quite impressively. I can tell you that I have seen evidence both from Microsoft and Google that is rather mind blowing.

I don't think any of them are going to eliminate entire job categories in the next year or two, but they are going to get radically better, fairly quickly, and substantially change entire job categories, and quite probably, reduce demand for pure human services. When AI can reliably read a CT-scan better than an experienced radiologist, and write a comprehensive report on the same - and they are nearly to that point - the job of radiologist will be very different from what today's experts trained for.

I've also seen work at Google DeepMind, where they are coupling their AlphaZero self-play reinforcement learning to LLMs. AlphaZero can teach itself to outthink humans on almost any well defined, axiomatic reasoning task (and has done so for playing Chess and Go, and for predicting the structure of proteins from their amino acid sequence, e.g.). Couple it with language capabilities and you may well see a machine outthinking almost any person.

What you've described is that AI basically roughs out the shape or form, but humans are still necessary to refine it and make the final decisions.
Reading a CT scan (comparing and contrasting it with what's in its database) is one thing; determining what to do with the conclusions is another.

I think we have a different idea of what is meant by reasoning.

To be clear, I do think it's a threat because humans control it. It will no doubt be employed in some manner before it's ready, and there will be problems, to say the least (there have already been a few with the chatbots).

Patty Hann
08-01-2023, 4:28 PM
A threat when humans control it... and a threat when it [eventually] acts independent of human control.
Either way, it should make for some fun times....

Steve Demuth
08-01-2023, 4:43 PM
What you've described is that AI basically roughs out the shape or form, but humans are still necessary to refine it and make the final decisions.
Reading a CT scan (comparing and contrasting it with what's in its database) is one thing; determining what to do with the conclusions is another.

I think we have a different idea of what is meant by reasoning.

To be clear, I do think it's a threat because humans control it. It will no doubt be employed in some manner before it's ready, and there will be problems, to say the least (there have already been a few with the chatbots).

I'd love to listen in on the conversation where you explain to one of the radiologists I work with that they are not reasoning when they interpret a CT and write the diagnostic report.

But supposing you're right, let's suppose the scan is of a tumor in someone's neck. Having diagnosed the tumor, the scan goes to a radiation oncologist to develop a plan to treat the tumor with targeted radiation therapy. Your next job is to figure out how to hit the tumor with radiation from multiple angles without hitting any vital organs with any of the beams. To do so, you have to identify in the 3D scan all the off-limits organs, and then figure out angles that will get the beams to the tumor. Wanna bet whether AI or humans are better at these reasoning tasks? Right now, it's done almost entirely by humans, but it's so difficult and time consuming that it's typically only done once for an entire treatment series, even though we know that the tumor shrinks and shifts over the multiple weeks of treatments. AI can do both the organ identification and the beam planning quickly, and better than humans most of the time.

We can't, today, put together a single AI that does everything any particular human does as well as the human. But give me any specific reasoning task, and there is a pretty good chance we can build an AI that outperforms humans at that task; and as LLMs have emerged, the range of reasoning that can be encapsulated in a given model is increasingly large. One can bet those things won't come together to make a combined approach that outperforms us generally; it's not a bet I'd take.

Edward Weber
08-02-2023, 10:19 AM
I'd love to listen in on the conversation where you explain to one of the radiologists I work with that they are not reasoning when they interpret a CT and write the diagnostic report.


The humans reason; the AI is simply a difference engine.

The computer can do the math quickly and gives the results to a human, who then determines next steps.
Again, not saying it's a bad thing but let's not give it qualities it doesn't have.

As to ChatGPT and the others the OP asked about: like any tech in its infancy, it's having some issues.
https://apnews.com/article/artificial-intelligence-hallucination-chatbots-chatgpt-falsehoods-ac4672c5b06e6f91050aa46ee731bcf4

I think some people need to exercise a bit more caution before they deploy these technologies in sensitive, potentially harmful areas.

Steve Demuth
08-02-2023, 10:53 AM
I think some people need to exercise a bit more caution before they deploy these technologies in sensitive, potentially harmful areas.

Of course, care is absolutely needed. I can tell you with high confidence that at least in medicine, where I work, that care is being taken. They do deploy AI (not ChatGPT) for certain tasks, because they have vetted it. That doesn't mean it makes no mistakes, only that it improves decision making overall: by making fewer mistakes than humans (real flesh and blood doctors make mistakes in diagnosis or treatment rather regularly - by one accounting, 40% of the referrals that come into our clinic are mis- or under-diagnosed), by doing cognitive tasks that humans actually can't do (AI can detect disease signals in ECGs and radiographs that a human literally cannot see or understand), or by doing ones humans can't do efficiently enough to be scalable (as in the radiation therapy planning example I gave). Insisting that these things aren't reasoning when machines do them, but are when humans do them, is a distinction without a difference.

Edwin Santos
08-02-2023, 3:51 PM
Interesting conversation.
Looking at a parallel deployment of AI technology, autonomous vehicles, there were similar objections about harm and overconfidence in the capabilities of the tech. It seemed to me that critics were benchmarking autonomous vehicles against zero defect perfection.
But what if the goal is not to achieve perfection, but just to achieve better performance than humans? The latter becomes a way lower bar.
Right now we have over 2 million car accidents per year in the US. If autonomous vehicles reduce that number significantly, do they get a thumbs up? Or does AI need to be perfect, and do absolutely no harm, before it is ready for prime time?

Medical error is a major root cause of harm in the health care setting. What if AI can reduce it dramatically with some potential degree of error? Do we hit the brakes until it's perfect, or reap the improvement benefits using the status quo, not perfection, as the benchmark?
Sometimes you can mitigate risk, but not eliminate it entirely.
It becomes a question of whether the benefits outweigh that risk.

Steve Demuth
08-02-2023, 5:14 PM
Medical error is a major root cause of harm in the health care setting. What if AI can reduce it dramatically with some potential degree of error? Do we hit the brakes until it's perfect, or reap the improvement benefits using the status quo, not perfection, as the benchmark?
Sometimes you can mitigate risk, but not eliminate it entirely.
It becomes a question of whether the benefits outweigh that risk.

Any decision making system, whether it is executed in a human brain, or in an AI model, or a combination of the two, will get some decisions wrong. It is in the nature of the problem: condensing complex situations down to a categorical decision (a diagnosis or treatment protocol, e.g., in medicine) using the best knowledge and heuristics we have simply can't be perfect. So you've got the question exactly right - when do we trust the machines, and how do we combine the best of human intelligence with artificial intelligence to minimize errors.

Right now we get the best value out of AI when we use it to train models that do a small range of decision making very well. Quite often such limited systems outperform humans (and as I've alluded to, in medicine, sometimes do cognitive tasks that humans simply can't do). Those that try to cover a huge range of knowledge and arbitrarily complex situations (like autonomous driving) struggle to close the gap with human performance. But the gap is getting pretty narrow in many cases.

My son, who is a statistician and researcher at the clinic/hospital where I used to work, uses AI-based coding tools like ChatGPT to write code rather regularly. Even though he has to review the code, and sometimes fix it, it's still a big time saver. You'll see similar things in medicine very soon, if you haven't already. Your mammogram may still be read and signed off by a radiologist, but very soon the standard of care will be that an AI algorithm first interprets the scan and highlights/categorizes suspicious areas, or ones that should be further investigated. I've seen dozens of similar proposed algorithms in the R & D pipeline. These things will change the way doctors and nurses work, and in many cases, subsume human judgment as we gain confidence in them.

Edward Weber
08-03-2023, 10:49 AM
Just more fuel for the fire

https://fortune.com/2023/08/01/can-ai-chatgpt-hallucinations-be-fixed-experts-doubt-altman-openai/

Patty Hann
08-03-2023, 11:07 AM
Aaaannd this.... https://fagenwasanni.com/news/new-godbots-and-the-temptation-of-ai/84586/

Edward Weber
08-03-2023, 12:26 PM
Aaaannd this.... https://fagenwasanni.com/news/new-godbots-and-the-temptation-of-ai/84586/
Wow, that's just scary on top of scary.
How long before we see the church of the integrated circuit?

Patty Hann
08-03-2023, 1:05 PM
also said to have answers to various ethical and religious dilemmas.


don't be scared :rolleyes:
Yeah, except an 8-ball doesn't tell you to kill someone, unlike the Krishna godbot... "on some apps, Krishna has already advised killing unbelievers and supporting India’s ruling party."
These aren't just so many "fun apps" for parlor games.
People use them the same way they use palm readers and seance mediums (media? :rolleyes:).
And if you don't think folks believe palm-readers and mediums, you need to get out more. Google "palm reader scams" or "fortune teller scams" ... they are big bucks business.

Edward Weber
08-07-2023, 11:06 AM
https://techpolicy.press/ai-hurts-consumers-and-workers-and-isnt-intelligent/

This is what I've been saying all along; I was beginning to feel like I was the only one.

Patty Hann
08-07-2023, 11:52 AM
https://techpolicy.press/ai-hurts-consumers-and-workers-and-isnt-intelligent/

This is what I've been saying all along; I was beginning to feel like I was the only one.

Rest assured, you are not alone; there are many, many of us out there that have the same concerns.

Steve Demuth
08-07-2023, 4:26 PM
https://techpolicy.press/ai-hurts-consumers-and-workers-and-isnt-intelligent/

This is what I've been saying all along; I was beginning to feel like I was the only one.

The irony is that Bender and Hanna (along with a handful of other self-declared experts) are engaged in exactly the same hype behavior - in their case on behalf of a negative hypothesis about generative AI, rather than a positive one - that they criticize folks like Sam Altman for. Bender and Hanna have even less evidentiary and analytic support for their argument that generative AI is fundamentally "not intelligent" and is dangerous to workers, than the builders and proponents have for their arguments about the power and capability of the tools. They don't understand the working of AI as well as the people building it (despite their confident assertions about what it is and isn't), and they understand the consequences no better than the people building it. They're just selling blind techno gloom and doom, rather than nearly blind techno utopianism.

You can choose which of the two hypes suits your own world view better, based on your biases. There is no way for most of us to make an intelligent choice as to which is closer to reality, however.

Edward Weber
08-07-2023, 7:17 PM
There are proponents on both sides of this for certain.
There is, however, only one set of objective facts.
If I'm unsure of the facts, I would rather be my normal skeptical self and a bit cautious, before stumbling forward on blind faith alone.
This is a developing technology, not sorcery.

Edwin Santos
08-07-2023, 8:09 PM
Right now we get the best value out of AI when we use it to train models that do a small range of decision making very well. Quite often such limited systems outperform humans (and as I've alluded to, in medicine, sometimes do cognitive tasks that humans simply can't do). Those that try to cover a huge range of knowledge and arbitrarily complex situations (like autonomous driving) struggle to close the gap with human performance. But the gap is getting pretty narrow in many cases.



One of my areas of study is behavioral finance, specifically neurological biases and how various logic flaws affect our decision making.
Whether machines can "think" may be a matter of semantics among the participants in a conversation like this thread. I think it's fair to say at the very least AI can be a much more sophisticated form of algorithmic processing than we have yet seen. Where it could shine is the freedom from human flaws. Think about it: bias, prejudice, emotion, fatigue; the list is very long. There's little question that the machine will be superior to the human, in some cases by orders of magnitude based on sheer processing power alone.

Perhaps the end result is not an either/or proposition where there has to be a humans vs. machines winner or loser. The opportunity IMO is a tool that will help protect us from our own flaws. Or stated differently, allow us greater performance by compensating for our flaws.
What comes to mind is the Steve Jobs quote where he called the computer a bicycle for the mind that would herald a new man-machine partnership that would unlock human potential (BTW, the backstory to his quote about the condor is worth looking up)

Fear is one of the most powerful biases, especially when it restricts us from embracing opportunities. Of course, the counter argument is that exuberance can lead to reckless decisions. So far in history I can't think of any tools or technology that caused humans to regress and become "stupid". I'd like to hope this will not be the first one to do so.

Patty Hann
08-07-2023, 10:29 PM
....So far in history I can't think of any tools or technology that caused humans to regress and become "stupid". .....

Really? How 'bout the "New Math" ;)
https://youtu.be/W6OaYPVueW4

Edward Weber
08-08-2023, 11:55 AM
I think it's fair to say at the very least AI can be a much more sophisticated form of algorithmic processing than we have yet seen. Where it could shine is the freedom from human flaws. Think about it: bias, prejudice, emotion, fatigue; the list is very long. There's little question that the machine will be superior to the human, in some cases by orders of magnitude based on sheer processing power alone.



The only area where it can "shine" is mathematics, where there is an objectively right or wrong answer.
The flawed humans provide the data that the machine uses; their flaws are now the algorithm's flaws, allowing you to get an incorrect, or at least questionable, answer more quickly.
GIGO

Keegan Shields
08-08-2023, 12:48 PM
All of the scare and panic we hear today about how it's going to take your job is way overblown.

This depends on what job you do. In the marketing field, people who generate content for things like websites are already being impacted.

I personally know of AIs that are being trained to replace large numbers of salespeople.

United Emirates Airline is rolling out a customer service AI chatbot that will handle all customer interactions. They are a small airline, but the business world is watching...

Just three examples... I'm sure there are countless more.


I also agree with the comments that we don't yet know the full extent of the impact given the early stage of the technology. Much of the AI buzz will settle out as a hype bubble, but real world impacts are already in motion. Lots of blockchain/crypto companies are pivoting to AI. It all has a gold rush feel to it, but there's still gold in them hills. :)

Steve Demuth
08-08-2023, 6:27 PM
There are proponents on both sides of this for certain.
There is, however, only one set of objective facts.
If I'm unsure of the facts, I would rather be my normal skeptical self and a bit cautious, before stumbling forward on blind faith alone.
This is a developing technology, not sorcery.

For an interesting "development" in the ChatGPT (specifically) store, try this: https://chat.openai.com/share/f5341665-7f08-4fca-9639-04201363506e

You have to read well into ChatGPT's second attempt at answering the question to see just how badly hallucination can go in LLMs.

Edward Weber
08-09-2023, 1:51 PM
This depends on what job you do. In the marketing field, people who generate content for things like websites are already being impacted.

I personally know of AIs that are being trained to replace large numbers of salespeople.

United Emirates Airline is rolling out a customer service AI chatbot that will handle all customer interactions. They are a small airline, but the business world is watching...

Just three examples... I'm sure there are countless more.


I also agree with the comments that we don't yet know the full extent of the impact given the early stage of the technology. Much of the AI buzz will settle out as a hype bubble, but real world impacts are already in motion. Lots of blockchain/crypto companies are pivoting to AI. It all has a gold rush feel to it, but there's still gold in them hills. :)

White-collar jobs are going to be affected, and only a portion of them at that.
Also, marketing and sales don't have a "critical" component to them. If something goes wrong, it's not life or death.

Keegan Shields
08-09-2023, 6:26 PM
Edward,

Based on your responses to this thread, it sounds like you have it all figured out.

Good luck.

Patty Hann
08-09-2023, 10:02 PM
For an interesting "development" in the ChatGPT (specifically) store, try this: https://chat.openai.com/share/f5341665-7f08-4fca-9639-04201363506e

You have to read well into ChatGPT's second attempt at answering the question to see just how badly hallucination can go in LLMs.

I took the time to read the whole "Convo" ....
Pardon me while I pick up my jaw from the floor.
Towards the end it somewhat reminded me of getting stuck in a phone menu loop.
But overall....that was ChatGPT gone mad. (It reminds me of HAL's increasingly deranged conversations with Dave)
A nascent technology for sure...and they want to let it loose in (among other places) the medical field. :rolleyes:

Edward Weber
08-10-2023, 8:32 AM
Edward,

Based on your responses to this thread, it sounds like you have it all figured out.

Good luck.
Not at all; this is why I discuss it rather than blindly embrace it without care or thought. There are areas where this can eventually be an incredible tool, but it's not a cure-all for every question or situation.

Here's some more to think about

https://apnews.com/article/chatgpt-cheating-ai-college-1b654b44de2d0dfa4e50bf0186137fc1
“There is going to be a big shift back to paper-based tests,”

Patty Hann
08-10-2023, 1:09 PM
Not at all; this is why I discuss it rather than blindly embrace it without care or thought. There are areas where this can eventually be an incredible tool, but it's not a cure-all for every question or situation.

Here's some more to think about

https://apnews.com/article/chatgpt-cheating-ai-college-1b654b44de2d0dfa4e50bf0186137fc1
“There is going to be a big shift back to paper-based tests,”

Wow. Think about it... your future MD or lawyer or civil engineer or pilot or your kid's teacher(s) had AI do some portion (how much? 20%? 50%? 70%?) of their class work, exams, etc.

Stephen Tashiro
08-10-2023, 1:46 PM
The only area where it can "shine" is mathematics, where there is an objectively right or wrong answer.


That's not what I see. On mathematical questions, ChatGPT 3.5 excels at quoting technical definitions and giving precise verbal descriptions of algorithms, but it often fails when asked to execute these algorithms. For example, it can state the Euclidean algorithm for finding the greatest common factor of two polynomials, but it failed at executing this algorithm when tasked with a specific problem. I find ChatGPT 3.5 does well at straightforward technical and scientific questions that have objectively correct verbal answers - questions like "What soil types favor colonies of mycorrhizal fungi?".
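
For reference, here's a minimal sketch of that procedure in Python - my own illustration of the algorithm ChatGPT could describe but not execute, assuming polynomials are represented as coefficient lists with the highest degree first and using a small tolerance for floating-point zeros:

def poly_rem(a, b, eps=1e-12):
    # Remainder of polynomial long division; a and b are coefficient
    # lists, highest degree first, e.g. x^2 - 1 is [1, 0, -1].
    a = [float(c) for c in a]
    while len(a) >= len(b):
        if abs(a[0]) < eps:       # leading coefficient already zero
            a.pop(0)
            continue
        coef = a[0] / b[0]        # cancel the leading term of a
        for j in range(len(b)):
            a[j] -= coef * b[j]
        a.pop(0)
    while a and abs(a[0]) < eps:  # strip residual leading zeros
        a.pop(0)
    return a

def poly_gcd(a, b):
    # Euclidean algorithm: replace (a, b) by (b, a mod b) until the
    # remainder vanishes, then normalize the result to be monic.
    while b:
        a, b = b, poly_rem(a, b)
    return [c / a[0] for c in a]

print(poly_gcd([1, 0, -1], [1, -1]))  # gcd(x^2 - 1, x - 1) -> [1.0, -1.0], i.e. x - 1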

Edward Weber
08-10-2023, 4:23 PM
That's not what I see. On mathematical questions, ChatGPT 3.5 excels at quoting technical definitions and giving precise verbal descriptions of algorithms, but it often fails when asked to execute these algorithms. For example, it can state the Euclidean algorithm for finding the greatest common factor of two polynomials, but it failed at executing this algorithm when tasked with a specific problem. I find ChatGPT 3.5 does well at straightforward technical and scientific questions that have objectively correct verbal answers - questions like "What soil types favor colonies of mycorrhizal fungi?".

That's what I said, ???
"The only area where it can "shine" is mathematics, where there is an objectively right or wrong answer."

They fail when the answer is subjective, open for debate or there is not enough data for an informed answer.

I asked a Chat bot this question a few months ago.
What are your limitations?
Answer;
As an AI assistant, there are a few limitations to be aware of:

1. Lack of physical presence: Since I am a virtual assistant, I cannot physically perform tasks or carry out actions in the real world.

2. Limited context understanding: While I strive to provide accurate and helpful information, my responses are based on the text input or voice command provided, and I may not always understand the complete context or nuances of a situation.

3. Inability to provide subjective opinions: As an AI, I don't possess personal beliefs, opinions, or emotions. Therefore, I can't provide subjective advice or personal opinions on matters.

4. Privacy and security: I prioritize user privacy and data security, and I am designed to handle information with utmost care. However, it is essential for users to exercise caution and avoid sharing sensitive personal details or confidential information.

5. Learning constraints: While I constantly learn and improve from interactions, my knowledge and abilities are based on pre-existing training data. Hence, I might not have information on very recent events or developments.

This should tell you a lot.

Stephen Tashiro
08-10-2023, 4:30 PM
That's what I said, ???
"The only area where it can "shine" is mathematics, where there is an objectively right or wrong answer."


And I said ChatGPT 3.5 does not shine at mathematics. It is only good at certain aspects of mathematics. It does shine at practical technical questions.

Steve Demuth
08-10-2023, 7:57 PM
That's not what I see. On mathematical questions, ChatGPT 3.5 excels at quoting technical definitions and giving precise verbal descriptions of algorithms, but it often fails when asked to execute these algorithms. For example, it can state the Euclidean algorithm for finding the greatest common factor of two polynomials, but it failed at executing this algorithm when tasked with a specific problem. I find ChatGPT 3.5 does well at straightforward technical and scientific questions that have objectively correct verbal answers - questions like "What soil types favor colonies of mycorrhizal fungi?".

This is exactly right. LLMs like GPT 3.5 (and 4.0, and the raft of similar models from Meta, Google, Anthropic, and others) are algorithms for completing a text in a way that is plausible given their supervisory instruction, plus the prompt, plus whatever text they've already generated, based on what they've learned from a large training set. They only have fact models of the world to the degree that the facts are inherent in their learning of the language. So a lot can be missing, and a lot can be just plain counterfactual in their models, because a lot of the text generated by humans is either deliberately and stylistically counterfactual, or just plain false. Learning language doesn't really prepare you very well to tell fact from fiction, or plausible from fantastic, because our language is equally adept at fiction and fantasy as it is at objectively grounded truth. LLMs only have reasoning models (generally speaking) to the degree that the model, as trained, constructs a reasoning model. It appears that LLMs do construct some primitive reasoning models, and the very advanced ones some reasonably sophisticated ones, but I've never met an AI researcher who claims to understand how that happens, or what the models actually look like. So a naive LLM can't count, can't solve a problem it's never seen, certainly can't construct an integral that it's never seen through novel application of various techniques, has no facility to tell a real reference to an authoritative source from a fake one, and probably "knows" a lot of stuff that just isn't true. If you want these things, you're going to have to combine your LLM with some additional AI and fact checking capabilities.
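
To make that concrete, the core of such a model is, schematically, nothing more than this loop (a toy sketch in Python; the model function here is a made-up stand-in that returns next-token probabilities, not any vendor's actual API):

import random

def complete(model, tokens, max_new=100):
    # Autoregressive completion: repeatedly pick a next token that is
    # plausible given the prompt plus everything generated so far.
    for _ in range(max_new):
        probs = model(tokens)             # hypothetical: token -> probability
        candidates = list(probs.keys())
        weights = list(probs.values())
        next_tok = random.choices(candidates, weights=weights)[0]
        if next_tok == "<end>":
            break
        tokens = tokens + [next_tok]
    return tokens

Note what is absent: no fact model, no source lookup, no check that the continuation is true - only that it is statistically plausible.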

The post-naive training of services like ChatGPT tries to "fix" these problems by using reinforcement learning in various forms to smarten up the models, and keep them from saying absurd, untrue, and obnoxious stuff. It only kinda works, and in general is easily defeated with some effort.

Steve Demuth
08-10-2023, 8:05 PM
I took the time to read the whole "Convo" ....
Pardon me while I pick up my jaw from the floor.
Towards the end it somewhat reminded me of getting stuck in a phone menu loop.
But overall....that was ChatGPT gone mad. (It reminds me of HAL's increasingly deranged conversations with Dave)
A nascent technology for sure...and they want to let it loose in (among other places) the medical field. :rolleyes:

Anyone who tries to use ChatGPT to "do" medicine is guilty of gross negligence and malpractice. I know there are some who are doing so, but it's just plain wrong at this stage of the technology.

I have seen LLMs - like Google's Med PaLM - that are trained on, and specifically for, medical tasks. They are night and day better than the naive, general purpose models such as GPT 3.5. But even so, they should only make their way into medical practice, in any form, after rigorous validation and testing - just as we do with any other medical algorithm, AI or not, or with new drugs (and after all, a drug protocol for treating disease is basically an algorithm for the analog "device" known as human metabolism and homeostasis). So far, I think the companies with these models (Google and Microsoft are the two I know best) are being responsible and working with real medical institutions to actually do the required validation.

Patty Hann
08-11-2023, 1:36 AM
https://www.boxingscene.com/forums/non-boxing-talk/boxing-scene-lounge/689163-any-wrestling-fans-on-boxingscene/page439
Pardon my "obtusity", but the relevance of that link is?..... :confused: :confused: :confused:

Bill Dufour
08-11-2023, 12:01 PM
I wonder if this explains the Amazon search function. It does not answer the question of what you want; it gives you what most people want to sell you, sorta kinda based on your search terms.
Bill D.