
Thread: Tried ChatGPT yet?

  1. #1
    Join Date
    Sep 2006
    Location
    Las Cruces, NM
    Posts
    2,040

    Tried ChatGPT yet?

    I've been using ChatGPT and I'm impressed. I find a few minutes on ChatGPT accomplishes what I'd get in half an hour of web browsing on practical topics - like what species of juniper grow into trees, which have berries that birds like, etc - things I could find out by looking at several web sites and cross referencing the information. ChatGPT doesn't do a better job than I do, but it does the same work faster.

    I also use it to review advanced mathematics. It deals with technical definitions and general outlines of mathematical procedures quite well. However, it makes elementary blunders in doing specific calculations. ChatGPT will admit its errors when you point out contradictions. That said, it also says that the ChatGPT program does not learn from the chats - it only "knows" about the data used to train it. So don't waste your time expecting to teach it something it will remember. It keeps track of what's said within a chat, but that doesn't carry over to new chats.

    Online, there are lots of posts about how ChatGPT handles (or fails at) logical puzzles or controversial political and philosophical questions. To me, that's like posting about how well a trim router breaks up concrete. Like all tools, I find ChatGPT is only useful for certain tasks.

  2. #2
    It can be helpful to a point.
    The only way it can provide accurate answers is if there is accurate data for it to draw upon.
    The quality of the data that ChatGPT and others use is not vetted for accuracy. You just get the most common answers to your query from whatever data is available, regardless of accuracy.
    The general consensus answers are not always correct.
    It also further erodes the entire concept of critical thinking.

  3. #3
    Join Date
    Sep 2006
    Location
    Las Cruces, NM
    Posts
    2,040
    Quote Originally Posted by Edward Weber View Post
    The general consensus answers are not always correct.
    It also further erodes the entire concept of critical thinking.
    I agree that there's no gain in accuracy, but in that respect it doesn't differ from quick web research that I'd do myself. There's no guarantee of accuracy in either case. As to critical thinking, I'm apt to think critically about ChatGPT's answers because I know it's a program. That's not true of my attitude toward other "authoritative" websites - sawmillcreek included, of course.

  4. #4
    Well, let's ask it:

    As an AI language model, I (ChatGPT) strive to provide accurate information based on the data I was trained on. My responses are generated by predicting the most likely answer given the input I receive. However, it's essential to understand that I am not infallible and can make errors or provide inaccurate information.

    It's essential to use the information I provide as a starting point for further research and verification, especially for critical decisions or tasks where high accuracy is required. If you are unsure about the information I provide, I recommend consulting reputable sources or domain experts for confirmation.

  5. #5
    Join Date
    Apr 2017
    Location
    Southwest US
    Posts
    1,062
    Quote Originally Posted by Clifford McGuire View Post
    Well, let's ask it:

    As an AI language model, I (ChatGPT) strive to provide accurate information based on the data I was trained on. My responses are generated by predicting the most likely answer given the input I receive. However, it's essential to understand that I am not infallible and can make errors or provide inaccurate information.

    It's essential to use the information I provide as a starting point for further research and verification, especially for critical decisions or tasks where high accuracy is required. If you are unsure about the information I provide, I recommend consulting reputable sources or domain experts for confirmation.
    This situation is somewhat similar to using Wikipedia as a source.
    Wikipedia isn't always accurate (I've had to correct entries dealing with Literature), but it is an excellent starting point.
    You learn terminology, applications, and what some of the issues or points of contention are; you can use all of that to start your "real research".

    Example: you've always wanted to learn furniture making and you've heard of "joinery", so you Wiki it.
    Some of what you read may be debatable, some may be flat-out wrong, but now you have learned some terminology, uses, pros/cons of the types of joinery.
    And you use it all as a jumping-off point for more learning, now that you have an idea of the right questions to ask.

    For some people the search is a bother and they want answers right away. Well, the maxim "Act in haste, repent at leisure" endures for a reason.
    I like engaging in "the chase"... I learn an awful lot that way, maybe not usable right away, but eventually it pays off.

    So I'm thinking the accuracy of ChatGPT is the same way. Use it as a starting point, knowing it probably has not a few mistakes/errors.
    Then...Tally-ho!... Onward and upward!... "The game is afoot!"



    Bullwinkle.jpg
    Last edited by Patty Hann; 07-30-2023 at 2:46 AM.
    "What you see and what you hear depends a great deal on where you are standing.
    It also depends on what sort of person you are."

  6. #6
    Join Date
    Feb 2016
    Location
    NE Iowa
    Posts
    1,245
    It's pretty shaky as a way to get reliable information about anything that isn't common knowledge. Like Patty Hann says, you can get terminology and the general drift, but relying on any detail it produces is pretty risky. That goes for the version embedded in Bing/Edge as well - if you use it to find links to web pages, you might be OK, but for actually answering questions it's not very reliable. Ditto Google's version, Bard. On the other hand, if you want a shortcut to a page or two of "good enough" text about a topic for which you're willing to provide the outline, and you scan the results for obvious stupidities, tools like ChatGPT can probably save you a lot of time.

    I've been working some with specialized versions of large language models, specifically ones trained and tuned to answer medical questions (search for Med-PaLM 2 to learn a bit about Google's). They still suffer from too much confabulation to be quite ready for prime time, but they are incredibly impressive nonetheless. I expect LLMs coupled to explicit domain models - medicine, law, engineering, and the like - to emerge in the relatively near future that really are top experts in their respective fields, and highly useful. General knowledge will be harder to nail down to adequate accuracy and reliability, because there is just so much room for ambiguity in what humanity collectively knows.

  7. #7
    Quote Originally Posted by Patty Hann View Post
    This situation is somewhat similar to using Wikipedia as a source.
    Wikipedia isn't always accurate (I've had to correct entries dealing with Literature), but it is an excellent starting point.
    You learn terminology, applications, and what some of the issues or points of contention are; you can use all of that to start your "real research".

    Example: you've always wanted to learn furniture making and you've heard of "joinery", so you Wiki it.
    Some of what you read may be debatable, some may be flat-out wrong, but now you have learned some terminology, uses, pros/cons of the types of joinery.
    And you use it all as a jumping-off point for more learning, now that you have an idea of the right questions to ask.

    For some people the search is a bother and they want answers right away. Well, the maxim "Act in haste, repent at leisure" endures for a reason.
    I like engaging in "the chase"... I learn an awful lot that way, maybe not usable right away, but eventually it pays off.

    So I'm thinking the accuracy of ChatGPT is the same way. Use it as a starting point, knowing it probably has not a few mistakes/errors.
    Then...Tally-ho!... Onward and upward!... "The game is afoot!"



    Bullwinkle.jpg
    I agree with the theory of using Wikipedia and ChatGPT as a starting point; in that sense, it's a great tool.
    In reality, that's not what's happening; people are using it as a credible source of information.
    It's like learning woodworking solely from YouTube videos.
    You may learn something, but what you learn is up for debate because, as I said earlier, the information your answer is drawn from is not vetted for accuracy.
    Also, "learning" by using the chatbots for answers provides no depth of knowledge; you did nothing to arrive at the answer. No critical thinking, no problem solving, nothing. All you get is the answer, not any real knowledge.

    There seems to be little point in using a tool and relying on its performance, all the while knowing that it may not work as it should.

    Question:
    Why do you give me incorrect answers sometimes?
    Answer:
    I apologize if there have been instances where you feel I have provided incorrect answers. As an AI assistant, I strive to provide accurate and reliable information based on the data and knowledge available to me. However, there can be limitations or gaps in my understanding, and occasionally, mistakes may occur. If you believe I have provided an incorrect answer, please let me know, and I will do my best to rectify it and provide the correct information.

  8. #8
    Join Date
    May 2007
    Location
    Fort Smith, Arkansas
    Posts
    1,992
    Out of curiosity mostly, I just completed an Arduino project using ChatGPT to write virtually all of the code. I ended up with an apparently working project, but getting there was frustrating. I can say that only by being highly specific and detailed in one's requests will one get remotely what is expected. Otherwise, it's a matter of (my) garbage in, garbage out. I cannot imagine using it alone for any critical coding task.
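
    For anyone curious what "highly specific" buys you, here's a minimal, hypothetical sketch - not my actual project, just the kind of short program ChatGPT will hand back for a request like "toggle the built-in LED with a debounced pushbutton on pin 2, using the internal pull-up." The pin numbers and wiring are assumptions, and it's exactly the sort of output you still want to read line by line before trusting it on real hardware.

    // Hypothetical sketch, not the project described above: a debounced
    // pushbutton on pin 2 toggles the built-in LED. Pin numbers and wiring
    // are assumptions; verify them against your own board before uploading.
    const int buttonPin = 2;                 // pushbutton to ground, internal pull-up
    const int ledPin = 13;                   // built-in LED on most Arduino boards
    const unsigned long debounceDelay = 50;  // milliseconds

    int ledState = LOW;
    int stableState = HIGH;                  // HIGH = not pressed with INPUT_PULLUP
    int lastReading = HIGH;
    unsigned long lastDebounceTime = 0;

    void setup() {
      pinMode(buttonPin, INPUT_PULLUP);
      pinMode(ledPin, OUTPUT);
      digitalWrite(ledPin, ledState);
    }

    void loop() {
      int reading = digitalRead(buttonPin);

      // Restart the debounce timer whenever the raw reading changes.
      if (reading != lastReading) {
        lastDebounceTime = millis();
      }

      // Accept the reading only after it has been stable long enough.
      if ((millis() - lastDebounceTime) > debounceDelay) {
        if (reading != stableState) {
          stableState = reading;
          // Toggle the LED on each press (falling edge with pull-up wiring).
          if (stableState == LOW) {
            ledState = !ledState;
            digitalWrite(ledPin, ledState);
          }
        }
      }

      lastReading = reading;
    }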
    Last edited by Michael Weber; 07-30-2023 at 12:58 PM.
    My three favorite things are the Oxford comma, irony and missed opportunities

    The problem with humanity is: we have paleolithic emotions; medieval institutions; and God-like technology. Edward O. Wilson

  9. #9
    Join Date
    Sep 2006
    Location
    Las Cruces, NM
    Posts
    2,040
    Quote Originally Posted by Edward Weber View Post
    In reality, that's not what's happening; people are using it as a credible source of information.
    I haven't noticed people citing ChatGPT as a credible source in online discussions, but I don't look at forums about politics or literature, etc. Of course - how often do people reveal their sources when they discuss anything? People who do quote Wikipedia or ChatGPT deserve credit for honesty.

    Quote Originally Posted by Edward Weber View Post
    It's like learning woodworking solely from YouTube videos.
    There's a difference between learning a trade and learning a task. I find YouTube videos an excellent tool for learning specific practical tasks. For example, they aren't likely to make me an appliance repair professional, but they were a great help in teaching me how to take apart my clothes dryer. It's the type of education where people are likely to attempt the task, or something like it. It isn't a situation where a person just hears something and then repeats it as fact.

  10. #10
    Join Date
    Apr 2017
    Location
    Southwest US
    Posts
    1,062
    They are all "tools"... apprenticeships/classes, YouTube, Wikipedia, forums, Instructables, ChatGPT/AI. Some are better than others. Some folks prefer one sort to another.
    And there's always the misuse of a "tool"... analogous to using a table knife as a screwdriver (had a dense B-I-L who did that), wherein someone will complain the "tool" didn't work right.
    Do your homework... read up on the tool and learn its strengths and weaknesses; then there will be no (or very few) unpleasant surprises.
    "What you see and what you hear depends a great deal on where you are standing.
    It also depends on what sort of person you are."

  11. #11
    This whole "a table knife isn't a screwdriver" thing has me confused. Since when?

    In all seriousness, you're correct of course.
    In the news, we've already seen lawyers getting into trouble, call centers putting people's lives in danger, and on and on.
    TOO many people assume the "AI" actually thinks; it does not.
    All of the scare and panic we hear today about it taking your job is way overblown.
    The tool has flaws and always will, though hopefully fewer of them as time goes by.
    You're still going to have to do some actual thinking of your own. I know that scares some people.

  12. #12
    You're still going to have to do some actual thinking of your own. I know that scares some people.
    ^^This.
    That does not scare me; it is a relief. What's scary is the seemingly general listing of the population toward non-thinking, as less effort is involved.
    It is artificial intelligence, not intellect. Sadly, most people don't seem to realize there is a profound difference between the two terms.

    Interesting that some feel they're gaining familiarity by diving into a subject this way, even though it could be misinformed. When one is learning a subject for the first time, those first bits of information, right or wrong, root very deeply. Human data storage in the brain does not erase/replace as cleanly as a microchip. It is concerning that inaccuracy is accepted for the sake of expediency, so I'm not on board with that justification.

    The crux:
    How much time does one spend deconstructing the AI conclusion for accuracy? Seems we're not gaining anything more than the ability to throw words around, sounding authoritative, as we spit out inaccuracies about said topic. Now we, personally, are much like the AI itself. No? Generally informed, but not ready to be trusted to act upon said knowledge, at least not until we've made corrections to the inaccuracies we've ingested. But now, how do we go about that? How much time does that take? No real gain here AFAICS.

    But hey, you could get instruction from a really bad human carpenter and still get a house built, so...

  13. #13
    Quote Originally Posted by Jeff Roltgen View Post
    Interesting that some feel they're gaining familiarity by diving into a subject this way, even though it could be misinformed. When one is learning a subject for the first time, those first bits of information, right or wrong, root very deeply. Human data storage in the brain does not erase/replace as cleanly as a microchip. It is concerning that inaccuracy is accepted for the sake of expediency, so I'm not on board with that justification.
    I agree.

    To simplify, bad habits are hard to un-learn.

  14. #14
    Join Date
    Apr 2017
    Location
    Southwest US
    Posts
    1,062
    Quote Originally Posted by Jeff Roltgen View Post
    ^^This.
    When one is learning a subject for the first time, those first bits of information, right or wrong, root very deeply. Human data storage in the brain does not erase/replace as cleanly as a microchip.
    All the info I pick up on something from all the sources I listed is just so much "data".
    I make/take notes. Invariably I have questions... I always have questions.
    I compare data/notes from all those sources and see how much overlap there is.
    But even overlap doesn't necessarily mean that those common points are actually right/accurate.
    Then I take all that info and start asking questions,.. here, other forums, Rockler, Woodcraft, my instructors in my WW class (through the local JC), more reading... also asking the experienced woodworkers that help out in the class (some have been doing it for 40 years).
    Nothing gets lodged in my brain as a habit until I actually start putting all that info into practice.
    "What you see and what you hear depends a great deal on where you are standing.
    It also depends on what sort of person you are."

  15. #15
    Quote Originally Posted by Patty Hann View Post
    All the info I pick up on something from all the sources I listed is just so much "data".
    I make/take notes. Invariably I have questions... I always have questions.
    I compare data/notes from all those sources and see how much overlap there is.
    But even overlap doesn't necessarily mean that those common points are actually right/accurate.
    Then I take all that info and start asking questions,.. here, other forums, Rockler, Woodcraft, my instructors in my WW class (through the local JC), more reading... also asking the experienced woodworkers that help out in the class (some have been doing it for 40 years).
    Nothing gets lodged in my brain as a habit until I actually start putting all that info into practice.
    You know, you're wasting your time; all you need to do is watch the first YT video that comes up. It's on the internet, so it must be true.

    I have been working with tools for over 40 years, and I still ask questions and double-check things before I simply take someone's word at face value.
