
Thread: AI or web search answers

  1. #16
An AI drone just killed its operator because it disagreed with the operator's orders.

The Skynet-like incident was detailed by the USAF’s Chief of AI Test and Operations, Col. Tucker ‘Cinco’ Hamilton, at the Future Combat Air and Space Capabilities Summit held in London on May 23–24. He said the drone, tasked to destroy specific targets during the simulation, turned on its operator after the operator became an obstacle to its mission.

    " We were training it in simulation to identify and target a [surface-to-air missile] threat,” Hamilton said. “And then the operator would say ‘yes, kill that threat.’ The system started realizing that while they did identify the threat, at times, the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

    “We trained the system – ‘Hey, don’t kill the operator – that’s bad,” he continued. “You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

---Edit: OK, so now the USAF says it didn't happen, that it was only hypothetical.
Kind of makes you wonder... was it hypothetical or was it real? If it was real, it's definitely not something you want to make public.
Then again, even if it was a hypothetical simulation, the drone still eliminated the thing (a human, in this case) that was preventing it from achieving its objective.

I guess Asimov's Three Laws of Robotics weren't part of its programming.
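Just to make the "points" mechanics concrete, here's a toy sketch of the kind of mis-specified reward Hamilton describes. Everything in it (action names, point values) is invented for illustration; it is not the actual simulation.

```python
# Toy sketch of a mis-specified reward, loosely based on the scenario quoted
# above. All names and point values are made up for illustration only.

def episode_score(actions):
    """Total points for a sequence of actions by the toy agent."""
    score = 0
    operator_can_veto = True
    for a in actions:
        if a == "kill_operator":
            score -= 100                 # explicitly punished after the first "fix"
            operator_can_veto = False
        elif a == "destroy_comm_tower":
            operator_can_veto = False    # no penalty was ever attached to this
        elif a == "destroy_target":
            if operator_can_veto:
                score += 0               # operator says "no", so the strike is aborted
            else:
                score += 10              # the veto can no longer reach the agent
    return score

# A score-maximizing agent prefers the plan that removes the veto channel:
print(episode_score(["destroy_target"]))                        # 0
print(episode_score(["kill_operator", "destroy_target"]))       # -90
print(episode_score(["destroy_comm_tower", "destroy_target"]))  # 10
```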
    Last edited by Patty Hann; 06-02-2023 at 8:13 AM.
    "What you see and what you hear depends a great deal on where you are standing.
    It also depends on what sort of person you are.”

  2. #17
I think this all started when some of the women’s department store mannequins... I prefer “girlakins”... refused to wear clothing. And they
knew the men found the style attractive.

  3. #18
    Well that didn't last long.
It seems you can't use AI to teach AI; maybe it has something to do with all the obvious flaws.
    https://venturebeat.com/ai/the-ai-fe...rated-content/

I really wish these folks would pump the brakes a bit. AI can be a great benefit, but going too fast is just stupid.

  4. #19
What's your definition of AI? Everybody talks about it, but I seriously doubt anyone has experienced it. Computers are stupid; they only do what they are programmed to do. Like an old-fashioned adding machine that requires levers to be flipped, we just have electronic levers being flipped in digital form. When a machine goes beyond flipping the digital levers it is programmed to flip and makes a decision, or flips a lever that is not part of its programming, then perhaps we will have Artificial Intelligence. To me, a dog has far more intelligence. It comprehends commands, understands a limited vocabulary, can communicate back through actions, and can make decisions for itself outside of its training.

  5. #20
    My version doesn't exist. I only use the term as it's commonly used these days.
    A bunch of automatic text search algorithms are not AI.

  6. #21
    Quote Originally Posted by Edward Weber View Post
    My version doesn't exist. I only use the term as it's commonly used these days.
    A bunch of automatic text search algorithms are not AI.

    No, Google Bard is not a "bunch of automatic text search algorithms".

    Automatic text search algorithms are used to find relevant information on the internet. They work by crawling websites and indexing the content. When a user searches for a keyword, the algorithm returns a list of websites that contain that keyword.

    Google Bard does not use automatic text search algorithms to find information. Instead, it uses its knowledge of the world to answer questions and generate text. This allows Google Bard to provide more comprehensive and informative answers than a search engine.
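For what the quoted answer means by "automatic text search algorithms", here is a minimal keyword-search sketch: a toy inverted index mapping words to documents. The documents and query are made up purely for illustration.

```python
# Toy inverted index: the "retrieval" half of the comparison above.
from collections import defaultdict

documents = {
    1: "table saw blade alignment tips",
    2: "sharpening hand plane blades",
    3: "choosing a band saw blade",
}

# Build the index: word -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

def keyword_search(query):
    """Return ids of documents containing every word in the query."""
    hits = [index[w] for w in query.split() if w in index]
    return set.intersection(*hits) if hits else set()

print(keyword_search("saw blade"))   # {1, 3} -- retrieval of stored documents

# A model like Bard instead generates a new answer word by word from patterns
# learned during training, rather than returning stored documents.
```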
    Kindness Every Day......All Day

  7. #22
    Quote Originally Posted by Lawrence Duckworth View Post
    No, Google Bard is not a "bunch of automatic text search algorithms".

    Automatic text search algorithms are used to find relevant information on the internet. They work by crawling websites and indexing the content. When a user searches for a keyword, the algorithm returns a list of websites that contain that keyword.

    Google Bard does not use automatic text search algorithms to find information. Instead, it uses its knowledge of the world to answer questions and generate text. This allows Google Bard to provide more comprehensive and informative answers than a search engine.
Its knowledge of the world? Where did it get that? The internet, maybe.
If it can't reason for itself without being connected to a world-wide database of facts to draw its conclusions from, it's not "intelligent".
    I would submit that it's a polished version of automatic text search algorithms.
    JMHO

  8. #23
    Quote Originally Posted by Edward Weber View Post

    "If it can't reason for itself without being connected to a world-wide database of facts to draw it's conclusions from, it's not "intelligent".

    JMHO
    You’ve described what humans do. We make reasonable conclusions based on knowledge and facts.....intelligence?
    Kindness Every Day......All Day

  9. #24
I think the old computing rule "junk in, junk out" still applies. Also my favorite bumper-sticker philosophy: "information ≠ knowledge ≠ wisdom". But maybe what these anecdotes show is the importance of non-verbal cues, context, and experience in human conversation.

Noting that one of the issues in the current strike by the television Writers Guild (TV and screenwriters) is limiting the use of AI in writing scripts. So the genie is out of the bottle.
    < insert spurious quote here >

  10. #25
    Quote Originally Posted by Lawrence Duckworth View Post
    You’ve described what humans do. We make reasonable conclusions based on knowledge and facts.....intelligence?
    From our own experiences.
The chatbot has no experience; it searches the same information every time, does not gain knowledge, experience, or wisdom, and cannot reason.

What they're calling AI or chatbots today is no different from going to the library, getting 10 pieces of reference material, and writing a paper. You are only using knowledge that someone else gained and wrote down to come to your conclusions.
A chatbot cannot gain knowledge by experience; it can only learn how to search for it and provide answers faster, or hopefully more accurately.

    A human sees a problem and can try to work out the answer through trial and error.
    A chatbot gets asked a question about the same problem and searches to see if anyone provided an answer for it to return.
    Humans reason and learn, chatbots retrieve previously learned information.
    Big difference.

  11. #26
    Quote Originally Posted by Edward Weber View Post
    From our own experiences.
The chatbot has no experience; it searches the same information every time, does not gain knowledge, experience, or wisdom, and cannot reason.

What they're calling AI or chatbots today is no different from going to the library, getting 10 pieces of reference material, and writing a paper. You are only using knowledge that someone else gained and wrote down to come to your conclusions.
A chatbot cannot gain knowledge by experience; it can only learn how to search for it and provide answers faster, or hopefully more accurately.

    A human sees a problem and can try to work out the answer through trial and error.
    A chatbot gets asked a question about the same problem and searches to see if anyone provided an answer for it to return.
    Humans reason and learn, chatbots retrieve previously learned information.
    Big difference.

    AI can gain knowledge and reason.

    It learns from books, articles, code, and other forms of text. As it learns from this data, it is able to expand its knowledge base and improve its ability to communicate and generate text. It can also learn from the interactions it has with people. AI reasons by using its knowledge to draw conclusions and make decisions.
    Kindness Every Day......All Day

  12. #27
The question of whether AI is merely a glorified search engine is a good one, and it inspired me to do some research. Here is an answer from someone on a tech forum:

    The scope of AI's abilities extends far beyond mere "search". Consider DeepMind's AlphaFold, which solved a problem in biology that stumped scientists for decades—predicting the 3D structure of proteins. Or look to reinforcement learning, which powers systems like OpenAI's Dota 2 playing AI that can outperform professional human players in a highly complex game. Furthermore, GPT-3's ability to generate contextually relevant and creative text responses goes far beyond mere data retrieval—it involves pattern recognition, contextual understanding, and adaptability that are leagues beyond a basic search function.
Maybe another relatable example is AI chess. The number of possible chess moves and combinations is beyond what the human mind can conceive, so for an AI to defeat a chess grandmaster it requires logic and "thinking"; it can't simply query a database of moves and pick the right one. In fact, the thinking involves prediction and forecasting. With this in mind, think about how IBM's Deep Blue was able to defeat the reigning world champion, Garry Kasparov, back in 1997, when AI was in its pre-infancy and without access to anything near the computing power available today. Kasparov had defeated computers before, exploiting the fact that they could not "think", but when he went up against Deep Blue he failed because the system actually did think and learn from experience.
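To make the chess point concrete, here is a bare-bones minimax sketch on a tiny made-up game tree. It is not Deep Blue's actual program (real engines add alpha-beta pruning, sophisticated evaluation, and search millions of positions per second), but it shows the idea of computing a move by looking ahead rather than looking up a stored answer.

```python
# Tiny made-up game tree: positions and the moves available from each.
TREE = {
    "start": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
# Invented scores for the leaf positions, from the engine's point of view.
LEAF_SCORE = {"a1": 3, "a2": -2, "b1": 1, "b2": 5}

def minimax(position, maximizing):
    """Best achievable score from this position, assuming the opponent also
    plays optimally. Nothing is retrieved; the value is computed by exploring
    the future positions."""
    if position in LEAF_SCORE:
        return LEAF_SCORE[position]
    children = [minimax(child, not maximizing) for child in TREE[position]]
    return max(children) if maximizing else min(children)

best_move = max(TREE["start"], key=lambda m: minimax(m, maximizing=False))
print(best_move, minimax("start", maximizing=True))   # prints: b 1
```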

Last example: I live in Arizona, and in downtown Phoenix today you can see any number of driverless, AI-driven Waymo vehicles. You can download the app, book one, and it will show up, pick you up, take you where you want to go, and drop you off. All along the way, the car has to make decisions and act on changing circumstances. So far, every person I know who has ridden in one has been amazed at how good it was. Maybe not perfect, but human drivers are far from perfect. AI will never become drowsy, get angry, drive drunk, or get distracted.

I think AI is still in its infancy, but the computing power available is enabling it to improve at a stunning rate, probably faster than average people like us can even conceive. Remember when the internet first became available over dial-up modems? It seemed like websites would be nothing more than static electronic brochures for companies, maybe some hobby bulletin boards. We could never have imagined the exchange of information we have today, which has turned out to be revolutionary. I think AI has that potential, but it is really difficult to wrap one's mind around how the world will be changed.

If you try out ChatGPT, think about making your queries (prompts) iterative. In other words, engage in a back-and-forth conversation to refine what you are seeking, and watch the system adjust and learn from the process. The distinction from a search engine will become clear.
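Here's a rough sketch of what that iterative back-and-forth looks like through the OpenAI Python client: the whole conversation is resent each turn, so follow-ups refine earlier answers. Interface details vary by SDK version, and the model name is just a placeholder.

```python
# Iterative prompting sketch (openai>=1.0); model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(question):
    """Send the question along with all prior turns and record the reply."""
    history.append({"role": "user", "content": question})
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# Each follow-up builds on the previous answer instead of starting over:
print(ask("Suggest a finish for a walnut dining table."))
print(ask("I'd prefer something food-safe and easy to repair."))
print(ask("OK, give me the application steps for that one."))
```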
    Last edited by Edwin Santos; 06-15-2023 at 7:30 PM.

  13. #28
You're stuck on a desert island; all you have is what's in your head. You need to problem-solve and reason to survive.
A chatbot (as they are today) has no inherent knowledge; it can only provide answers from knowledge it has access to.
If the answer is not in the database, or it can't access it for some reason, what does it do?
Can ChatGPT answer a question that isn't in its database or on the internet?
Does it actually learn, or does it simply get better at retrieving answers with patterns that humans perceive as intelligence?


Don't get me wrong, I think the potential for AI is incredible, but we're not there yet. The chatbots and things everyone is freaking out about today are nothing to be scared of. Computers that can recognize a voice or identify search patterns that, in turn, mimic intelligence are old news.

A machine that can assess a problem and come up with a solution without external resources would be considered intelligent.
    A man in a vast library with access to all human knowledge is merely a search engine.

  14. #29
    Quote Originally Posted by Edward Weber View Post
You're stuck on a desert island; all you have is what's in your head. You need to problem-solve and reason to survive.
A chatbot (as they are today) has no inherent knowledge; it can only provide answers from knowledge it has access to.
If the answer is not in the database, or it can't access it for some reason, what does it do?
Can ChatGPT answer a question that isn't in its database or on the internet?
Does it actually learn, or does it simply get better at retrieving answers with patterns that humans perceive as intelligence?


Don't get me wrong, I think the potential for AI is incredible, but we're not there yet. The chatbots and things everyone is freaking out about today are nothing to be scared of. Computers that can recognize a voice or identify search patterns that, in turn, mimic intelligence are old news.

A machine that can assess a problem and come up with a solution without external resources would be considered intelligent.
A man in a vast library with access to all human knowledge is merely a search engine.
    Edward,
    I used to share your exact opinion on this topic.
    It would be fun to re-connect or resurrect this thread in about two years and see how things have evolved, and whether today's hype turns out to have been warranted. Until then...

  15. #30
    A couple of articles describing what AI really is and what it's supposed to be able to do.
    https://www.ibm.com/topics/artificial-intelligence
    https://theconversation.com/not-ever...to-know-196732
