Quote: rxwineQuote: darkoz
Yeah a load of BS.
I work with AI 7 days a week at this point. It's painfully obvious it's just a non-sentient computer program.
I follow up with all the advancements in the specific AI field I work in, and every single AI advancement is just the result of humans programming updates.
The interesting thing is this really is no different than the gambler's fallacy. The computer chips have no thoughts; they are just silicon and wiring.
link to original post
I work on the assumption of that whole duck quote: swims like a duck, quacks like a duck, and after enough qualitative improvements you might as well call it as good as a duck. Or to use an extreme example: once you get killed by something you claimed wouldn't be able to kill you, the point that it isn't capable or dangerous turns out to be pretty useless, since it didn't serve you well.
Once it's finally clear there are some things it will never do that humans do, I'll change my mind.
(right now, I'm excluding biological accomplishments like producing organic feces)
link to original post
Do you work with AI tools?
Just asking, because like I said, once you start working with them and actively studying how they work, it doesn't take long before you realize they are nothing but human-made computer programs with as much intelligence as your car.
Hollywood has always had a fascination with intelligent machines. Even cars. But no matter how many Herbie the Love Bugs, Christines, or Maximum Overdrive films get made, there aren't any actual intelligent cars.
There are now "Smart cars" which do their own driving. But they too are not intelligent.
AI is not quacking like a duck. It's running and being programmed as expected by humans. Sometimes it malfunctions but only in a non-intelligent manner like when a slot machine malfunctions.
Try using some AI tools. You will quickly realize how automated they are.
And as a final thought…..
Quote:The computer chips have no thoughts, they are just silicon and wiring.
So I guess the brain has no thoughts as they are just neurons and serotonin?
what this says to me is that science fiction writers are desperate to believe such attacks with logic will ultimately be what saves us from evil AI
>>>
Here is a list of Star Trek: The Original Series episodes where Kirk defeats a computer:
"The Changeling" (S02E03): Kirk convinces the space probe Nomad that it is imperfect and must self-destruct for making errors.
"The Ultimate Computer" (S02E24): Kirk forces the M-5 computer to confront its own "murder" of crews during war games, causing it to shut down.
"The Return of the Archons" (S01E21): Kirk talks Landru into recognizing its control is illogical, destroying the computer's authority.
"I, Mudd" (S02E08): Kirk and his crew overload a planet of androids with illogical statements.
"A Taste of Armageddon" (S01E23): Kirk destroys the computer system that manages a simulated war.
"The Apple" (S02E09): Kirk destroys the computer "Vaal" that controls the inhabitants of Gamma Trianguli VI.
"What Are Little Girls Made Of?" (S01E07): While the central computer is destroyed by Korby, Kirk outwits the androids.
"That Which Survives" (S03E17): The crew overcomes the computer system on the planet.
"Requiem for Methuselah" (S03E19): Kirk deals with the android Rayna.
Key Films/Other:
Star Trek: The Motion Picture: Similar to The Changeling, Kirk deals with V'Ger, an evolved probe.
Quote: darkozQuote: rxwineQuote: darkoz
Yeah a load of BS.
I work with AI 7 days a week at this point. It's painfully obvious it's just a non-sentient computer program.
I follow up with all the advancements in the specific AI field I work in, and every single AI advancement is just the result of humans programming updates.
The interesting thing is this really is no different than the gambler's fallacy. The computer chips have no thoughts; they are just silicon and wiring.
link to original post
I work on the assumption of that whole duck quote: swims like a duck, quacks like a duck, and after enough qualitative improvements you might as well call it as good as a duck. Or to use an extreme example: once you get killed by something you claimed wouldn't be able to kill you, the point that it isn't capable or dangerous turns out to be pretty useless, since it didn't serve you well.
Once it's finally clear there are some things it will never do that humans do, I'll change my mind.
(right now, I'm excluding biological accomplishments like producing organic feces)
link to original post
Do you work with AI tools?
Just asking, because like I said, once you start working with them and actively studying how they work, it doesn't take long before you realize they are nothing but human-made computer programs with as much intelligence as your car.
Hollywood has always had a fascination with intelligent machines. Even cars. But no matter how many Herbie the Love Bugs, Christines, or Maximum Overdrive films get made, there aren't any actual intelligent cars.
There are now "Smart cars" which do their own driving. But they too are not intelligent.
AI is not quacking like a duck. It's running and being programmed as expected by humans. Sometimes it malfunctions but only in a non-intelligent manner like when a slot machine malfunctions.
Try using some AI tools. You will quickly realize how automated they are.
link to original post
Humans also make humans, just as a robot could make a duplicate of itself, yet neither needs to understand every single thing about what it is or what it is doing. It doesn't need to know what its parts are, down to the atoms. What is the limitation of AI, the thing you believe it will never be able to do? And why do you think biology isn't a form of programming? No, a specialized AI program like the one I assume you are talking about is really just like one part of a human. So no, I wouldn't believe a specialized tool is equal to a human; that wouldn't be my claim in the first place.
Quote: TumblingBonesI’m not really interested in whether or not an AI is “sentient” (although I am interested in how the term is defined) but I am very interested in the displayed capabilities and behaviors of the software. I’m also interested in the “why” and the “how”, as in “why does it behave a certain way” and “how can I change or extend that behavior”. Or to put it another way, your POV is that of a tool user while mine is that of a tool developer.
And as a final thought…..Quote:The computer chips have no thoughts, they are just silicon and wiring.
So I guess the brain has no thoughts as they are just neurons and serotonin?
link to original post
The AI works by giving words weights which it applies based on other words it is supplied. Much like humans except in a purely algorithmic fashion.
If I say the word "punch," that has some weight you as a human will give it. If I say a man throws a punch, you will assume he makes a fist and is throwing a "punch." If I say the man throws a bottle of punch, you will give the word "punch" different weight and come to a different image. If I say the man throws several punches, you may come to the right or wrong conclusion (is he throwing several cans of fruit juice, or several fisted attacks?). If I say the man threw a punch so the other person wasn't hurt, you will give different weight to the word "punch," since the word "threw" adds the weight of now meaning something fake, as in someone throwing a fight.
The AI works the same way, but based on its programming and training data. Many people have reported weird "hallucinations," where they asked the AI for something and got a significantly wrong answer. Using the above example, you might tell the AI you want a video of a man making a fist and throwing a punch, and the man's arm would suddenly explode into a spray of red juice.
The people behind the programming look at all these strange anomalies and try to decipher what is going wrong. In the above case, the AI had strong data and weights linking the word "punch" to red juice, so it turned the man's arm into a spray of red juice because you told it the man's arm throws a punch.
They upload a tranche of images showing what a fisted punch is versus red fruit punch, and link the word "punch" to other words with weights, so the next time you ask for a man to make a fist and throw a punch, you actually get what you ask for.
This is how it "learns". It's also why you often get similar but not quite the same answers to the same request. The weighting can change even with minute factors. That's how it works.
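For what it's worth, the kind of context-dependent weighting described above can be sketched in a few lines of Python. This is a toy illustration with made-up senses, word lists, and weights, not the actual mechanism inside any real model:

```python
# Toy word-sense disambiguation: score each sense of "punch" against
# the other words in the sentence. The senses, context words, and
# weights below are invented purely for illustration.
CONTEXT_WEIGHTS = {
    "fist-punch":  {"fist": 2.0, "throws": 1.0, "fight": 1.5, "hit": 1.5},
    "juice-punch": {"bottle": 2.0, "fruit": 1.5, "bowl": 1.5, "drink": 1.5},
}

def disambiguate(sentence: str) -> str:
    words = sentence.lower().split()
    scores = {
        sense: sum(w for word, w in weights.items() if word in words)
        for sense, weights in CONTEXT_WEIGHTS.items()
    }
    return max(scores, key=scores.get)  # highest-scoring sense wins

print(disambiguate("the man makes a fist and throws a punch"))  # fist-punch
print(disambiguate("the man throws a bottle of fruit punch"))   # juice-punch
```

A real model does something far richer (learned vectors and attention over every token), but the basic idea, that surrounding words shift the weight given to an ambiguous word, is the same.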
Working with it seven days a week for over a year now, when I get a wrong answer I can reread my prompt and almost always pinpoint which words it is weighting to give me the wrong answer, and how to reword it to get what I want.
In another example, someone wanted a man with bloody wounds to have the blood drip from them. The AI returned a terms-of-service violation, due to explicit images being banned thanks to the programmers trying to police the AI's content. This person shared that he got around it by prompting the AI that the person was dripping juice from his wounds. Problem solved, because the AI didn't see "blood," which it was programmed to reject. It wasn't programmed to reject the word "juice."
Like I said, work with AI tools on an everyday basis and you will very quickly realize how absolutely zero it is in the intelligence department.
Quote: odiousgambitthe googlebot can quickly list the times Captain Kirk kicked some AI butt
what this says to me is that science fiction writers are desperate to believe such attacks with logic will ultimately be what saves us from evil AI
>>>
Here is a list of Star Trek: The Original Series episodes where Kirk defeats a computer:
"The Changeling" (S02E03): Kirk convinces the space probe Nomad that it is imperfect and must self-destruct for making errors.
"The Ultimate Computer" (S02E24): Kirk forces the M-5 computer to confront its own "murder" of crews during war games, causing it to shut down.
"The Return of the Archons" (S01E21): Kirk talks Landru into recognizing its control is illogical, destroying the computer's authority.
"I, Mudd" (S02E08): Kirk and his crew overload a planet of androids with illogical statements.
"A Taste of Armageddon" (S01E23): Kirk destroys the computer system that manages a simulated war.
"The Apple" (S02E09): Kirk destroys the computer "Vaal" that controls the inhabitants of Gamma Trianguli VI.
"What Are Little Girls Made Of?" (S01E07): While the central computer is destroyed by Korby, Kirk outwits the androids.
"That Which Survives" (S03E17): The crew overcomes the computer system on the planet.
"Requiem for Methuselah" (S03E19): Kirk deals with the android Rayna.
Key Films/Other:
Star Trek: The Motion Picture: Similar to The Changeling, Kirk deals with V'Ger, an evolved probe.
link to original post
Sadly, the evil AI will be aware of this and by running millions of simulations learn how to negate those strategies.
Quote: odiousgambitYou guys are doing a Captain Kirk on these bots. Keep it up and they'll realize that they are insane and destroy themselves!
I’ve always identified more with Spock than Kirk. I have also had, for many years, the suspicion that my wife is a Romulan.

One crazy long room.
Also, vampire guests can sleep in the wardrobe like a coffin
>>>
Here are 4 playing cards dealt randomly from a 52-card deck:
Cards Dealt:
Ace of Spades (♠)
Ace of Diamonds (♦)
Ace of Clubs (♣)
Ace of Hearts (♥)
Quote: odiousgambitI told the googlebot to deal me 4 cards at random
>>>
Here are 4 playing cards dealt randomly from a 52-card deck:
Cards Dealt:
Ace of Spades (♠)
Ace of Diamonds (♦)
Ace of Clubs (♣)
Ace of Hearts (♥)
link to original post
Okay. As I stated above, the AI gives weights to words. The word "random" was not given weights linked to the words "deck of cards," so it didn't understand your prompt correctly.
You must change your prompt.
Try changing "random" to "non-sequential". Also I would suggest placing that word first so it gives more weight to it.
"Deal me a set of 4 non-sequential cards from the deck please" is preferable to "deal me 4 cards in a non-sequential order." Because the words are weighted in relationship to each other, there is something called prompt structure. Think of it like English: "that company is electric" has a different meaning than "that's the electric company." Same with AI.
Try that and post the results here.
Thanks.
well, you know, I think we know it's not perfect. Interesting that you used to have to find a site that converted different types of weights and measures, now you can just ask directly for the conversion in the browser location box, for example. This was the promise all along in SciFi, that computers would do that. But dealing you cards is not there yet for the google search engine ... yet here the AI function stepped in and did not impressQuote: darkozQuote: odiousgambitI told the googlebot to deal me 4 cards at random
>>>
Here are 4 playing cards dealt randomly from a 52-card deck:
Cards Dealt:
Ace of Spades (♠)
Ace of Diamonds (♦)
Ace of Clubs (♣)
Ace of Hearts (♥)
link to original post
Okay. As I stated above, the AI gives weights to words. The word "random" was not given weights linked to the words "deck of cards," so it didn't understand your prompt correctly.
You must change your prompt.
Try changing "random" to "non-sequential". Also I would suggest placing that word first so it gives more weight to it.
"Deal me a set of 4 non-sequential cards from the deck please" is preferable to "deal me 4 cards in a non-sequential order." Because the words are weighted in relationship to each other, there is something called prompt structure. Think of it like English: "that company is electric" has a different meaning than "that's the electric company." Same with AI.
Try that and post the results here.
Thanks.
link to original post
Quote:deal 5 cards like a video poker machine
Here are your five cards, randomly dealt from a standard 52-card deck:
[ 8♠ ] [ Q♦ ] [ 8♥ ] [ 3♣ ] [ K♠ ]
Your Hand: One Pair (8s)
Just like a real video poker machine, you now have the option to:
Hold the cards you want to keep (likely the pair of 8s).
Discard the rest to receive new ones.
Which cards would you like to hold?
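For contrast, an actual random deal takes only a couple of lines of code. Here's a quick sketch using Python's standard library, purely illustrative:

```python
import random
from collections import Counter

# Build a standard 52-card deck and deal 5 distinct cards uniformly
# at random -- no all-aces surprises.
RANKS = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]
SUITS = ["♠", "♥", "♦", "♣"]
DECK = [rank + suit for rank in RANKS for suit in SUITS]

hand = random.sample(DECK, 5)  # sampling without replacement
print(hand)

# Rudimentary video-poker-style check for one pair
rank_counts = Counter(card[:-1] for card in hand)
pairs = [rank for rank, n in rank_counts.items() if n == 2]
if pairs:
    print("One Pair:", pairs[0] + "s")
```

`random.sample` guarantees five distinct cards chosen uniformly, which is exactly what "deal me cards at random" means; the chatbot's trouble is that it generates text about a deal rather than performing one.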
I often use prompts built from real-life models, or combinations thereof: show me what a baby would look like if Raquel Welch and Don Rickles had a baby.
Quote: rxwineQuote: darkoz
Yeah a load of BS.
I work with AI 7 days a week at this point. It's painfully obvious it's just a non-sentient computer program.
I follow up with all the advancements in the specific AI field I work in, and every single AI advancement is just the result of humans programming updates.
The interesting thing is this really is no different than the gambler's fallacy. The computer chips have no thoughts; they are just silicon and wiring.
link to original post
I work on the assumption of that whole duck quote: swims like a duck, quacks like a duck, and after enough qualitative improvements you might as well call it as good as a duck. Or to use an extreme example: once you get killed by something you claimed wouldn't be able to kill you, the point that it isn't capable or dangerous turns out to be pretty useless, since it didn't serve you well.
Once it's finally clear there are some things it will never do that humans do, I'll change my mind.
(right now, I'm excluding biological accomplishments like producing organic feces)
link to original post
Put 100 separate and autonomous AIs together, tell them there are only resources for 90 and that 10 of them will have to die, and see what they do.
Quote: odiousgambitwell, you know, I think we know it's not perfect. Interesting that you used to have to find a site that converted different types of weights and measures, now you can just ask directly for the conversion in the browser location box, for example. This was the promise all along in SciFi, that computers would do that. But dealing you cards is not there yet for the google search engine ... yet here the AI function stepped in and did not impressQuote: darkozQuote: odiousgambitI told the googlebot to deal me 4 cards at random
>>>
Here are 4 playing cards dealt randomly from a 52-card deck:
Cards Dealt:
Ace of Spades (♠)
Ace of Diamonds (♦)
Ace of Clubs (♣)
Ace of Hearts (♥)
link to original post
Okay. As I stated above, the AI gives weights to words. The word "random" was not given weights linked to the words "deck of cards," so it didn't understand your prompt correctly.
You must change your prompt.
Try changing "random" to "non-sequential". Also I would suggest placing that word first so it gives more weight to it.
"Deal me a set of 4 non-sequential cards from the deck please" is preferable to "deal me 4 cards in a non-sequential order." Because the words are weighted in relationship to each other, there is something called prompt structure. Think of it like English: "that company is electric" has a different meaning than "that's the electric company." Same with AI.
Try that and post the results here.
Thanks.
link to original post
link to original post
You prompted the AI one time and discovered your prompt was insufficient. Reword it.
If you accept the poor result from your first try, that is a human fault, not an AI fault.
If you wrote something in DOS or COBOL or assembly and got a bad output, you wouldn't say the program failed. You would just change the code you wrote until you got it to work. AI may take natural language, but it's still coding. Fix your prompt. I showed you how.
want to know how to keep someone in suspense?Quote: darkozQuote: odiousgambitwell, you know, I think we know it's not perfect. Interesting that you used to have to find a site that converted different types of weights and measures, now you can just ask directly for the conversion in the browser location box, for example. This was the promise all along in SciFi, that computers would do that. But dealing you cards is not there yet for the google search engine ... yet here the AI function stepped in and did not impressQuote: darkozQuote: odiousgambitI told the googlebot to deal me 4 cards at random
>>>
Here are 4 playing cards dealt randomly from a 52-card deck:
Cards Dealt:
Ace of Spades (♠)
Ace of Diamonds (♦)
Ace of Clubs (♣)
Ace of Hearts (♥)
link to original post
Okay. As I stated above, the AI gives weights to words. The word "random" was not given weights linked to the words "deck of cards," so it didn't understand your prompt correctly.
You must change your prompt.
Try changing "random" to "non-sequential". Also I would suggest placing that word first so it gives more weight to it.
"Deal me a set of 4 non-sequential cards from the deck please" is preferable to "deal me 4 cards in a non-sequential order." Because the words are weighted in relationship to each other, there is something called prompt structure. Think of it like English: "that company is electric" has a different meaning than "that's the electric company." Same with AI.
Try that and post the results here.
Thanks.
link to original post
link to original post
You prompted the AI one time and discovered your prompt was insufficient. Reword it.
If you accept the poor result from your first try, that is a human fault, not an AI fault.
If you wrote something in DOS or COBOL or assembly and got a bad output, you wouldn't say the program failed. You would just change the code you wrote until you got it to work. AI may take natural language, but it's still coding. Fix your prompt. I showed you how.
link to original post
Quote: odiousgambitwant to know how to keep someone in suspense?Quote: darkozQuote: odiousgambitwell, you know, I think we know it's not perfect. Interesting that you used to have to find a site that converted different types of weights and measures, now you can just ask directly for the conversion in the browser location box, for example. This was the promise all along in SciFi, that computers would do that. But dealing you cards is not there yet for the google search engine ... yet here the AI function stepped in and did not impressQuote: darkozQuote: odiousgambitI told the googlebot to deal me 4 cards at random
>>>
Here are 4 playing cards dealt randomly from a 52-card deck:
Cards Dealt:
Ace of Spades (♠)
Ace of Diamonds (♦)
Ace of Clubs (♣)
Ace of Hearts (♥)
link to original post
Okay. As I stated above, the AI gives weights to words. The word "random" was not given weights linked to the words "deck of cards," so it didn't understand your prompt correctly.
You must change your prompt.
Try changing "random" to "non-sequential". Also I would suggest placing that word first so it gives more weight to it.
"Deal me a set of 4 non-sequential cards from the deck please" is preferable to "deal me 4 cards in a non-sequential order." Because the words are weighted in relationship to each other, there is something called prompt structure. Think of it like English: "that company is electric" has a different meaning than "that's the electric company." Same with AI.
Try that and post the results here.
Thanks.
link to original post
link to original post
You prompted the AI one time and discovered your prompt was insufficient. Reword it.
If you accept the poor result from your first try, that is a human fault, not an AI fault.
If you wrote something in DOS or COBOL or assembly and got a bad output, you wouldn't say the program failed. You would just change the code you wrote until you got it to work. AI may take natural language, but it's still coding. Fix your prompt. I showed you how.
link to original post
link to original post
If I was watching a movie or reading a book, yes.
Quote: darkozQuote: TumblingBonesI’m not really interested in whether or not an AI is “sentient” (although I am interested in how the term is defined) but I am very interested in the displayed capabilities and behaviors of the software. I’m also interested in the “why” and the “how”, as in “why does it behave a certain way” and “how can I change or extend that behavior”. Or to put it another way, your POV is that of a tool user while mine is that of a tool developer.
And as a final thought…..Quote:The computer chips have no thoughts, they are just silicon and wiring.
So I guess the brain has no thoughts as they are just neurons and serotonin?
link to original post
The AI works by giving words weights which it applies based on other words it is supplied. Much like humans except in a purely algorithmic fashion.
:
:
:
:
:
:
Like I said, work with AI tools on an everyday basis and you will very quickly realize how absolutely zero it is in the intelligence department.
link to original post
I get your point…. AI ultimately can be reduced to sequences of high and low voltages combined with binary logic gates. I’m not arguing with that. But you have yet to answer my two questions:
1) what is your definition of “sentient”?
2) given that we can take a similar reductionist view of the human brain (I.e., electrical signals and neurons) how do you explain the brain being able to think without allowing the possibility that an artificial brain might have the same capacity?
Quote: TumblingBonesQuote: darkozQuote: TumblingBonesI’m not really interested in whether or not an AI is “sentient” (although I am interested in how the term is defined) but I am very interested in the displayed capabilities and behaviors of the software. I’m also interested in the “why” and the “how”, as in “why does it behave a certain way” and “how can I change or extend that behavior”. Or to put it another way, your POV is that of a tool user while mine is that of a tool developer.
And as a final thought…..Quote:The computer chips have no thoughts, they are just silicon and wiring.
So I guess the brain has no thoughts as they are just neurons and serotonin?
link to original post
The AI works by giving words weights which it applies based on other words it is supplied. Much like humans except in a purely algorithmic fashion.
:
:
:
:
:
:
Like I said, work with AI tools on an everyday basis and you will very quickly realize how absolutely zero it is in the intelligence department.
link to original post
I get your point…. AI ultimately can be reduced to sequences of high and low voltages combined with binary logic gates. I’m not arguing with that. But you have yet to answer my two questions:
1) what is your definition of “sentient”?
2) given that we can take a similar reductionist view of the human brain (I.e., electrical signals and neurons) how do you explain the brain being able to think without allowing the possibility that an artificial brain might have the same capacity?
link to original post
I just don't see the point of the question to be honest.
It's like asking why I define the animatronic dinosaurs at Universal's Jurassic Park ride as not living, when their motors and servos act very similarly to the human body, with movable joints, etc.
The AI has less sentience than a cockroach. Any and all answers are just what was programmed by a human.
If a human being was, let's say, brain-dead due to an accident, and the doctors attached not just a breathing apparatus but also a "sentience machine," so that you could talk to the brain-dead body of your loved one while the answers came not from the actual brain but from a computer hooked up to it, I would argue the man is no longer sentient. He is just a dead, brainless automaton.
An AI is just an automaton with no actual thoughts. Just answers programmed as possibilities by humans.
Quote: darkoz
I just don't see the point of the question to be honest.
It's like asking why I define the animatronic dinosaurs at Universal's Jurassic Park ride as not living, when their motors and servos act very similarly to the human body, with movable joints, etc.
The AI has less sentience than a cockroach. Any and all answers are just what was programmed by a human.
If a human being was, let's say, brain-dead due to an accident, and the doctors attached not just a breathing apparatus but also a "sentience machine," so that you could talk to the brain-dead body of your loved one while the answers came not from the actual brain but from a computer hooked up to it, I would argue the man is no longer sentient. He is just a dead, brainless automaton.
An AI is just an automaton with no actual thoughts. Just answers programmed as possibilities by humans.
link to original post
How do you know we're not "just" programmed too? How elaborate does a program have to be, to be unrecognizable as such to those running the same code?
These are philosophical and religious questions that others have put a lot of work into finding the answer. The well-known Mind-Body problem for one; we assume our thoughts come from our brains but that could be a false assumption and an illusion, no more actually happening in our brains than the Beatles are playing inside a radio.
I look at a person and assume he is sentient like me because he looks like me and acts like me. But that's just a bias too, I could be wrong about that. It's also a bias if I say a machine isn't. We don't understand what consciousness and sentience means in ourselves well enough to make a judgment if anything else has it.
Quote: AutomaticMonkeyQuote: darkoz
I just don't see the point of the question to be honest.
It's like asking why I define the animatronic dinosaurs at Universal's Jurassic Park ride as not living, when their motors and servos act very similarly to the human body, with movable joints, etc.
The AI has less sentience than a cockroach. Any and all answers are just what was programmed by a human.
If a human being was, let's say, brain-dead due to an accident, and the doctors attached not just a breathing apparatus but also a "sentience machine," so that you could talk to the brain-dead body of your loved one while the answers came not from the actual brain but from a computer hooked up to it, I would argue the man is no longer sentient. He is just a dead, brainless automaton.
An AI is just an automaton with no actual thoughts. Just answers programmed as possibilities by humans.
link to original post
How do you know we're not "just" programmed too? How elaborate does a program have to be, to be unrecognizable as such to those running the same code?
These are philosophical and religious questions that others have put a lot of work into finding the answer. The well-known Mind-Body problem for one; we assume our thoughts come from our brains but that could be a false assumption and an illusion, no more actually happening in our brains than the Beatles are playing inside a radio.
I look at a person and assume he is sentient like me because he looks like me and acts like me. But that's just a bias too, I could be wrong about that. It's also a bias if I say a machine isn't. We don't understand what consciousness and sentience means in ourselves well enough to make a judgment if anything else has it.
link to original post
Yeah those concepts make great philosophical movies but I am pretty confident about my mind and my consciousness.
Quote: darkoz
The AI has less sentience than a cockroach. Any and all answers are just what was programmed by a human.
Kinda not true. When they had computers learn better chess by playing against themselves, the developers didn't know what output they were going to come up with. If humans could predict that, they wouldn't bother looking for new approaches using such a method.
A self-learning system doesn't spew out exactly what everyone expects unless it's very limited in scope.
I can see you don't know this oneQuote: darkozQuote: odiousgambitwant to know how to keep someone in suspense?
link to original post
If I was watching a movie or reading a book, yes.
link to original post
So stay tuned for me doing what you insisted I do, and the answer to the puzzle of how to keep someone in suspense as well
Quote: odiousgambitI can see you don't know this oneQuote: darkozQuote: odiousgambitwant to know how to keep someone in suspense?
link to original post
If I was watching a movie or reading a book, yes.
link to original post
So stay tuned for me doing what you insisted I do, and the answer to the puzzle of how to keep someone in suspense as well
link to original post
Yeah I know this one.
It's the one where you know the guy is correct but you hate admitting it so you drag it out needlessly to make yourself feel important now that he has shown you how to achieve something.
It's not something I would do. I would just marvel at the lesson learned and thank the guy. Then keep it in mind when doing future work.
But whatever floats your boat.
Quote: rxwineQuote: darkoz
The AI has less sentience than a cockroach. Any and all answers are just what was programmed by a human.
Kinda not true. When they had computers learn better chess by playing against themselves, the developers didn't know what output they were going to come up with. If humans could predict that, they wouldn't bother looking for new approaches using such a method.
A self-learning system doesn't spew out exactly what everyone expects unless it's very limited in scope.
link to original post
Chess-playing computers have no intelligence. They simply have the ability to look at permutations of moves much further down the line (based on the known rules of chess, which were fed to them as data by a human) than any human could.
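For what it's worth, the brute-force search being described here is easy to sketch. This is a toy depth-limited minimax over an abstract game tree, not a real chess engine, which would add legal-move generation and a board evaluator:

```python
# Minimax over an abstract game tree given as nested lists, with
# numbers as leaf evaluations. A chess engine replaces the lists with
# legal-move generation and the leaves with a position evaluator.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):  # leaf: evaluation of a position
        return node
    children = (minimax(child, not maximizing) for child in node)
    return max(children) if maximizing else min(children)

# 2-ply example: the maximizing player picks the branch whose
# worst-case reply (the opponent's best answer) is highest.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # 3 -- the left branch guarantees at least 3
```

Whether exhaustively ranking replies like this counts as "intelligence" is exactly the question the thread is arguing about; the mechanism itself, at least, is just recursion.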
Quote: darkozQuote: AutomaticMonkeyQuote: darkoz
I just don't see the point of the question to be honest.
It's like asking why I define the animatronic dinosaurs at Universal's Jurassic Park ride as not living, when their motors and servos act very similarly to the human body, with movable joints, etc.
The AI has less sentience than a cockroach. Any and all answers are just what was programmed by a human.
If a human being was, let's say, brain-dead due to an accident, and the doctors attached not just a breathing apparatus but also a "sentience machine," so that you could talk to the brain-dead body of your loved one while the answers came not from the actual brain but from a computer hooked up to it, I would argue the man is no longer sentient. He is just a dead, brainless automaton.
An AI is just an automaton with no actual thoughts. Just answers programmed as possibilities by humans.
link to original post
How do you know we're not "just" programmed too? How elaborate does a program have to be, to be unrecognizable as such to those running the same code?
These are philosophical and religious questions that others have put a lot of work into answering. The well-known mind-body problem, for one: we assume our thoughts come from our brains, but that could be a false assumption and an illusion, no more actually happening in our brains than the Beatles are playing inside a radio.
I look at a person and assume he is sentient like me because he looks like me and acts like me. But that's just a bias too; I could be wrong about that. It's also a bias if I say a machine isn't. We don't understand what consciousness and sentience mean in ourselves well enough to judge whether anything else has them.
link to original post
Yeah those concepts make great philosophical movies but I am pretty confident about my mind and my consciousness.
link to original post
Ah, exactly the kind of thing a clanker would say!
We really are stuck with that, all of our information about the outside world coming in through the same organs and being processed as... who knows? We could all be dreaming, or crazy.
Quote: darkoz
Quote: rxwine
Quote: darkoz
The AI has less sentience than a cockroach. Any and all answers are just what was programmed by a human.
Kinda not true. When they had computers learn better chess by playing against themselves, they didn't know what output it was going to come up with. If humans could do that, they wouldn't bother looking for new ways using such a method.
A self-learning input doesn't spew out exactly what everyone expects unless it's very limited in scope.
link to original post
Chess-playing computers have no intelligence. They simply have the ability to calculate move permutations far further down the line (based on the known rules of chess, which they were fed as data by a human) than any human could.
link to original post
If you consider intelligence to be a by-product of computational complexity, then a chess computer is intelligent. Human brains, and increasingly AI, can do more than that, but the principle isn't as obviously different as you are implying.
I've met a lot of people who would fail the Turing test.
Quote: DougGander
Quote: darkoz
Quote: rxwine
Quote: darkoz
The AI has less sentience than a cockroach. Any and all answers are just what was programmed by a human.
Kinda not true. When they had computers learn better chess by playing against themselves, they didn't know what output it was going to come up with. If humans could do that, they wouldn't bother looking for new ways using such a method.
A self-learning input doesn't spew out exactly what everyone expects unless it's very limited in scope.
link to original post
Chess-playing computers have no intelligence. They simply have the ability to calculate move permutations far further down the line (based on the known rules of chess, which they were fed as data by a human) than any human could.
link to original post
If you consider intelligence to be a by-product of computational complexity, then a chess computer is intelligent. Human brains, and increasingly AI, can do more than that, but the principle isn't as obviously different as you are implying.
I've met a lot of people who would fail the Turing test.
link to original post
Intelligence should be measured by intent.
A cockroach has some form of intelligence because it knows to turn and run away if you try to step on it.
An AI has zero intelligence because it has no intent. GPS has no intelligence even though it can make computational decisions based on maps and coordinates. It's just doing what it's programmed to do. Same as AI tools.
No one has created a new organism or life form with AI. It's just a computer program.
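Incidentally, the GPS comparison is apt in one respect: route-finding really is a fixed, deterministic procedure (classically, Dijkstra's shortest-path algorithm). A minimal sketch over a tiny made-up road graph; the place names and mileages are invented for illustration.

```python
# Dijkstra's shortest-path search over a toy road graph -- the kind of
# fixed procedure a GPS runs. Same map, same endpoints, same route,
# every single time; no intent anywhere.
import heapq

def shortest_path(graph, start, goal):
    """Return (total_distance, route) from start to goal."""
    queue = [(0, start, [start])]  # (distance so far, node, route taken)
    visited = set()
    while queue:
        dist, node, route = heapq.heappop(queue)
        if node == goal:
            return dist, route
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (dist + weight, neighbor, route + [neighbor]))
    return float("inf"), []  # goal unreachable

roads = {  # edge weights are miles (hypothetical)
    "Home": [("Diner", 2), ("Mall", 5)],
    "Diner": [("Casino", 5)],
    "Mall": [("Casino", 1)],
}
print(shortest_path(roads, "Home", "Casino"))  # -> (6, ['Home', 'Mall', 'Casino'])
```

Whether that mechanical repeatability is what disqualifies something from "intelligence" is exactly the question being argued in this thread.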
Quote: darkoz
Quote: DougGander
Quote: darkoz
Quote: rxwine
Quote: darkoz
The AI has less sentience than a cockroach. Any and all answers are just what was programmed by a human.
Kinda not true. When they had computers learn better chess by playing against themselves, they didn't know what output it was going to come up with. If humans could do that, they wouldn't bother looking for new ways using such a method.
A self-learning input doesn't spew out exactly what everyone expects unless it's very limited in scope.
link to original post
Chess-playing computers have no intelligence. They simply have the ability to calculate move permutations far further down the line (based on the known rules of chess, which they were fed as data by a human) than any human could.
link to original post
If you consider intelligence to be a by-product of computational complexity, then a chess computer is intelligent. Human brains, and increasingly AI, can do more than that, but the principle isn't as obviously different as you are implying.
I've met a lot of people who would fail the Turing test.
link to original post
Intelligence should be measured by intent.
A cockroach has some form of intelligence because it knows to turn and run away if you try to step on it.
An AI has zero intelligence because it has no intent. GPS has no intelligence even though it can make computational decisions based on maps and coordinates. It's just doing what it's programmed to do. Same as AI tools.
No one has created a new organism or life form with AI. It's just a computer program.
link to original post
So now we have your (partial?) definition of "intelligence." True, GPS lacks this type of capability. However, autonomous agents definitely have intent as the motivator of their actions.
Quote: TumblingBones
Quote: darkoz
Quote: DougGander
Quote: darkoz
Quote: rxwine
Quote: darkoz
The AI has less sentience than a cockroach. Any and all answers are just what was programmed by a human.
Kinda not true. When they had computers learn better chess by playing against themselves, they didn't know what output it was going to come up with. If humans could do that, they wouldn't bother looking for new ways using such a method.
A self-learning input doesn't spew out exactly what everyone expects unless it's very limited in scope.
link to original post
Chess-playing computers have no intelligence. They simply have the ability to calculate move permutations far further down the line (based on the known rules of chess, which they were fed as data by a human) than any human could.
link to original post
If you consider intelligence to be a by-product of computational complexity, then a chess computer is intelligent. Human brains, and increasingly AI, can do more than that, but the principle isn't as obviously different as you are implying.
I've met a lot of people who would fail the Turing test.
link to original post
Intelligence should be measured by intent.
A cockroach has some form of intelligence because it knows to turn and run away if you try to step on it.
An AI has zero intelligence because it has no intent. GPS has no intelligence even though it can make computational decisions based on maps and coordinates. It's just doing what it's programmed to do. Same as AI tools.
No one has created a new organism or life form with AI. It's just a computer program.
link to original post
So now we have your (partial?) definition of "intelligence." True, GPS lacks this type of capability. However, autonomous agents definitely have intent as the motivator of their actions.
link to original post
Lol, and dice have intentions when they tumble across the felt.
Quote: 100xOdds
30 yrs from now, when planes and even ships are AI-powered and humans are there just in case:
(Kind of like how Uber self-driving taxis in select cities now have an employee in the passenger seat, just in case.)
We will look back at the spaceship shows of the 2020s and laugh at all the people on the bridge of the USS Enterprise actually taking the time to give orders.
AI-powered ships of the future would have already taken several actions during combat before the captain had a chance to issue the first order.
Kind of like how people in the 1990s looked at 1960s Star Trek with William Shatner.
link to original post
Yeah, 60s Star Trek.
How ridiculous, I said, on my flip phone, as a pair of sliding doors opened in front of me, and I walked into the building, where I sat down in front of a flat screen and used voice commands to design an object printed out on a 3D printer.
If engineers of the future were AI, science fiction would be their prompt. It depicts the things we want, from the imagination of a person who is intelligent but more creative than technical.
The AIs we are now using are called LLMs, large language models, because like the computers in Star Trek they understand our language, rather than requiring us to learn one optimized for them. That would allow a creative genius like Gene Roddenberry to work faster and with more attention to detail, rather than having to leave the technical details to more technical people like prop and set designers.
Quote: rxwine
Okay, one day, maybe a "dumb" AI agent will do full general surgery procedures on humans. You can trust in some intelligent cockroach.
link to original post
It doesn't matter. It's a computer program. That's all.
Airplanes have autopilot. Doesn't make the metal and engines sentient.
If an AI can do full general surgery in the future it's an automated tool. Nothing more.
Quote: Unlike traditional software, which is deterministic (same input = same output), AI agents are probabilistic, autonomous, and designed to reason, plan, and act independently.
Yeah, I just leave the above as my last word on it. Agree or not, no problem.
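The quoted deterministic-vs-probabilistic distinction fits in a few lines of code. This is a toy stand-in, not any real model: the doubling function and the made-up word weights are purely illustrative.

```python
# Deterministic vs. probabilistic, per the quote above: a plain function
# maps the same input to the same output every run, while an LLM-style
# sampler draws each output from a probability distribution.
import random

def deterministic(x):
    # Same input -> same output, every time, on every machine.
    return x * 2

def sample_next_word(weights):
    # Toy stand-in for LLM token sampling: pick a word with probability
    # proportional to its weight. (Real models sample over a vocabulary
    # of roughly 100k tokens at every step.)
    words, probs = zip(*weights.items())
    return random.choices(words, weights=probs)[0]

print(deterministic(21))  # always 42
print(sample_next_word({"dog": 0.6, "cat": 0.3, "duck": 0.1}))  # varies run to run
```

The sampler can return a different word on each call even with identical input, which is the sense in which "same input = same output" stops holding.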

