(at least as French as this guy)
What needs to be added to the AI request is information about skilled detection avoidance. Then ask the AI to regenerate with that extra information.
The new output should delineate the mathematically correct moves along with the proper times to act.
It will probably miss some nuances that a skilled human hole-carder knows from experience, but again, it only answers from what it has been trained on.
I should note that, from the answers supplied by both Hunterhill and AitomaticMonkey, they learned not to take advantage of this particular situation by actually trying it. So, like the AI, the correct move needed to be tempered with other information to get it right.
Bill Ryan's assessment that anyone who does that is beyond stupid would seem to be directed at them, considering they both admit to having done it at some point?
Quote: billryan
In Vegas, one has to deal with the Hole Card Mafia.
If you think slot vultures are bad, try running afoul of the LVHCM.
link to original post
Oh yeah, I heard about that. Stupid and talentless, all of that type of play. It's never good to make enemies in a casino. If you end up rolling around on the floor with somebody, you're both going to get banned for real, you're both going to be a guest of the county, and you'll possibly lose whatever cash you have on you. Where's the +EV in that?
Quote: darkoz
What the AI was asked was to provide a proper strategy for hole carding in diagram form. It correctly did so.
What needs to be added to the AI request is information about skilled detection avoidance. Then ask the AI to regenerate with that extra information.
The new output should delineate the mathematically correct moves along with the proper times to act.
It won't be able to do that. AI essentially draws from its training data, which is mostly stolen off public data networks (Reddit, etc.).
Most hole-card players, even the ones that think they are skilled at avoiding detection, are not. The information on how to avoid detection is not public, or even on the dark web.
There are two issues here: one is avoiding trigger plays, which may initiate a more thorough investigation. That's why you avoid hitting 17; it is something generally only hole-carders do. You can only ever get away with weird plays like this with an act and play-style themed around it. It is just not viable as a practical matter most of the time.
The other is that persistent correlation with information you are not supposed to have, even with more subtle plays is VERY easy to pick up. Someone looks at a chart and they can work out with virtual certainty you have info and you can get made within 12 hands or so. This is why your dwarven LVHCM types tend to get banned all the time. The casino might not even be looking for hole-carders, it might be looking for types of cheating, but the actual play and the correlation with foreknowledge of the cards is the same.
The sad thing is that on a per-session basis it is very easy to be undetectable. Put very crudely, you deviate from basic strategy within one standard deviation of all decisions. Then you just look like a regular gambler who occasionally plays into favorable draws. This costs you surprisingly little and makes it impossible to separate you from the general public.
For this to be truly optimal you'd need to weight it according to the potential gains: plays with higher EV from basic-strategy departures should be made less frequently. Finally, as with poker, you are randomizing your decisions so that anyone trying to get a read on you can't work out what you are doing.
This is way beyond what AI can do or even understand at the moment.
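The EV-weighted, randomized-deviation idea described above can be sketched as a toy model. Everything here, the play labels, the EV numbers, and the "suspicion budget", is invented purely for illustration; this is not a real hole-carding strategy, just the shape of the weighting argument.

```python
import random

# Toy model: each deviation from basic strategy gets a made-up per-hand
# EV gain (in bet units). Labels and numbers are illustrative only.
DEVIATIONS = {
    "subtle deviation":  0.04,
    "medium deviation":  0.10,
    "blatant deviation": 0.25,
}

def deviation_probability(ev_gain, budget=0.05):
    """Take a deviation with probability inversely proportional to its
    EV gain, so conspicuous high-EV plays are made less often and the
    overall pattern of departures stays within a fixed budget."""
    return min(1.0, budget / ev_gain)

def decide(play, rng=random.random):
    """Randomize each decision so an observer can't get a read."""
    p = deviation_probability(DEVIATIONS[play])
    return "deviate" if rng() < p else "basic strategy"
```

Under this made-up budget, the subtle play is always taken, the medium one half the time, and the blatant one only once in five opportunities, which is the point: the bigger the edge a play reveals, the rarer it has to be.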
Quote: billryan
In Vegas, one has to deal with the Hole Card Mafia.
If you think slot vultures are bad, try running afoul of the LVHCM.
link to original post
Did they bite your kneecaps?
Quote: DougGander
Quote: darkoz
What the AI was asked was to provide a proper strategy for hole carding in diagram form. It correctly did so.
What needs to be added to the AI request is information about skilled detection avoidance. Then ask the AI to regenerate with that extra information.
The new output should delineate the mathematically correct moves along with the proper times to act.
It won't be able to do that. AI essentially draws from its training data, which is mostly stolen off public data networks (Reddit, etc.).
Most hole-card players, even the ones that think they are skilled at avoiding detection, are not. The information on how to avoid detection is not public, or even on the dark web.
There are two issues here: one is avoiding trigger plays, which may initiate a more thorough investigation. That's why you avoid hitting 17; it is something generally only hole-carders do. You can only ever get away with weird plays like this with an act and play-style themed around it. It is just not viable as a practical matter most of the time.
The other is that persistent correlation with information you are not supposed to have, even with more subtle plays is VERY easy to pick up. Someone looks at a chart and they can work out with virtual certainty you have info and you can get made within 12 hands or so. This is why your dwarven LVHCM types tend to get banned all the time. The casino might not even be looking for hole-carders, it might be looking for types of cheating, but the actual play and the correlation with foreknowledge of the cards is the same.
The sad thing is that on a per-session basis it is very easy to be undetectable. Put very crudely, you deviate from basic strategy within one standard deviation of all decisions. Then you just look like a regular gambler who occasionally plays into favorable draws. This costs you surprisingly little and makes it impossible to separate you from the general public.
For this to be truly optimal you'd need to weight it according to the potential gains: plays with higher EV from basic-strategy departures should be made less frequently. Finally, as with poker, you are randomizing your decisions so that anyone trying to get a read on you can't work out what you are doing.
This is way beyond what AI can do or even understand at the moment.
link to original post
I agree the info about avoiding detection isn't publicly available for training, but the newer models have a way for you to upload the training data you need. As I said, upload the parameters for the diagram to include proper detection avoidance and you should get the desired results (notwithstanding some possible hallucinations), which is why AI always requires human oversight.
I suspect you are an anti-AI person? Usually antis mention stealing when it comes to AI. I feel it's going to fall under fair use. As you mentioned, they "steal from PUBLIC data networks". The information is public. It's not theft.
Quote: darkoz
Quote: DougGander
Quote: darkoz
What the AI was asked was to provide a proper strategy for hole carding in diagram form. It correctly did so.
What needs to be added to the AI request is information about skilled detection avoidance. Then ask the AI to regenerate with that extra information.
The new output should delineate the mathematically correct moves along with the proper times to act.
It won't be able to do that. AI essentially draws from its training data, which is mostly stolen off public data networks (Reddit, etc.).
Most hole-card players, even the ones that think they are skilled at avoiding detection, are not. The information on how to avoid detection is not public, or even on the dark web.
There are two issues here: one is avoiding trigger plays, which may initiate a more thorough investigation. That's why you avoid hitting 17; it is something generally only hole-carders do. You can only ever get away with weird plays like this with an act and play-style themed around it. It is just not viable as a practical matter most of the time.
The other is that persistent correlation with information you are not supposed to have, even with more subtle plays is VERY easy to pick up. Someone looks at a chart and they can work out with virtual certainty you have info and you can get made within 12 hands or so. This is why your dwarven LVHCM types tend to get banned all the time. The casino might not even be looking for hole-carders, it might be looking for types of cheating, but the actual play and the correlation with foreknowledge of the cards is the same.
The sad thing is that on a per-session basis it is very easy to be undetectable. Put very crudely, you deviate from basic strategy within one standard deviation of all decisions. Then you just look like a regular gambler who occasionally plays into favorable draws. This costs you surprisingly little and makes it impossible to separate you from the general public.
For this to be truly optimal you'd need to weight it according to the potential gains: plays with higher EV from basic-strategy departures should be made less frequently. Finally, as with poker, you are randomizing your decisions so that anyone trying to get a read on you can't work out what you are doing.
This is way beyond what AI can do or even understand at the moment.
link to original post
I agree the info about avoiding detection isn't publicly available for training, but the newer models have a way for you to upload the training data you need. As I said, upload the parameters for the diagram to include proper detection avoidance and you should get the desired results (notwithstanding some possible hallucinations), which is why AI always requires human oversight.
I suspect you are an anti-AI person? Usually antis mention stealing when it comes to AI. I feel it's going to fall under fair use. As you mentioned, they "steal from PUBLIC data networks". The information is public. It's not theft.
link to original post
I'm pro-AI, but very much opposed to the corporations that have appropriated it. It should be open source.
Public data MAY not be theft; there are multiple court cases deciding this. Some of the cases have real teeth: Midjourney spitting out images of copyrighted Disney characters, for example. In specific use cases AI creations are clearly just reproductions of content and have no transformative properties.
As to whether you can get your software to scan YouTube/Reddit etc. for training purposes, it may be legal, but IMO it is certainly unethical. Creators made their work in the expectation that humans could enjoy it, not that it would be helping some vast corporate network that would be displacing them. If that continues without new legislation, then creators are going to produce much less content or actively sabotage it; already creators are poisoning their content so that AI that trains on it will produce junk. At the very least there needs to be an opt-out. In addition, you have concerns about bandwidth theft for the hosting sites. This whole area is a legal minefield; you can't just write it off as fair use.
Quote: DougGander
Quote: darkoz
Quote: DougGander
Quote: darkoz
What the AI was asked was to provide a proper strategy for hole carding in diagram form. It correctly did so.
What needs to be added to the AI request is information about skilled detection avoidance. Then ask the AI to regenerate with that extra information.
The new output should delineate the mathematically correct moves along with the proper times to act.
It won't be able to do that. AI essentially draws from its training data, which is mostly stolen off public data networks (Reddit, etc.).
Most hole-card players, even the ones that think they are skilled at avoiding detection, are not. The information on how to avoid detection is not public, or even on the dark web.
There are two issues here: one is avoiding trigger plays, which may initiate a more thorough investigation. That's why you avoid hitting 17; it is something generally only hole-carders do. You can only ever get away with weird plays like this with an act and play-style themed around it. It is just not viable as a practical matter most of the time.
The other is that persistent correlation with information you are not supposed to have, even with more subtle plays is VERY easy to pick up. Someone looks at a chart and they can work out with virtual certainty you have info and you can get made within 12 hands or so. This is why your dwarven LVHCM types tend to get banned all the time. The casino might not even be looking for hole-carders, it might be looking for types of cheating, but the actual play and the correlation with foreknowledge of the cards is the same.
The sad thing is that on a per-session basis it is very easy to be undetectable. Put very crudely, you deviate from basic strategy within one standard deviation of all decisions. Then you just look like a regular gambler who occasionally plays into favorable draws. This costs you surprisingly little and makes it impossible to separate you from the general public.
For this to be truly optimal you'd need to weight it according to the potential gains: plays with higher EV from basic-strategy departures should be made less frequently. Finally, as with poker, you are randomizing your decisions so that anyone trying to get a read on you can't work out what you are doing.
This is way beyond what AI can do or even understand at the moment.
link to original post
I agree the info about avoiding detection isn't publicly available for training, but the newer models have a way for you to upload the training data you need. As I said, upload the parameters for the diagram to include proper detection avoidance and you should get the desired results (notwithstanding some possible hallucinations), which is why AI always requires human oversight.
I suspect you are an anti-AI person? Usually antis mention stealing when it comes to AI. I feel it's going to fall under fair use. As you mentioned, they "steal from PUBLIC data networks". The information is public. It's not theft.
link to original post
I'm pro-AI, but very much opposed to the corporations that have appropriated it. It should be open source.
Public data MAY not be theft; there are multiple court cases deciding this. Some of the cases have real teeth: Midjourney spitting out images of copyrighted Disney characters, for example. In specific use cases AI creations are clearly just reproductions of content and have no transformative properties.
As to whether you can get your software to scan YouTube/Reddit etc. for training purposes, it may be legal, but IMO it is certainly unethical. Creators made their work in the expectation that humans could enjoy it, not that it would be helping some vast corporate network that would be displacing them. If that continues without new legislation, then creators are going to produce much less content or actively sabotage it; already creators are poisoning their content so that AI that trains on it will produce junk. At the very least there needs to be an opt-out. In addition, you have concerns about bandwidth theft for the hosting sites. This whole area is a legal minefield; you can't just write it off as fair use.
link to original post
We are in agreement on most points.
The training and the output will most likely be separated for fair use purposes with training deemed fair use. And the output will be deemed fair use if new and/or transformative. Output that clearly copies IP will be infringing.
If the court deems otherwise, you would have an artist who knows an AI model was trained on their work saying that everyone who used it is stealing their IP, arguing that they deserve to get paid even though the output is unrecognizable from their IP.
I disagree on the ethics. You put something in public, it's fair game. Ever throw something in the trash and see a homeless guy rummaging through it? Did you put it there for him? Of course not, but you put your private stuff in public. Courts have already sided with, for example, law enforcement: their ability to rummage through your garbage for clues is fair.
The Anthropic case has already swung in favor of fair use. (Anthropic just had to pay for the material behind a paywall which they stole from pirate sites).
Glaze and Nightshade are a different conversation. Total scams. They don't work. That's why you haven't heard of a single AI company announcing it has had its model poisoned and is out of business. Not to mention that Glaze and Nightshade are themselves AI tools. Yeah, anti-AI folks using AI to fight AI is richly ironic.
Anyway, that's my analysis. We probably have one to two more years before the courts bear me out.
Paying for synthesis is tricky for me, whereas sampling is much more clearly a copy of something.
Quote:
David Bowie used the "cut-up" technique, which involved taking existing texts from sources like newspapers and magazines, cutting them into pieces, and rearranging them randomly to create lyrics for several songs and albums.
This method, inspired by the writer William S. Burroughs and artist Brion Gysin, was a significant part of Bowie's creative process, helping him break habitual thought patterns and generate unexpected, surreal lyrical combinations.
I would say the human mind is a whole lot of synthesis, even if you don't consciously realize it. All that new stuff you came up with probably didn't come out of nowhere each time.
Quote: rxwine
I would argue there's a difference between sampling and synthesis. David Bowie produced a whole album using bits and pieces of magazines and newspapers to form unique lyrics. He made money from that but he didn't pay anyone. Sampling is when you can clearly make out an excerpt of something someone else made. When you can't clearly identify the actual source, it's synthesis.
Paying for synthesis is tricky for me, whereas sampling is much more clearly a copy of something.
Quote:
David Bowie used the "cut-up" technique, which involved taking existing texts from sources like newspapers and magazines, cutting them into pieces, and rearranging them randomly to create lyrics for several songs and albums.
This method, inspired by the writer William S. Burroughs and artist Brion Gysin, was a significant part of Bowie's creative process, helping him break habitual thought patterns and generate unexpected, surreal lyrical combinations.
I would say the human mind is a whole lot of synthesis, even if you don't consciously realize it. All that new stuff you came up with probably didn't come out of nowhere each time.
link to original post
Absolutely agree on subconscious synthesis.
Funny story. For years I disgusted my kids by offering to make them peanut butter and tuna fish sandwiches. I said it was the most disgusting combination my mind could come up with.
One day I sit them down to watch Freaky Friday 1976 with Jodie Foster. First five minutes in and Jodie is offered a Peanut Butter and Tuna Sandwich by a kid in the school yard.
My kids looked at me and I turned so red. I hadn't seen the film since I was a kid. I had no memory of hearing it in the film. Subconsciously I had internalized how disgusting that combo must be.
Oh well I still use the joke.
Quote: darkoz
The only point to hitting is that you already know you lost, so you try for a Hail Mary. On that 19, for example, most of the time you aren't going to pull an Ace or a deuce, so you will bust most of the time, looking like an idiot to the pit and the other players.
That's my two cents
You are 100% correct. You will last longer hitting those hands than staying on them if you know you are currently beat.
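As a quick sanity check on that "Hail Mary" point: hitting a hard 19, only an ace (counted as one) or a deuce keeps you at 21 or under. Using a rough infinite-deck approximation (ignoring actual deck composition), that is just 2 of the 13 ranks:

```python
from fractions import Fraction

# Infinite-deck approximation: each of the 13 ranks is equally likely.
# Hitting a hard 19, only an ace (as 1) or a 2 avoids a bust.
p_no_bust = Fraction(2, 13)
p_bust = 1 - p_no_bust

print(f"improve/push chance: {float(p_no_bust):.1%}")  # about 15.4%
print(f"bust chance:         {float(p_bust):.1%}")     # about 84.6%
```

Real deck composition shifts these numbers slightly, but the conclusion stands: the hand busts the large majority of the time, in full view of the pit.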
Quote: DRich
Quote: darkoz
The only point to hitting is that you already know you lost, so you try for a Hail Mary. On that 19, for example, most of the time you aren't going to pull an Ace or a deuce, so you will bust most of the time, looking like an idiot to the pit and the other players.
That's my two cents
You are 100% correct. You will last longer hitting those hands than staying on them if you know you are currently beat.
link to original post
Hitting on 19s, win or lose, will get you tossed quickly. Greed kills. The choice between winning five hands and getting booted, or winning two extra hands an hour indefinitely, separates the APs from the wannabes. An AP team slow-rolled the Plaza for months a few years ago.
I would think any AP would try for longevity.
Quote: darkoz
I disagree on the ethics. You put something in public, it's fair game. Ever throw something in the trash and see a homeless guy rummaging through it? Did you put it there for him? Of course not, but you put your private stuff in public. Courts have already sided with, for example, law enforcement: their ability to rummage through your garbage for clues is fair.
The Anthropic case has already swung in favor of fair use. (Anthropic just had to pay for the material behind a paywall which they stole from pirate sites).
Glaze and Nightshade are a different conversation. Total scams. They don't work. That's why you haven't heard of a single AI company announcing it has had its model poisoned and is out of business. Not to mention that Glaze and Nightshade are themselves AI tools. Yeah, anti-AI folks using AI to fight AI is richly ironic.
Anyway, that's my analysis. We probably have one to two more years before the courts bear me out.
The metaphor there is off because you clearly intend to abandon the trash, and there is no real possibility of your trash harming anyone. Creators have a continued emotional investment in their creations, and the last thing a writer or an artist wants is to see writers and artists put out of business, especially themselves.
I don't think you understand how much people hate AI because of the appropriation of their art. It is going to seriously damage the amount of quality content produced if matters proceed as they are. I use AI in various content I produce, and the reaction is basically 100% negative whatever I do with it, simply because of this. You get the accusation of AI slop thrown at you even if you produce something interesting with it. These people are very angry and mostly irrational, to the point of violence.
If the courts sanction the use of AI retrospectively, then they'll have to bring in new legislation to control it, because there are enough people who will vote for that. People do not like their livelihood being taken away. If that doesn't happen, then you have a horrible future for online content, with everything but mass-produced garbage being paywalled.
BTW, by "poisoning" I meant proprietary/open-source/intuitive methods; it is very difficult to counter those structurally.
Quote: billryan
People can complain all they want, but until they organize and have paid lobbyists to promote their agenda, they are just farting in the wind.
link to original post
I know what you mean but when people stop buying something it has an impact. It doesn't even take a formal boycott if there is enough alignment of interests. Disney folded in hours after people cancelled their subs.
That said, I think people underestimate how important the human experience is. A painting is basically worthless in terms of intrinsic value; it is the underlying human story that can give something painted by a master a valuation in the multi-millions.
Quote: DougGander
Way ahead of you on that one. But I think AI is a relatively small part of that. Look what Spotify did to music; that's not really anything to do with AI.
That said, I think people underestimate how important the human experience is. A painting is basically worthless in terms of intrinsic value; it is the underlying human story that can give something painted by a master a valuation in the multi-millions.
link to original post
Agreed
However if you use AI as a tool, you can bring that human experience to the art form. There are a lot of people who just let AI do all the heavy lifting but the serious filmmakers will be subjugating the AI to their will and vision.
A full live-action, AI-generated feature film with copyright protection will be here between January and early spring. No cameras, no actors, no crew, all created in the computer.
Hollywood freaked out about Tilly Norwood. They're really going to freak out when AI features get released.
Quote: darkoz
However if you use AI as a tool, you can bring that human experience to the art form. There are a lot of people who just let AI do all the heavy lifting but the serious filmmakers will be subjugating the AI to their will and vision.
A full live-action, AI-generated feature film with copyright protection will be here between January and early spring. No cameras, no actors, no crew, all created in the computer.
Hollywood freaked out about Tilly Norwood. They're really going to freak out when AI features get released.
link to original post
I do think people should (at least) try to look at AI as just a new super-tool. Even if two people are using AI, one may be able to figure out how to use it to outcompete his fellow human instead of just giving up and throwing up his hands in defeat (at least until it throws us in a can and makes us its battery).
Human thug - I will make you my b____h
AI thug - I will make you my battery.
Quote: rxwine
Quote: darkoz
However if you use AI as a tool, you can bring that human experience to the art form. There are a lot of people who just let AI do all the heavy lifting but the serious filmmakers will be subjugating the AI to their will and vision.
A full live-action, AI-generated feature film with copyright protection will be here between January and early spring. No cameras, no actors, no crew, all created in the computer.
Hollywood freaked out about Tilly Norwood. They're really going to freak out when AI features get released.
link to original post
I do think people should (at least) try to look at AI as just a new super-tool. Even if two people are using AI, one may be able to figure out how to use it to outcompete his fellow human instead of just giving up and throwing up his hands in defeat (at least until it throws us in a can and makes us its battery).
Human thug - I will make you my b____h
AI thug - I will make you my battery.
link to original post
It's just a program, not intelligent at all, just simulated intelligence. It's not taking over à la Terminator.
Quote: DougGander
Way ahead of you on that one. But I think AI is a relatively small part of that. Look what Spotify did to music; that's not really anything to do with AI.
That said, I think people underestimate how important the human experience is. A painting is basically worthless in terms of intrinsic value; it is the underlying human story that can give something painted by a master a valuation in the multi-millions.
link to original post
Perhaps that was true in the past, but these days profit is what drives art sales. Syndicates have been buying and selling art in NY since the '90s.
Quote: darkoz
Quote: rxwine
Quote: darkoz
However if you use AI as a tool, you can bring that human experience to the art form. There are a lot of people who just let AI do all the heavy lifting but the serious filmmakers will be subjugating the AI to their will and vision.
A full live-action, AI-generated feature film with copyright protection will be here between January and early spring. No cameras, no actors, no crew, all created in the computer.
Hollywood freaked out about Tilly Norwood. They're really going to freak out when AI features get released.
link to original post
I do think people should (at least) try to look at AI as just a new super-tool. Even if two people are using AI, one may be able to figure out how to use it to outcompete his fellow human instead of just giving up and throwing up his hands in defeat (at least until it throws us in a can and makes us its battery).
Human thug - I will make you my b____h
AI thug - I will make you my battery.
link to original post
It's just a program, not intelligent at all, just simulated intelligence. It's not taking over à la Terminator.
link to original post
Yeah, but as I said before, it doesn't have to be super-intelligent to be dangerous. A Roomba with a machine gun can be dangerous for at least a short time.
Quote: rxwine
Quote: darkoz
Quote: rxwine
Quote: darkoz
However if you use AI as a tool, you can bring that human experience to the art form. There are a lot of people who just let AI do all the heavy lifting but the serious filmmakers will be subjugating the AI to their will and vision.
A full live-action, AI-generated feature film with copyright protection will be here between January and early spring. No cameras, no actors, no crew, all created in the computer.
Hollywood freaked out about Tilly Norwood. They're really going to freak out when AI features get released.
link to original post
I do think people should (at least) try to look at AI as just a new super-tool. Even if two people are using AI, one may be able to figure out how to use it to outcompete his fellow human instead of just giving up and throwing up his hands in defeat (at least until it throws us in a can and makes us its battery).
Human thug - I will make you my b____h
AI thug - I will make you my battery.
link to original post
It's just a program, not intelligent at all, just simulated intelligence. It's not taking over à la Terminator.
link to original post
Yeah, but as I said before, it doesn't have to be super-intelligent to be dangerous. A Roomba with a machine gun can be dangerous for at least a short time.
link to original post
I assume the priorities of the machines will be stopping senseless destruction caused by wars and addressing the inequity of the supply and logistics systems that leave so many humans behind.
Can a machine rule the world better than the clown class that dominated the last quarter century or so?
All those inventive new potential weapons are being tested on a virtual Earth, i.e., a simulated landscape and near-Earth space environment. If a terminator option of any kind is being conceived, it's probably being looked at there, from both a defense and an offense perspective.
Quote: darkoz
Quote: Hunterhill
No offense Darkoz, but I speak from experience, and as far as I know your experience is with running cards. I can tell you that hitting those hands, you won't last long.
Quote: darkoz
Quote: Hunterhill
It's the correct strategy, but my point was: see how long you last if you hit hard 17, 18, or 19 when you know the dealer has you beat.
Quote: DRich
Quote: Hunterhill
This is where AI could cause trouble. Following the rules for blackjack, if the dealer has 17-20 it says hit until you beat the dealer. Try that strategy and see how long you last before getting picked off.
Quote: rxwine
SO, my prompt was for the AI to use the "Wizard of Odds" site to do a "colorful" infographic on "hole carding". The agent searched out the site and did all the work. I did no corrections or additions.
link to original post
link to original post
I would agree with that strategy if you knew the dealer had 17-20. Sadly, most of us don't know that unless we are hole carding.
link to original post
link to original post
I don't think security or the pit would recognize the strategy as hole carding. They probably would just see it as bad, ridiculously bad, play.
The only point to hitting is that you already know you lost, so you try for a Hail Mary. On that 19, for example, most of the time you aren't going to pull an Ace or a deuce, so you will bust most of the time, looking like an idiot to the pit and the other players.
That's my two cents
link to original post
link to original post
It's not unusual to have a "good Samaritan" somewhere in an AP career. I have my own stories.
One time a good Samaritan was so incensed I was using other people's players cards he grabbed my arm and called security. When they arrived I started screaming I was being accosted by some stranger babbling nonsense.
Security took him away, not me, lol. (They told him to leave me alone before I pressed charges, and the guy must have gotten scared, because he clammed up.)
But I can see how that experience can make you cautious. Hole carding is not my forte as you noted.
link to original post
The guy called Security to try to get YOU into trouble, but HE was the one who got into trouble with Security instead , LMAO! 🤣🤭 That's what he deserved for sticking his nose into YOUR business and not minding his own business. 🤣🤭
asking Google Ai.
Quote:What is a test question that proves that a computer can't think?
There is no single "test question" universally accepted as definitive proof that a computer cannot think, as the definition of "thinking" itself remains a topic of philosophical debate. However, philosopher John Searle's famous Chinese Room thought experiment argues that a computer can pass a behavioral test (like the Turing Test) without genuine understanding or consciousness.
The Chinese Room Thought Experiment
Searle's argument is not a specific question, but a scenario designed to show that a computer merely manipulates symbols based on rules (syntax) without grasping their meaning (semantics).
Imagine you are in a room and receive messages in Chinese through a slot in the door. You know no Chinese, but you have an elaborate rulebook in English that tells you how to manipulate the Chinese symbols and slide a response back out. To an outside observer who understands Chinese, your responses are indistinguishable from those of a native Chinese speaker.
According to Searle, you, the person in the room, do not understand Chinese; you are simply following formal instructions. He argues that a computer functions in the same way, manipulating meaningless symbols without true comprehension, and thus, without "thinking" in the human sense.
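Searle's rulebook can be caricatured in a few lines of code: a lookup table that maps incoming symbols to outgoing symbols produces fluent-looking replies while "understanding" nothing. This is only a toy sketch of the thought experiment; the phrases in the table are invented placeholders, not a real conversation system.

```python
# A toy "Chinese Room": pure symbol shuffling, no comprehension.
# The rulebook is just a lookup table with made-up example phrases.
rulebook = {
    "你好": "你好！",              # a greeting maps to a greeting back
    "你会说中文吗": "会，一点点。",  # "do you speak Chinese?" -> "yes, a little"
}

def room(message: str) -> str:
    # Follow the rulebook mechanically; fall back to a stock reply
    # ("please say that again") for anything not in the table.
    return rulebook.get(message, "请再说一遍。")

print(room("你好"))  # fluent-looking output from a process that grasps nothing
```

The point of the sketch is that nothing in `room` knows what any symbol means; to an outside observer the replies can still look competent, which is exactly the gap between syntax and semantics Searle is pointing at.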
Quote: billryanQuote: rxwineQuote: darkozQuote: rxwineQuote: darkoz
However if you use AI as a tool, you can bring that human experience to the art form. There are a lot of people who just let AI do all the heavy lifting but the serious filmmakers will be subjugating the AI to their will and vision.
Full live action AI generated feature film with copyright protection will be here between January and early Spring. No cameras, no actors, no crew, all created in the computer,
Hollywood freaked out about Tilly Norwood. They're really going to freak out when AI features get released.
link to original post
I do think people should (at least) try to look at Ai as just a new super tool. Even if two people are using Ai, one may be able to figure out how to use it to outcompete his fellow human instead of just giving up and throwing up his hands in defeat. (at least until it throws us in a can and makes us its battery)
Human thug - I will make you my b____h
AI thug - I will make you my battery.
link to original post
It's just a program. Not intelligent at all. Just simulated intelligence. It's not taking over a la Terminator.
link to original post
Yeah, but as I said before, it doesn't have to be super intelligent to be dangerous. A Roomba with a machine gun can be dangerous for at least a short time.
link to original post
I assume the priorities of the machines will be stopping senseless destruction caused by wars and addressing the inequity of the supply and logistics systems that leave so many humans behind.
Can a machine rule the world better than the clown class that dominated the last quarter century or so?
link to original post
I cannot imagine that any rational thinking person or AI would conclude that the Earth can support 8 billion people (or more) in a sustainable way. AI would not prioritize helping humans that have "fallen behind"; in a world of finite resources I think that AI would first prioritize the gradual elimination of the sickly, the mentally ill, the retarded, and the grossly obese. Those are the cold equations because overpopulation threatens everything, including the existence of power systems that support AI.
there are oddly also some misspellings
Quote: rxwineMe; just noticing it used the French word for etiquette on #5 properly pronounced "Ay-Tee-Ket-Say"
(at least as French as this guy)
link to original post
Quote: rxwineI think the "Chinese Room" example is arguable but have to take my car to the repair doctor at the moment. (isn't everything arguable?)
asking Google Ai.
Quote:What is a test question that proves that a computer can't think?
There is no single "test question" universally accepted as definitive proof that a computer cannot think, as the definition of "thinking" itself remains a topic of philosophical debate. However, philosopher John Searle's famous Chinese Room thought experiment argues that a computer can pass a behavioral test (like the Turing Test) without genuine understanding or consciousness.
The Chinese Room Thought Experiment
Searle's argument is not a specific question, but a scenario designed to show that a computer merely manipulates symbols based on rules (syntax) without grasping their meaning (semantics).
Imagine you are in a room and receive messages in Chinese through a slot in the door. You know no Chinese, but you have an elaborate rulebook in English that tells you how to manipulate the Chinese symbols and slide a response back out. To an outside observer who understands Chinese, your responses are indistinguishable from those of a native Chinese speaker.
According to Searle, you, the person in the room, do not understand Chinese; you are simply following formal instructions. He argues that a computer functions in the same way, manipulating meaningless symbols without true comprehension, and thus, without "thinking" in the human sense.
link to original post
Do animals think in a human sense? Or insects? A dog doesn't think like a human, but they think. Just because a computer might think differently doesn't mean it can't think.
Before WW2, Phoenix, Arizona, was smaller than Tucson, but now it has a population four times that of Tucson.
Arizona's population has more than tripled in my lifetime, but few would say it's overcrowded.
A computer could easily solve those problems without having to weed out anyone.
Quote: billryan
Do animals think in a human sense? Or insects? A dog doesn't think like a human, but they think. Just because a computer might think differently doesn't mean it can't think.
link to original post
My problem with the Chinese Room example is that when you turn the question around, it doesn't actually prove the human thinks. You could presumably teach a 5-year-old child to recognize that a Chinese symbol stands for a certain English word or words on a card, and never have to teach what either the Chinese or the English means.
You could continue that process, within the limitations of a 5-year-old child's abilities, while never including an understanding of what either means.
Quote: rxwineWhat things have you done with Ai, useful or not?
I've been having lots of fun asking it to mix characters from one TV show and another. Asked it how I could get a date with Nina or Chloe from "24."
Most fun might be I asked it if there was a website to buy Instant Hole as I could not find it on Amazon.
This is the most internet fun since the early days.
Quote: billryanWhy would a computer decide the Earth is overpopulated when its density is somewhere around 60 people per square mile, while some cities are 10,000X that? The population density and farming methods would need to be tweaked, but we can easily grow enough food to feed a much larger population. In the USA, our older cities tend to be overpopulated, underscoring the need for new ones.
Before WW2, Phoenix, Arizona, was smaller than Tucson, but now it has a population four times that of Tucson.
Arizona's population has more than tripled in my lifetime, but few would say it's overcrowded.
A computer could easily solve those problems without having to weed out anyone.
link to original post
...unless it was being directed by someone who was already intrigued by eugenics and the elimination of categories of people. Arguments for population control or population reduction have always been expressed more with adjectives than nouns, where we are talking about the control or reduction of types of people, rather than uniformly across the board.
But that's normal for primates. Our instinct to reproduce our own genomes has a corollary, in an instinct to limit the reproduction of those with other genomes, allowing "our own kind" to prosper in relative terms. Just evolution in action, nothing to be afraid of.
Quote: AutomaticMonkeyJust evolution in action, nothing to be afraid of.
Evolution is like a box of chocolates?
Quote:all blue-eyed people are linked to a single common ancestor, a person who lived 6,000 to 10,000 years ago and experienced a genetic mutation in the OCA2 gene, effectively "turning off" significant melanin production in the iris, creating blue eyes from the original brown. This unique genetic switch, called the H-1 haplotype, is passed down, making every blue-eyed person a distant relative through this shared origin, likely near the Black Sea
Quote: rxwineQuote: AutomaticMonkeyJust evolution in action, nothing to be afraid of.
Evolution is like a box of chocolates?
Quote:all blue-eyed people are linked to a single common ancestor, a person who lived 6,000 to 10,000 years ago and experienced a genetic mutation in the OCA2 gene, effectively "turning off" significant melanin production in the iris, creating blue eyes from the original brown. This unique genetic switch, called the H-1 haplotype, is passed down, making every blue-eyed person a distant relative through this shared origin, likely near the Black Sea
link to original post
Fun fact- Several Native nations have legends of blue-eyed beings, almost all evil, before they encountered Europeans in the 17th century. A problem with many of the stories is that they were passed down orally until the 20th century, making their dating difficult.
"We tend to underprepare and overestimate our control."
I do think there is some hype and some things not explained. For example, when chatgpt or similar is asked what will happen to humanity if and when AI takes over completely, you get chilling responses, but really at this stage I say the bots are just repeating what some humans think. Perhaps a consensus of really knowledgeable ones ... or not. We don't know. The implication that *the bot* thinks this and knows more than anybody else is dubious. Not at the current stage I think. But what do i know bwa-ha-ha-ha
His videos jump around too much. In the first video you found, I think he is trying to say he asked chatgpt or similar to roleplay as his girlfriend and as his male friend. It would be interesting if the bot dreamed up what that would be like all on its own, but I think instead the bot was told to make the girlfriend neurotic, for example. If not that's fascinating that this is what the bot has as a concept of women! I don't think he says, but he jumps around so much in the video that maybe I missed it
Quote: odiousgambitthanks for finding this channel
I do think there is some hype and some things not explained. For example, when chatgpt or similar is asked what will happen to humanity if and when AI takes over completely, you get chilling responses, but really at this stage I say the bots are just repeating what some humans think. Perhaps a consensus of really knowledgeable ones ... or not. We don't know. The implication that *the bot* thinks this and knows more than anybody else is dubious. Not at the current stage I think. But what do i know bwa-ha-ha-ha
His videos jump around too much. In the first video you found, I think he is trying to say he asked chatgpt or similar to roleplay as his girlfriend and as his male friend. It would be interesting if the bot dreamed up what that would be like all on its own, but I think instead the bot was told to make the girlfriend neurotic, for example. If not that's fascinating that this is what the bot has as a concept of women! I don't think he says, but he jumps around so much in the video that maybe I missed it
link to original post
I work with AI every day. It has no concept of what women are like. It has as much intelligence as a toaster or a hammer and nails.
It is basing answers on key words in the prompt being fed to it, and based on weight training (weights given to key words and word combinations), it is giving near-instantaneous answers.
The weights change as it gets further questions and training (hence its "learning"). So any answer you receive is how the humans who programmed the AI, and the people who interact with the AI, think of women.
Video generators, especially the early models, had biases: USA models were almost always biased toward white people (black characters sometimes turning white right in the video), and the Chinese models had characters turning Chinese, thereby exposing the training materials that were predominantly used.
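The "weights on key words" idea above can be sketched in a few lines. This is a deliberately crude toy, not how any production model actually works: a reply is scored by summing learned per-word weights, and "training" is just nudging a number up or down in response to feedback. The words and numbers are invented for illustration.

```python
# Toy illustration of "weighted key words": common words start with
# heavier weights, and feedback shifts those weights over time.
weights = {"sun": 2.0, "sol": 0.1}

def score(sentence: str) -> float:
    # Sum the learned weight of each known word; unknown words count 0.
    return sum(weights.get(w, 0.0) for w in sentence.lower().split())

def train(word: str, feedback: float, lr: float = 0.5) -> None:
    # "Learning" here is just adjusting a stored number.
    weights[word] = weights.get(word, 0.0) + lr * feedback

print(score("the sun came up"))  # familiar phrasing scores higher
train("sol", +1.0)               # positive feedback raises the weight for "sol"
print(score("sol came up"))
```

The toy captures the two claims in the post: answers fall out of numeric weights rather than comprehension, and further interaction changes the weights, which is all the "learning" there is.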
Quote: darkozQuote: odiousgambitthanks for finding this channel
I do think there is some hype and some things not explained. For example, when chatgpt or similar is asked what will happen to humanity if and when AI takes over completely, you get chilling responses, but really at this stage I say the bots are just repeating what some humans think. Perhaps a consensus of really knowledgeable ones ... or not. We don't know. The implication that *the bot* thinks this and knows more than anybody else is dubious. Not at the current stage I think. But what do i know bwa-ha-ha-ha
His videos jump around too much. In the first video you found, I think he is trying to say he asked chatgpt or similar to roleplay as his girlfriend and as his male friend. It would be interesting if the bot dreamed up what that would be like all on its own, but I think instead the bot was told to make the girlfriend neurotic, for example. If not that's fascinating that this is what the bot has as a concept of women! I don't think he says, but he jumps around so much in the video that maybe I missed it
link to original post
I work with AI every day. It has no concept of what women are like. It has as much intelligence as a toaster or a hammer and nails.
It is basing answers on key words in the prompt being fed to it, and based on weight training (weights given to key words and word combinations), it is giving near-instantaneous answers.
The weights change as it gets further questions and training (hence its "learning"). So any answer you receive is how the humans who programmed the AI, and the people who interact with the AI, think of women.
Video generators, especially the early models, had biases: USA models were almost always biased toward white people (black characters sometimes turning white right in the video), and the Chinese models had characters turning Chinese, thereby exposing the training materials that were predominantly used.
link to original post
Curious how you'd prove humans have intelligence? They do all the same things in learning; although our brains have no awareness of weighted words, that doesn't mean we don't do the same thing. In a way, that is exactly what people do in writing or talking. Unless you're aiming to be eccentric, create a certain effect, or avoid clichés, you use the equivalent of "weighted" words over rarer alternatives. You don't hear people say, "Sol came up this morning," though that is an alternate for "Sun".
Quote: rxwineQuote: darkozQuote: odiousgambitthanks for finding this channel
I do think there is some hype and some things not explained. For example, when chatgpt or similar is asked what will happen to humanity if and when AI takes over completely, you get chilling responses, but really at this stage I say the bots are just repeating what some humans think. Perhaps a consensus of really knowledgeable ones ... or not. We don't know. The implication that *the bot* thinks this and knows more than anybody else is dubious. Not at the current stage I think. But what do i know bwa-ha-ha-ha
His videos jump around too much. In the first video you found, I think he is trying to say he asked chatgpt or similar to roleplay as his girlfriend and as his male friend. It would be interesting if the bot dreamed up what that would be like all on its own, but I think instead the bot was told to make the girlfriend neurotic, for example. If not that's fascinating that this is what the bot has as a concept of women! I don't think he says, but he jumps around so much in the video that maybe I missed it
link to original post
I work with AI every day. It has no concept of what women are like. It has as much intelligence as a toaster or a hammer and nails.
It is basing answers on key words in the prompt being fed to it, and based on weight training (weights given to key words and word combinations), it is giving near-instantaneous answers.
The weights change as it gets further questions and training (hence its "learning"). So any answer you receive is how the humans who programmed the AI, and the people who interact with the AI, think of women.
Video generators, especially the early models, had biases: USA models were almost always biased toward white people (black characters sometimes turning white right in the video), and the Chinese models had characters turning Chinese, thereby exposing the training materials that were predominantly used.
link to original post
Curious how you'd prove humans have intelligence? They do all the same things in learning; although our brains have no awareness of weighted words, that doesn't mean we don't do the same thing. In a way, that is exactly what people do in writing or talking. Unless you're aiming to be eccentric, create a certain effect, or avoid clichés, you use the equivalent of "weighted" words over rarer alternatives. You don't hear people say, "Sol came up this morning," though that is an alternate for "Sun".
link to original post
I agree. It is the same as human intelligence in its operations, so it gives the "artificial" appearance of intelligence.
It's like the animatronic dinosaurs at Jurassic Park in Universal Studios. They give the artificial look of being alive by having gears and motors that are patterned on real joints and muscles. But they are as alive as a rock.
Quote: darkozQuote: rxwineQuote: darkozQuote: odiousgambitthanks for finding this channel
I do think there is some hype and some things not explained. For example, when chatgpt or similar is asked what will happen to humanity if and when AI takes over completely, you get chilling responses, but really at this stage I say the bots are just repeating what some humans think. Perhaps a consensus of really knowledgeable ones ... or not. We don't know. The implication that *the bot* thinks this and knows more than anybody else is dubious. Not at the current stage I think. But what do i know bwa-ha-ha-ha
His videos jump around too much. In the first video you found, I think he is trying to say he asked chatgpt or similar to roleplay as his girlfriend and as his male friend. It would be interesting if the bot dreamed up what that would be like all on its own, but I think instead the bot was told to make the girlfriend neurotic, for example. If not that's fascinating that this is what the bot has as a concept of women! I don't think he says, but he jumps around so much in the video that maybe I missed it
link to original post
I work with AI every day. It has no concept of what women are like. It has as much intelligence as a toaster or a hammer and nails.
It is basing answers on key words in the prompt being fed to it, and based on weight training (weights given to key words and word combinations), it is giving near-instantaneous answers.
The weights change as it gets further questions and training (hence its "learning"). So any answer you receive is how the humans who programmed the AI, and the people who interact with the AI, think of women.
Video generators, especially the early models, had biases: USA models were almost always biased toward white people (black characters sometimes turning white right in the video), and the Chinese models had characters turning Chinese, thereby exposing the training materials that were predominantly used.
link to original post
Curious how you'd prove humans have intelligence? They do all the same things in learning; although our brains have no awareness of weighted words, that doesn't mean we don't do the same thing. In a way, that is exactly what people do in writing or talking. Unless you're aiming to be eccentric, create a certain effect, or avoid clichés, you use the equivalent of "weighted" words over rarer alternatives. You don't hear people say, "Sol came up this morning," though that is an alternate for "Sun".
link to original post
I agree. It is the same as human intelligence in its operations, so it gives the "artificial" appearance of intelligence.
It's like the animatronic dinosaurs at Jurassic Park in Universal Studios. They give the artificial look of being alive by having gears and motors that are patterned on real joints and muscles. But they are as alive as a rock.
link to original post
Yes, showing actual differences in the way the same things can be accomplished is like comparing timekeeping from a sundial, a mechanical Swiss watch, or a fully electric watch. And I suppose a nuclear one. While not the same, they all accomplish some of the same thing without being the same, some more precisely than others.
Intelligence isn't all we are, though, and that's likely why people note that something is missing.
Quote:The following job professions are associated with the lowest rates of memory decline:
Taxi and Ambulance Drivers: These professions have some of the lowest Alzheimer's-related death rates, approximately 40% lower than the general population. This is attributed to the intense, real-time spatial navigation and "on-the-fly" problem-solving required to find routes, which keeps the hippocampus (the brain's memory center) healthy and enlarged. Notably, this benefit does not extend to bus drivers or pilots who follow predetermined, repetitive routes.
Teachers: Consistently ranked among the most cognitively protective jobs due to high levels of verbal intelligence, complex social interaction, and daily mentoring.
Engineers and Physicists: Professions requiring high "fluid" cognitive tasks, such as solving mathematical or scientific problems and critical thinking, help build a strong cognitive reserve.
Social Workers and Lawyers: These roles involve "complex social interaction," such as negotiating and resolving conflicts, which has been shown to be more protective than working with data or objects alone.
Managers and Chief Executives: While their risk is close to the average, the high level of "executive" tasks—scheduling, multitasking, and decision-making—contributes to higher memory scores and slower rates of decline compared to routine-based roles.
Skilled Trades (Electricians, Carpenters, Watch Repairmen): "Builders" who engage in complex, hands-on problem-solving and work with intricate "things" also show better cognitive aging.
The first one is interesting, but it's not surprising that fixed-route driving jobs don't benefit cognitively.
I'm betting Google Maps is not benefiting Uber drivers, though.
It wasn't scientific, just a couple of long-time observers of both sports. For various reasons, wrestlers seem to get it less than boxers or football players. One of the reasons given was that wrestlers think more, choreographing their matches with an opponent, whereas boxers work by rote. I personally thought it was because more wrestlers die young, but that opinion wasn't very popular.
Quote: billryanAt a Cauliflower Ear convention in Vegas a few years ago, there was a discussion about the difference in Alzheimer's between professional wrestlers and boxers.
It wasn't scientific, just a couple of long-time observers of both sports. For various reasons, wrestlers seem to get it less than boxers or football players. One of the reasons given was that wrestlers think more, choreographing their matches with an opponent, whereas boxers work by rote. I personally thought it was because more wrestlers die young, but that opinion wasn't very popular.
link to original post
They would have to look at just old wrestlers, I guess.
I've heard online brain games have questionable improvement, but maybe it's because there's a benefit to using real 3d space with all the senses involved.
It's hard to do any professional movement sport into your 60s, but you can drive a taxi until you drop. Unless it's like golf, and senior games are possible. I guess there are a few of those.
Perhaps golf would be a better sport, if even finding where the hole is was a real mental exercise.
https://www.zerohedge.com/ai/dystopic-f-k-website-lets-ai-bots-rent-humans
I like it! The AI can give humans some tests of abilities, and hook them up with gigs that match and that they can be paid well for. The human gets a rating for their reliability, integrity, and quality of work, and awaits their next task.
I envision it as: you are in a restaurant and your phone beeps. You have a message that says "The restaurant you are in is short staffed today. Would you terribly mind bussing the 3 tables to your right while you await your meal, for a $10 discount off your bill?"
Or: "Your neighbor at 1313 Mockingbird Lane has a leg injury and has been unable to go on walks with their dog. I see in your profile you are a dog lover and enjoy walking. Would you be willing to take the dog for a half hour walk for $10?"
If you have some skills and are willing to work, and can be trusted to do what you have agreed to do, you could probably spend your whole day like this and make a living.