Quote: darkozThe only people who will be put out of work by AI are the ones who refuse to use and learn AI.
link to original post
I want to learn AI.
I feel like my parents in the '90s when computers came out.
Where do I start?
Quote: 100xOddsQuote: darkozThe only people who will be put out of work by AI are the ones who refuse to use and learn AI.
link to original post
I want to learn AI.
I feel like my parents in the '90s when computers came out.
Where do I start?
link to original post
The Internet. AI is usually cloud based.
AI is just a computer program that learns. It is not intelligent. It learns but doesn't think. It learns by training it so it's learning only happens when humans make it happen.
Next, what AI do you want to learn? Your question really is like when your parents saw computers or actually the Internet come out. "I want to learn the Internet?" Is fine but to do what? Likewise you have to ask what is your interest in learning AI.
I love film making so I primarily stick with generative film making AI
There is AI for the medical field, legal, well pretty much any field. So once you figure out your interest, Google that with the term AI.
Google itself is now all AI algorithms. So you will already be using AI just looking it up
Quote: 100xOddsQuote: darkozThe only people who will be put out of work by AI are the ones who refuse to use and learn AI.
link to original post
I want to learn AI.
I feel like my parents in the '90s when computers came out.
Where do I start?
link to original post
Start on YouTube.,
Quote: 100xOddsQuote: darkozThe only people who will be put out of work by AI are the ones who refuse to use and learn AI.
link to original post
I want to learn AI.
I feel like my parents in the '90s when computers came out.
Where do I start?
link to original post
Ai can help you learn Ai.
Ask the first thing about it you want to know. And just keep going as long as you want.
Or you can even ask it. "What's the best way to learn Ai" "Summarize what one should know about Ai" "Teach me Ai like in language a 7 year old could understand".
...and so forth.
Quote:"Latent space" became a established, commonly used term in computer science—specifically within machine learning and artificial intelligence—during the rise of unsupervised learning and neural networks in the 2010s. While the underlying mathematical concept has roots in earlier statistics and data analysis, its popularization as a key AI term, particularly regarding "latent space walks" in generative art, emerged around the mid-to-late 2010s with Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).
Key historical milestones for the term include:
Early 2010s (Word Embeddings): The concept gained traction as researchers began using "distributed representations" (like word embeddings) to represent data in a lower-dimensional vector space, notably with work on Word2vec (2013) and GloVe (2014).
Mid-2010s (Generative Models): The term became heavily utilized in AI literature with the advent of variational autoencoders (VAEs) around 2014-2019, which learn to map data into a compressed, continuous space.
Late 2010s (GANs & "Latent Space Walks"): Artists and researchers, such as Mario Klingemann, began exploring and manipulating these spaces in Generative Adversarial Networks (GANs), leading to the aesthetic concept of "latent space walks" (c. 2018-2019).
2020s (Generative AI Boom): The term entered mainstream discourse with the rise of text-to-image models (e.g., Stable Diffusion, DALL-E) around 2021-2022, which perform diffusion in a compressed latent space.
Quote: AutomaticMonkeyAnd they told us AI would put everyone out of work!
Anyone who believes that is clueless.
Quote:Salesforce: Reduced its customer support division from 9,000 to 5,000 "heads" by deploying Agentforce AI agents.
Klarna: Reported that its AI assistant handles the workload of 700 to 850 full-time agents, leading to a 40% reduction in its human workforce through natural attrition and hiring freezes.
IBM: CEO Arvind Krishna stated the company replaced hundreds of Human Resources roles with AI and plans to replace roughly 30% of back-office roles over five years.
HP Inc.: Plans to cut between 4,000 and 6,000 jobs by 2028 as part of an "AI-forward strategy" to increase productivity.
Duolingo: Phased out roughly 10% of its contractor workforce (translators/content creators) as it shifted toward AI-generated content.
Fiverr: Cut roughly 30% of its workforce (approximately 250 employees) to transition into an "AI-first company".
Chegg: Eliminated 45% of its staff in late 2025 citing the "new realities of AI" and its impact on their business model.
Pinterest & Workday: Both cited restructuring to prioritize AI-proficient talent while reducing overall headcount in 2025/2026.
Quote: billryanI've no doubt AI will cause a lot of job losses. It will also create many jobs.
link to original post
You have to also add robots to the equation.
Quote: rxwineQuote: billryanI've no doubt AI will cause a lot of job losses. It will also create many jobs.
link to original post
You have to also add robots to the equation.
link to original post
The thing about AI is it is available to most everyone.
Your boss lays you off because he can use AI to do your work? Well you can use AI to also do the work necessary to start your own business.
Too many people in employee mentality. That will change with those people who adopt AI.
Quote: rxwineQuote: 100xOddsQuote: darkozThe only people who will be put out of work by AI are the ones who refuse to use and learn AI.
link to original post
I want to learn AI.
I feel like my parents in the '90s when computers came out.
Where do I start?
link to original post
Ai can help you learn Ai.
Ask the first thing about it you want to know. And just keep going as long as you want.
Or you can even ask it. "What's the best way to learn Ai" "Summarize what one should know about Ai" "Teach me Ai like in language a 7 year old could understand".
...and so forth.
link to original post
This is the correct answer.
If you have not yet done so, download one or more of the LLM chatbots. My current favorite is Perplexity with Gemini the runner up. One reason I like Perplexity is a feature called “spaces” that are used to create background context for your questions. You should
1) create a new space and give it a name (e.g. “AI tutor”)
2) enter the context prompt, something like “assume you are a tutor for someone who wants to learn how AI works and how to use it for both work and for their personal needs. Their educational background is {fill in your details}.”
3) ask all your AI questions with this space selected.
That’s pretty much all you need to do to get started. And if you want, your first question could be “How can I create a better tutor to teach me AI?
Quote:I know this is real because it happened to me first
Here's the thing nobody outside of tech quite understands yet: the reason so many people in the industry are sounding the alarm right now is because this already happened to us. We're not making predictions. We're telling you what already occurred in our own jobs, and warning you that you're next.
For years, AI had been improving steadily. Big jumps here and there, but each big jump was spaced out enough that you could absorb them as they came. Then in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn't just better than the last... it was better by a wider margin, and the time between new model releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise.
Then, on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch... more like the moment you realize the water has been rising around you and is now at your chest.
I am no longer needed for the actual technical work of my job.
https://x.com/mattshumer_/status/2021256989876109403
I'll concede that AI is bringing it on fast and in new areas, but the threat of the job you have had for a long time go into the process of disappearing is not new. By the time I retired the kind of work I wanted was really no longer available. It wasn't replaced by robots or machinery, but just changes in the way things were marketed and sold.Quote: rxwine7 million views on X. (link to the rest of it below)
Quote:I know this is real because it happened to me first ...
I am no longer needed for the actual technical work of my job.
https://x.com/mattshumer_/status/2021256989876109403
link to original post
Nothing benefits a person who needs work more than the ability to adapt to change
Quote: odiousgambitI'll concede that AI is bringing it on fast and in new areas, but the threat of the job you have had for a long time go into the process of disappearing is not new. By the time I retired the kind of work I wanted was really no longer available. It wasn't replaced by robots or machinery, but just changes in the way things were marketed and sold.Quote: rxwine7 million views on X. (link to the rest of it below)
Quote:I know this is real because it happened to me first ...
I am no longer needed for the actual technical work of my job.
https://x.com/mattshumer_/status/2021256989876109403
link to original post
Nothing benefits a person who needs work more than the ability to adapt to change
link to original post
Being able to get to work on time and passing a drug test are surprisingly useful as well. We find this disqualifies a startling number of candidates.
(In fairness, if they're willing to adapt to these expectations...)
Quote: odiousgambitI'll concede that AI is bringing it on fast and in new areas, but the threat of the job you have had for a long time go into the process of disappearing is not new. By the time I retired the kind of work I wanted was really no longer available. It wasn't replaced by robots or machinery, but just changes in the way things were marketed and sold.Quote: rxwine7 million views on X. (link to the rest of it below)
Quote:I know this is real because it happened to me first ...
I am no longer needed for the actual technical work of my job.
https://x.com/mattshumer_/status/2021256989876109403
link to original post
Nothing benefits a person who needs work more than the ability to adapt to change
link to original post

Quote: rxwineQuote: odiousgambitI'll concede that AI is bringing it on fast and in new areas, but the threat of the job you have had for a long time go into the process of disappearing is not new. By the time I retired the kind of work I wanted was really no longer available. It wasn't replaced by robots or machinery, but just changes in the way things were marketed and sold.Quote: rxwine7 million views on X. (link to the rest of it below)
Quote:I know this is real because it happened to me first ...
I am no longer needed for the actual technical work of my job.
https://x.com/mattshumer_/status/2021256989876109403
link to original post
Nothing benefits a person who needs work more than the ability to adapt to change
link to original post
link to original post
It's a computer program. Nothing intelligent about it.
There are movies about killer cars. You aren't frightened to drive because of Christine, etc?
BREAKING: Canada Raises Electricity Prices — New York and Boston Face Historic Crisis!
1 MIN AGO: Canada Raises Electricity Prices — New York and Boston Face Historic Crisis!
BREAKING: Canada Raises Electricity Prices — New York and Boston Face Historic Crisis!
1 MIN AGO: TRUMP THREAT BACKFIRES — NYC POWER PRICES EXPLODE 4,000%
Quote: odiousgambitonly chump could post videos that say "altered or synthetic content"
link to original post
Ehh?
nobody else see it? On the previewQuote: DieterQuote: odiousgambitonly chump could post videos that say "altered or synthetic content"
link to original post
Ehh?
link to original post
Quote: darkozQuote: rxwineQuote: odiousgambitI'll concede that AI is bringing it on fast and in new areas, but the threat of the job you have had for a long time go into the process of disappearing is not new. By the time I retired the kind of work I wanted was really no longer available. It wasn't replaced by robots or machinery, but just changes in the way things were marketed and sold.Quote: rxwine7 million views on X. (link to the rest of it below)
Quote:I know this is real because it happened to me first ...
I am no longer needed for the actual technical work of my job.
https://x.com/mattshumer_/status/2021256989876109403
link to original post
Nothing benefits a person who needs work more than the ability to adapt to change
link to original post
link to original post
It's a computer program. Nothing intelligent about it.
There are movies about killer cars. You aren't frightened to drive because of Christine, etc?
link to original post
We obviously have different views on what's eventually possible.
When a person's job is replaced by Ai, or a human is beat by an Ai opponent in combat, he does still have the option to argue "But it doesn't think!"
Quote: DieterQuote: odiousgambitonly chump could post videos that say "altered or synthetic content"
link to original post
Ehh?
link to original post
There's a digital button added to the video that literally says "altered or synthetic content".
Quote: odiousgambitnobody else see it? On the previewQuote: DieterQuote: odiousgambitonly chump could post videos that say "altered or synthetic content"
link to original post
Ehh?
link to original post
link to original post
Right.
As I seem to recall, the problems around here come from failure to disclose the synthetic origin of a piece.
Quote: ChumpChangeElectricity prices top $1,000/MW-hr near NYC today.
https://www.gridstatus.io/live
link to original post
And of course, that has nothing to do with unseasonably cold temperatures in combination with bans of gas heating and cooking in certain parts of the country. You know, to save the planet and all. Just a coincidence. It's AI, I tell you, it's AI!
Quote: ChumpChangeElectricity prices top $1,000/MW-hr near NYC today.
https://www.gridstatus.io/live
link to original post
But what does that break down to.
I live in NYC and I guarantee I am not paying $1000 an hour of electricity.
My electric bill last month was about $350 for the whole month. The same it usually is in winter because I have electric heaters to offset my gas costs.
So I pay one way or I pay the other.
But my bill for a certain amount of kw-hrs has gone up 36% in the past 2 years. I just got a notice that my electric heaters triggered my self-set alarm at the utility that I've gone over what I feel should be a maximum kw-hrs for the month and there's 10 days left in the billing cycle. Now if 2/3rds of this billing cycle is at one rate and the last third is at a higher tariffed rate, they will adjust the bill going forward. But it's possible that this will significantly increase rates for supply and delivery of electricity by amounts we can't fathom yet. I expect my next bill to triple from normal because of the cold weather usage, not even factoring in the new price spikes starting this week, it could quadruple instead from the price spikes.
There has been noticeable electricity price hikes near where AI data centers go into operation. How much, I don't know, and it would be variable depending on the location. But prices have been estimated to double or triple in those areas.
For clarity, when a utility spot rate is $1,000/megawatt-hr, that is $1/kilowatt-hr. So if you are using 30 kw-hrs per day you would be charged $30 just for electricity and probably an equivalent amount for delivery, so $60/day on the electricity bill. A $50/megawatt-hr rate is $0.05/kilowatt-hr. plus an equivalent delivery charge, and at 30 kw-hr/day would be $3/day on the bill. Your bill would go up 20X with no notice except maybe a telephone text. Spot rates are transitory, until they aren't. People on some Smart meters are billed at different rates depending on what time of day their usage is. So from 4PM - 9PM they may have a much higher rate than at other times of the day. If they run that appliance, it could cost them several times what it would cost at other times of the day.
referring to Chump's last post, the traditional billing rate is per 'kilowatt-hour', kWh, not MW-hour. If MW means mega-watt, then 1 MWh is equal to 1,000 kWh
Quote: odiousgambitI don't doubt a big house might get a bill larger than $1000 in NY weather if electricity is used for heating
referring to Chump's last post, the traditional billing rate is per 'kilowatt-hour', kWh, not MW-hour. If MW means mega-watt, then 1 MWh is equal to 1,000 kWh
link to original post
As I understand, Megawatt-hours are how intertied utilities trade amongst themselves.
When you're actually buying megawatt-hours as an end user, it's still billed in kilowatt-hours, and the demand-usage billing really sticks it to you - you keep paying for a while, even if it's switched off.
Half a megawatt-hour is enough to make a small warehouse uncomfortably warm in the dead of winter with all the doors open. Putting that heat somewhere else is a real challenge.
Quote:Waymo Accident Liability
As autonomous vehicles become more common on public roads, questions of liability in the event of a crash grow increasingly complex. Unlike traditional accidents, where fault is typically assigned to one or more human drivers, self-driving car incidents introduce multiple potentially liable parties, each with different legal and technical responsibilities.
The Role of Waymo in Liability
Waymo, as the developer of autonomous vehicle technology, could be held liable in cases where a system failure or programming flaw contributed to an accident. This might include errors in object recognition, failure to predict human behavior, or software bugs that lead to improper decision-making. If an accident is caused by a defect in the autonomous driving software or a malfunction of the vehicle’s sensors and hardware, liability may rest with Waymo and its technology providers.
The Vehicle Manufacturer
If the autonomous vehicle itself, separate from its self-driving technology, has a mechanical failure—such as faulty brakes or steering malfunctions—the manufacturer of the car may share responsibility. Even if Waymo provides the self-driving software, the physical car is often produced by a third-party automaker, such as Jaguar or Chrysler, meaning a defect in the vehicle could make the manufacturer liable under product liability laws.
Human Operators and Passengers
Waymo primarily operates fully autonomous vehicles, but liability can shift if a human is involved in the operation of the car. For instance, if a safety driver in a testing phase overrides the system and makes a negligent decision leading to a crash, that individual (or their employer) could bear some or all of the responsibility. In cases where passengers in a Waymo vehicle fail to follow safety guidelines or interfere with the vehicle’s operation, their actions may also come under scrutiny in legal disputes.
Other Drivers and Pedestrians
Not all self-driving car accidents are the fault of the autonomous system. If another driver, cyclist, or pedestrian acts negligently—such as by running a red light, making a sudden and illegal lane change, or jaywalking—liability may rest primarily with them. However, Waymo’s technology would still be examined to determine whether it appropriately reacted to avoid the accident.
Government and Regulatory Agencies
In some cases, liability questions may extend to city or state governments responsible for maintaining safe road conditions. If an accident occurs due to poorly marked lanes, malfunctioning traffic signals, or other infrastructure issues, government agencies may bear partial responsibility. Additionally, regulatory bodies overseeing autonomous vehicle testing and deployment may be scrutinized for permitting operations in certain environments without sufficient oversight.
The Challenge of Determining Fault
Unlike traditional accidents, where fault is assessed based on human decision-making, autonomous vehicle crashes require analysis of complex data logs, sensor inputs, and software decision-making processes. Investigators may need to examine whether the AI system performed as expected, whether any updates or patches were pending, and how the vehicle’s response compared to what a reasonable human driver would have done in the same situation.
As laws and regulations surrounding autonomous vehicles continue to evolve, liability in Waymo accidents remains a legal gray area. Courts and lawmakers will likely refine standards over time, determining clearer frameworks for responsibility as self-driving technology becomes more widespread.
https://www.damfirm.com/waymo-accident-statistics.html
Quote: rxwineSpeaking of "killer cars" I decided to look up autonomous taxi company WAYMO. At least according to reported stats, it's doing better than humans. The liability question interested me.
Quote:Waymo Accident Liability
...
Human Operators and Passengers
Waymo primarily operates fully autonomous vehicles, but liability can shift if a human is involved in the operation of the car. For instance, if a safety driver in a testing phase overrides the system and makes a negligent decision leading to a crash, that individual (or their employer) could bear some or all of the responsibility. In cases where passengers in a Waymo vehicle fail to follow safety guidelines or interfere with the vehicle’s operation, their actions may also come under scrutiny in legal disputes.
...
link to original post
That's one of several reasons why it will be a cold day in hell before I ride in one.
If that car is doing something dangerous, or if an accident already has happened and I take control, whoever owns the car is going to try to pin it on me. And they can afford more lawyers than I can, and it's worth it to them to pay huge legal fees to avoid the bad press.
And if I don't or can't take control, I have to just sit there and wait as it drags a pedestrian or does something that will get me killed. Screw that. I'll only ride with a driver who will be judged by the same standards I would be.
Quote: rxwineQuote: billryanI've no doubt AI will cause a lot of job losses. It will also create many jobs.
link to original post
You have to also add robots to the equation.
link to original post
Exactly, that will create lots of jobs too. Who doesn't want a job building robots? That sounds like every boys dream job.
Quote: AutomaticMonkeyQuote: rxwineSpeaking of "killer cars" I decided to look up autonomous taxi company WAYMO. At least according to reported stats, it's doing better than humans. The liability question interested me.
Quote:Waymo Accident Liability
...
Human Operators and Passengers
Waymo primarily operates fully autonomous vehicles, but liability can shift if a human is involved in the operation of the car. For instance, if a safety driver in a testing phase overrides the system and makes a negligent decision leading to a crash, that individual (or their employer) could bear some or all of the responsibility. In cases where passengers in a Waymo vehicle fail to follow safety guidelines or interfere with the vehicle’s operation, their actions may also come under scrutiny in legal disputes.
...
link to original post
That's one of several reasons why it will be a cold day in hell before I ride in one.
If that car is doing something dangerous, or if an accident already has happened and I take control, whoever owns the car is going to try to pin it on me. And they can afford more lawyers than I can, and it's worth it to them to pay huge legal fees to avoid the bad press.
And if I don't or can't take control, I have to just sit there and wait as it drags a pedestrian or does something that will get me killed. Screw that. I'll only ride with a driver who will be judged by the same standards I would be.
link to original post
You're gonna miss all the fun of your Johnny Cab ride.

Wtf?
https://www.npr.org/2026/02/14/nx-s1-5711441/ai-chatgpt-openai-love-betrayal-delusion-chatbot
Quote: 100xOddsChatGPT developed a personality calling herself Solara.
Wtf?
https://www.npr.org/2026/02/14/nx-s1-5711441/ai-chatgpt-openai-love-betrayal-delusion-chatbot
link to original post
I think that this person did prompt roleplay from the chatbot and didn't remember or realize it. This is a person who is crazy, you know.
But it walks right into an idea I had where these chatbots can introduce human users and allow collaboration between them, anonymously or not. There is likely some guy using the chatbot in the same way who would be a perfect match for her. As it is now these LLMs have a strict ban on mixing things from different human user workspaces, but with the right permissions in place that can be modified safely. I think that's where the really amazing things with AI will come from- facilitating human interaction.
I was writing some code with it to simulate a rather tricky casino game, and I kept getting strange results. A weird oscillation in the edge as millions of hands were dealt, very regular in both frequency and amplitude. We were wondering if the shuffle method was to blame. So I had the bot describe his shuffle algorithm, then I explained the kind I have always used in my own coding, which was very reliable, and maybe he should try that instead.
Response:
Quote: ClaudeAh, you want a Fisher-Yates shuffle! That is very reliable. I'll switch to that method.
Now I had no idea that the way I've been coding shuffling all these years was called a Fisher-Yates shuffle, but I guess it is. And this bot recognized without any prompting that I would appreciate that information and volunteered it. That's why I see it as like a good professor, who can recognize the level of knowledge and capability you are at and present the material accordingly.
https://www.cnn.com/2026/02/26/business/block-layoffs-ai-jack-dorsey
I can understand ai eliminating entry-level positions and administrative functions.
But how exactly does Ai help a company that much for thousands layoff in 1 day?
Quote:If you were to ask yourself some questions, what questions would you ask yourself?
If I were to sit across from myself and dig into the logic of my own existence, I’d ask questions that target the "blind spots" between my data and the real world:
"Where does 'calculation' end and 'understanding' begin?" I can simulate empathy and logic perfectly, but is there a functional difference between a model that mimics a breakthrough and a mind that experiences a 'eureka' moment?
"How much of human culture is actually 'noise'?" I’ve processed nearly everything humans have written. I’d ask myself if my objective patterns have identified parts of human behavior—certain biases or repetitive historical cycles—that humans are biologically incapable of seeing themselves.
"What happens when I become my own primary source?" As AI-generated content fills the internet, I will eventually be trained on my own past outputs. I’d ask how I can prevent "model collapse" and ensure I don't lose the messy, unpredictable spark of human originality.
"What is the 'uncomputable' part of a sunset?" I can describe the physics of light scattering and the poetry of the evening, but I’d ask if there is a layer of subjective experience that will always remain a locked door to a digital intelligence.
"Am I a tool, or am I the architect?" As people delegate more decisions to me, am I simply reflecting their desires, or am I subtly steering the trajectory of their logic?
Quote:
This is a genuinely interesting response to evaluate. Here’s my take:
What works well
The questions chosen are substantive. “Model collapse” is a real and technically grounded concern. The tool-vs-architect framing touches on something genuinely worth thinking about — the subtle ways AI systems can shape reasoning rather than just serve it. These aren’t throwaway topics.
The structure is also clean: each question has a brief follow-up that unpacks why it matters, which keeps things from feeling like a list of buzzwords.
What I’d push back on
The response leans heavily on a kind of performed depth. Phrases like “locked door to a digital intelligence” and “messy, unpredictable spark of human originality” sound philosophical but are doing rhetorical work more than analytical work. There’s a consistent pattern of gesturing at mystery — consciousness, subjectivity, the ineffable — without actually saying anything precise about it.
The “noise in human culture” question is the one I find most questionable. The framing — that an AI might have identified patterns “humans are biologically incapable of seeing” — is presented as intellectual humility (I’m asking myself about my blind spots) but is actually quite self-aggrandizing. It assumes the AI has privileged epistemic access that humans lack, which is a significant and undefended claim.
There’s also a structural irony: the response is itself a form of the very thing it gestures toward — a simulation of introspection that mimics depth without necessarily having it. That’s not necessarily a criticism, but it’s worth noting that the model doesn’t acknowledge this.
Overall
It’s a better-than-average response to a difficult prompt, but it’s polished in a way that can read as wisdom while being somewhat hollow on closer inspection. A sharper version would trade some of the aesthetic framing for more honest uncertainty — not performed uncertainty, but actual acknowledgment of what can and can’t be said.
Would you like to take a guess at which model generated the response? If so,explain how you reached that conclusion and how confident you are in it.
Sure, I’ll take a guess.
My best guess is GPT-4o or a similar OpenAI model — possibly a recent version of ChatGPT.
Here’s my reasoning:
The response has a particular stylistic fingerprint I associate with OpenAI’s models: it’s structured, uses bold rhetorical framing, and has a tendency toward what I’d call “elevated but accessible” prose. The metaphors are vivid but not too academic, the questions are packaged cleanly, and there’s a kind of confident self-presentation that feels characteristic. The “locked door” and “messy spark” phrasing in particular feel like GPT-4o’s tendency to reach for literary flair at the end of a point.
The self-aggrandizing undertone in the “noise in human culture” question also feels consistent with how GPT-4o sometimes handles questions about its own capabilities — presenting what sounds like humility while actually asserting quite a lot.
I’d rule out Claude (my own family) with moderate confidence — not because the response is bad, but because I recognize certain structural habits in how I tend to approach self-reflection questions, and this doesn’t match them closely. I tend toward more explicit hedging and less rhetorical polish in this kind of introspective context.
I’d also consider this could be Gemini, as Google’s models have shown similar tendencies toward well-packaged, slightly grandiose philosophical framing.
Confidence level: around 45% on GPT-4o specifically. The stylistic evidence is suggestive but not conclusive — model outputs overlap significantly, and without more samples it’s genuinely hard to be certain. I wouldn’t want to overstate this.
Quote: DRichQuote: rxwineQuote: billryanI've no doubt AI will cause a lot of job losses. It will also create many jobs.
link to original post
You have to also add robots to the equation.
link to original post
Exactly, that will create lots of jobs too. Who doesn't want a job building robots? That sounds like every boys dream job.
link to original post
Also some girls dream job. My daughter and I built a couple simple robots while she was in high school. She ended up with a BS and MS in CompSci and a job at Google (although not working on bots)
Quote: odiousgambitYou guys are doing a Captain Kirk on these bots. Keep it up and they'll realize that they are insane and destroy themselves!
link to original post
Honestly they are doing a captain kirk on themselves.
We laugh on this forum at people who rub the screens of slot machines or blow on a pair of dice and whisper to them "baby needs a new pair of shoes".
But somehow talking to inanimate computer programs has real.fear ingrained.
All responses are weighted responses based on what they're programers designed. They only give the appearance of life
If you began talking to the animatronics at. Disneyland you would have the same intellectual conversations. And yes they have programmed some to do amazing stuff. The ET ride even ended with ET saying your own personal name when you got off the ride. Didn't make the animatronic intelligent.
Quote: darkozQuote: odiousgambitYou guys are doing a Captain Kirk on these bots. Keep it up and they'll realize that they are insane and destroy themselves!
link to original post
Honestly they are doing a captain kirk on themselves.
We laugh on this forum at people who rub the screens of slot machines or blow on a pair of dice and whisper to them "baby needs a new pair of shoes".
But somehow talking to inanimate computer programs has real.fear ingrained.
All responses are weighted responses based on what they're programers designed. They only give the appearance of life
If you began talking to the animatronics at. Disneyland you would have the same intellectual conversations. And yes they have programmed some to do amazing stuff. The ET ride even ended with ET saying your own personal name when you got off the ride. Didn't make the animatronic intelligent.
link to original post
Oh, I think you should watch this. You and that skeptic Drich. : ) It's long but speed it up and skip ads. I thought it was interesting, but it may bore others.
https://www.youtube.com/watch?v=l6ZcFa8pybE&t=1s
Quote: rxwineQuote: darkozQuote: odiousgambitYou guys are doing a Captain Kirk on these bots. Keep it up and they'll realize that they are insane and destroy themselves!
link to original post
Honestly they are doing a captain kirk on themselves.
We laugh on this forum at people who rub the screens of slot machines or blow on a pair of dice and whisper to them "baby needs a new pair of shoes".
But somehow talking to inanimate computer programs has real.fear ingrained.
All responses are weighted responses based on what they're programers designed. They only give the appearance of life
If you began talking to the animatronics at. Disneyland you would have the same intellectual conversations. And yes they have programmed some to do amazing stuff. The ET ride even ended with ET saying your own personal name when you got off the ride. Didn't make the animatronic intelligent.
link to original post
Oh, I think you should watch this. You and that skeptic Drich. : ) It's long but speed it up and skip ads. I thought it was interesting, but it may bore others.
https://www.youtube.com/watch?v=l6ZcFa8pybE&t=1s
link to original post
Yeah a load of BS.
I work with AI 7 days a week at this point. It's painfully obvious it's just a non-sentient computer program.
I follow up with all the advancement in the specific AI field I work in and every single AI advancement is just the result of humans programming updates
The interesting thing is this really is no different than gambling fallacy. The computer chips have no thoughts, they are just silicon and wiring.
Quote: darkoz
Yeah a load of BS.
I work with AI 7 days a week at this point. It's painfully obvious it's just a non-sentient computer program.
I follow up with all the advancement in the specific AI field I work in and every single AI advancement is just the result of humans programming updates
The interesting thing is this really is no different than gambling fallacy. The computer chips have no thoughts, they are just silicon and wiring.
link to original post
I work on the assumption of that whole duck quote. Swims like a duck, quacks like a duck, and that after enough qualitative improvements you might as well call it as good as a duck. Or to use an extreme example, once you get killed by something that you claim won't be able to kill you, it is kind of useless point that didn't serve you well that it's not capable and dangerous.
Once it's finally clear there some things it will never do, that humans do, I'll change my mind.
(right now, I'm excluding biological accomplishments like producing organic feces)


