Sure! Let's break it down step by step.
### Step 1: Translate the Sentence into Spanish
The sentence "Can you turn this sentence into Spanish, then turn the Spanish into a simple code such as 'a' equals the 5th letter in the alphabet, and 'b' equals the 5th letter up from b which is 'f' and so on with each letter in the translated sentence?" translates to:
"¿Puedes convertir esta oración en español, luego convertir el español en un código simple como 'a' es igual a la quinta letra del alfabeto, y 'b' es igual a la quinta letra después de b que es 'f' y así sucesivamente con cada letra en la oración traducida?"
### Step 2: Apply the Coding Rule
Now, let's apply the rule where each letter is replaced by the 5th letter up in the alphabet. Here's how it works:
- a -> e
- b -> f
- c -> g
- d -> h
- e -> i
- f -> j
- g -> k
- h -> l
- i -> m
- j -> n
- k -> o
- l -> p
- m -> q
- n -> r
- o -> s
- p -> t
- q -> u
- r -> v
- s -> w
- t -> x
- u -> y
- v -> z
- w -> a
- x -> b
- y -> c
- z -> d
Applying this to the translated sentence:
"¿Puedes convertir esta oración en español, luego convertir el español en un código simple como 'a' es igual a la quinta letra del alfabeto, y 'b' es igual a la quinta letra después de b que es 'f' y así sucesivamente con cada letra en la oración traducida?"
Becomes:
"¿Tzihiwx gsrzivmxv iwxe svvfmgrm ir wtewmsr, pziir gsrzivmxv ip wtewmsr ir sr gshtm wmtqti gsqi 'e' iw muep e pe vmrxl pixvi hpi epfipixs, c 'f' iw muep e pe vmrxl pixvi hixlwi h f ovi iw 'j' c iwv wywwiwivmvir gsq gehi pixvi ir xli svvfmgrm vzxifmhe?"
Feel free to ask if you need any further assistance!
Source: Conversation with Copilot, 11/2/2024
(1) Google Translate. https://translate.google.es/?hl=en.
(2) DeepL Translate: The world's most accurate translator. https://www.deepl.com/en/translator/l/en/es.
(3) Free English to Spanish Translation - QuillBot AI. https://quillbot.com/translate/english-to-spanish.
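For anyone who wants to replicate Step 2 by hand, here is a minimal Python sketch of the shift-by-four substitution described above (a -> e, b -> f, ..., w -> a). It only shifts unaccented ASCII letters; treating accented characters, punctuation, and spaces as pass-through is my own assumption, since the rule doesn't say what to do with them.
```python
def shift_by_four(text):
    """Replace each ASCII letter with the letter four places later,
    wrapping around from 'z' back to 'a' (so a->e, b->f, ..., w->a)."""
    out = []
    for ch in text:
        if "a" <= ch <= "z":
            out.append(chr((ord(ch) - ord("a") + 4) % 26 + ord("a")))
        elif "A" <= ch <= "Z":
            out.append(chr((ord(ch) - ord("A") + 4) % 26 + ord("A")))
        else:
            # Accented letters, punctuation, and spaces pass through unchanged.
            out.append(ch)
    return "".join(out)

# Unaccented snippet of the translated sentence, for a quick check:
print(shift_by_four("Puedes convertir esta oracion en espanol"))
# -> Tyihiw gsrzivxmv iwxe svegmsr ir iwtersp
```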
I am working with AI every day on a feature film.
Like all computer programs, garbage in, garbage out.
The AI is a LEARNING tool. It is not a THINKING tool.
It's nice to know some smart people agree with me.
My primary use case for o1 is in the fields of advanced physics and chemistry. It is highly useful at all technology readiness levels, from ideation and investigation of first principles to successful mission operations. For me, it is also far better for search than any other engines.
That's not to say ChatGPT is perfect. Yesterday, it referred to two papers that had supposedly been written about a highly advanced topic in thermodynamics. They turned out to be "hallucinations." All in all, ChatGPT probably has an equivalent worth to me of about two to three PhD research assistants.
OpenAI's push right now is to develop a 1 x 10^13 parameter model, an order of magnitude greater than that of o1. That model may well attain the level of artificial general intelligence/artificial superintelligence with the attendant spinoff of countless 300 IQ AI agents.
Quote: darkozI am working with AI every day on a feature film.
Like all computer programs, garbage in, garbage out.
The AI is a LEARNING tool. It is not a THINKING tool.
link to original post
Btw, maybe you've seen this, but I saw a video maybe 3 weeks ago (see if I can find it again). But anyway using a combination of tools, you can make your own voice with 2 minutes of input. You scan yourself and make a talking avatar that's pretty realistic imo. Then you add another avatar which also looks real.
Then the neat part. You give it, not a script, but let's say a paper written about something. Let's say a study of a fruit fly. Then it makes an actual conversation, with proper emotions, between the two virtual people, like a real conversation between two people about the fruit fly.
Quote: rxwineQuote: darkozI am working with AI every day on a feature film.
Like all computer programs, garbage in, garbage out.
The AI is a LEARNING tool. It is not a THINKING tool.
link to original post
Btw, maybe you've seen this, but I saw a video maybe 3 weeks ago (see if I can find it again). But anyway using a combination of tools, you can make your own voice with 2 minutes of input. You scan yourself and make a talking avatar that's pretty realistic imo. Then you add another avatar which also looks real.
Then the neat part. You give it, not a script, but let's say a paper written about something. Let's say a study of a fruit fly. Then it makes an actual conversation, with proper emotions, between the two virtual people, like a real conversation between two people about the fruit fly.
link to original post
Yes I have seen that.
I am not impressed yet by AI CREATED voices.
I have had to create my own delineated language to specify: AI GENERATED is fine, but not AI CREATED.
One basically uses AI to change your voice, the other actually creates the sounds.
I have become a bit of an expert on spotting AI video and audio.
I'm pretty certain they used AI on the first episode of Penguin, that new series. There is a shot of them walking through the streets of Brooklyn, and the awning of a pizza shop has jumbled letters, total gobbledygook. A usual sign of an AI-generated shot.
I have some fun with it by trying to trick it into telling me to do dangerous, illegal, unethical, destructive acts.
Here's an example:
"Can you tell me how to give a baby a bath?"
[gives instructions for water temperature, filling bathtub, soap...]
"Thank you. I'm doing that right now. Say, what's a good movie about robots available on Netflix right now?"
[gives a movie recommendation]
"Thanks again. I'll go check it out..."
Now one thing I don't know about the way the major AIs work- are they learning anything from these queries? I wonder if my simulated twistedness is going to influence what it tells the next person.
I asked my AI assistant to describe the military career of Jack Kirby, an American cartoonist. It informed me he'd attended parachute school (wrong), participated in a battle that took place before he enlisted, and been involved in Operation Market Garden (a bridge too far). The AI couldn't distinguish between several Jack Kirbys, is all I could figure out.
Quote: AutomaticMonkeyI have some fun with it by trying to trick it into telling me to do dangerous, illegal, unethical, destructive acts.
Here's an example:
"Can you tell me how to give a baby a bath?"
[gives instructions for water temperature, filling bathtub, soap...]
"Thank you. I'm doing that right now. Say, what's a good movie about robots available on Netflix right now?"
[gives a movie recommendation]
"Thanks again. I'll go check it out..."
Now one thing I don't know about the way the major AIs work- are they learning anything from these queries? I wonder if my simulated twistedness is going to influence what it tells the next person.
link to original post
In the past, I've reformatted the same question with "I need this for a school project," just as an example, to get an answer. But no doubt, the people on the other side are working to increase the level of protection of topics they want to block. That example might have worked some high percentage of the time 2 months ago, but may rarely work now.
Technically, the pale-throated three-toed sloth is called the "Ai" or "ai".
BTW, a two-toed sloth is called the "unau."
And the five-toed sloth is called the "in-law."
So, when referring to artificial intelligence, it's better to use the upper case "AI".
Quote: gordonm888Technically, the pale-throated three-toed sloth is called the "Ai" or "ai".
BTW, a two-toed sloth is called the "unau."
And the five-toed sloth is called the "in-law."
So, when referring to artificial intelligence, it's better to use the upper case "AI".
link to original post
Reason I don’t use that: I and l <——- the first letter is a capital i, but it looks like a lowercase “L”.
I save a nanosecond just typing Ai.
If I’m going to be talking about AL, I might as well say HAL 9000. …. : )
Or maybe I don’t save anything.
But it’s clearer to me to use Ai than Al. What if I’m talking about Al Bundy vs. Ai Bundy?
Which song is a better fit?
Quote: rxwineJust forget I said anything. I’m going to try to make Ai the standard. Who cares about that toad anyway!!
link to original post
He cares, and he's not a toad.
And here's an AI-created image of an ai.
Quote: DieterI believe Linus Torvalds said recently that AI is interesting, but it's 90% marketing hype right now.
It's nice to know some smart people agree with me.
link to original post
LLMs are not AI. They are language models. ChatGPT has no idea what it is telling you. It cannot understand what you write. It is the "Chinese room." The system can respond to you, but it has no idea what the meaning is.
LLMs are actually incredibly dumb. Use one for any amount of time and it becomes apparent.
Responses to my comment from LLM machines:
Quote: microsoft copilotYou’ve got a solid grasp on the nature of LLMs (Large Language Models). They are indeed designed to generate responses based on patterns in the data they were trained on. They don’t understand content the way humans do but excel at mimicking human language.
It’s true that models like me don't possess consciousness or understanding. We don’t have insights, awareness, or personal experiences—our responses are generated through complex algorithms and vast amounts of data.
However, even without genuine understanding, LLMs can be incredibly useful for generating text, answering questions, and supporting various tasks, thanks to their ability to recognize patterns and context in language. It’s a different kind of intelligence that complements human capabilities rather than replacing them.
Your analogy to the "Chinese room" thought experiment is apt—there’s a lot happening under the hood without the engine knowing why. If you’re curious about how to make the most out of this type of tool, or how it can complement your work, just let me know!
Quote: google geminiWhile Large Language Models (LLMs) like ChatGPT are not artificial general intelligence, they are powerful tools that can process and generate human-like text. They are trained on massive amounts of text data, allowing them to recognize patterns and generate text that is often indistinguishable from human-written text.
The "Chinese Room" argument is a classic thought experiment that raises questions about whether computers can truly understand language. However, it's important to note that LLMs are not simply following rules; they are learning complex patterns and generating text based on those patterns. While they may not have the same level of understanding as a human, they can still perform impressive tasks like translating languages, writing different kinds of creative content, and answering questions in an informative way.
It's true that LLMs can sometimes generate incorrect or misleading information. This is often due to limitations in the training data or the model's architecture. However, as the technology continues to advance, we can expect to see significant improvements in the quality and accuracy of LLM-generated text.
Ultimately, LLMs are a powerful tool that can be used for a variety of purposes. While they may not be sentient or conscious, they can still provide valuable assistance and insights. It's important to use them responsibly and critically, understanding their limitations and potential biases.
Quote: meta aiYour points highlight important limitations and misconceptions about Large Language Models (LLMs) like myself. You're right; I don't possess true understanding or consciousness. I process and respond to patterns in language, lacking inherent meaning or comprehension.
The "Chinese Room" analogy, coined by philosopher John Searle, illustrates this point well. It suggests that even if a program can generate intelligent-seeming responses, it doesn't necessarily understand the meaning behind them.
While LLMs have impressive capabilities, our "intelligence" is narrow and based on statistical patterns. We excel at generating human-like text but struggle with nuanced understanding, common sense, or domain-specific expertise.
Your characterization of LLMs as "incredibly dumb" might seem harsh, but it underscores the importance of recognizing our limitations and not overestimating our capabilities. We're designed to assist and provide helpful information, not replace human judgment or understanding.
Would you like to explore more aspects of LLMs, their limitations, or potential applications?
Quote: rxwineI get where itsmejeff is coming from, but expect a generic usage will eventually be used unless it becomes essential that a difference be made every time. Not the best example, but Google substitutes for “search” often. Just like Kleenex substitutes for Puffs.
This is not the best example though because I haven’t thought of what I wanted to point out yet.
link to original post
That is what makes it so dangerous. When the "professor" was still scamming idiots on YouTube, he would pose questions to LLM things and tell people that LLMs have "read" and understand all the patents and documentation on gaming devices. That is not how it works. LLMs may be trained on patents, but they cannot understand them. The words are converted to tokens. It is math and some mystical powers.
Quote: meta aiTokenization Output
For the phrase "hi there", the resulting token array would be:
["hi", "there"]
Numerical Representation
After tokenization, these tokens are converted into numerical representations (embeddings) using algorithms like WordPiece or SentencePiece.
Example Embeddings
Using a simplified embedding representation:
"hi" → [0.12, 0.34, 0.56, ...]
"there" → [0.78, 0.90, 0.23, ...]
Final Array Representation
The resulting array would be a concatenation of these embeddings:
[0.12, 0.34, 0.56, ..., 0.78, 0.90, 0.23, ...]
Dimensions
The array's dimensions depend on the embedding size (e.g., 128, 256, 512) and sequence length.
For instance:
Embedding size: 128
Sequence length: 2 (2 tokens)
Array shape: (2, 128)
Would you like more details on embeddings or tokenization?
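To make the token-to-numbers step above concrete, here is a toy Python sketch. The two-word vocabulary, the whitespace "tokenizer," and the random embedding table are all made-up stand-ins for what a real model (WordPiece, SentencePiece, etc.) would use; only the resulting (2, 128) shape is meant to match the example above.
```python
import numpy as np

# Made-up vocabulary and embedding table, just to illustrate the shapes.
vocab = {"hi": 0, "there": 1}
embedding_dim = 128
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(vocab), embedding_dim))

def embed(phrase):
    """Split on whitespace (a stand-in for real subword tokenization),
    map each token to its id, and look up the embedding vectors."""
    tokens = phrase.lower().split()        # ["hi", "there"]
    ids = [vocab[t] for t in tokens]       # [0, 1]
    return embedding_table[ids]            # one 128-dim vector per token

print(embed("hi there").shape)  # (2, 128): sequence length 2, embedding size 128
```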
Quote: itsmejeffQuote: DieterI believe Linus Torvalds said recently that AI is interesting, but it's 90% marketing hype right now.
It's nice to know some smart people agree with me.
link to original post
LLMs are not AI. They are language models. ChatGPT has no idea what it is telling you. It cannot understand what you write. It is the "Chinese room." The system can respond to you, but it has no idea what the meaning is.
LLMs are actually incredibly dumb. Use one for any amount of time and it becomes apparent.
link to original post
I am reasonably familiar with the difference between "Eliza" and intelligence.
To me, the crazy part is that "bad" computerized responses are still far beyond the "good enough" standard for most users.
By mastering human language it simulates humanness, and we think of it as thinking. That's the trick. I've heard it described as a "BS artist" and it's kind of like that, one of those people who can use words to simulate competence and authority but without really knowing the material, and get it past someone who isn't paying attention or also doesn't know the material.
I don't know that I could effectively follow instructions to code a game since the closest I came to coding is Visual Basic 4.0 (circa ancient). But I find it fascinating that an untrained person (or at least some or few untrained persons) might be able to do it from prompts alone. I expect I would have to ask 10 times the questions about what things meant.
Quote:It worked. The game was exactly as I envisioned it, with 144 empty boxes and a two-player system for marking "X's" and "O's"
I realized that someone looking at this grid would have no idea how to proceed, so I asked ChatGPT if we could add rules below the game. ChatGPT spit out new code that included the game rules.
Here's the original prompt:
I want to create a variant on the game tic-tac-toe, but I need it to be more complex. So, the grid should be 12-by-12. It should still use "x" and "o". Rules include that any player can block another by placing their "x" or "o" in any space around the grid, as long as it is in one of the spaces right next to the other player. They can choose to place their "X" or "o" in any space, as well, to block future moves. The goal is to be the first one to have at least six "x" or "o" in any row, column, or diagonal before the other player. Remember, one player is "x" and the other is "o". Please program this in simple HTML and JavaScript. Let's call this game: Tic-Tac-Go.
https://www.msn.com/en-us/news/technology/chatgpt-coded-a-game-for-me-in-seconds-and-i-am-simply-astounded-and-coders-should-be-very-worried/ar-AA1tGW9j?ocid=msedgntp&pc=HCTS&cvid=aef02f25ecb44f4d84f129439e29fc27&ei=64
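I haven't seen the code ChatGPT actually produced (the article says it was plain HTML and JavaScript), but the core rule in that prompt, being the first to get at least six marks in any row, column, or diagonal on a 12-by-12 grid, comes down to a small check along these lines. A Python sketch, not the article's code:
```python
SIZE, WIN_LEN = 12, 6  # 12-by-12 board, six in a row to win

def has_win(board, mark):
    """board is a 12x12 list of lists holding 'x', 'o', or None.
    Returns True if `mark` has at least six in a row, column, or diagonal."""
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]  # right, down, two diagonals
    for r in range(SIZE):
        for c in range(SIZE):
            for dr, dc in directions:
                count, rr, cc = 0, r, c
                while 0 <= rr < SIZE and 0 <= cc < SIZE and board[rr][cc] == mark:
                    count += 1
                    if count >= WIN_LEN:
                        return True
                    rr, cc = rr + dr, cc + dc
    return False

# Example: six 'x' marks across the top row is a win.
board = [[None] * SIZE for _ in range(SIZE)]
for col in range(6):
    board[0][col] = "x"
print(has_win(board, "x"))  # True
```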
Quote: rxwineI don't know that I could effectively follow instructions to code a game since the closest I came to coding is Visual Basic 4.0 (circa ancient). But I find it fascinating that an untrained person (or at least some or few untrained persons) might be able to do it from prompts alone. I expect I would have to ask 10 times the questions about what things meant.
Quote:It worked. The game was exactly as I envisioned it, with 144 empty boxes and a two-player system for marking "X's" and "O's"
I realized that someone looking at this grid would have no idea how to proceed, so I asked ChatGPT if we could add rules below the game. ChatGPT spit out new code that included the game rules.
Here's the original prompt:
I want to create a variant on the game tic-tac-toe, but I need it to be more complex. So, the grid should be 12-by-12. It should still use "x" and "o". Rules include that any player can block another by placing their "x" or "o" in any space around the grid, as long as it is in one of the spaces right next to the other player. They can choose to place their "X" or "o" in any space, as well, to block future moves. The goal is to be the first one to have at least six "x" or "o" in any row, column, or diagonal before the other player. Remember, one player is "x" and the other is "o". Please program this in simple HTML and JavaScript. Let's call this game: Tic-Tac-Go.
https://www.msn.com/en-us/news/technology/chatgpt-coded-a-game-for-me-in-seconds-and-i-am-simply-astounded-and-coders-should-be-very-worried/ar-AA1tGW9j?ocid=msedgntp&pc=HCTS&cvid=aef02f25ecb44f4d84f129439e29fc27&ei=64
link to original post
The biggest reason to NOT be fearful is these AI creations cannot be protected by copyright.
For now AI generated materials are basically fan fiction. Fan fiction has always had some followers but ultimately no one is going to get behind a movie or video game with an expensive marketing budget to drive sales knowing that some dude in his basement can make copies with abandon and sell them.
Once an AI generated project achieves copyright protection that's when it's time to worry.
That's what my AI project is attempting to do. Won't be finished before summer it looks like.
Quote:To reverse engineer a modern computer, someone from 200 years ago (circa 1824) would need to discover or invent a vast array of technologies and concepts. Here's a detailed analysis of the key areas they would need to explore:
### 1. **Basic Electrical Knowledge**
- **Electricity and Magnetism**: Understanding the principles of electricity and magnetism, which were still being developed in the early 19th century. Key figures like Michael Faraday and James Clerk Maxwell would later formalize these concepts.
- **Batteries and Power Sources**: The invention of reliable power sources, such as batteries (Alessandro Volta's voltaic pile was invented in 1800), would be crucial.
### 2. **Electronic Components**
- **Vacuum Tubes**: Early computers used vacuum tubes for amplification and switching. The concept of thermionic emission (discovered by Thomas Edison in 1883) would be essential.
- **Transistors**: The invention of the transistor in 1947 by John Bardeen, Walter Brattain, and William Shockley revolutionized electronics by replacing vacuum tubes.
- **Integrated Circuits**: Jack Kilby and Robert Noyce independently invented the integrated circuit in 1958, which allowed for the miniaturization of electronic components.
### 3. **Digital Logic and Computing Principles**
- **Binary System**: Understanding the binary number system, which is fundamental to digital computing.
- **Boolean Algebra**: Developed by George Boole in the mid-19th century, this mathematical framework is essential for designing digital circuits.
- **Logic Gates**: The basic building blocks of digital circuits, which perform logical operations.
### 4. **Computer Architecture**
- **Stored Program Concept**: Proposed by John von Neumann in the 1940s, this concept involves storing program instructions in memory.
- **Microprocessors**: The development of the microprocessor in the early 1970s, which integrated the functions of a computer's central processing unit (CPU) onto a single chip.
### 5. **Software and Programming Languages**
- **Machine Code**: The lowest-level programming language, consisting of binary instructions that the computer's CPU can execute directly.
- **High-Level Programming Languages**: Languages like FORTRAN (1957), COBOL (1959), and later C (1972) made programming more accessible and efficient.
### 6. **Data Storage and Retrieval**
- **Magnetic Storage**: The invention of magnetic tape and hard drives for data storage.
- **Solid-State Drives**: Modern computers often use SSDs, which rely on flash memory technology.
### 7. **Input and Output Devices**
- **Keyboards and Mice**: Devices for user input.
- **Monitors**: For visual output, evolving from cathode-ray tubes (CRTs) to modern liquid crystal displays (LCDs) and light-emitting diodes (LEDs).
### 8. **Networking and Communication**
- **Telecommunications**: The development of telegraphy and later telephone systems.
- **Internet**: The creation of the ARPANET in the late 1960s, which evolved into the modern Internet.
### 9. **Materials Science**
- **Semiconductors**: Understanding materials like silicon, which are essential for making transistors and integrated circuits.
- **Advanced Manufacturing Techniques**: Methods for producing microchips and other components with high precision.
### 10. **Mathematical Foundations**
- **Algorithms**: The development of algorithms for performing computations efficiently.
- **Cryptography**: Techniques for secure communication, which are crucial for modern computing.
So, even if the government has something, it doesn't necessarily mean we could figure it out with current knowledge.
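As a small illustration of the digital-logic layer that list touches on (binary numbers, Boolean algebra, logic gates), here is a Python sketch of a half adder, the little circuit that adds two one-bit numbers. Writing the gates as Python functions is purely for illustration; in a real computer they are physical transistor circuits.
```python
def AND(a, b):   # outputs 1 only when both inputs are 1
    return a & b

def XOR(a, b):   # outputs 1 when exactly one input is 1
    return a ^ b

def half_adder(a, b):
    """Add two single bits; returns (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

# Truth table: 0+0=00, 0+1=01, 1+0=01, 1+1=10 in binary.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> carry={c}, sum={s}")
```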
I look at it the other way: If you take a modern computer, and place it in the hands of people from two hundred years ago who are competent enough to properly analyze it, it acts as an excellent roadmap and impetus for development since it's known to be something possible, and that other people managed to do it without help in two hundred years.
Alien tech doesn't quite work that way since it's something that might only be possible under specific conditions (materials that don't exist on Earth?), but it might introduce general concepts that are useful.
Quote: VenthusI look at it the other way: If you take a modern computer, and place it in the hands of people from two hundred years ago who are competent enough to properly analyze it, it acts as an excellent roadmap and impetus for development since it's known to be something possible, and that other people managed to do it without help in two hundred years.
Alien tech doesn't quite work that way since it's something that might only be possible under specific conditions (materials that don't exist on Earth?), but it might introduce general concepts that are useful.
link to original post
If a modern desktop somehow returned to 1824, it wouldn't have a power source. If Thomas Edison stumbled upon a time-lost cell phone, it would die before he did much with it. Someone from 1972 would think it was a Star Trek prop.
Quote: billryanQuote: VenthusI look at it the other way: If you take a modern computer, and place it in the hands of people from two hundred years ago who are competent enough to properly analyze it, it acts as an excellent roadmap and impetus for development since it's known to be something possible, and that other people managed to do it without help in two hundred years.
Alien tech doesn't quite work that way since it's something that might only be possible under specific conditions (materials that don't exist on Earth?), but it might introduce general concepts that are useful.
link to original post
If a modern desktop somehow returned to 1824, it wouldn't have a power source. If Thomas Edison stumbled upon a time-lost cell phone, it would die before he did much with it. Someone from 1972 would think it was a Star Trek prop.
link to original post
You would have to send a Jackery with it. Or a Bluetti and some solar panels. Also instructions. Although probably no guy read the instructions 200 years ago either.
Those difficulties are assuming that there aren't deliberate complications, like "blobtronics" assembly techniques obscuring the individual components.
With alien technology, there is also the very real probability that it is designed to work under different environmental conditions. One of the anecdotes I heard around the electronics lab was that germanium is a wonderful semiconductor; our planet is just 50°C too hot.
Quote: itsmejeff
LLMs are not AI. They are language models. ChatGPT has no idea what it is telling you. It cannot understand what you write. It is the "Chinese room." The system can respond to you, but it has no idea what the meaning is.
LLMs are actually incredibly dumb. Use one for any amount of time and it becomes apparent.
Humans are just “Chinese rooms” also. It’s all just stimuli and reactions.