billryan
Joined: Nov 2, 2009
  • Threads: 179
  • Posts: 10669
March 7th, 2021 at 11:34:26 AM permalink
Quote: OnceDear

Yes. I'd seen that, too, which is why I observed that parts of America had such a department. It doesn't even seem to be a state-wide department, but what would I know?

As to typos... And not being pedantic or trying to bully someone for using Alexa. . .

Quote: HokusPokus

I have a bachelor's in phycology, what physiological skills are relevant to game design?


I would never expect to see myself misspell my own physics degree as a Bachelor's in psychics. One just doesn't spend years getting such a qualification and then spell it so wrong! Does one? Even with Alexa?
And to conflate psychology with physiology and phycology, let alone 'description' with 'discrimination' and 'aid' with 'aide'! Your job is managing aid, for goodness sake.

Somehow HokusPokus's posts just don't seem right and I'm not the only one to notice that.

Now, if someone presented a benefit claim to HokusPokus and he thought it was fraudulent, then, sure, he'd be entitled and obliged to investigate with the tools at his disposal. If an anonymous person, or even an identified person made any claim whatsoever, unrelated to such submitted paperwork, then I cannot see what business it is of his or his department.
"I shot the sheriff." Does that admission entitle him to search his departments computer for evidence, thereof?





Who shot the deputy? It's nearly fifty years and still no answer.
EvenBob
Joined: Jul 18, 2010
  • Threads: 431
  • Posts: 24860
March 7th, 2021 at 12:04:23 PM permalink
Quote: ChumpChange

Robocop? Las Vegas Apartment Complex Deploys Human-Sized Robot To Fight Crime | ZeroHedge
https://www.zerohedge.com/technology/robocop-las-vegas-apartment-complex-deploys-human-sized-robot-fight-crime



Take about 30 seconds to throw a blanket
over the thing and sell it for 500 bucks.
"It's not enough to succeed, your friends must fail." Gore Vidal
ChumpChange
Joined: Jun 15, 2018
  • Threads: 46
  • Posts: 2084
Thanks for this post from:
CrystalMath, OnceDear
March 7th, 2021 at 12:13:35 PM permalink
I'll be in my panic room for the rest of this thread.
rxwine
Joined: Feb 28, 2010
  • Threads: 169
  • Posts: 10087
March 7th, 2021 at 1:52:55 PM permalink
Quote: OnceDear

Yes. I'd seen that, too, which is why I observed that parts of America had such a department. It doesn't even seem to be a state-wide department, but what would I know?

As to typos... And not being pedantic or trying to bully someone for using Alexa. . .

Quote: HokusPokus

I have a bachelor's in phycology, what physiological skills are relevant to game design?


I would never expect to see myself misspell my own physics degree as a Bachelor's in psychics. One just doesn't spend years getting such a qualification and then spell it so wrong! Does one? Even with Alexa?
And to conflate psychology with physiology and phycology, let alone 'description' with 'discrimination' and 'aid' with 'aide'! Your job is managing aid, for goodness sake.

Somehow HokusPokus's posts just don't seem right and I'm not the only one to notice that.

Now, if someone presented a benefit claim to HokusPokus and he thought it was fraudulent, then, sure, he'd be entitled and obliged to investigate with the tools at his disposal. If an anonymous person, or even an identified person made any claim whatsoever, unrelated to such submitted paperwork, then I cannot see what business it is of his or his department.
"I shot the sheriff." Does that admission entitle him to search his departments computer for evidence, thereof?



Okay, detectives. If someone works somewhere other than home, they should be able to correctly describe the layout of the area they work in, including back offices and such. A layout can be confirmed at some point. A true insider can provide specific details if asked.

(details that would be unknown to a casual visitor)
Quasimodo? Does that name ring a bell?
TumblingBones
Joined: Dec 25, 2016
  • Threads: 28
  • Posts: 438
March 7th, 2021 at 2:00:24 PM permalink
I recognize that since the question was asked this thread has deviated into other areas but I'm going to assume there is still some interest in an answer.

Quote: Mission146

Honest Question: Why can't the best AI bots carry on a meaningful conversation? If they can't even learn how to converse properly, how might they learn how to program just by being given a thing to program?


I guess it depends on what you mean by "meaningful". Is the conversation's topic limited in scope or is it unrestricted? Would you consider an AI that can answer questions and carry out tasks based strictly on verbal interaction to be carrying out a meaningful conversation? For example, an AI travel agent that can book flights and hotels for you? Or, a better example, the talking computer from Star Trek. If so, these exist. If, however, you want something that passes a Turing Test then you will have to wait awhile.

Quote: Mission146

Next Question: Wouldn't the AI have to have a point of reference? I guess what I'm asking is wouldn't you have to program into the AI a definition of a thing in order to then ask it to program that thing? How would it program something it knows nothing about?


Very good question. Starting a Machine Learning (ML) system from scratch is an active and complex area of research, so what follows is a gross simplification. I think of it as a multi-stage process....
  1. clustering is just grouping things without any assignment of a label to a group. For example, an AI that looks at thousands of pictures of animals and groups them by species without any real understanding (or labeling) as "cat", "dog", "horse", etc.
  2. classification is when a meaningful label gets assigned to clusters. In other words, pictures of cats are correctly labeled as "cat". Or, for some non-imagery examples, labeling an AI-generated sentence as "good grammar" or an AI-generated software module as "good code". How this gets done varies. In some cases a human provides a point of reference (e.g. "this is a picture of a cat"). The technical term for this is supervised learning. In contrast, with unsupervised learning there is no human input to provide the point of reference. As an example, once it has the clusters of images, an AI could assign labels by crawling the web and analyzing the text in any captions or URLs it finds for matching images (e.g. http://i.huffpost.com/gen/1486888/images/o-GRUMPY-CAT-facebook.jpg).
  3. So now the AI can identify "cats" but has no real understanding of what a cat is (e.g., it's a "mammal", which is a "living thing", and is often used by "humans" as a "pet"). Putting a label into context requires an ontology. If this isn't provided by the programmers (which in most situations is what happens), then we've got to give the AI the ability to do ontological learning.
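To make steps 1 and 2 concrete, here's a toy sketch in Python. Everything in it is invented for illustration: 2-D points stand in for images, the starting centers are picked deterministically, and the "cat"/"dog" names play the role of the human's input. Real systems use far richer features and real clustering libraries.

```python
def kmeans(points, init_centers, iters=20):
    """Plain k-means: group points by proximity, with no idea what the groups mean."""
    centers = list(init_centers)
    clusters = []
    for _ in range(iters):
        # assignment step: each point joins the cluster of its nearest center
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # update step: move each center to the mean of its cluster
        centers = [(sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
                   if cl else centers[i] for i, cl in enumerate(clusters)]
    return clusters

# Step 1 (clustering): two well-separated blobs of toy "images", no labels anywhere.
cats = [(1.0 + d, 1.0 + d) for d in (0.0, 0.1, 0.2, -0.1)]
dogs = [(5.0 + d, 5.0 + d) for d in (0.0, 0.1, 0.2, -0.1)]
points = cats + dogs
clusters = kmeans(points, init_centers=[points[0], points[-1]])

# Step 2 (classification): a human names one example per cluster (supervised
# learning) and the label propagates to the whole group.
labeled = {}
for cl in clusters:
    name = "cat" if cl[0][0] < 3.0 else "dog"   # stand-in for the human's input
    for p in cl:
        labeled[p] = name
```

In the unsupervised variant I described, that one human-provided name would instead come from something like scraped captions.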

All of the above can be dealt with by a sufficiently advanced AI. The final sticking point IMO is the issue of empirical knowledge. The AIs and robots of today have very limited sensory inputs. Furthermore, those sensors don't necessarily align with human ones. So how can an AI grok that a cat is soft and cuddly and makes a good companion for lonely old ladies?

Quote: Mission146

I know AI can do some pretty amazing things. My understanding is that LC0 (also called Leela Zero) beat Stockfish in the Top Engine Chess Championship in both 2019 and the first quarter of 2020...with LC0 being an open source neural network. As I understand it, LC0 started knowing nothing about the game except the basic rules and conditions for winning. I guess Stockfish has beaten LC0 in the three most recent tournaments, though.



Now you're talking about the third way that an AI learns. For something like playing a game, the approach is to use reinforcement learning. The idea is that the AI learns by taking actions and then evaluating how well it was "rewarded" for each action (e.g., what percentage of the time will making this move in that situation result in victory?). AIs that learn via reinforcement will start their education in a simulated environment before being turned loose in the real world. Self-driving cars, for instance.
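A minimal sketch of that reward-driven loop, using tabular Q-learning on a made-up five-cell corridor where reaching the rightmost cell counts as a "win". The environment, the reward, and the hyperparameters are all invented for illustration; real systems like game-playing engines use far bigger state spaces and neural networks instead of a table.

```python
import random

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                      # step left, step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate
rng = random.Random(0)

for episode in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit the current value estimates,
        # sometimes explore with a random move
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # the "how well was I rewarded for that action?" update
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy in every non-goal state is "step right" (+1).
policy = {s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N_STATES)}
```

This is also why the simulated-environment stage matters: the agent "loses" (wanders pointlessly) thousands of times before the reward signal propagates back, which is cheap in simulation and expensive in the real world.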

Quote: USpapergames

So I will happily answer all you questions but 1st could you just do some research on GTP3? I think you will learn a lot, AGI is earlier already here in its infancy state or GTP4 might be that breakthrough. But honestly my best is GPT3 is the 1st AGI program and it has already been proven to do incredible things, like programing any software from just simple a few sentences of description.


GPT-3 (not "GTP" as in the quoted text) is not an Artificial General Intelligence (AGI). It is a family of Artificial Neural Networks (ANN) that have been designed specifically for Natural Language Processing (NLP) and working with text. AGI (also called "hard AI") is very different. I like the definition they use in Wikipedia: "the ability of an intelligent agent to understand or learn any intellectual task that a human being can." AGI does not yet exist and may never be achieved, but my colleagues and I are having a lot of fun working on it :)
My goal of being well informed conflicts with my goal of remaining sane.
USpapergames
Joined: Jun 23, 2020
  • Threads: 18
  • Posts: 807
March 7th, 2021 at 2:57:25 PM permalink
Quote: CrystalMath

Better watch out. I got a suspension for the B-word.



Wow, you're looking out for me? I've always liked you and was hoping you would come around. My opinion of WoV has really been flipping 180 degrees lately.
Math is the only true form of knowledge
USpapergames
Joined: Jun 23, 2020
  • Threads: 18
  • Posts: 807
March 7th, 2021 at 3:04:16 PM permalink
So I only came back because I got a message from OnceDear wanting an explanation, so here goes. The last thing that I want is special privileges, since my life hasn't been filled with them. If AxelWolf thinks it's unfair that I not be suspended, and I can't argue because he makes some logical points, then suspending myself is what I should do. All I have been preaching since I've been a member here is that I value fairness. I won't make any more posts until I have AxelWolf's approval. It's like a poker player taking money off the table to give to their wife while still playing: it's acceptable if everyone agrees, but the second one person has an issue it's a casino violation.

Also, let me clarify something. I had to ask myself why I didn't just ask Dr. Jacobson to stop bullying me, & the answer that I came up with is that it makes me look pathetic to ask someone to stop attacking you. And it's more complicated than that; I have been dealing with self-destructive behavior for a very long time. I remember as a kid playing soccer and caring so much about dribbling the ball past the other kids, but for some reason I would just crack under pressure or just give up completely once I reached the goalie, & this is by far not the only example of me shooting myself in the foot when I need to run. I have a serious problem in that I have an appetite for put-downs. A part of me enjoys it when someone attacks what I care about, because I use that energy to drive my passion further. So what I'm saying is that a part of me is so messed up that I want Dr. Jacobson to continue with his outlandish claims about my skill set, but I don't want him to later use the excuse that he didn't know what he was doing, which is why I just wanted to inform him that he was being a bully.

Nvm, I guess AxelWolf already said something
Last edited by: USpapergames on Mar 7, 2021
Math is the only true form of knowledge
USpapergames
Joined: Jun 23, 2020
  • Threads: 18
  • Posts: 807
March 7th, 2021 at 3:53:29 PM permalink
Quote: TumblingBones

I recognize that since the question was asked this thread has deviated into other areas but I'm going to assume there is still some interest in an answer.


I guess it depends on what you mean by "meaningful". Is the conversation's topic limited in scope or is it unrestricted? Would you consider an AI that can answer questions and carry out tasks based strictly on verbal interaction to be carrying out a meaningful conversation? For example, an AI travel agent that can book flights and hotels for you? Or, a better example, the talking computer from Star Trek. If so, these exist. If, however, you want something that passes a Turing Test then you will have to wait awhile.

Quote: Mission146

Next Question: Wouldn't the AI have to have a point of reference? I guess what I'm asking is wouldn't you have to program into the AI a definition of a thing in order to then ask it to program that thing? How would it program something it knows nothing about?


Very good question. Starting a Machine Learning (ML) system from scratch is an active and complex area of research, so what follows is a gross simplification. I think of it as a multi-stage process....
  1. clustering is just grouping things without any assignment of a label to a group. For example, an AI that looks at thousands of pictures of animals and groups them by species without any real understanding (or labeling) as "cat", "dog", "horse", etc.
  2. classification is when a meaningful label gets assigned to clusters. In other words, pictures of cats are correctly labeled as "cat". Or, for some non-imagery examples, labeling an AI-generated sentence as "good grammar" or an AI-generated software module as "good code". How this gets done varies. In some cases a human provides a point of reference (e.g. "this is a picture of a cat"). The technical term for this is supervised learning. In contrast, with unsupervised learning there is no human input to provide the point of reference. As an example, once it has the clusters of images, an AI could assign labels by crawling the web and analyzing the text in any captions or URLs it finds for matching images (e.g. http://i.huffpost.com/gen/1486888/images/o-GRUMPY-CAT-facebook.jpg).
  3. So now the AI can identify "cats" but has no real understanding of what a cat is (e.g., it's a "mammal", which is a "living thing", and is often used by "humans" as a "pet"). Putting a label into context requires an ontology. If this isn't provided by the programmers (which in most situations is what happens), then we've got to give the AI the ability to do ontological learning.

All of the above can be dealt with by a sufficiently advanced AI. The final sticking point IMO is the issue of empirical knowledge. The AIs and robots of today have very limited sensory inputs. Furthermore, those sensors don't necessarily align with human ones. So how can an AI grok that a cat is soft and cuddly and makes a good companion for lonely old ladies?



Now you're talking about the third way that an AI learns. For something like playing a game, the approach is to use reinforcement learning. The idea is that the AI learns by taking actions and then evaluating how well it was "rewarded" for each action (e.g., what percentage of the time will making this move in that situation result in victory?). AIs that learn via reinforcement will start their education in a simulated environment before being turned loose in the real world. Self-driving cars, for instance.


GPT-3 (not "GTP" as in the quoted text) is not an Artificial General Intelligence (AGI). It is a family of Artificial Neural Networks (ANN) that have been designed specifically for Natural Language Processing (NLP) and working with text. AGI (also called "hard AI") is very different. I like the definition they use in Wikipedia: "the ability of an intelligent agent to understand or learn any intellectual task that a human being can." AGI does not yet exist and may never be achieved, but my colleagues and I are having a lot of fun working on it :)



Ok, let's talk about AI, TumblingBones. Let me just say that your comment is definitely getting my attention & I'd like to be your friend, so let's play.

1st, I make the claim that GPT-3 might be the 1st AGI program. Which means it barely passes for AGI, as if it were in its infancy. We both know that in theory there are many different ways in which AGI can exist, so are you saying that an AGI program cannot be created from an ANN??? See, my point was that in theory it's very difficult to distinguish an AGI program that barely passes for AGI. Which is why I said we needed to watch for GPT-4, which hopefully will have so many improvements that its AGI capabilities would be obvious. Even Elon Musk said GPT-3 appears to resemble an AGI program.
Math is the only true form of knowledge
USpapergames
Joined: Jun 23, 2020
  • Threads: 18
  • Posts: 807
March 7th, 2021 at 4:24:21 PM permalink
Quote: Mission146

Honest Question: Why can't the best AI bots carry on a meaningful conversation? If they can't even learn how to converse properly, how might they learn how to program just by being given a thing to program?

Next Question: Wouldn't the AI have to have a point of reference? I guess what I'm asking is wouldn't you have to program into the AI a definition of a thing in order to then ask it to program that thing? How would it program something it knows nothing about?

I know AI can do some pretty amazing things. My understanding is that LC0 (also called Leela Zero) beat Stockfish in the Top Engine Chess Championship in both 2019 and the first quarter of 2020...with LC0 being an open source neural network. As I understand it, LC0 started knowing nothing about the game except the basic rules and conditions for winning. I guess Stockfish has beaten LC0 in the three most recent tournaments, though.



I'm not the AI expert, just an enthusiast. But I said I'm going to answer your questions, and there's a chance my answers will be better than the AI experts'. Meaningful conversations are not possible because that would require AGI, and not some task-oriented AI program which is only going to be focused on its prerogatives and not concerned for your feelings. Think of the movie Terminator: Skynet is AI gone wrong. It's not considered sentient life but just software following pre-established programming. AGI is like David from Prometheus, a self-learning program that rewrites its own code and establishes missions around its ever-changing core values. AGI should be able to do everything that we humans can do & probably much more. We often think computers are cold machines, but I think computers could actually learn to feel and process higher ranges of complex emotions than us humans.

So the answer is no: AGI should be able to comprehend abstract things and understand the meaning behind our words far better than any task-oriented AI program. Already GPT-3 can figure out the meaning behind abstract commands, which is why it can have conversations with humans without being programmed with anything about language. Self-learning algorithms are rather scary in how many simulated hours of learning they can achieve in real time. But the reason why GPT-3 isn't a pro at language yet is because it can only learn as fast as how many hours it can spend talking to humans, as opposed to learning how to play chess by playing itself over and over for 200 simulated years, which could be a week in real time. There is an AI program that is close to beating the best players at table tennis, but it can't get any better without playing in real time, so 200 years of its learning is going to have to be 200 years in real time, since it has to learn the physics of how to hit the ball in the real world.

No, you don't need to program in a definition of the thing; often you can take a program like AlphaZero, upload it into a simulation, and it will learn how to play optimally just from losing over and over again until it figures out how to win. AlphaZero can currently beat any Atari & Nintendo game, and the list is growing!
Math is the only true form of knowledge
USpapergames
Joined: Jun 23, 2020
  • Threads: 18
  • Posts: 807
March 7th, 2021 at 5:07:31 PM permalink
Quote: SOOPOO

I’d be willing to bet HP and USPG ‘live’ in the same zip code...
still is fun reading the posts of USPG and HP as long as you don’t take them seriously....



So people think I am Hokus???
Math is the only true form of knowledge
