Welcome to DU! The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards.

BootinUp

(49,206 posts)
3. There is more than one meaning but
Wed Dec 4, 2024, 02:46 PM
Dec 4

In this context the G means general. So artificial general intelligence. IIRC.

Voltaire2

(14,999 posts)
7. and it is the point where one of these ai systems can intelligently react to the world in real time
Wed Dec 4, 2024, 03:19 PM
Dec 4

Currently they obviously fail on both counts: the 'world in real time' part, since their systems lag behind by months or, in some cases, years; and the 'intelligently react' part, since quite frequently they don't.

Such a system would basically learn on its own instead of being directed to learn.

haele

(13,712 posts)
9. Probably G means Generative. And it's obvious he's illiterate.
Wed Dec 4, 2024, 03:29 PM
Dec 4

Should be GAI - Generative AI, otherwise known as "AI that can think for itself rather than regurgitate internet searches and auto-complete prompt sentences".
GAI is the Unicorn they're trying to code for on off-the-shelf tech. That's a big problem leading to a cliff face.
Modern Tech Bros like him are all about minimizing time to market and maximizing profit, taking shortcuts that leverage existing tech and code to bring something pretty to market.
Most of the time it's just better packaging of a product most 20-year-olds could cobble together if they had indulgent parents giving them a man-cave, time, tech - and unlimited storage.
There's only so much one can do with recursive "if-and", "if-or", "if-then", and "not-if" code. After that, we need quantum tech, or bio-computing (neural-net computing), or some sort of compressed intuitive gap-analysis widget that won't take 5 acres of server farms' worth of computing and storage to come up with logical assessments that have enough "gut" to make weighted intuitions about unknown influences.

The "coding" might seem close to GAI, but the technology just isn't there.
By the way, Tech Bros never say GAI, they always say "Generative AI", because GAI sounds, well, gay.

Haele

thebigidea

(13,365 posts)
10. so go convince tens of thousands of people that AGI is the wrong term, should be a cinch for such a huge brain
Wed Dec 4, 2024, 03:43 PM
Dec 4

haele

(13,712 posts)
13. I can't convince them, and it's not because of "my big brain"
Wed Dec 4, 2024, 04:12 PM
Dec 4

It's because they don't want to use GAI in public, which they would otherwise use to describe Generative Artificial Intelligence, the long agreed upon (since at least 2009) descriptor for that particular field of coding.

When it was just coders trying to teach AI to be generative back in the early 2010s, there were sooo many jokes, especially when they could get the code to auto-complete a prompt with a reasonable return paragraph instead of word salad.
Then the suits started looking at products, and decided those initials didn't scan so well. Altman decided on AGI.
What does AGI stand for? Artificial Generative Intelligence? (Awkward phrasing - English majors would blanch)
Advanced Generative Intelligence? Another Great Idea?

Weird, but I guess it scans better to him than AIG, which can still remind people of the crash of 2008...

Haele



Bernardo de La Paz

(51,759 posts)
15. No. GAI is here and it does NOT "think for itself". AGI is Artificial General Intelligence, that's the unicorn
Thu Dec 5, 2024, 06:31 AM
Dec 5

GAI, Generative AI, is here now. It is a form of neural net that can run forwards and backwards. When run backwards, it makes text or images that it would recognize if it were run forward. That makes it prone to errors (so-called "hallucinations", which is a ridiculous term for "error") and of limited use, but still useful enough for companies to invest billions in it and gain functionality.

GAI is NOT recursive "if statements". Nor is AGI. Even applying the term "recursive" to if statements is kind of illiterate: if statements do not recurse and cannot recurse; at most they can be nested. "If statements" were the first steps of 1960s AI.
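The distinction can be sketched in a few lines of Python (an illustration invented for this reply, not code from the thread): conditionals can only be written out, one inside another, while recursion is a property of functions calling themselves.

```python
# Nested vs. recursive: "if" statements can only be nested, never recursive.
# Recursion happens when a *function* calls itself; the "if" inside it is
# just the base-case check.

def nested_ifs(x: int) -> str:
    # Nested conditionals: every branch is spelled out explicitly.
    if x > 0:
        if x % 2 == 0:
            return "positive even"
        else:
            return "positive odd"
    else:
        return "non-positive"

def recursive_countdown(n: int) -> list:
    # Recursion: the function invokes itself until the base case holds.
    if n <= 0:
        return []
    return [n] + recursive_countdown(n - 1)

print(nested_ifs(4))           # positive even
print(recursive_countdown(3))  # [3, 2, 1]
```

However deeply you nest the first function's branches, it never re-enters itself; only the second one does.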

AGI is the unicorn.

Nothing available now or for at least the next few years and probably for a decade or two at least is "thinking for itself". That would be AGI, Artificial General Intelligence.

BootinUp

(49,206 posts)
5. Lost my chance to read it again (paywall), but
Wed Dec 4, 2024, 03:13 PM
Dec 4

He just says it won’t change things much. He was being interviewed by the NYT. So we will see more of the interview later I assume.

Bernardo de La Paz

(51,759 posts)
6. I think that for some time it will not be as good as some people hope/fear.
Wed Dec 4, 2024, 03:17 PM
Dec 4

"Less good" meaning not as effective, efficient, useful, or dependable as it would need to be to be truly general or truly intelligent.

Note that current "AI" is really very narrow and unreliable: if it "discovers" something like a new protein folding structure, it has to be double-checked. Generative AI (not AGI) works by essentially working backwards. Going forwards an AI can recognize a picture of a man holding two kittens in a basket as "a man holding two kittens in a basket". If you ask it to make a picture of a woman holding three puppies in a shopping bag, it will cook up something that it would recognize as fitting the description. But it could easily give the hand five fingers and a thumb because it would still "recognize" it, even though we might quickly notice the unreal aspects.

How long that "for some time" might be, I don't know. But synergistic effects might accelerate the pace. It might happen gradually. Or it might happen gradually and then all of a sudden.

Happy Hoosier

(8,604 posts)
8. I agree...
Wed Dec 4, 2024, 03:23 PM
Dec 4

I've been working with AI agents quite a bit lately. They can do some things very well, such as sorting data or making connections between data sets based on broad parameters. What they are STILL not good at is judgment. We forget that human intelligence evolved over hundreds of millions of years... we have developed, for want of a better term, bullshit filters that allow us to identify nonviable solutions at a decent rate. It is by no means infallible, but it is reasonably effective. OTOH, most AI agents do a poor job of assessing a decision once it's made but before it's executed. Not that they CAN'T develop that capability, but it will take time and experience.
<imports>
</imports>
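The "connections between data sets based on broad parameters" kind of task can be sketched with nothing but the standard library (vendor and invoice names here are made up for illustration): fuzzy-match records from one set against keys in another, with a similarity cutoff standing in for the "broad" parameter.

```python
import difflib

# Hypothetical data: vendor names from one system, invoice totals keyed by
# slightly different names in another.
vendors  = ["Acme Corporation", "Globex Inc", "Initech LLC"]
invoices = {"Acme Corp": 1200, "Globex Incorporated": 850, "Umbrella Co": 300}

def connect(names, records, cutoff=0.6):
    # For each name, find the closest record key above a similarity cutoff.
    matches = {}
    for name in names:
        hit = difflib.get_close_matches(name, records, n=1, cutoff=cutoff)
        matches[name] = hit[0] if hit else None
    return matches

print(connect(vendors, invoices))
# Acme and Globex find their near-duplicate keys; Initech matches nothing.
```

Note that the code happily links whatever clears the cutoff; deciding whether a borderline match is actually the same vendor is exactly the judgment step the agents are still bad at.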

Latest Discussions»General Discussion»Sam Altman lowers the bar...