General Discussion
Sam Altman lowers the bar for AGI / "My guess is we will hit AGI sooner than most people in the world think and it will matter much less," says OpenAI's CEO.
https://www.theverge.com/2024/12/4/24313130/sam-altman-openai-agi-lower-the-bar
DBoon
(23,248 posts)
Not defined anywhere in the article.
BootinUp
(49,206 posts)
In this context the G means general. So artificial general intelligence. IIRC.
Voltaire2
(14,999 posts)
Currently they obviously fail on both counts: on the 'world in real time' part, since their systems lag behind by months or, in some cases, years; and on the 'intelligently react' part, since quite frequently they don't.
Such a system would basically learn on its own instead of being directed to learn.
haele
(13,712 posts)
Should be GAI - Generative AI, otherwise known as "AI that can think for itself rather than regurgitate internet searches and auto-complete prompt sentences".
GAI is the unicorn they're trying to code for on off-the-shelf tech. That's a big problem leading to a cliff face.
Modern Tech Bros like him are all about minimizing time to market and maximizing profit with shortcuts leveraging existing tech and code to bring something pretty to market.
Most of the time it's just better packaging of a product most 20-year-olds could cobble together if they had indulgent parents giving them a man-cave, time and tech - and unlimited storage.
There's only so much one can do with recursive "if-and", "if-or", "if-then", and "not-if" code. After that, we need quantum tech, or bio-computing (neural-net computing), or some sort of compressed intuitive gap-analysis widget that won't take five acres of server farms' worth of computing and storage to come up with logical assessments - assessments with enough "gut" to make weighted intuitions about unknown influences.
The "coding" might seem close to GAI, but the technology just isn't there.
By the way, Tech Bros never say GAI, they always say "Generative AI", because GAI sounds, well, gay.
Haele
thebigidea
(13,365 posts)
haele
(13,712 posts)It's because they don't want to use GAI in public, which they would otherwise use to describe Generative Artificial Intelligence, the long agreed upon (since at least 2009) descriptor for that particular field of coding.
When it was just coders trying to teach AI to be generative back in the early 2010s, there were sooo many jokes, especially when they could get the code to auto-complete a prompt with a reasonable return paragraph instead of word salad.
Then the suits started looking at products, and decided those initials didn't scan so well. Altman decided on AGI.
What does AGI stand for? Artificial Generative Intelligence? (Awkward phrasing - English majors would blanch)
Advanced Generative Intelligence? Another Great Idea?
Weird, but I guess it scans better to him than AIG, which can still remind people of the crash of 2008...
Haele
Bernardo de La Paz
(51,759 posts)
GAI, Generative AI, is here now. It is a form of neural net that can run forwards and backwards. When run backwards, it makes text or images that it would recognize if it were run forwards. That makes it prone to errors (so-called "hallucinations", which is a ridiculous term for "errors") and of limited use, but still useful enough for companies to invest billions in it and gain functionality.
GAI is NOT recursive "if statements". Nor is AGI recursive if statements. Even applying the term "recursive" to if statements is kind of illiterate: if statements do not recurse and cannot recurse. At most they can be nested. Nested if statements were the first steps in 1960s AI.
AGI is the unicorn.
Nothing available now or for at least the next few years and probably for a decade or two at least is "thinking for itself". That would be AGI, Artificial General Intelligence.
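A minimal Python sketch of the distinction that post draws (the functions and names are purely illustrative, not from any AI system): an if statement can only be *nested*; *recursion* is something a function does when it calls itself.

```python
# Nested if statements: control passes inward once and comes back out.
# Nothing here repeats or calls itself.
def classify(x):
    if x > 0:
        if x % 2 == 0:
            return "positive even"
        else:
            return "positive odd"
    else:
        return "non-positive"

# Recursion is a property of functions, not statements: here a function
# calls itself to sum a list. The if statement inside is just a base-case
# test -- it is the function, not the if, that recurses.
def total(xs):
    if not xs:
        return 0
    return xs[0] + total(xs[1:])
```

So "recursive if statements" conflates two different mechanisms; a 1960s-style rule system is deep nesting, not recursion.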
Inkey
(343 posts)
intrepidity
(7,964 posts)
How so?
BootinUp
(49,206 posts)
He just says it won't change things much. He was being interviewed by the NYT, so we will see more of the interview later, I assume.
Bernardo de La Paz
(51,759 posts)
Less good, meaning not as effective, efficient, useful, or dependable as it would need to be to be truly general or truly intelligent.
Note that current "AI" is really very narrow and unreliable: if it "discovers" something like a new protein-folding structure, it has to be double-checked. Generative AI (not AGI) works essentially in reverse. Going forwards, an AI can recognize a picture of a man holding two kittens in a basket as "a man holding two kittens in a basket". If you ask it to make a picture of a woman holding three puppies in a shopping bag, it will cook up something that it would recognize as fitting the description. But it could easily give the hand five fingers and a thumb, because it would still "recognize" it, even though we might quickly notice the unreal aspects.
How long that "for some time" might be, I don't know. But synergistic effects might accelerate the pace. It might happen gradually. Or it might happen gradually and then all of a sudden.
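As a toy illustration of that forwards/backwards idea (this is not how a real image or language model is built - the recognizer and generator here are invented for the sketch), generation can be thought of as searching for an output that the forward "recognizer" would accept:

```python
import random

# Toy "recognizer" (forward direction): scores how well a string matches
# a description, here just a set of required words.
def recognize(text, required):
    words = set(text.split())
    return sum(w in words for w in required) / len(required)

# Toy "generator" (backward direction): propose random word strings and
# keep whichever the recognizer scores highest. Note it can happily carry
# extra junk words -- the six-fingered hand -- as long as the score stays
# high, because the recognizer never penalizes them.
def generate(vocab, required, tries=200, length=5, seed=0):
    rng = random.Random(seed)
    best, best_score = "", -1.0
    for _ in range(tries):
        cand = " ".join(rng.choice(vocab) for _ in range(length))
        score = recognize(cand, required)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score
```

The generator only ever optimizes "would I recognize this?", which is why outputs can satisfy the description while still containing details no one asked for.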
Happy Hoosier
(8,604 posts)
I've been working with AI agents quite a bit lately. They can do some things very well, such as sort data or make connections between data sets based on broad parameters. What they are STILL not good at is judgment. We forget that human intelligence evolved over hundreds of millions of years... we have developed, for want of a better term, bullshit filters which allow us to identify nonviable solutions at a decent rate. They are by no means infallible, but they are reasonably effective. OTOH, most AI agents do a poor job of assessing a decision once it's made but before it's executed. Not that they CAN'T develop that capability, but it will take time and experience.
thebigidea
(13,365 posts)
IcyPeas
(22,924 posts)
🙄