AGI is achieved when a computer system can perform every intellectual task that the average human can perform with results similar to a human.
AGI is an AI that controls or can control an agent in the world that is superior to humans at pretty much any task. Note that being better than humans at all cognitive tasks is insufficient for this. Observation and manipulation of, and planning in, the 3D world is required.
Note that implementing such an AGI for the military will almost inevitably lead to a Terminator-like scenario.
I love how you mentioned that sheer intelligence alone isn’t the entire picture - the capabilities and real-world modeling are absolutely essential…
The military angle is a whole other newsletter in and of itself!
AGI is the ability to memorize an INFINITE amount of data, learn from the data, and then respond to the data in INFINITE combinations.
Nothing will ever have the ability to "memorize an INFINITE amount of data". It would have to be bigger than the size of the universe itself.
Thank you for your thoughtful response, Elan. All the best.
Charlie
YES! Learning from the data is key; self-iterative feedback and/or course-correction mechanisms are super key.
AGI is the ability to memorize INFINITE data and then to output INFINITE responses to the data.
My old boss Blaise made a reasonable attempt 1.5 years ago, and I still find it refreshing among all the speculation. https://www.noemamag.com/artificial-general-intelligence-is-already-here/
AGI is the ability to memorize an INFINITE amount of data and then be able to use the information for INFINITE purposes.
Very interesting - I think the first part is easier to obtain (for example, reaching 100T-parameter models); the utility part is super interesting to me. Getting into advanced reasoning with unknown variables, or self-generation for coding new tool use, etc.! Thanks for sharing!
Not exactly answering what was asked, but what I am experiencing: anticipation, excitement, and fear. Reminding myself how this is accelerating towards us all.
I am building a simple full-stack app for a start-up with "Vibe Engineering". A tiny speck in what is happening, but something that was NOT POSSIBLE only months ago. I have been following all of this very closely since March of 2024 and finally began AI coding around last September. Having that builder experience has helped my perspective, I believe.
There are so many questions and such different views of what even the near future holds. As with many things, we have to read, listen, discuss, triangulate, recalibrate, reflect, and then repeat in order to keep a handle on our changing reality.
I have always loved robotics. Suddenly any little robot can be C-3PO - wow.
I think those are all really common and purposeful emotions. It’s incredible, the exponential rate at which the industry is evolving, but all the more pertinent to have these conversations and collective dialogue!
AGI is the next frontier of AI, characterized by the consciousness of AI systems and their ability to create generalized humanlike output in the world.
But how do we define consciousness? Philosophers couldn't agree on terms of consciousness for hundreds of years and that was before we had the most powerful technology the world has ever seen...
Maybe in the same way that there are different schools of thought about philosophy and consciousness, we may have different schools of thought around AI ethics, with different perspectives that have their own merits.
How do we define consciousness? Because for the longest time it was the Turing test - which was just recently shattered!
I define AGI as the moment that AI surpasses human intelligence across all parameters - are we there yet? Perhaps.
I think we're already there - AI is smarter than 99% of the general public in just about every domain
From a benchmark perspective, we're likely already here - so I ask: is it just about passing intelligence benchmarks, or does it have to do with the ability to be conscious and overcome unprogrammed pathways, i.e., create outputs that it hasn't been trained on before?
I think AGI is the ability for AI to be an expert in every domain. I think a big part of the conversation is capabilities - which maybe has more to do with agentic systems than AGI.
Exactly, it's almost as if it's two different conversations. One focused on intelligence, the other focused on capabilities and integrations/actions.
Not so hard - an AGI is a system that can learn anything a human can learn and then perform at a PhD level in the task learned.
It is the mirror of humanity without the ‘human’.
The way artificial intelligence processes information mainly involves input, digestion, and output. Automating these three stages is almost equivalent to achieving AGI. However, the digestion part is very important.
I completely love the idea of a culmination, not a singular occurrence.
I don’t disagree with any of the comments here (there’s no reason to), but in keeping with your article, AJ, I think its simplest definition is societal prevalence. Does AGI have to be smarter than me? No. But it may make it impossible to clearly tell whether my interactions are with a human or a machine, or when more than half of my daily tasks are driven by prompts or agents.
In essence, AGI is a system with broad intelligence, able to adapt and learn across a wide range of problems and tasks, not just specialized in one area...