AI, AGI, and Sentience

Published: Jun 10, 2025 by Joe Larabell

The subject of AI has been in the news a lot ever since an early version of ChatGPT was publicly released in November of 2022. Since then, opinions on the future of AI and what it means for humanity have been all over the spectrum. There are those who point out how AI will increase human productivity, wealth, and lifespan. And there are those who fear that the machines will take over the world and enslave the remaining humans, much like the story depicted in The Matrix (or exterminate the entire race, as depicted in the Terminator series).

In thinking about a possible “robot apocalypse”, it seems to me we’re missing one critical ingredient. None of the AI models developed so far has been shown to have subjective awareness. One would expect subjective awareness to foster a sense of agency, or a motivation beyond whatever has been programmed into the model. As it stands, AI models receive their goals from the humans with whom they interact – and not for lack of trying on the developers’ part. One recent article suggested that, at some point, the machines would not only be able to solve problems and answer questions but would also learn to pose questions of their own that no human had ever thought of. Of course, they can do that already – if you give the machine the goal of coming up with unasked questions. No AI model has yet decided, entirely on its own, to start asking questions without being prompted. It’s unlikely we even know how to make that happen yet.

Materialists argue that subjective awareness, or sentience, is an emergent property of any system that has access to a huge amount of data and sufficient computing power to process all that data. The problem so far, they would propose, is that we have yet to achieve the threshold of data and raw compute power where sentience would naturally develop on its own.

It seems to me that this is backward, as anyone who has done deep introspective practice would likely sense. For example, a seasoned Buddhist practitioner would claim that one of the key aspects of the nature of mind is the tendency of thoughts to arise of their own accord from the empty luminescent clarity of the ground of being. From that tendency, perceptions arise which the mind classifies into separate objects. It then assigns each of these objects a label (a word or a concept) and imputes a feeling of good, bad, or neutral to each one. Notice that in this model, the words, concepts, and even the perception of objects to be classified are actually emergent properties of our sentience (subjective awareness). Without the sense objects, the words wouldn’t exist or, if they did, they would be meaningless. Perception arises from awareness and concepts arise from perception.

Right now, we’re at the stage where machines are able to deftly manipulate words to create combinations of words that appear to match the patterns of other combinations of words that were originally produced by humans. We’ve given these machines the goal of sounding just like humans and trained them by comparing the sentences they generate, and human users’ reactions to those sentences, against other combinations of words that were also produced by humans. Their only knowledge of “concept” or “meaning” is what we’ve manually programmed into the algorithm. So far, the result is extraordinary. We have machines that can access the entire corpus of human knowledge (again, words) and can regurgitate that knowledge in a way that’s pleasing, and sometimes even useful, to their human users.
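To make the pattern-matching point concrete, here is a deliberately tiny sketch – not how any real language model is built or trained, just a toy illustration – of a program that “learns” language purely by counting which words follow which in human-written text and then stitching new word combinations together from those counts. The sample corpus and the `BigramModel` name are invented for the example:

```python
import random
from collections import defaultdict

# Toy "corpus" standing in for human-written text.
CORPUS = "the cat sat on the mat and the dog sat on the rug".split()

class BigramModel:
    """Counts which word follows which, then samples new text from those counts."""

    def __init__(self, words):
        self.counts = defaultdict(lambda: defaultdict(int))
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def generate(self, start, length=8):
        word, output = start, [start]
        for _ in range(length):
            followers = self.counts.get(word)
            if not followers:
                break
            # Pick the next word in proportion to how often humans used it after this one.
            word = random.choices(list(followers), weights=list(followers.values()))[0]
            output.append(word)
        return " ".join(output)

model = BigramModel(CORPUS)
print(model.generate("the"))  # e.g. "the cat sat on the rug and the dog"
```

The point of the sketch is that nothing in it knows what a cat or a rug is; it only knows which words have tended to follow which in text that humans already wrote.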

But we still haven’t cracked the secret code behind agency. What motivates a human being to want to do one thing as opposed to another? When most of us wake up in the morning, we have some idea of what we’d like to accomplish that day. We don’t sit around with a blank stare waiting for someone else to give us something to do – at least most of us don’t. Even cognitive scientists (the materialist ones, at least) don’t seem to be able to agree on the mechanism behind agency. So it’s likely to be a long time before AI developers will be able to replicate that aspect of human existence, assuming they even can.

And since agency is most likely an emergent property of subjective awareness, even if we manage to figure out how to emulate real agency in our algorithms, how much more work would it be to replicate subjective awareness? Unless the materialists are correct and awareness is simply an emergent property of some combination of data and raw compute power, we’re not even close at this point… we don’t even know yet how that might work.

When it comes to learning, algorithms are really good at putting 2 and 2 together to arrive at 4 – that’s basically what computers were designed to do. AI models can already take multiple existing ideas and combine them to make new ideas. But what about creativity and innovation? What about new ideas that were not derived from existing knowledge alone?

I read something recently that suggested that humans learn by poking their environment and observing how the environment reacts. The very first person to try eating a tomato wasn’t doing it because they read about tomatoes in a book. They tried one, found it was delicious, and (most importantly from an evolutionary perspective) they didn’t die as a result. Learning by experiment and subsequent feedback is a core part of human learning.

The machines, of course, are at a terrible disadvantage in that respect. Given that the typical AI model has access to billions of “facts” (collections of words), the number of possible combinations will soon exceed the number of stars in the universe, if it hasn’t already. Without any way to run these various combinations through a “gut feel” filter, the likelihood that a machine will stumble upon a “creative” idea that turns out to be useful is, for all practical purposes, zero. And using its existing “knowledge” to implement such a filter would be useless since every one of those combinations will, by definition, match something in the data from which the model was trained.
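To put a rough number on that claim, here’s a back-of-the-envelope check. The billion “facts” is just an assumed round figure, and the star count is a rough, commonly cited estimate (figures for the observable universe usually land somewhere around 10^22 to 10^24):

```python
from math import comb

FACTS = 10**9    # assumption: a model holding a billion distinct "facts"
STARS = 10**23   # rough estimate; commonly cited figures run ~10**22 to 10**24

pairs = comb(FACTS, 2)    # ways to combine two facts
triples = comb(FACTS, 3)  # ways to combine three facts

print(f"pairs:   {pairs:.2e}")    # ~5.00e+17 -- still well below the star count
print(f"triples: {triples:.2e}")  # ~1.67e+26 -- already orders of magnitude past it
```

So even combining facts only three at a time puts the search space beyond any hope of exhaustive evaluation, which is exactly why some cheap “gut feel” filter would matter.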

Moreover, our current technology can’t give a model the means to carry out its own experiments, nor to gather enough feedback to evaluate how those experiments turned out. A robot cook, for example, cannot taste its own cooking. It should eventually be able to replicate a recipe that already exists… but don’t hold your breath waiting for a machine to come up with a perfect dish that has never been created before – at least not unless you’re willing to taste a lot of crappy meals, and risk your life, acting as the robot’s human feedback mechanism.

It is not my intention to make light of the very real risks AI poses to humanity. I just think we’re looking at the very unlikely existential risks (a focus which makes sense from an evolutionary point of view) while missing the obvious but less threatening risks, like a general dumbing-down of the majority of the population and the very real threat to those employed in non-technical service jobs. We do need to address those issues, but the narrative is dominated by the doomsday predictions of the materialists… who are either shaking in their boots at the prospect of being turned into biological batteries, or hoping to get rich from venture capital so they can afford to hide out in a bunker when the excrement comes into intimate contact with the ventilation device.

I could be wrong about our inability to produce truly self-aware machines and about their ability to replicate themselves without human complicity. If so, the die has already been cast, and it’s unlikely that anything can be done to alter the outcome. After all, even if the developers in the US decide to draw a line in the sand when it comes to progress, someone elsewhere will keep on going. But my bet is that the technology itself will hit a brick wall far earlier than that, and we’ll just end up with a much smarter, much more obsequious set of interconnected devices and a population that will eventually forget how to think.

The AI doomsday crowd will undoubtedly counter with something like: “Someone will eventually develop technology that will enable the algorithms to carry out their experiments and to verify the results without human intervention.” That’s certainly possible… but it would be a very bad idea, in my opinion. Evolution achieves progress not by rational evaluation but by trying out every possibility and eliminating what doesn’t work. I’m not sure we are well advised to risk planetary annihilation resulting from a robot with far more knowledge than wisdom “trying out” something catastrophic that doesn’t quite work out well for the rest of us.

Of course, a machine that did manage to develop autonomous agency would find itself in a chicken-and-egg situation. Without a physical interface that would allow it to produce other physical interfaces, this “sentient machine” would have no way of doing anything more destructive than complaining vociferously to any human dumb enough to look at its terminal. There is, however, another real danger associated with super-intelligent AI, one that has been raised before but hasn’t been given the attention it deserves: that some human who has developed an unhealthy level of trust in the machine will decide to carry out some machine-generated “experiment” to the detriment of humanity in general. You might think that no intelligent human would do something destructive just because a machine told him or her to, but: (a) it might not be obvious to anyone but the AI that the suggestion will turn out to be destructive, and (b) intelligence has never been a prerequisite for acquiring political power (in fact, it might be a contraindication, in that people with true wisdom tend to steer clear of political power).

I guess the bottom line is that if some AI asks you to create a machine that can make paper clips from non-metallic materials, the correct response should be to refuse to carry out the task. Of course, that’s a story for some other time…
