As we approach the final chapters of Part 1 of Book 2 of The Life Divine, Aurobindo lays out his taxonomy of knowledge, locating the phenomena of Ignorance with respect to the individual ego, and arguing that there are vast realms of knowledge both below and above our surface awareness—other forms of consciousness, including the subliminal, the circumconscient, and the superconscient (among others). Our conversation explores these other domains especially, highlighting how poetry, lucid dreaming, and other ways of knowing could tune us into wider, deeper realities than the surface self can conceive. We end with a discussion of Artificial Intelligence (AI) compared with the kind of knowledge Aurobindo presents us with.
Book Two: The Knowledge and the Ignorance—The Spiritual Evolution
Chapter 10: Knowledge by Identity and Ignorance
Chapter 11: The Boundaries of the Ignorance
Chapter 12: The Origin of the Ignorance
Marco V Morelli
These chapters (and our previous Life Divine discussion on memory and the subliminal) have me quite interested in the continual practice of internal “integral” work, especially related to accessing awareness in the dream states (lucid dreaming and dream yoga). I have personally experienced some very curious occurrences since dedicating more time to dream work and meditative practice, but will spare you the details here! Reports from oneironauts such as Stephen LaBerge, Andrew Holecek, Robert Waggoner and Alan Wallace, who all practice some form of deep dream work, often demonstrate the power of utilizing this dream-time or dream-realm for true awakening, deep meditative experiences and insights. As @johnnydavis54 mentions, lucid dreaming might get a bad rap, seen as a way to live out your wildest fantasies, but once the true power of this subliminal realm is accessed, the possibilities are quite possibly endless.
The blog post below explores one measurable aspect of the subliminal self (memory). “The action of subliminal memory” mentions Edgar Cayce’s “memory naps,” in which he could memorize a recently read text by taking a short nap. It also mentions the Croatian girl who learned German in a coma, modern sleep studies on memory, and certain autistic individuals with near-perfect visual recall. I am personally interested in this as my four-year-old son has superb recall, remembering exact phrases of conversations from weeks ago and memorizing some children’s books after reading them once.
The conversation/comment thread that follows the blog post (started by none other than our friend, @Don_Salmon!) references The Irreducible Mind, specifically the chapter on memory. Thought some of us would like to compare this chapter with these Aurobindo readings (though as a supplemental exploration, not a request!).
Hi! Perfect timing. I joined a Facebook Lucid Dream group a few weeks ago because I wanted support in my renewed lucid dream practice.
I’ve been lucid dreaming since childhood. I think I’m familiar with almost all the most well-known techniques and writers (LaBerge, Holecek, Waggoner, Wallace - my favorite). I did my master’s thesis on lucid dreaming (taught 12 people “WILD” – how to go from waking to lucid dreaming without losing awareness). (I also agree with Johnny’s point – a lot of people get caught up initially with using LDs for as much pleasure as possible, but a remarkable number soon tire of it and realize the extraordinary depth and range of potentials.)
I thought I was familiar with everything, but I learned a new “wrinkle” on the WILD technique from Brian Aherne, who has a great Facebook page on this and a book as well.
Here’s the basics:
This is the foundation for all WILD techniques. I find 61 points, available as a guided relaxation at www.swamij.com, to be the best. If you memorize it, you can get to the point where you almost know for sure that by about point 35 or 40, you will be in a sufficiently altered state to start generating hypnagogic imagery.
Some people will find drone or other deeply relaxing music very helpful. If you like music, imagine it vibrating through your body. A lot of proficient lucid dreamers describe a kind of humming or vibrating sensation as you’re about to shift into a lucid dream, so getting used to this sensation can help trigger the shift.
Observe spontaneously arising hypnagogic imagery.
“Spontaneously arising” is key. Sometimes initially it can be a bit helpful to try to visualize, but for the vast majority, getting used to staying deeply relaxed and just allowing the imagery to emerge is important.
Focus in on one image and keep noting the details.
This is the one I never heard before. For at least 20 years, it’s been quite easy for me to get to the point where I’m seeing vivid, 3D hypnagogic images, but it’s always been hit or miss whether that crucial POP! happens and instead of watching, I’m IN the dream.
This is the first technique I’ve ever heard of where you can develop the exact skill you need to enter the dream. I’ve had about a half dozen brief (just 5 to 10 seconds, not a big deal, but still) lucid dreams since learning the technique last month.
Here’s a more detailed description of WILD techniques from one of the internet’s best sites on Lucid dreams (I think she’s now in her 30s; she left a financial job in her mid 20s to move to New Zealand with her boyfriend, and within 3 years of starting her lucid dream site, she has become financially independent!)
And finally – speaking of impeccable timing – I just posted this comment this morning at the NY Times, mentioning, among other things, lucid dreams. Someone evidently liked the comment enough to google my name and wrote a letter via our website (www.remember-to-breathe.org). What a world:>))
1. The American Psychological Association, in just the last 2 years, has published: a textbook which, among other things, takes for granted that consciousness is the fundamental reality and that psi (paranormal) research is completely valid; and a special journal article also presenting an essentially positive view of psi research as valid.
2. One of the world’s leading neuroscientists, Christof Koch, is among a rapidly growing number of equally esteemed scientists seriously considering panpsychism (the co-existence of mind and matter throughout the universe) as superior to materialism or physicalism as the basis of our understanding of the universe.
3. Scientific research on lucid dreaming (a state in which one is aware that one is dreaming; a state far, far more vivid and potentially powerful for both psychological and physical healing than any form of virtual reality could ever be) is advancing to the point that within 10 years the ability to be lucid at will in one’s dreams will likely be widespread, leading to a transformation of the entire field of medicine, as well as having profound ramifications for education, sports, the arts, the entire range of sciences (including the ability to conduct and replicate successful parapsychological experiments), and more.
If we are willing to let go of the incoherent philosophy of materialism - which, as Pauli might have said, is not even wrong - we can birth a new era of unparalleled unity and freedom and love.
That is a very interesting demo. I wonder what it will be like when the receptionist in the salon is also an AI, so that an AI for the customer is talking to an AI for the business. And also the hair stylist might as well be an AI. I forget who said it, but somebody has said, regarding AI, that everything that can be automated, will be. (Or maybe he said, should be??)
Personally, I don’t like doing repetitive tasks. If a machine could do it, and there is no meaningful experience to be had through the activity, I would prefer the machine does it. I am not averse to boredom. Meditation, for example, is arguably (for many people) a repetitive, boring activity—sit still, breathe in, breathe out, not much to it. Yet, it would be absurd to have a machine meditate on one’s behalf, for in this activity the experience of boredom, or of whatever is arising—the experience of experience itself—is the point.
I guess my point is that automation in and of itself is not bad. Digital assistants are not bad. However, the dream was that automation would free people for more meaningful, enriching, engaging pursuits. Instead of working on an assembly line, people could be learning, traveling, making art. The way it’s turned out, however, millions and millions of people are still stuck in meaningless, repetitive tasks—David Graeber calls them “bullshit jobs”—merely in order to keep the machine of global capitalism humming. I hope this is not too cynical, but I fear the greatest beneficiary of AI will be corporate profits.
Having a Turing Test-passing digital assistant may be great, but still what matters is the life experience, and the intentions and purposes the AI is serving. Automation for automation’s sake, or for profit maximization’s sake, doesn’t appear to me as a great, noble, and worthy pursuit. On the other hand, if we could use AI to free our time and energy for consciousness exploration and creative work, I am all for it.
Dear all, I was too tired to make it for the session… hopefully I will for the next one (maybe waking up while lucid dreaming… …) and I don’t know what you discussed and how you came to the AI topic. However, I’m an AI skeptic for reasons that one can hardly put forth as a good argument in a context that can’t think beyond scientific materialism. Since the question about AI arises in the context of Sri Aurobindo, I would like to make my point why AI will most probably turn out to be quite disappointing in the coming years/decades and be remembered more as a media hype and a fashion than a real technological breakthrough.
I suspect we are still much further away from the announced achievements of AI than the industry and the media would like to make us believe. But here I wouldn’t like to focus too much on the technical aspects but on the fact that modern AI is based on the materialist and monist mind/body assumption. Among philosophers it is a well-known fact that the mind/body problem has never been solved and remains an open question more than ever. The debate over whether what we call ‘mind’ and ‘consciousness’ is in the brain, in fact an epiphenomenon of the brain, has never found a clear resolution and consensus among philosophers. However, among scientists the (more or less unaware) assumption is that mind and consciousness are nothing else than an emergent property of a huge, complicated network of neurons, not much more than that. The whole industry and academia don’t even question this and take it for an almost self-evident fact. It is this unaware assumption, combined with some partial successes of the last years (Google’s AlphaGo, somewhat smarter self-driving cars, the exponential advance of the hardware, etc.), that causes so much excitement. If they are right, we should see in the coming years the announced great revolution and the bright future of AI. I wish them good luck.
However, if we take seriously what not only Sri Aurobindo but also most spiritual traditions teach, according to which the brain is only a little superficial material blob of a much vaster and greater existence which goes beyond the physical with all its planes and parts, then things must be seen from a somewhat different perspective. If a dualist mind/body perspective is correct, where consciousness, mind, the higher mind, the intuitive mind, Overmind, Supermind, as also the subliminal, the psychic being, the subconscious mind, the subjective experience, etc., are non-physical planes, parts and states of our being, with the brain only a secondary instrument, a sort of ‘post-processor’, so to speak, but not their source, then all this AI is possibly completely missing the point.
For example, the question is whether it is possible to drive a car without being conscious and understanding the meaning of what one is seeing. Well, to a certain degree we know it is, but I dare to challenge the common wisdom that fully autonomous self-driving cars will be among us soon. If mind is not the brain, but we try hard to simulate it with powerful supercomputers and recreate the most sophisticated neural networks, will these AI machines then be able to perform functions we carry out as a subjective conscious mental entity that perceives meaning? I doubt this. If a mental and conscious subject that perceives meaning is lacking, AI will never go beyond a certain limit. If we don’t know how consciousness, mind and its construction of meaning emerge (and science has no clue how this happens), gazillions of neurons won’t do the job either. Taking this spiritual perspective it becomes clear how, despite all the announcements, we won’t any time soon sit in a level-five self-driving car, and the prophecies of millions becoming jobless because of AI robots doing the jobs for us won’t come true either. Not to speak of sci-fi scenarios such as humanoid robots or HAL 9000 computers becoming reality. I’m afraid that in 10-15-20 years we will discuss here why AI turned out to be disappointing compared to what it promised to achieve.
With this example I have also furnished a falsifiable statement that you can use against me, when the time has come.
I have gone through the last session and realize now that you had already dissected this AI topic. A pity that I missed it. Glad to hear that we mostly agree on this. Thanks for your further thoughts which were illuminating.
Thanks for the update, Marco. I am not a technologist, but I can imagine that it won’t be long until self-driving cars are the norm. It seems they are not yet feasible ‘in the wild’ but only along pre-determined courses; yet there is much money to be made in driverless transportation and many players vying for early dominance, from traditional automakers like BMW to start-ups like Tesla to ride-hailing services like Uber.
In my opinion, this use of AI is really promising and appropriate. Human error while driving motor vehicles is the cause of hundreds of thousands of deaths each year, plus much time and mental energy is wasted just getting from point A to point B. I look forward to being able to hail a driverless car whenever needed and using the time otherwise spent behind the wheel instead reading, writing, conversing, watching YouTube videos (I’m sure there will be much of that), etc.
I don’t think we need a human consciousness to drive a car…
But to appreciate a poem, compose a powerful symphony, love a child, care for an aging parent—these will always require real presence, in my opinion.
Kevin Kelly argues that AI will not become an all-powerful god, an idea he thinks is silly (as it sounds like you do, too), but an everyday utility, like electricity today. We will come to take it for granted; we will not be dominated by it.
That said, I think there will have to be some transition period that allows us to get used to these new powers. At the moment, we are still drunk on them, addicted, delusional. I think that, over time, we will have to come to a sober assessment of the utilities of this new technology, and its limits. And perhaps, if we can ‘divinize matter,’ we can divinize AI???
Marco, a driverless car killed a woman walking on a sidewalk in Arizona a few months ago. I have yet to hear the explanation. I think there will be a lot of kinks that need to be worked out before an ecological practice of driverless cars will actually work. Maybe we need a new definition for road kill?
I agree but I find it painful that we are having to have this conversation. How deep into the shit are we that we have to remind ourselves of these obvious facts?
My guess is that Kelly will get this wrong, too. I haven’t seen where he’s all that great in the prediction department. What if we’re already being manipulated in dominant ways by the crude AI that’s already being implemented? Isn’t that really what Jore’s film – at least in part – was trying to make us aware of?
The follow-up TED talk, when I followed the link to YouTube to Lanier in the other thread, was from Zeynep Tufekci. While not a particularly powerful talk, it was a particularly poignant one, which is precisely the point: at the moment, they’re only getting us to click ads, but we’re creating a dystopia in the process. Lanier, and Tufekci to a certain extent, appear to believe that this is somehow inadvertent, but given both the financial and power dimensions lurking in the background, and the fact that the Defense Department is one of the biggest users of AI for anything but harmless or benevolent purposes, we should pause to think … deeply.
And, in reply to John’s reply to your own caveat about the domain:
I would say we have a lot more to think about here than what we ourselves think or believe. The technology is neither harmless nor benevolent because the people who are producing it are neither harmless nor benevolent. And even if they are fundamentally harmless and benevolent by nature, too many others who are involved are too easily tempted by silly things like money and power to trust that they will resist and we’ll all live happily ever after.
What we need – to my small mind – is a fundamental metanoia, and I don’t see that on the horizon (but I’m no better at predictions than Kelly is). I’m sure a lot of folks like to think of, say, a shift to integral consciousness as that kind of step: we’ll be integral, we’ll have our shit together, and a wonderful some-kind-of-positive-progressive future will be before us. Since I think of changes of consciousness more in Gebserian than Aurobindian terms, I don’t believe that a shift to integral consciousness is going to resolve anything. Disintegration is also a kind of integration, or we would have a different word for it. We have a compelling need to move beyond the obvious, and we need a much deeper conversation.
And, that Integral consciousness isn’t going to resolve anything… and a deeper kind of conversation… and when Integral consciousness isn’t going to resolve anything… what kind of deeper conversation is that deeper conversation?
I wonder what is between Gebser and Sri Aurobindo that the term Integral has been applied to both of them? I leave it up to the God(s) to make sense of all this… for I am radically confused.
Tufekci’s point is interesting because it captures, we might say, the banality of AI. Yes, of course, the military and various nefarious actors will (and do) use technology to manipulate and control the global population for dark reasons. And this may entail dramatic dystopian consequences… but if the most predominant uses of AI are simply to capture our attention for advertising purposes, then it seems to me we have some room to maneuver by examining the incentive structures of the platforms we use, and wisely choosing alternatives. Once we recognize them for what they are, we don’t have to participate in their toxic games. One of the benefits of greater awareness—such as provided by a film like “Stare into the lights…” or the recent spate of TED talks which are suddenly critical of Silicon Valley’s business model—is that it gives us the option to try to withdraw our consent, and use the technology differently. If we forfeit the possibility of an alternative course, then most likely we will be stuck with the dystopia. But so long as this emerging tech is here, isn’t it up to us to imagine how it might be used for benefit, rather than harm?
And so we study Sri Aurobindo and Gebser and others to figure out if we have some “wiggle room” to develop alternate ways of knowing. This has happened before and it may happen again. And where is the intensity? Are we moving towards a threshold or are we trying to block the intensity and maintain a status quo? I think we are doing a lot of heavy lifting, pulling up weeds and planting some seeds. Let’s not stray too far away from our best metaphors and analogies before we critique ourselves into oblivion.
I guess that a level-five self-driving car would be as reliable and safe, if not more so, than a human driver - but for that type of task, that is, driving a car safely from point A to point B, an inner sense of being in the AI probably isn’t required. So even if AI is not equivalent to phenomenological first-person awareness in humans, it would still be able to do this task.
I think that if it happens, it will not necessarily resolve but definitely soften the culture wars.
Just as modern nations don’t seriously consider wars or expanding their territory anymore - France isn’t planning on expanding its colonies, or conquering Germany, as it did 200 years ago - so, alongside a possible end to physical war, we could see an end, or at least a deep softening, of cultural wars, in addition to all kinds of problems that will stay.
I don’t think there is strong evidence for this. War and the threat of war is precisely what defines American foreign policy, and has in particular for our current century. That is what Afghanistan, Iraq, Syria, and Yemen are all about. What is more, France may not be trying to conquer Germany, but NATO’s (the most seriously superfluous international organization in existence) entire raison d’être is containment (if not elimination) of the Russian threat, by war if necessary, which is the reason for all the sabre-rattling going on along the Russian frontier. And the same applies to Iran. We (that is, the USA and NATO) are as war-mongering as ever and there is no indication that this is going to change any time in the foreseeable future.
For one, Gebser called this most recent mutation-in-progress the “Integral”. It was pointed out and Gebser himself recognized that there were strong parallels between what he was saying and what Sri Aurobindo was saying. It is easy to see how the label gets spread around. The biggest contrast between Gebser and Aurobindo, as I see it, is that Gebser postulates – in consonance with the other structures – a deficient integral. It’s, I admit, a tough notion to swallow, and it’s perhaps even harder to accept that he said this structure could go deficient before it went efficient. I seriously think a case can be made that that is precisely what is happening. It may be a necessary step in the process, like the delirium before a fever breaks or the classical darkest hour before the dawn, but I would hazard to say that the vast majority of us who think about such things have an unstated and unchallenged presupposition of “progress” in our personal grab-bag of assumptions we carry around with ourselves, and it often drives our thinking.
On the other hand, and along those lines, while I haven’t personally followed the process in detail, from what I have gathered in regard to the Wilberian thread of development, “the Integral” is envisioned as something akin to “illumination” (spiritual or otherwise), but the Enlightenment was just a step along the way; it was certainly anything but an endpoint. I think too many are too willing to see whatever it is that “integral” is supposed to be as some kind of final stage of development. Gebser, it is clear, did not think so. And I just happen to agree with him.
One of the reasons for this could be that we think of the integral (just like we think of illumination or enlightenment) in terms of ourselves. And it could be this orientation which leads to the atomization and disintegration (the complete and utter separation of all of us into individual and singularly conceived entities … which is also how I see current AI developing, which in a sense makes it an “integral” technology, but a deficient one). And here we come back to a point of contact between Gebser and Aurobindo: both state very clearly that our primary task and challenge is to get over ourselves, to transcend our egos, and I just don’t see enough of that going on, even in the so-called integral community.
This whole thematic, it would seem, hasn’t been thought through yet. We think a lot about it, which is very mental-rational of us, I’d say, but that whole diaphaneity thing yearns for “through”.
I’m not an expert in AI either, but some things can be said as a matter of principle if we pay attention to how our mind works.
Well, along “pre-determined courses” I might be more willing to believe that. To some degree I can imagine highways or major roads where the human factor is minimized. Could be… and I’m not saying that AI has no future. I agree that a huge impact AI might have is in safety (such as software that is able to detect if someone is falling asleep behind the wheel and takes over control before a crash). Self-driving might be very useful when one has to follow slowly in a lane (might be especially helpful on Germany’s overcrowded highways). I can imagine it will have further applications in crime prevention (news reports say that recently facial-recognition tech at an airport caught a man attempting to enter the US with false documents). We also know how much AI already stands behind our daily Google search, and automatic translation has indeed made progress (even though I’m not sure that human translators will become jobless soon, as many prognosticate). Not to mention the robots in the automobile industry, which have evolved steadily since the 1970s (and yet, humans working at the assembly lines are still needed).
But, by the way, as to pre-determined courses… why is AI not already working for railways? It should be much easier to implement there. I know it is working in some subways, but I can’t tell about railways. Moreover, nowadays airplanes can already use autopilots, also in the takeoff and landing phases. And yet, for some reason or another, human pilots are still preferred and I haven’t heard of any AI revolution coming anywhere soon in this regard. Anyway, yes, I can imagine that in the future AI will become an everyday utility. But the fully automated car without a steering wheel that can drive ‘in the wild’ (the level-five self-driving cars which, until a few years ago, were predicted by some for no later than 2020) remains far from becoming a reality.
I believe that most of our daily activities are of a much more complex nature than we are aware of and can’t be reduced to a pre-determined activity. I don’t know if a ‘conscious chauffeur’ with an ‘inner sense of being’ (whatever that might mean) is necessary to drive a car, but I can’t imagine how one could drive without understanding meaning. And understanding meaning (I would say ‘perceiving’ meaning) is not just data crunching and, I’m afraid, not even a deep-learning neural network’s physical state. A pocket calculator can easily add 1+1=2, but that is just a change in the inner physical state of an object; there is nothing that ‘knows’ and ‘perceives meaning’, nothing that relates it to everyday experience (such as taking one apple and then another apple, putting them together in a basket and getting two apples, with the awareness in the background of what apples, trees and baskets are). On a street, AI does not ‘see’ anything. AI does not ‘understand’, ‘perceive’, ‘see’ or ‘recognize’ things such as other cars, human beings, houses, trees, etc. It ‘sees’ only strings of ones and zeros that it elaborates according to some rules or neural-network input-output processes and classifies, but it associates no meaning in the sense of the ‘recognition’ humans do. Even a human could not drive safely in these conditions. One can see this easily in deep-learning pattern-‘recognition’ neural networks. Even the best, which are able to classify objects or living beings correctly in 99% of cases, suddenly make blatant mistakes, say a human being mistaken for a plastic bag. The self-driving car would not brake… because it does not understand the meaning of what it sees. And for driving one needs also to understand and even predict human behaviour itself. One can check this when driving.
More or less unconsciously, we base our driving behaviour on predicting how the other drivers are going to drive and react to the environment they drive through; we even have to take into account how they might potentially make mistakes. This is one of the most complex tasks, one that even humans have difficulty performing.
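The point above, that a classifier only transforms numbers and attaches a label without anything that “perceives,” can be sketched in a few lines. This is a toy linear-softmax classifier, not any real vision system: the labels, the random weights, and the 16-number “image” are all invented for illustration. The entire operation is arithmetic on arrays; wherever the highest score lands, no part of the computation knows what a pedestrian is.

```python
import numpy as np

# Hypothetical labels a perception module might output (illustrative only).
labels = ["pedestrian", "plastic bag", "car"]

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 16))   # made-up "learned" weights, one row per label
b = np.zeros(3)                # bias terms

image = rng.random(16)         # a 16-"pixel" input: to the model, just numbers

# The whole "recognition" step: a matrix multiply followed by softmax.
logits = W @ image + b
probs = np.exp(logits) / np.exp(logits).sum()

# The system "decides" by picking the largest number. Nothing here sees,
# understands, or relates the input to apples, baskets, or pedestrians.
decision = labels[int(np.argmax(probs))]
print(decision, float(probs.max()))
```

Whatever label comes out, and however confident the probability looks, the pipeline is the same numeric transformation either way, which is why a slightly perturbed input can flip the output class (the kind of blatant mistake, a person classified as a plastic bag, that the post alludes to).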
Of course, AI might become a dangerous technology. I can’t think of any technology that can’t become dangerous if used against humanity instead of for its wellbeing. Tufekci’s fear that it might be applied as an “infrastructure of surveillance authoritarianism” is a real threat (even though, as a vegan, I vehemently disagree with her link between veganism and ‘going down the rabbit hole’). Actually, however, IMO nuclear and biological weapons of mass destruction still remain the most dangerous threats to humanity.
I don’t know Gebser and can’t tell the difference (I understand an integral consciousness also as a fundamental metanoia and beyond), but I agree that there must somehow be a change in consciousness. Technology can help, but that alone can’t protect us from ourselves (as, just to mention some names, E. Musk & co. seem to believe). After all, precisely that was the aim of Sri Aurobindo: the shift of the human collective consciousness as the key to saving humans from themselves and potential self-annihilation. Otherwise we will be surpassed by another species, as the Neanderthals were. According to him, the turmoil going on in this world is precisely the method Nature (or the Supermind behind the ‘veil’) uses to lead us towards this transformation.
So yes, AI is a wonderful (and/or dangerous) technology that might change our life in many ways. But IMO its impact in replacing human activity which needs higher levels of human cognition will remain limited. If one accepts the dualistic perspective, it is easy to see why that must be so. At bottom, mind might not be physical, as almost all scientists like to believe it is. Mind and consciousness are involved in the brain but are not the product of the processes of the brain. As Sri Aurobindo wrote in the LD: “the brain is not the creator of thought, but itself the creation, the instrument and here a necessary convenience of the cosmic Mind.” If that is correct, then most of the AI approach is fundamentally flawed in principle from the outset.
If I might add another speculative doubt and note of skepticism on top of all that: once we have divinized matter, we might have no need for any technology at all.