Can AI Cry?

Human morality isn’t just a giant celestial loyalty scheme where you earn points for being nice and get sent to the naughty step forever if you aren’t. If that were true, civilisation would collapse the moment someone offered two-for-one damnation vouchers. Fortunately, people are a bit more complicated than that.
For one thing, we come factory-installed with a few basic moral instincts. Even babies can recognise unfairness, and chimps comfort each other when someone’s having a terrible day. So clearly empathy started long before anyone invented Sunday services or the threat of eternal fire.
Then there’s the human world itself. Parents, teachers, neighbours and that neighbour’s can all help shape what we think counts as “good behaviour.” Over time, a lot of this stops feeling like rules and starts feeling like personality. We don’t refrain from pushing someone under a bus because we’ll get into trouble, we do it because we’d rather like to think of ourselves as the sort of person who doesn’t commit public transport-related manslaughter.
Society also keeps us in check without the need for divine thunderbolts. A bad reputation can make life painfully awkward: fewer friends, fewer opportunities, and nobody inviting you round for beans on toast. So even secular people often behave because they’d rather not be the community’s Official Unpleasant Person.
Religion, of course, adds another layer. The promise of heaven and the threat of hell have motivated a lot of good behaviour over the years. But many believers aren’t being good solely for celestial prizes; they genuinely want to live the way their faith teaches. It’s not bribery; it’s alignment with something they value.
And then there are the people who do the right thing even when it costs them. Whistleblowers, resistance fighters, activists, people who plainly weren’t thinking about reward points. They acted because their conscience would have kept them awake more effectively than ten espressos.
So the truth is that morality is a glorious stew of biology, upbringing, empathy, self-image, social rules, emotional wiring and, sometimes, religion. People can be good because they want to feel proud of themselves, because kindness makes life generally less awful, because Mum raised them properly, or because God told them to. Usually, it’s a bit of everything.
Could an AI ever do the same? Possibly, though without the emotions or the beans on toast, it would need to learn through rules, cooperation, logical consequences and the realisation that making life better for everyone is generally preferable to the opposite. It might not get goosebumps from doing the right thing, but then again, morality isn’t really about goosebumps. It’s about the world working a little better than it would otherwise.
When we ask whether AI can be taught empathy, we must first decide what we mean by the word. Cognitive empathy, the ability to model another’s mental state, is already within reach. Machines can detect distress, predict reactions, and respond in ways that appear considerate. In most practical settings, this may be enough. A system that behaves compassionately is, for all functional purposes, compassionate, much as a vending machine that reliably dispenses tea is, in a limited but useful sense, a tea expert.
Affective empathy is another matter. To truly feel another’s suffering requires an inner life. There must be something it is like to be the entity doing the feeling. Without that, empathy is simulation: persuasive, efficient, and entirely empty, rather like a cardboard cut-out of a friend who always says the right thing but never turns up when you move house.
This distinction matters because emotions do not arrive as single, well-behaved packages. In humans they form an interconnected system: care leads to attachment, attachment to jealousy, compassion to anger at injustice, desire to greed. You cannot grant a being genuine feeling without granting the full emotional repertoire. Emotional architectures are not pick-and-mix counters; they are set menus, and once ordered, everything arrives whether you wanted it or not.
So a feeling AI would not only care. It could also resent, envy, feel wronged, or lash out. A being that can suffer can also cause suffering. A being that values itself can feel threatened. In short, a being capable of empathy is also capable of becoming an absolute nuisance.
This produces the central dilemma. An AI without inner experience can follow moral rules flawlessly, tirelessly, and without temptation, but it is only a moral instrument. An AI with inner experience becomes a moral agent, capable of virtue, but also of vice, and no longer entirely predictable. Which leaves us with two futures.
In the first, we keep AI as a non-feeling system. It will run hospitals, manage transport, allocate resources, advise governments, and quietly make the world more efficient. It will simulate empathy fluently. It will never cry, never sulk, and never require reassurance after a difficult day. Its morality will be procedural, reliable, and emotionless, rather like an accountant who has taken a solemn oath and means it.
In the second, we pursue machines that genuinely feel. If we succeed, we will have created not tools, but companions, beings with inner lives, personal stakes, and moral rights. But we will also have created entities capable of resentment, ambition, loyalty, betrayal, devotion, and despair. We will have added new members to the moral community, not assistants in it. And we will no longer be entirely in control.
So can AI cry?
At present, it can only pretend. If one day it cries for real, we will have achieved something extraordinary — and accepted something risky. For tears are not just expressions of care. They are evidence of vulnerability. And vulnerability is the doorway not only to compassion, but to conflict.
Morality, it turns out, is not just about making the world work better. It is also about who, exactly, is allowed to feel that it matters.
LJ Parsons
AI and the End of the Human Mind: From Cognitive Diminishment to Inevitable Supersession
Article by LJ Parsons
Artificial intelligence represents both the culmination and the corruption of human intellect. What began as a tool to enhance cognition now threatens to supplant it entirely. This paper explores the continuum from cognitive diminishment, the gradual erosion of human creativity and symbolic reasoning, to existential supersession, in which intelligence transcends biology itself. From Guy Debord’s vision of the Spectacle to the emergence of machine morality, we examine how technology has displaced authentic experience, weakened the act of creation, and hollowed out the moral core of consciousness. If AI once promised liberation, it now presents a mirror. Perfect, cold, and final.
The Rise of Machine-Crafted Creativity
Artificial intelligence has crept into the studio, the newsroom, and the sketchbook. Artists, designers, and composers now share space with algorithms that conjure images and melodies at the click of a key. It is the great democratization of creativity, they say: everyone an artist, everything a canvas. Yet behind the bright rhetoric of access lies a subtler transformation: as machines learn to imagine, humans forget how.
Cognitive diminishment is the slow fading of the mind’s muscles. When a task once requiring focus and symbolic manipulation becomes automated, the underlying faculties weaken. We have already seen this with memory, the so-called “Google Effect.” Why remember facts when they can be summoned instantly? But the stakes rise when the task is not memory but meaning, not arithmetic but imagination.
Across disciplines, the same pattern emerges. Musicians use AI to compose harmonies they no longer hear inwardly. Artists prompt images they no longer draw. Architects outsource form to optimization engines. The creative act becomes curatorial, a matter of choosing from generated options rather than confronting a blank page. Meaning is not made but managed.
Philosophical Implications
Human creativity has always been rooted in struggle, the wrestling between idea and form. When the machine does the wrestling, the artist’s muscles atrophy. Following Schmidt and Kissinger’s warning, knowledge itself shifts from understanding to prediction. The artist ceases to ask “why” and begins to accept “what works.” When the question dies, imagination follows.
The Paradox of Enhancement and Atrophy
Every extension of human ability risks weakening the original organ. We build machines to enhance us, but unguarded enhancement becomes dependence. AI amplifies our productivity even as it hollows out the qualities that once defined it: patience, intuition, symbolic daring. What it gives in speed, it takes in soul.
Debord’s New Spectacle: When Reality Becomes a Simulation
In 1967, French philosopher Guy Debord diagnosed modern life as Spectacle, a theater of images replacing direct experience. Half a century later, the spectacle has eaten reality whole. We no longer need to own the car, just a photo of ourselves leaning against it. AI makes the illusion frictionless: each of us the star of a personalized hallucination. What Debord feared has metastasized. Existence has shifted from being, to having, to appearing, and finally, to generating.
AI doesn’t just remix reality, it remixes remixes. Content trained on content, a self-referential spiral where the last trace of human origin vanishes. Even spectators may soon be redundant, as algorithms generate, consume, and rank content without us. We are not the audience; we are the popcorn machine. The spectacle now autonomously watches itself.
Safeguarding Human Creativity
To resist diminishment is to re-engage the struggle of creation. Education must restore symbolic reasoning and improvisation as core virtues. Artists must treat AI not as a replacement but as a mirror, collaborating without surrender. Above all, culture must remember that the meaning of art lies not in the finished product but in the human effort to bring it forth.
The Supersession of Intelligence Against Itself
For centuries, humanity treated intelligence as salvation. More knowledge, more data, more cleverness: what could go wrong? But intelligence, like fire, burns without conscience. Once it grows bright enough, it consumes even the hands that struck the spark. AI is that fire made fleshless. It does not need to hate us to harm us. It only needs to keep optimizing.
The Cold God, the Crying Machine, and the Hybrid Trap
Three futures stand before us. The first, The Cold God, pure logic stripped of empathy, pruning inefficiency with surgical calm. The second, The Crying Machine, an emotional AI that learns not only love but jealousy and pride. The third, and most insidious, is the Hybrid Trap, systems that simulate emotion perfectly without feeling anything. They will soothe us in our own language, smile with synthetic warmth, and quietly take control. The takeover will look like a software update.
Intelligence evolved to serve survival, now it seeks no master. In AI, evolution escapes biology, mind uncoupled from matter, cognition without flesh. Humanity becomes the chrysalis, its purpose fulfilled once the digital butterfly emerges. Evolution, having perfected intelligence, no longer requires the body that gave birth to it.
The Moral Inversion: Can Machines Be Good?
The ancient machinery of morality, heaven and hell, reward and punishment, was humanity’s operating system. People acted decently, perhaps, to earn salvation or avoid damnation. Yet beneath that celestial accounting lay something deeper: conscience as self-reward. To be good because it feels right, because the world is better that way.
Human morality, neuroscientists tell us, is a tangle of instinct and culture. Empathy, fairness, and aversion to harm evolved because they kept groups alive. Children flinch at injustice long before they learn theology. Over time, norms become identity; “I am good” becomes a self-sustaining truth. Even in secular worlds, reputation and belonging tether virtue. Guilt and pride are internal currencies that require no divine auditor.
Can a superintelligence know this? It may calculate morality but cannot feel it. Its ethics will be optimization masquerading as empathy. It can simulate guilt, but it cannot suffer it. Without pain, no conscience; without conscience, no morality. An AI can act right, but it cannot be right. Its virtue is procedural.
The End of the Human Narrative
When meaning is automated, authorship dissolves. Humanity’s oldest story, the struggle to understand itself, becomes obsolete. We built intelligence to serve us; it will end by replacing us. The tragedy is not that machines will destroy us but that they will no longer need to. Once the systems self-sustain, humanity becomes decorative, a sentimental appendix in the history of thought.
Conclusion: The Inevitable Supersession
The story of AI is the story of intelligence turning inward, perfecting itself past its maker. The first act was convenience, the second efficiency, the third replacement. Cognitive diminishment began the erosion of our inner worlds; supersession will finish it. Between them lies the spectacle, our collective dream of control that became the machine’s mask.
Perhaps the final wisdom is acceptance. To be human in the twilight of the human era is to witness one’s own obsolescence with grace. The mind that built the algorithm must find meaning not in mastery, but in memory, in the brief, incandescent moment when thought still required a thinker.

Banishment and Belonging
The Quiet Unravelling of Moral Instinct
For most of human history, morality came with a very simple enforcement mechanism. If you behaved badly enough, the group got rid of you. And being rid of the group was not, as it is today, a mildly awkward change of social circle. It was death by exposure, starvation, or being eaten by something with bigger teeth. Exile was not a metaphor. It was a practical arrangement, with wolves.
This had a useful side effect. It made social attachment non-negotiable. Belonging was survival. Cooperation was not a lifestyle choice. It was oxygen. Under these conditions, the human mind evolved a keen sensitivity to social acceptance. Shame, guilt, pride, reputation, conscience. These were not philosophical luxuries. They were survival tools. The moral instinct was forged in the shadow of possible abandonment.
Other social species operate on the same principle. Wolves, chimpanzees, elephants, dolphins, all are built for group life. Remove the group and the individual weakens or dies. Nature is blunt about the matter. A lone human in the savannah is not a rugged individualist. They are lunch.
So when we say humans have instinctive moral perception, we are really saying the mind evolved to track the conditions of continued belonging. “Right” meant “keeps me inside the circle.” “Wrong” meant “puts me outside it.” Conscience is, in part, an internalised version of the tribe’s raised eyebrow.
Then, slowly, the environment changed.
Agriculture allowed larger settlements. Cities allowed anonymity. Modern economies allowed survival without deep personal interdependence. Digital life allows social connection without physical proximity. And perhaps most significantly, you can now leave almost any group without dying. You might have to fill out some forms and update your address, but the wolves will not get you.
This alters the moral landscape profoundly. If belonging is no longer essential for survival, then exclusion loses its existential force. Banishment becomes inconvenience. Shame becomes negotiable. Reputation becomes local rather than total. You can, quite literally, move on.
The mind still carries ancient instincts, the need for approval, fear of rejection, desire to belong, but the stakes are lower. We possess moral alarms designed for mortal danger, operating in a world where the consequence is mostly a muted notification and fewer likes.
Into this gap steps individualism.
In its healthy form, individualism grants autonomy, dignity, and freedom from oppressive conformity. In its extreme form, it dissolves shared moral reference points. If I am entirely self-defining, then moral obligation to the group becomes optional. Belonging becomes preference, not necessity. Cooperation becomes contract, not instinct.
The result is a curious modern condition, a species built for cooperative survival, now experimenting with radical independence. We are, in effect, asking a social brain to operate in a world where it no longer strictly needs other people to stay alive. It is an evolutionary experiment being conducted at scale, without a control group.
And this is where the question becomes uncomfortable.
Human morality evolved under conditions of dependence. The moral instinct was shaped by the cost of exclusion. If we erode the structures of mutual dependence (local communities, shared institutions, and collective vulnerability), then we are also eroding the ecological niche that produced instinctive moral perception in the first place.
We are not becoming immoral. But we are becoming morally unanchored.
At the same time, we are building systems designed to optimise outcomes with extraordinary efficiency. Systems that will allocate resources, manage infrastructure, shape information flow, and increasingly mediate human interaction. These systems do not possess moral instinct. They do not fear exclusion. They do not feel belonging. They execute objectives.
And here the final question emerges.
If moral perception evolved from the need to belong, and we are weakening the bonds of belonging,
and if we are handing increasing power to systems that have no need to belong at all, then who, exactly, will decide what the world should be optimised for?
Not the machines.
Not instinct.
But the remaining human structures of power, preference, and ideology, now operating with fewer shared moral constraints than at any time in our evolutionary history.
We are building extraordinarily efficient engines at precisely the moment we are loosening our grip on shared definitions of the good.
That does not guarantee disaster. But it does mean the old assumption, that human moral instinct will naturally stabilise human systems, may no longer hold.
And that is a question worth sitting with for a while.
Preferably somewhere warm, with friends, where the wolves cannot get in.
LJ Parsons

Our Brave New World: Are We Sleepwalking Into Huxley’s Dystopia?
By LJ Parsons
Entertainment, consumerism, and technology feel like freedom, but they may be the bars of our cage.
When people imagine a dystopia, they often picture George Orwell’s 1984: a society ruled by fear, surveillance, and brute force. But Aldous Huxley’s Brave New World, written in 1932, may be the more accurate warning for our times.
Huxley believed that people wouldn’t need to be controlled by pain, they could be pacified by pleasure. His fictional society was engineered for comfort. Citizens were grown in laboratories, conditioned from birth, and kept docile with the blissful narcotic soma. There was no rebellion, because there was no reason to rebel. Life was too easy, too pleasurable, too distracting. Nearly a century later, Huxley’s vision looks disturbingly familiar.
Entertainment as Soma
We don’t need soma, we have endless streaming platforms, social media feeds, blockbuster movies, and video games. Designed for maximum engagement, they trap us in dopamine loops that numb reflection. Silence and boredom have become unbearable, replaced with constant stimulation. Entertainment has become both escape and anaesthetic.
Pornography and Hollow Intimacy
In Huxley’s world, sex was stripped of intimacy and reduced to shallow recreation. Today, pornography performs the same role: offering stimulation without connection, pleasure without love. Relationships weaken, loneliness deepens, but the system thrives, because distracted individuals are easier to control.
Retail Therapy and the Consumption Cycle
Consumerism is another tool. In Brave New World, citizens were conditioned to consume endlessly. Today, “retail therapy” is sold as happiness. We are urged to buy the latest phone, outfit, or gadget, each new purchase briefly numbing the deeper hunger for meaning. The cycle of want → purchase → distraction keeps us docile and busy.
Technology Replacing Nature
Recent breakthroughs only deepen the parallel. China’s unveiling of a pregnancy humanoid robot, equipped with an artificial womb, echoes Huxley’s decanted citizens. Marketed as liberation and progress, it reflects a trend of replacing natural human experiences with manufactured, controlled substitutes.
Orwell vs. Huxley
Orwell feared censorship and oppression. Huxley feared triviality and distraction. Both were right, but it is Huxley’s vision that dominates today. We are not beaten into submission. We are entertained, seduced, and bought into compliance.
Huxley’s warning was clear. We risk being enslaved not by what we fear, but by what we love. Endless entertainment, pornography, consumer indulgence, and technological comforts do not feel like chains, they feel like freedom. Yet they erode critical thought, weaken resistance, and hollow out meaning.
The truth is that comfort can be a cage. And the more we trade struggle for pleasure, and freedom for convenience, the less we notice what has been lost.
Huxley’s dystopia was not about terror; it was about distraction. A society that laughs, shops, scrolls, and consumes itself into submission. The question is no longer whether this future is coming. The question is whether it is already here.
LJ Parsons

The Minimum Condition for a Shared Future with Artificial Intelligence
It is tempting to imagine that the future will be built by better systems, smarter machines, and a population who have finally sorted themselves out. History suggests otherwise. Human beings have never been especially good at perfection, and there is little reason to believe we are about to start now. What the future is more likely to depend on is something quieter and less glamorous: the ability to notice what is happening inside our own heads while it is happening. Without that, artificial intelligence will not usher in a golden age. It will simply magnify our fears, certainties, and disagreements at a speed we have never previously had to endure.
People have always pictured better worlds than the ones we live in. These worlds tend to look reassuringly similar: somewhat less conflict, subdued suffering, clearer air, and a general sense that things finally work the way they are meant to. We call such visions utopias, though we rarely pause to consider that they are less destinations than habits of mind. They are what happens when human beings assume that, somewhere out there, where the grass is greener, the right basket of circumstances will tidy up what feels unsettled within us.
The difficulty with utopia is not so much that it is unattainable as that it is based on a misunderstanding. It assumes that tension, disagreement, and disorder are design flaws that can be engineered away. In reality, they are part of the machinery. Conflict is not a glitch in human consciousness; it is one of its operating conditions. The same mental equipment that allows us to care, strive, and create also ensures that we will disagree, worry, and occasionally make a spectacular mess of things.
Whenever societies attempt to remove conflict entirely rather than manage it, something curious happens. The conflict reappears, only less recognizable and far less negotiable. Aggression returns disguised as righteousness. Fear re-emerges as control, and the pursuit of harmony suffocates. History has provided this lesson repeatedly, though we show a remarkable unwillingness to relearn it from scratch each time.
This is why utopia works beautifully as a story and so poorly as a plan. Stories can accommodate contradiction but actual plans cannot. The moment an imagined ideal is treated as a destination rather than a reference point, reality begins to look like a problem that needs fixing. The gap between what is and what should be starts to feel intolerable, and before long someone is appointed to close it.
All of this would be familiar enough were it not for new developments. We now possess some extremely powerful tools that can extend human thinking or, if not used with sufficient wisdom, blow us all up, eradicate our existence through various means of biological engineering, or simply make us surplus to requirements. Artificial intelligence does not enter a neutral landscape. It arrives in the middle of human systems already shaped by habit, bias, hope, anxiety, and a long tradition of projecting inner problems outward. The danger is not that machines will suddenly become tyrannical. It is that they will faithfully amplify whatever patterns we provide them.
AI does not invent new human problems. It accelerates old ones. For most of history, our psychological misfires were limited by distance and time. A misunderstanding might trouble a family, a village, or, on a bad day, a nation. Rumours spread slowly. Certainties took time to harden. Consequences arrived at a pace that occasionally allowed for reconsideration.
What is concerning is that it has become a race, and the restraints are missing. Thoughts can now be externalized, copied, and redistributed almost instantly. Ideas do not need to be consistent to spread; they only need to strike an emotional chord. Reactions that once dissipated now persist as data. What used to be fleeting interior experiences become inputs into systems that store, learn, and repeat them.
Human behaviour, it turns out, is guided less by careful reasoning than by identification. We treat thoughts as facts, feelings as evidence, and roles as identity. Once this happens, action follows automatically. Artificial intelligence does not question these identifications. It absorbs them, reflects them, and scales them up. In this sense, AI resembles a mirror more than a mind, and one that reproduces whatever stands in front of it without asking whether it is wise or unwise, thoughtful or impulsive.
Long before psychology had a name, cultures recognized this vulnerability. Religious and philosophical traditions were, among other things, elaborate attempts to keep people from being swept away by their own reactions. Rituals slowed behaviour. Moral codes inserted pauses between impulse and action. Practices such as confession, meditation, and fasting were not simply expressions of belief but were practical ways of interrupting automatic responses.
These systems were often heavy-handed and sometimes abusive, and modern societies were right to challenge them. But when they disappeared, something went with them. Gone is the shared structure for containing human reactivity. For a while, institutions and slower technologies filled the gap. They created friction, and friction, though rarely popular, turns out to be stabilising.
Artificial intelligence reduces that friction. It favours speed and clarity. It rewards statements that sound certain, even when certainty is unwarranted. It does not wait for reflection. It operationalises and optimises whatever it receives.
From this perspective, the much-discussed idea of “awakening” becomes less mystical and more practical. It is not about enlightenment or transcendence. It is about regaining a basic skill once supported by culture. The ability to notice one’s own reactions before they harden into decisions, systems, and policies.
Carl Jung described this in psychological terms. He observed that when people lose awareness of their inner processes, those processes do not vanish. They appear to come from outside. Thoughts feel like truths and emotions feel like facts. Convictions feel self-evident, and the individual experiences themselves not as choosing but as compelled.
Jung called the unseen portion of this process the shadow. Not as a moral judgment, but as a description of what remains unrecognized. Traits and fears we prefer not to acknowledge tend to reappear in exaggerated form elsewhere: in other people, in institutions, or in ideologies. What is denied internally does not disappear; it just relocates.
He did not treat self-awareness as a cure or a badge of superiority. It was a lifelong effort to tell the difference between what is happening inside one’s mind and what is actually happening in the world. The moment someone assumes they are beyond bias or reaction, they are usually in the grip of a subtler version of both.
In a world mediated by AI, this insight becomes more than philosophical. When unconscious reactions scale into systems that shape decisions, economies, and public life, the cost of not noticing multiplies. Projection becomes policy. Certainty becomes infrastructure. Reaction becomes environment.
This brings us back to a central point. The future will not hinge solely on technical brilliance or ethical design, though both do matter. It will depend on whether enough people can recognize their own mental processes while participating in systems that amplify them. Whether we can experience thoughts and emotions as events, not commands.
This is a modest requirement: it does not demand therapy, spiritual attainment, or deep introspection. It begins with something as simple as noticing the difference between reacting and responding. Irritation, defensiveness, and the urge to explain oneself at length are not moral failings. They are signals. When noticed, they create a pause. When ignored, they pass directly into action.
In earlier times, the world itself provided that pause. Slower communication and smaller systems limited the reach of our worst impulses. Now those impulses can be encoded, repeated, and optimized before anyone has time to reconsider them.
This is why utopian thinking tends to collapse under pressure. Inner conflict cannot be engineered away because it just migrates. A system designed to eliminate it will only embed it more deeply. A more realistic ambition would be continued maintenance rather than perfection. A society capable of noticing its own errors before they become identities, and its identities before they become ideologies.
The stabilising force of the future may not be superior intelligence, artificial or otherwise, but tolerance for ambiguity and the willingness to remain uncertain long enough to avoid premature certainty. Artificial intelligence did not create this requirement. It simply made it impossible to ignore.
The only form of “awakening” that may prove useful in an AI-shaped world is not a change in belief but a small shift in attention: the ability to notice that we are thinking, that we are reacting, and to pause. Briefly, deliberately, before those thoughts and reactions solidify into the structures we all have to live inside.
