
Human and Machine: Entropy, Intelligence, and What Still Sets Us Apart

What thermodynamics reveals about why AI may never be like us

By Angelo Lima

Introduction: A Universal Law Nobody Explained to You

There’s a physical law you obey every day without knowing it. It explains why your desk gets messy, why your coffee cools down, why your projects drift, why your teams become disorganized, and perhaps even why you sometimes feel overwhelmed by your emotions.

This law is the second law of thermodynamics: the entropy of an isolated system never decreases. In other words: without effort, everything tends toward disorder.

But entropy isn’t really “disorder” in the sense of a messy room. It’s about the number of possible configurations [2]. An assembled puzzle has only one correct configuration—low entropy. A scattered puzzle can be mixed in billions of ways—high entropy. And if you shake the box, the puzzle will never solve itself. Things spontaneously move toward the state with the most possible configurations. Always.
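
To make the intuition quantitative, Boltzmann’s formula counts exactly these configurations: S = k·log W, where W is the number of arrangements compatible with what you observe [2]. The minimal sketch below applies it to the puzzle analogy; the 1,000-piece count is an invented parameter, and the point is only how fast W—and with it the entropy—explodes once order is lost.

```python
import math

# Boltzmann's statistical entropy: S = k_B * ln(W), where W is the number
# of microstates (configurations) compatible with the macrostate we observe.
K_B = 1.380649e-23  # Boltzmann constant, in joules per kelvin

def boltzmann_entropy(ln_w: float) -> float:
    """Return S = k_B * ln(W), taking ln(W) directly so W can be astronomically large."""
    return K_B * ln_w

pieces = 1000  # hypothetical puzzle size, chosen only for illustration

# Assembled puzzle: essentially one acceptable configuration, so ln(W) = ln(1) = 0.
assembled = boltzmann_entropy(math.log(1))

# Scattered puzzle: any ordering of the pieces counts, so W = 1000! and
# ln(W) = ln(1000!) ≈ 5912, computed via lgamma to stay within floating point.
scattered = boltzmann_entropy(math.lgamma(pieces + 1))

print(assembled)  # 0.0      -> a single configuration: low entropy
print(scattered)  # ~8.2e-20 -> astronomically many configurations: high entropy
```

Shaking the box cannot reverse this, because there are unimaginably more scattered arrangements than assembled ones; random motion almost surely lands on one of them.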

This idea has profound implications. On how we work, think, and feel. And above all, it sheds new light on the question that obsesses our era: what separates human intelligence from artificial intelligence? The answer might well lie in entropy itself.


Life: The Anti-Entropic Anomaly

In a universe inexorably marching toward total disorder, life is a spectacular anomaly. From a few simple molecules, it builds organisms of staggering complexity—cells that divide, organs that cooperate, brains that think. Life creates order where there shouldn’t be any.

But it doesn’t violate thermodynamics. It circumvents it. A living organism creates order locally by increasing disorder around it. You eat ordered food; you release heat and waste. The overall entropy balance still comes out positive: the universe continues its march toward chaos, but life has found a way to swim against the current—temporarily, locally, at the cost of constant energy expenditure.

Erwin Schrödinger understood this as early as 1944 in his book What is Life? [1]: a living organism literally feeds on negative entropy. It draws order from its environment to maintain its own structure. As soon as it stops—that’s death. The return to thermodynamic equilibrium. The return to disorder.

This idea is dizzying: to live is to resist entropy. Every heartbeat, every breath, every thought is an act of resistance against the universe’s natural tendency toward chaos.

And it’s this resistance that underlies everything we do—including what we call work.


Human Work: A Universal Struggle Against Disorder

If life is resistance to entropy, then work is its primary tool. And not just intellectual or technical work. All professions, without exception, are forms of struggle against disorder.

The Surgeon

A sick body is a system whose entropy is increasing—cells become disorganized, organs malfunction, biological processes derail. The surgeon intervenes to restore order. They repair, restructure, realign. Every suture is an anti-entropic act. But the operating room itself generates disorder: energy consumed, materials used, staff fatigue. Order is always local, and it always has a cost.

Today, AI systems assist surgeons—image analysis, surgical planning, robotic surgery. They reduce uncertainty upstream. But at the critical moment, when the patient’s body presents an unforeseen anomaly, it’s the surgeon’s intuition—nourished by thousands of hours of embodied practice, by the sensation of tissue under their fingers, by that ability to “sense” that something is wrong—that makes the difference. AI optimizes the plan. The human navigates the chaos.

The Farmer

A field left to itself returns to wilderness. Weeds invade, soil depletes, biodiversity simplifies. The farmer imposes order: selecting species, structuring rows, controlling irrigation. They transform a chaotic ecosystem into an organized productive system. But again, at the cost of enormous energy expenditure—fuel, fertilizers, physical labor.

Precision agriculture now uses drones, sensors, and algorithms to optimize irrigation and treatments. AI excels at regularity: analyzing satellite data, predicting yields, detecting diseases in images. But the farmer who senses a storm coming by looking at the sky, who knows soil needs rest by taking it in their hand, who adapts practices to a microclimate no algorithm models—they operate in a dimension the machine doesn’t yet touch. Their intelligence is embodied in the earth [8].

The Teacher

An untrained mind is a space of high informational entropy [3]. Ideas are fragmented, connections random, understanding of the world fuzzy. The teacher reduces this entropy: structuring knowledge, creating logical links, building frameworks of understanding. They transform noise into signal.

Adaptive learning platforms and AI tutors already do part of this work—and sometimes better than humans for pure information transmission. But a teacher does much more than transmit knowledge. They perceive the discouraged look of the student in the third row. They sense the exact moment when a class disengages. They adapt their speech not to performance metrics, but to a collective energy they physically feel in the room. AI can personalize a learning path. The teacher creates the desire to learn—and that’s emotional entropy transformed into motivation.

The Manager

A team without direction is a high-entropy system. People pull in different directions, priorities are unclear, efforts scatter. The manager creates order: aligning objectives, clarifying roles, coordinating actions.

AI tools can already analyze a team’s velocity, suggest resource allocations, identify workflow blockers. But management, in its most essential dimension, is emotional work. It’s knowing that a colleague is going through a divorce and adjusting expectations without saying it explicitly. It’s sensing tension between two people before it explodes. It’s inspiring a collective to give their best when everything seems lost. The machine manages flows. The human manages souls.

The Developer

Code without architecture is chaos. Overlapping functions, circular dependencies, incomprehensible variable names. The developer imposes structure: patterns, conventions, abstractions.

This is perhaps the profession where the human-machine boundary is thinnest today. AI coding assistants generate functions, entire modules, sometimes complete applications. But as I detailed in my critical analysis of AI and developer replacement, generated code is often “almost correct”—and the “almost” hides hours of debugging. The developer’s true value isn’t in writing code. It’s in understanding the problem—that ability to translate a fuzzy human need into a coherent software architecture. AI writes code. The human understands why that code must exist.

The Artist

One might think art is pure chaos, pure entropy. But it’s exactly the opposite. Facing a blank canvas—a space of infinite possibilities, maximum entropy—the artist makes choices. Every brushstroke reduces possibilities. Every composed note eliminates alternatives. Art is the imposition of a singular order on a space of infinite chaos.

Generative AI now produces stunning images, music, and text. Technically, it reduces entropy: it converges toward structured outputs from statistical noise. But something fundamental is missing. A human artist paints because they suffer, because they’ve seen something that shook them, because they want to say something words cannot express. AI generates because it’s asked to generate. The difference isn’t in the result—it’s in the intention. And intention is born from inner chaos. From lived experience. From emotional entropy.

The Lawyer

Law is a monumental attempt to reduce social entropy. Without laws, human interactions are unpredictable—high entropy. The legal system imposes rules, procedures, consequences that reduce the number of possible behaviors.

Legal AI tools already analyze case law, predict trial outcomes, and generate contracts. But law, in its most critical moments, is an exercise in structured empathy. To plead is to understand a jury’s emotions. To judge is to weigh the human intention behind an act. To legislate is to anticipate a society’s passions. Law without humanity is mere bureaucracy—dead order, without the breath of chaos that makes it just.

The Chef

Raw ingredients are a system of high culinary entropy—they can be combined in an almost infinite number of ways, the overwhelming majority of which will be inedible. The chef selects, doses, transforms, assembles according to precise principles to produce an ordered result.

AI systems generate recipes, optimize flavor combinations, and analyze taste profiles. But a great chef doesn’t follow a recipe—they feel it. They taste and adjust in real time, guided by a sensory memory no database can reproduce. Cooking, at its highest level, is a dialogue between body and matter [9]. Between a chaos of flavors and an order that emerges from intuition.


The pattern is universal: in every profession, AI excels in the anti-entropic dimension—structuring, optimizing, predicting, reducing uncertainty. But humans bring something the machine doesn’t have: the inner chaos that gives order its meaning, its direction, and its depth. AI is a canal. The human is a river.


Intelligence: The Ultimate Anti-Entropic Weapon

If work is the struggle against entropy, intelligence is the most sophisticated weapon in this arsenal.

Physicist Alex Wissner-Gross proposed an elegant idea in 2013 [4]: intelligent behavior is what maximizes future options. An intelligent system acts to keep as many doors open as possible. It’s paradoxical: it looks like maximizing entropy (more possibilities), but in reality it’s controlling which possibilities remain open. It’s directed entropy.
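
For readers who want the formal version, the paper [4] frames this as a “causal entropic force”—a push toward states from which the most futures remain reachable. In simplified notation (a paraphrase of the published formula, with symbols as defined in the paper):

F(X₀, τ) = T_c · ∇ S_c(X, τ), evaluated at X₀

where S_c is the entropy of the paths the system can still follow within a time horizon τ, and T_c is a constant that sets how strongly the system is driven toward option-rich states. Following this force is, by construction, acting to keep the largest space of futures open.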

Take a chess player. A beginner reduces their own options with each move without realizing it. A grandmaster maintains a position that leaves them a maximum of viable future moves. They don’t create disorder on the board—they preserve a structured space of possibilities.

This definition applies to all domains. A good doctor keeps therapeutic options open rather than rushing to a diagnosis. A good investor diversifies rather than betting everything on one position. A good software architect designs extensible systems rather than rigid ones. In each case, intelligence consists of resisting the premature closure of possibilities.

Neuroscience suggests the brain operates at criticality [5]—that boundary between order and chaos where information propagates most richly. Too much neural order and the brain becomes rigid, unable to adapt. Too much chaos and it can no longer process information coherently. Intelligence emerges at the edge of chaos.

And this is where the comparison with AI becomes revealing.


AI: Anti-Entropy Without Chaos

A language model is fundamentally an informational entropy reduction machine [3]. You ask a vague question—there are millions of possible answers (high entropy). AI gives you a relevant answer—it drastically reduces uncertainty (low entropy). This is what an LLM does with each generated token: out of tens of thousands of candidate words, it collapses a probability distribution into a single choice, strongly favoring whatever is most probable in context.
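
To make that concrete, here is a minimal sketch—with invented numbers, not taken from any real model—of how Shannon entropy [3] measures this collapse. Before conditioning on context, the next-token distribution is spread out (high entropy); after conditioning, one candidate dominates (low entropy).

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)) in bits, skipping zero-probability tokens."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical next-token distribution with no context: near-uniform over 8 candidates.
without_context = [1 / 8] * 8
print(shannon_entropy(without_context))  # 3.0 bits: maximum uncertainty over 8 options

# The same 8 candidates after conditioning on a prompt: one token dominates.
with_context = [0.90, 0.04, 0.02, 0.01, 0.01, 0.01, 0.005, 0.005]
print(shannon_entropy(with_context))  # ~0.71 bits: most of the uncertainty is gone
```

Real models repeat this collapse over vocabularies of tens of thousands of tokens, once per generated token: that is the entropy reduction at work.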

AI is extraordinarily efficient at this entropy reduction. Often more efficient than a human. It processes more data, faster, with fewer cognitive biases. For purely anti-entropic tasks—classifying, sorting, optimizing, predicting—it’s already superior in many domains.

But there’s a cost, and not just energetic. As I explored in my article on AI’s ecological cost, the datacenters running these models consume massive amounts of energy and produce heat [12]. AI creates informational order by increasing physical disorder elsewhere. Thermodynamics won’t be fooled.

And there’s an even deeper difference. A living organism maintains itself [1]. It actively fights against its own degradation. It repairs, adapts, evolves. AI does none of that. Without humans to supply energy, maintain it, correct it, it stops. It doesn’t resist entropy—it’s a tool in humanity’s resistance to entropy.

AI is anti-entropic, yes. But it’s anti-entropic without the counterweight of chaos. And that’s perhaps where its fundamental limit lies.


AGI: The Dream of a Machine Like Us

The question everyone asks: could artificial general intelligence bridge this gap?

In theory, an AGI would be an incredible concentrate of order. It could understand, plan, adapt, create in any domain. It would be anti-entropic at a scale we can’t even imagine.

But to be truly intelligent in the human sense, it would need something nobody yet knows how to give it: something to lose.

A human thinks with urgency because they’re mortal. They create because they suffer. They innovate because they’re afraid. They love because they know it can end. All this existential entropy—the chaos of being a fragile body in an unpredictable world—isn’t a bug in human intelligence. It’s its engine.

An AGI, even infinitely more powerful than any human brain, would be intelligent differently. It could optimize, solve, structure better than us. But could it write a poem that makes you cry? Not because it doesn’t know how to assemble words—it already knows that. But because behind a touching poem, there’s someone who lived the chaos they describe. Someone for whom these words aren’t probabilistic tokens but scars.

There’s also the body argument—what philosophers and neuroscientists call embodied cognition [8] [9]. Our intelligence isn’t just in the brain. How you think is shaped by having hands, by feeling hunger, by having back pain, by smelling coffee in the morning. All this sensory experience feeds thought in a way a purely informational system cannot reproduce.

You can’t simulate the fear of death if you can’t die. You can’t understand the joy of a shared meal if you’ve never been hungry. You can’t grasp the beauty of a sunset if you don’t have a retina that vibrates.

It’s the difference between a river and a canal. Both carry water. The canal is more efficient—no meanders, no floods, no surprises. But only the river can carve a canyon. Because the canyon is born from chaos—from the brute force of water that follows no plan, that erodes, that overflows, that destroys to create.

AGI would be a perfect canal. The human is a river. And the world needs both.


Emotions: The Necessary Chaos the Machine Will Never Have

This is where the human-machine comparison becomes sharpest.

If intelligence is anti-entropic, emotions seem to be the exact opposite. Anger, fear, love, jealousy—they don’t follow logic, they don’t maximize options, they overwhelm rational thought. In thermodynamic terms, emotions introduce disorder into the ordered mechanics of intelligence.

But is this a design flaw?

Neurologist Antonio Damasio studied patients with lesions in the ventromedial prefrontal cortex [6]—the area connecting reasoning to emotions. Their IQ was intact. Their logic worked perfectly. But without feeling, they became incapable of making decisions. They could analyze the pros and cons of a restaurant for hours without ever choosing. Their pure intelligence, deprived of emotional “noise,” was paralyzed.

Read that last sentence carefully. Pure intelligence, without emotions, is paralyzed. This is exactly the condition of an AI. It doesn’t choose—it calculates the highest probability. It doesn’t decide—it optimizes a cost function. It doesn’t act—it executes. The difference between choosing and calculating is created by emotional chaos.

Emotions are a form of functional entropy—disorder the system uses as fuel. Fear makes you flee without thinking—which saves your life when a truck is heading toward you. Love pushes you to protect your children at the expense of your own safety—which is “irrational” but biologically brilliant. Enthusiasm pushes you to start an impossible project—which sometimes leads to discoveries pure reason would never have allowed.

If we return to the idea of neural criticality [5], emotions are precisely what prevents the brain from becoming too rigid. They inject just enough chaos to remain adaptive, creative, alive.

That’s why an AI, however powerful, lacks something fundamental. It lacks that entropic engine that pushes one to act without calculated reason, to create without guaranteed results, to take irrational risks—in short, to be alive. AI has no guts. And guts, in thermodynamics as in philosophy, matter.


Emotional Intelligence: Taming the Chaos the Machine Doesn’t Know

Then comes a modern concept: emotional intelligence. Observing your emotions, naming them, regulating them, using them wisely. It’s applying anti-entropy to what is fundamentally entropic. It’s a fascinating meta-level: intelligence turning against the chaos that feeds it.

Every human culture has attempted this domestication. Greek Stoicism proposed apatheia [11]—not the absence of emotions, but their mastery through reason. Buddhism aims for detachment through meditation. Cognitive-behavioral therapy restructures dysfunctional emotional patterns. Modern personal development promises to “manage your emotions” like managing an investment portfolio.

But here’s the troubling observation: it never fully works. You can meditate for ten years, and a betrayal or bereavement brings you back to raw chaos in a second. You can be the calmest, most structured manager in the world, and an unexpected crisis awakens panic. Emotional intelligence is a constant effort that never ends.

This is exactly what thermodynamics predicts. Maintaining order requires permanent effort. As soon as you stop, entropy returns. Emotional intelligence isn’t an achievement—it’s a permanent conquest against our own entropy.

And this may be our ultimate competitive advantage over the machine. AI doesn’t need emotional intelligence—it has no emotions to manage. But it’s precisely because we must navigate our own inner chaos that we’re capable of empathy, compassion, deep understanding. Emotional intelligence isn’t a human luxury—it’s the byproduct of a struggle only living beings can wage. And this struggle produces something no algorithm can manufacture: wisdom.

But individual wisdom wasn’t enough. Facing the immensity of chaos—death, suffering, the absurd—humanity needed a larger framework. Long before emotional intelligence, long before philosophy, long before science, it had found another tool to fight existential entropy: religion.


Spirituality: The Oldest Response to Chaos

Look at religions through the lens of entropy, and something striking appears: every spiritual tradition, whatever it may be, is fundamentally an anti-entropic system applied to human existence [15].

Existential chaos—awareness of death, suffering, injustice, the absurd—is perhaps the most vertiginous form of entropy the human mind faces. Why am I here? Why suffering? What happens after death? These questions are chasms of uncertainty, spaces of infinite possibilities where the mind can get lost. High existential entropy.

And religion, in all its forms, proposes exactly what any anti-entropic struggle does: reduce the number of possible configurations. It provides a framework. A narrative. An order.

Monotheism imposes a single God, a plan, a direction to the universe—where chaos suggests the absence of meaning. Buddhism proposes the Four Noble Truths and the Eightfold Path—a methodical structure for navigating suffering. Hinduism offers the concept of dharma—a cosmic order assigning everyone a role in the greater whole. Animist and shamanic traditions create a network of links between human, nature, and spirits—a mesh of meaning where there could be only void.

In each case, the mechanism is the same: facing a universe that seems chaotic and indifferent, spirituality creates a structuring narrative that reduces the anguish of infinity. It transforms “everything is possible and nothing has meaning” into “here is the path, here is the reason, here is your place.”

The Anti-Entropy of Rituals

Religions don’t just provide intellectual answers. They also structure time and behavior [16]—two dimensions where entropy constantly threatens us.

Prayer five times a day in Islam. The weekly Shabbat in Judaism. Sunday Mass in Christianity. Meditation cycles in Buddhism. Fasts, festivals, pilgrimages. All these rituals do exactly the same thing: they impose an ordered rhythm on the chaotic flow of existence. They are anti-entropic metronomes.

Without ritual, days blur together. Time loses its structure. Meaning crumbles. This is what many non-religious people are rediscovering today: the need for secular rituals—morning meditation, journaling, exercise routines—to maintain inner order. The form has changed, but the anti-entropic function remains the same.

And collective rituals add an additional dimension: they synchronize the inner chaos of thousands of individuals into a shared ordered experience [15]. A collective prayer, a shared song, a silence observed together—these are moments when social entropy is temporarily reduced to almost zero. Everyone feels the same thing, at the same moment, in the same place. It’s extraordinarily powerful. And it’s something no technology has managed to reproduce.

Faith: Accepting Chaos to Transcend It

But there’s a fascinating paradox in spirituality that distinguishes it from all other forms of anti-entropic struggle.

Science reduces entropy by eliminating uncertainty: it measures, proves, verifies. Technology reduces entropy by controlling: it automates, optimizes, predicts. Religion does something radically different: it reduces entropy by accepting the incomprehensible.

To believe is to recognize that you don’t understand everything—and find peace in that acceptance. It’s living with mystery without being destroyed by it. It’s the most paradoxical form of anti-entropy: creating inner order not by eliminating chaos, but by giving it a status. Mystery is no longer a threat—it becomes sacred.

This is exactly what AI cannot do. An AI facing uncertainty does one of two things: it calculates a probability, or it signals that it lacks data. It cannot accept mystery. It cannot find beauty in what it doesn’t understand. It cannot be moved by infinity.

The spiritual quest may be the ultimate proof of our entropic nature. We seek meaning because we live in chaos. We pray, meditate, believe because our inner chaos—our fears, our awareness of death, our visceral need to understand—pushes us there. The machine doesn’t have this quest because it doesn’t have this chaos. It doesn’t need God because it doesn’t need meaning. And perhaps that’s the deepest difference between human and machine: not intelligence, not emotions, but the ability to search for something we may never find—and keep searching anyway.


Human Duality: What the Machine Cannot Be

The human being isn’t a creature of order. Nor is it a creature of chaos. It is the battlefield between the two. And it’s this tension—this permanent oscillation between structure and disorder—that produces what we call the human experience.

Creativity is born from this tension. A musician who knows only music theory (pure order) composes technically perfect but soulless pieces. A musician who knows nothing about theory (pure chaos) produces noise. The music that moves us is born exactly at the boundary between the two. Enough structure to be understandable. Enough chaos to be surprising. This is exactly what AI doesn’t do: it produces structure without the lived chaos that makes it moving.

The same principle applies to innovation. The most revolutionary advances rarely come from pure method or pure chance. They come from that intermediate zone: a structured mind that accepts being surprised by the unexpected. Penicillin, Post-it notes, the microwave oven—all discoveries born from the meeting between an ordered system and a chaotic accident. An AI can optimize a research process. But it can’t have the happy accident that changes everything, because it has no body that stumbles, no hand that slips, no attention that drifts toward a fascinating anomaly.

Even our human relationships obey this logic. A relationship that’s too ordered—too predictable, too controlled—suffocates. A relationship that’s too chaotic—without landmarks, without commitment—exhausts. Relationships that last are those that find this dynamic balance between stability and surprise. AI can simulate a conversation. It can’t live a relationship, because a relationship implies the risk of emotional chaos—and that risk is what gives it value.


The Future: Neither Replacement Nor Opposition, But Complementarity

If AI is the canal and the human is the river, then the future isn’t in replacing one with the other, nor in their opposition, but in their complementarity.

AI excels at what we do poorly: processing massive volumes of information, maintaining constant attention, eliminating cognitive biases, optimizing complex systems. It’s an unprecedented anti-entropic amplifier.

Humans excel at what AI cannot do: giving meaning, feeling, understanding others, navigating ambiguity, creating from lived experience, making decisions in total uncertainty. They are the meaning generators in a world the machine can order but not understand.

Tomorrow’s surgeon will use AI to plan operations with superhuman precision—and make the difference when the plan fails. The teacher will use AI to personalize each learning path—and inspire students through passion, humanity, their own doubts. The developer will use AI to write code faster—and bring deep understanding of the human problem the code must solve. The artist will use AI as an extraordinary brush—and provide the vision, pain, and beauty that give the work its reason for existence.

In every profession, the same dynamic: the machine reduces entropy, the human gives it meaning.


What Entropy Teaches Us About Meaning

If everything tends toward disorder, if every act of order is temporary and costly, what’s the point?

That’s perhaps the wrong question. The right question would be: doesn’t meaning emerge precisely from this struggle?

Philosopher Albert Camus imagined Sisyphus happy [10]—this man condemned to push a boulder up a mountain only to watch it roll back down eternally. The absurdity of the task doesn’t make it meaningless. It’s in the struggle itself that Sisyphus finds his dignity.

We are all Sisyphus. The surgeon heals patients who will fall ill again. The teacher shapes minds that will forget. The developer writes code that will become obsolete. The manager organizes teams that will reorganize. The artist creates works that time will alter. The farmer cultivates fields that seasons will ravage.

And yet, we continue. Not because we ignore entropy, but because the struggle against it is what defines us.

AI cannot be Sisyphus. It has no boulder. It has no mountain. It has neither awareness of the task’s absurdity nor the dignity of pursuing it anyway. It can push harder, faster, longer than us. But it doesn’t know why it pushes. And it’s the “why” that makes Sisyphus a hero.

Entropy tells us nothing lasts. But it also tells us that everything beautiful, true, and good that exists was torn from chaos by conscious effort. And that may be the most honest definition of meaning—a definition only a being who knows chaos can understand.


Conclusion: What the Machine Teaches Us About Ourselves

It’s ironic that it’s the advent of artificial intelligence that forces us to ask the most human question of all: what makes us irreplaceable?

The answer, seen through the lens of entropy, is clear. It’s not our intelligence—the machine is already faster. It’s not our memory—the machine is already vaster. It’s not our logic—the machine is already more rigorous.

What makes us irreplaceable is our chaos. Our emotions, our mortality, our body, our contradictions, our fears, our irrational hopes, our quest for meaning in the face of absurdity. All that inner disorder we spend our lives trying to tame—through intelligence, emotions, spirituality, work—is precisely what gives our intelligence its depth, our creativity its power, our existence its meaning.

Entropy isn’t our enemy. It’s the backdrop against which everything valuable is drawn. Without it, there would be no need to heal, teach, build, create, love. Meaning exists only because disorder threatens it.

The machine is our ally in this struggle. It amplifies our ability to create order. But it can’t fight the battle for us, because the battle isn’t only against external disorder—it’s also against inner disorder. And it’s from this battle—intimate, painful, permanent—the same battle waged by the monk in prayer, the artist before their canvas, the parent consoling their child at three in the morning—that everything that makes us human beings is born.

So the next time you feel overwhelmed by chaos—at work, in your head, in your life—remember this: you’re not failing. You’re doing exactly what life has been doing for 3.8 billion years. You’re resisting entropy. And the simple fact that you’re there, conscious, reading these words, capable of feeling something while reading them—that, no machine can do. And that’s already everything.


This essay was born from an exploratory conversation between a human and an artificial intelligence about the nature of entropy. The intuitions, doubts, and unexpected connections are human—creative chaos. The structuring, development, and writing were assisted by Claude (Anthropic)—informational anti-entropy. Neither would have produced this text alone. And perhaps that’s the future: not human against machine, but human with machine, in a dance between order and chaos that produces something neither could create separately. The sources below were verified by the author.


Sources and References

Thermodynamics and Entropy

[1] Schrödinger, E. (1944). What is Life? The Physical Aspect of the Living Cell. Cambridge University Press. — The concept of negative entropy (negentropy) as the engine of life.

[2] Boltzmann, L. (1877). Über die Beziehung zwischen dem zweiten Hauptsatze der mechanischen Wärmetheorie und der Wahrscheinlichkeitsrechnung. — The statistical formulation of entropy (S = k·log W).

[3] Shannon, C.E. (1948). “A Mathematical Theory of Communication.” Bell System Technical Journal, 27(3), 379–423. — The link between informational and physical entropy.

Intelligence and Entropy

[4] Wissner-Gross, A.D. & Freer, C.E. (2013). “Causal Entropic Forces.” Physical Review Letters, 110(16). — Intelligence as maximization of future options.

[5] Beggs, J.M. & Plenz, D. (2003). “Neuronal Avalanches in Neocortical Circuits.” Journal of Neuroscience, 23(35), 11167–11177. — Neural criticality: the brain at the edge of chaos.

Emotions and Decision-Making

[6] Damasio, A. (1994). Descartes’ Error: Emotion, Reason, and the Human Brain. Putnam Publishing. — The somatic marker hypothesis: without emotions, no decision.

[7] Damasio, A. (1999). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. Harcourt Brace. — The role of emotions in consciousness.

Embodied Cognition

[8] Varela, F.J., Thompson, E. & Rosch, E. (1991). The Embodied Mind: Cognitive Science and Human Experience. MIT Press. — Intelligence isn’t just in the brain, it’s in the body.

[9] Clark, A. (1997). Being There: Putting Brain, Body, and World Together Again. MIT Press. — Body and environment as active participants in cognition.

Philosophy and Meaning

[10] Camus, A. (1942). The Myth of Sisyphus. Gallimard. — The absurd and the dignity of struggle.

[11] Epictetus. Enchiridion (Handbook). ~125 CE. — The foundations of Stoicism.

AI and Ecological Cost

[12] Strubell, E., Ganesh, A. & McCallum, A. (2019). “Energy and Policy Considerations for Deep Learning in NLP.” Proceedings of the 57th Annual Meeting of the ACL. — The energy cost of large models.

Religion and Anthropology

[15] Durkheim, É. (1912). The Elementary Forms of Religious Life. Félix Alcan. — Religion as a structuring social fact.

[16] Eliade, M. (1957). The Sacred and the Profane. Gallimard. — Sacred time (ordered) vs. profane time (chaotic).
