Frequently Asked Questions - answered by Yuval Noah Harari

On this page, Yuval Noah Harari answers some of the questions he gets asked frequently – explaining his position on topics like fiction vs. reality, technological determinism, post-humanism, religion, and more.


  • Do you think that everything is just fictional stories? Is nothing real?


I don’t think everything is a fictional story. Yes, money is a fictional story, corporations are fictions, and so are nations, gods and the laws of football. All of these things were invented by humans, and exist only in our own shared imagination. However, there is still a reality, too. The most real thing in the world is suffering. If we hear a story, and we want to know whether the hero of the story is a real entity or a fiction, we need to ask one very simple question: “Can this hero suffer?”

When people burn down the temple of Zeus, Zeus doesn’t suffer. When the dollar loses its value, the dollar doesn’t suffer. When a bank goes bankrupt, the bank doesn’t suffer. When a country suffers a defeat in war, the country doesn’t really suffer. It’s just a metaphor. Zeus, the dollar, banks and countries don’t have a nervous system, a brain, or a mind. They cannot feel pain or sadness. They cannot suffer. 

In contrast, when a soldier is wounded in battle, he really does suffer. When a famished child has nothing to eat, she suffers. When a cow is separated from her newborn calf, she suffers. This is reality. Of course, suffering might well be caused by our belief in fictions. Take for example the numerous wars fought for the city of Jerusalem. I lived much of my life in Jerusalem, so I know it well. Physically, it is an ordinary place. There are stones, trees, buildings, people, dogs, cats. But then people imagine that it is a very special place, full of gods and angels and holy stones. They then start fighting over this place – not over the real stones and trees, but over the fictional stories in their minds. The cause of the war is fictional, but the resulting suffering is 100 per cent real. The blood is real, the pain is real, the grief is real. This is exactly why we should strive to distinguish fiction from reality. 

I don’t want to imply that all fiction is bad. It isn’t. Fiction is vital for our survival. Without commonly accepted stories about things like money, states, corporations and laws, no complex human society can function. For example, in order to play football you must first get 22 people to believe in the same rules, despite the fact that these rules exist solely in our imagination. Playing football is great fun, but if some hooligan starts beating up fans of the opposing team, he is taking the story a bit too seriously. Similarly, to have a functioning country we must get millions of people to believe in the nation, its flag, its currency etc., despite the fact that all of these exist only in our imagination. Nations are a wonderful invention. They enable people to care about millions of strangers, and provide for their health, safety and education. But if we forget that nations are fictions that we created to help people, we might begin killing millions of people for an imaginary thing like “the honor of the nation”. 

So in brief, suffering is the real yardstick that people should use to evaluate whether the stories we invent are beneficial or harmful. If belief in a story reduces suffering, that’s a good story. If belief in a story causes suffering, it is harmful. Better change that story.


  • Do you believe in technological determinism?


Technology is never deterministic. In the twentieth century, some societies used the powers of electricity, trains and radio to create totalitarian dictatorships, while other societies used exactly the same powers to create liberal democracies. Just think of North Korea and South Korea – they have had access to exactly the same technology, but chose to build very different societies. The new technologies of the twenty-first century can also be used to create either Heaven or Hell – it depends on the choices we make. 

The worst-case scenario is that AI will push hundreds of millions of people out of the job market and into a new “useless class”. People will lose their economic worth and their political power. At the same time, bioengineering will make it possible to upgrade a small elite into super-humans. Resistance to this superhuman elite will be almost impossible due to a total surveillance regime that constantly monitors not just what every individual does and says, but even what every individual feels and thinks. 

A related danger is that governments and corporations might acquire the ability to hack human beings. To hack human beings means to understand humans better than we understand ourselves. In order to do that, a government or corporation needs a lot of biological knowledge, a lot of data, and a lot of computing power. Until now, nobody could do it. Even in Nazi Germany or in the Soviet Union the government could not know what every person was doing, thinking and feeling. But soon, some governments and corporations might have enough biological knowledge, enough data and enough computing power to monitor all the people all the time, and know what each of us is doing, thinking and feeling. Once a government or a corporation understands us better than we understand ourselves, it can predict our feelings and decisions, manipulate our feelings and decisions, and create the worst totalitarian regimes in history.

So that’s the worst-case scenario. But it isn’t a prophecy. It is just a possibility. And there are alternatives. The best-case scenario is that the new technologies will liberate all humans from the burden of disease and hard labor, and enable everyone to explore and develop their true potential. Bioengineering will focus on curing everyone and not on upgrading a small elite. Artificial intelligence will indeed eliminate many jobs, but the resulting profits will be used to provide everyone with better services and better education, and to allow everyone the opportunity to pursue their dreams, whether in the field of art, sports, spirituality or community-building. State-of-the-art surveillance will be used to spy not on the citizens, but on the government, to make sure there is no corruption. 

Which of these scenarios will come true? That depends on us. 


  • Are you a post-humanist? Do you encourage people to start using bioengineering and AI to create super-humans?


I am definitely not a post-humanist, and I think using bioengineering and AI to change humans is an extremely dangerous idea. Humans have always suffered from a big gap between our power and our wisdom – the gap between our power to manipulate systems, and the wisdom needed to understand these systems deeply. Unfortunately, it is much easier to manipulate than to understand. It is easier to build a dam over a river than to understand the impact it will have on the ecosystem. Therefore we humans often start manipulating things long before we understand the consequences of our actions.

In the past, we humans have learned to manipulate the world outside us. We learned how to control the rivers, the animals, the forests. But because we didn’t understand the complexity of the ecological system, we misused our power. We unbalanced the ecological system, and we now face ecological collapse. 

In the twenty-first century we might learn to manipulate not just the world outside us, but also the world inside us. Genetics and AI might enable us to redesign our bodies and minds, and to manipulate our emotions, thoughts, and sensations. But because we don’t understand the complexity of our internal mental system, we might misuse that power. We might unbalance our bodies and minds, and we might face an internal human breakdown paralleling the external ecological crisis. In particular, governments, corporations and armies are likely to use new technologies to enhance skills that they need, like intelligence and discipline, while having far less interest in developing other skills, like compassion, artistic sensitivity or spirituality. The result might be very intelligent and disciplined humans who lack compassion, lack artistic sensitivity and lack spiritual depth. We could thereby lose a large part of our human potential without even realizing that we had it. 

Indeed, we have no idea what the full human potential is, because we know so little about the human mind. And yet we invest very little in exploring the human mind, and instead focus on increasing the speed of our Internet connections and the efficiency of our Big Data algorithms. I hope that for every dollar and every minute we spend on developing artificial intelligence, we spend another dollar and minute on exploring and developing our own mind. 


  • You often use very provocative terms like “the useless class” or “hacking humans”. Why did you coin these terms, and are you in favor of creating a useless class or of hacking humans?


I have been warning about the dangers of “hacking humans” and the rise of “the useless class” since around 2014, long before these subjects became popular. I think that while AI has a lot of positive potential, if this technology is misused it will pose an existential danger to humanity. AI might make it possible to hack not just our smartphones, but also our brains. And AI might take our jobs and push many of us into a new “useless class”. I coined deliberately provocative phrases like “hacking humans” and “the useless class” to draw people’s attention to these dangers.

I am glad to see that many people are now worried about these dangers. I am less glad to see that instead of humans uniting against our common threats, we are fighting and blaming each other. Some people are obviously doing dangerous stuff, but I don’t think we should see any particular group of people as our mortal enemies and as the source of all our problems. Rather, the source of the problem is the dangerous potential of new technologies like AI, and we should unite with as many people as possible to solve the problem together. Hate will destroy our species. Cooperation can save us. Do you prefer spending your time on spreading hate, or on working together to solve the problem?


  • What’s your view on religion and spirituality? Do they have a role to play in the 21st century?


I make a distinction between religion and spirituality. Religion is a deal, whereas spirituality is a journey. Religion offers us a well-defined contract: ‘God exists. He told us to behave in certain ways. If we obey God, we’ll be admitted to heaven. If we disobey Him, we’ll burn in hell.’ We are usually not allowed to question or change this contract – we just need to believe in it and follow the rules. 

Spiritual journeys are nothing like that. They usually take people in mysterious ways towards unknown destinations. The journey begins with some big question, such as “Who am I?”, “What is the meaning of life?” or “What are good and evil?”. Whereas most people just accept the ready-made answers provided by religious establishments, spiritual truth-seekers are not so easily satisfied. Spiritual seekers question everything, and often challenge the beliefs and conventions of dominant religions. In Zen Buddhism it is said that ‘If you meet the Buddha on the road, kill him.’ This means that if, while walking on the spiritual path, you encounter the rigid ideas and fixed laws of institutionalised Buddhism, you must free yourself from them too. 

For religions, spirituality is a dangerous threat. Religions typically strive to rein in the spiritual quests of their followers, and many religious systems have been challenged not by laypeople preoccupied with food, sex, and power, but rather by spiritual truth-seekers who expected more than platitudes. For example, the Hindu religious establishment was challenged by Buddha, the Jewish religious establishment was challenged by Jesus, and the Protestant revolt against the Catholic Church was ignited by a devout monk, Martin Luther. Luther wanted answers to the existential questions of life, and refused to settle for the rites, rituals and deals offered by the Catholic Church. 

I think that in the 21st century, spirituality is more important than ever before. For most of history, most people had no wish to embark on spiritual journeys, and tended to ignore the big questions of life. But now technologies like AI and bioengineering are forcing all of us to confront very old and very deep spiritual questions like “What is consciousness?”, “What is humanity?”, and “Is there free will?”.


  • You say that humans don’t have free will. Isn’t this a very negative view of humans?


Freedom is not something you have. Freedom is something you must struggle for. People who believe that their decisions reflect their “free will” are the easiest people to manipulate. People certainly have a will and they make decisions all the time. But most of these decisions are not made freely. They are shaped by various biological, cultural and political forces. The belief in “free will” is dangerous, because it cultivates ignorance about ourselves. When we choose something – a product, a career, a spouse, a politician – we tell ourselves “I chose this out of my free will”. If so, there is nothing further to investigate. There is no reason to be curious about what’s going on inside me, and about the forces that shaped my choice. 

Since corporations and governments are acquiring powerful new technologies to shape and manipulate our choices, the belief in free will is now more dangerous than ever. On the other hand, I am not advocating giving all power to the algorithms to make decisions for us. I would recommend that you take a middle path: Don’t just believe that you have free will. Explore yourself. Understand what really shapes your desires and decisions. That’s the only way to ensure that you do not become a puppet of either a human dictator or a super-intelligent computer. The more you question the naïve belief in free will – the more real freedom you actually enjoy.

This, of course, is the oldest advice in the book. From ancient times, sages and saints repeatedly advised people to “know thyself”. Yet in the days of Socrates, Jesus and Buddha you didn’t have real competition. If you neglected to know yourself, you were still a black box to the rest of humanity. In contrast, now you have competition. As you read these lines, governments and corporations are striving to hack you. If they get to know you better than you know yourself, they can then sell you anything they want – be it a product or a politician.


  • Some people might see you as a prophet, or a guru. How do you feel about that?


I am definitely not a prophet or a guru. I don’t predict the future, and I don’t think anybody can predict the future. History is not deterministic, and nobody has any idea what the world will look like in 2050. All I do is use my historical knowledge in order to raise questions about the future and draw a map of possible scenarios, highlighting the most dangerous scenarios in the hope that we can prevent them. Which scenarios will actually be realized depends to a large extent on our own decisions. The whole point of talking about the future is to be able to do something about it. What’s the benefit of making prophecies about things we cannot possibly change?

Of course, there is always the danger that some people might begin to see me as some kind of guru. It is good to appreciate knowledge and to listen to the opinions of scholars, but it is dangerous to idolize anybody – including scholars. Once a person is idolized, that person might actually begin to believe what people say about him or her, and this can inflate the ego and drive a person crazy. As for the fans, once they believe somebody knows all the answers, they renounce their freedom and stop making efforts themselves. They expect the guru to provide them with all the answers and solutions. And even if the guru provides them with a wrong answer and with a bad solution, they will just accept it. So I hope people read my books as books of questions more than as books of answers, and will see me as somebody who is seeking truth alongside them, rather than as an all-knowing seer.