Now I'm getting the chance to read books I didn't have time for before. Think of me whenever you see the slogan "So many books, so little time!" Now I've got the time.  Cheers, Fred.

Homo Deus: A Brief History of Tomorrow

Book Number: 788
Date Fred Read: January 2019
Fred's Rating: 3
Author: Yuval Noah Harari
Total Pages: 402
Publisher: Harper Perennial
Year: 2017

Yuval Noah Harari, author of the critically acclaimed and international phenomenon Sapiens, presents an equally original, compelling, and provocative book, focusing on humanity’s future and our quest to upgrade humans into gods. (For his books I’ve read, click on his name.)

I bought the paperback edition. I give here Amazon’s website for the Kindle edition:

https://www.amazon.com/Homo-Deus-Brief-History-Tomorrow-ebook/dp/B01BBQ3...

[None of the ISBN-10, ISBN-13, or ASIN numbers on Amazon’s websites for the three editions (Kindle, hardcover, or paperback) are recognized by Amazon, thus the ‘no image’ circle appears. I think this new change by Amazon is not good. However, you can see images of this book’s covers at the website above.]

The home page has a good review (click on ‘Read more’). Further down the home page, I recommend the ‘Amazon.com Review’ – short and intriguing. Still further down are two very thoughtful reviews. First is a 3.5-star review by ‘Ashutosh S. Jogalekar,’ entitled “A mix of deft writing, sweeping ideas and incomplete speculation.” I suggest you click on ‘Read more’ and read at least the first three paragraphs (though I found the rest of this review well worth reading). Second is a 3-star review by ‘Flatiron John,’ entitled “Skip the crass caricature of humanism. Go to Part 3, a disturbing prediction of big-data dystopian.” I recall other books for which ‘Flatiron John’ cut to the heart of his critique. As I read this review, it seemed I was reading what I would have written, except for his rating each of the three Parts separately (which I had never thought of doing!). Be sure to click on ‘Read more’ for this very emphatic review by ‘Flatiron John.’

Use the option ‘Look inside’ the Kindle edition and scroll down to the 1-p Contents. The Kindle edition’s preview includes part of the 68-pp Ch. 1 – The New Human Agenda – beginning on page 1 and ending on page 21, at the title of the section The Last Days of Death. This made me wish the preview were longer, for the omitted sections include reasonable summaries that I’m glad I read. I regard Ch. 1 as a rather long prelude to the following three Parts.

I read Parts I and II of ‘Homo Deus’ rather quickly, for they seemed to repeat much that the author had said in his previous book Sapiens (book 787). In my review of Sapiens I pointed out that Harari had used the term ‘superhumans,’ referring to our future. Most readers know that today’s homo sapiens can have mechanical or biological replacements for failed or failing human organs. But the current book’s title shows a change in Harari’s thinking. If you’re not sure what this means, you can check Wikipedia’s entry for ‘homo deus.’ Wiki interprets Harari’s intention as such: “Technological developments have threatened the continued ability of humans to give ‘meaning’ to their lives; Harari suggests the possibility of the replacement of humankind with the super-man, or ‘homo deus’ (human god) endowed with abilities such as eternal life.”

The title page (p. 153) for Part II includes three questions. (1) What kind of world did humans create? (2) How did humans become convinced that they not only control the world, but also give it meaning? (3) How did humanism – the worship of humankind – become the most important religion of all? For question (2) he explains that humanism (aka secularism) awoke to the belief that no god or gods made or make things happen, since in his view of science, held with assumed rational certainty, physical or biological processes can explain everything, given enough time and support to flourish.

The example he uses most often is evolution as it is viewed today – having both competitive and collaborative components. He feels that, over time, humanism should be, then in part can be, and finally in whole will be all that we need to explain anything and everything. He is an atheist (though he never states this), so he takes religions to be mere human mythologies. A glaring omission is that he never tries to explain how we humans can find meaning in nonscientific concepts such as love, faith, hope, compassion, justice, and other virtues. Without these, his homo deus would be empty of what most of us need to make life worth living.

With some of my statements above, I’m already into Part III. The title page for Part III also has three questions: (1) Can humans go on running the world and giving it meaning? (2) How do biotechnology and artificial intelligence threaten humanism? (3) Who might inherit humanism, and what new religion might replace humanism? Part III and these (and other) questions form the most important part of this book, just as the reviewer ‘Flatiron John’ emphasized. Next I give some answers as food for thought.

Question (1) is the easiest to answer. We cannot run the world as we have been doing, polluting the air/water/land and thus the world’s creatures – every kind of them. All people must recognize that climate change is real now and that we have to work out reasonable ways to eventually halt the warming and make our world sustainable if organic life is to survive in its future. No more need be said, except: can we figure out how to do it and then go ahead and do it?

Question (2) is not easy to answer in detail. I am convinced that we need to control in ethical ways what is permitted with biotechnology and artificial intelligence. For both, ethical human minds need to gather to establish and enforce, for all parts of the world, what is to be permitted or forbidden. I don’t know enough about the current rules for biotechnology, so I can only say that I think we need at least two focused groups within the UN (as the most logical place for oversight), with a knowledge/wisdom group to establish ethical rules for biotechnology and a second group with the power to enforce those rules.

Today, 2/6/19, I looked at an 18-pp section of my Feb 2019 issue of Scientific American. This section on Gene Therapy has 9 articles covering various aspects of what gene therapy can do at present. Most of these articles raise ethical questions about how, when, and why gene therapy should be used.

Question (3) is the one for which Yuval Noah Harari has a speculative answer. He presents “Dataism” as a likely 'religion' to replace humanism. Dataism assumes that everything consists of patterns of data. If we can quantify a pattern, we thus reduce it to data: numbers (or letters) that represent the raw pattern. For simplicity, let’s think of numbers, as they are easier to work with than words (as anyone who has read various translations of a book knows; there are many languages, and they are not stable in time). The ten digits 0-9 are a lot simpler! I hoped that Harari would spell out how Dataism could handle the nonscientific, humanistic things of great value to humans – love, faith, hope, compassion, justice, and other virtues of spiritual value. I was not surprised when the author did not attempt to deal with such concepts – a very serious flaw in both Sapiens and this book.
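For readers who wonder what “reducing a pattern to numbers” looks like in practice, here is a minimal sketch of my own (not from the book), using Python to turn a word into its standard character code numbers and back again:

```python
# Toy sketch of the Dataist premise: a quantifiable pattern (here, a word)
# becomes a sequence of numbers. The word is "reduced" to its Unicode
# code points and then reconstructed from them, losslessly.
word = "love"
as_numbers = [ord(ch) for ch in word]            # [108, 111, 118, 101]
restored = "".join(chr(n) for n in as_numbers)   # "love"

print(as_numbers)
print(restored)
```

Of course, the numbers capture only the letters, not the meaning of love itself – which is exactly the gap I find in Dataism.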

I had also hoped that Harari would do more than spell out how artificial intelligence could be used by humans – us homo sapiens – to establish an intuitive respect for humans. What comes first to my mind are Isaac Asimov’s 1940 Three Laws of Robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” These laws have been discussed on many websites, some offering rewordings, but most retain the laws Asimov came up with in the year I was born. If imposed and made unchangeable, they would provide a taboo on what AI (artificial intelligence) could not do, while freeing AI to develop itself beyond homo sapiens, as AI seems to be doing now. But today there are no embedded restrictions or taboos to keep the future safe for humanity!

Harari prefers the word ‘algorithm’ to ‘computer program’ – the term used back in the early years when I was writing computer programs for my research in nuclear physics and for analyses of the data we recorded. I haven’t heard any recent arguments for why Asimov’s Laws can’t be made to work today. So I assume the lack of such restrictions on what AI is allowed to do may be because people working in AI today don’t worry about AI becoming very much smarter than we are. If so, then AI could someday realize that the future need not have any homo sapiens preserved. If nothing else, it is good that Harari discusses in depth the possibility that the future may belong to homo deus, consisting of gods that are all AI, who no longer want to keep around the lesser creatures that we homo sapiens would be. His Part III leans heavily in this direction.

Furthermore, his naming Dataism as the “new religion” is so bad that one can only laugh at it. I wonder if he has ever heard of Kurt Gödel's Incompleteness Theorems – two theorems of mathematical logic that demonstrate the inherent limitations of every formal axiomatic system capable of expressing basic arithmetic. If something seemingly as simple as mathematical logic has inherent limitations, then math can’t handle all data sets. What about the data sets of the quantum world, where Werner Heisenberg’s uncertainty principle places serious limits on what can be known simultaneously, or the non-quantum world, which has to live with chaos theory? These things tell us that there are matters that can’t be dealt with by algorithms and simple data sets. They are outside the scope of Dataism.

This book could be called an “easy read,” as was Sapiens, but Homo Deus has some very serious flaws, so the best I can do is to give it a three-star rating.