The last thing I wanted after a six-hour dinner discussion was a conversation with a chatty taxi driver. But he spoke like a well-read man. His views on Middle Eastern politics and the clubbing scene in East London were not particularly interesting, but his conviction that humanity is nearing extinction caught my attention.
His idea, apparently based on the insight of an astrophysicist, goes as follows. The sheer number of solar systems and planets in our galaxy, let alone in the tens of billions of other galaxies in the known universe, indicates a high mathematical likelihood that intelligent life has emerged somewhere. Even if only a tiny fraction of these millions of billions of planets could support life, millions of life forms should have arisen over the universe’s known history. And as evolution progressed on Earth, it should have taken place on many – probably millions of – other planets as well. Another given is that after four decades of sending radio signals into outer space, we (humans) have not heard back from anyone. The conclusion: there is a high likelihood that no one is out there. His reflection, and the astrophysicist’s, is not that we have always been the only intelligent existence in the universe, or in the parts close to us, but that the other intelligent forms of life that existed disappeared before (or shortly after) they reached the technological threshold of being able to answer a call – a radio signal – coming from outer space. The small problem here is that we – our civilisation – have just reached that threshold. And so, he believes, like other civilisations, we are close to extinction.
Eight hours of sleep and a strong Saturday morning coffee erased the argument from my mind. Until Sunday evening. I was reading Stanford Professor Ian Morris’s new book “War: The Role of Conflict in Civilisation”, and a key theme in it is intriguing. Humanity is approaching the point of “technological singularity”: the threshold after which artificial intelligence exceeds human intelligence. At the same time, advances in robotics indicate that future wars, perhaps in a few decades, will be fought by robots and managed through space-based analytical systems. So, might we (humans), at some point, lose control of wars? Are we moving into a future in which super-advanced weapons – the dramatic destructive force that humans have created in the last hundred years – will be under the control of machines that are more intelligent (but certainly less emotional) than we are?
Interestingly, we are in 2014, exactly a century since Europe (then as now, the world’s wealthiest continent) “sleep-walked” – to use the term Christopher Clark chose for his book on World War 1 – into a war that claimed millions of lives. The thinking and behaviour of sophisticated humans in 1914 hardly assuage the fears that a London cab driver had triggered.