The online, digital world can represent both testing opportunities and crises, depending on one's perspective. The first notion that "digital" could change the world came with the code-breakers at Bletchley Park during World War II, although their role in the war only came to light in recent years as official secrets were opened up to public scrutiny as part of the normal workings of the British government. It was, of course, the critical role played by pioneering mathematician Alan Turing during that period of conflict that led to the modern notion of the computer (the proto-steampunk machines of Ada Lovelace and Charles Babbage aside) and all that has ensued.
From the whirring dials and relays of those early machines came the transistor, the integrated circuit and Moore's Law, and thence the transistor radio, the pocket calculator, the digital watch, the video recorder and the smart devices, the phones and tablets in everybody's grasp today, that pack the punch of multi-billion-dollar Moon shots into a space smaller than a fingernail.
Denis Loveridge of the Manchester Institute of Innovation Research, Alliance Manchester Business School, Manchester, UK, discusses in the current issue of the International Journal of Foresight and Innovation Policy how "the digital world quickly became ubiquitous acquiring an unstoppable momentum, especially once parallel developments in software created the so-called 'killer' applications." What we mean by killer applications has evolved considerably over recent decades, but fundamentally it always boils down to creating, acquiring and sharing information, whatever point of view you adopt and whether that information takes the form of text, photographs, audio, video, software (more apps) or other digital entities, right up to the whole "internet of things" manifesto.
There has been a feeling, suggests Loveridge in his paper, that the advent of the digital world and its ubiquitous penetration of human societies around the globe raises the perennial existential crisis that has plagued the human psyche for as long as we have had a psyche to plague. The doom-mongers and naysayers have always hyped our personal existential angst to the tribal level, to the national stage and to the whole world. The digital world is just the latest switch to flip in our collective concerns about the future.
"The crisis takes many interconnected forms perceived as a cascade of situations containing many turning points involving multiple and interconnected themes," says Loveridge. He sees crises rather than challenges as inherent in the clash between human emotion and judgement and the ever-accelerating rate at which computation takes place. Computation is a ubiquitous and growing feature of the digital world, which exists because of it and for it. Nature computes too, but the daffodils that bloom in spring, the migrations of birds and the hormonal cycles of our own species seem gentle and slow-paced compared with the frenetic terabytes and petabytes of digital computation done each day across millions, if not billions, of computers and "things". "The clash between computation and human judgement stems from a mismatch between human cognition and behaviour, and the frenetic pace at which computation now proceeds," suggests Loveridge. The true crisis is whether the digital world is part of the same natural world in which we eat, drink and make merry (or otherwise); if it is not, might we preclude our continued existence and face an extinction wrought by the nightmarish artificial intelligence predicted by Moravec, Kurzweil, Bostrom and many others?
The bottom line, perhaps, is this: do we want to engineer ourselves into a world managed entirely by algorithms, or do we want to retain that spark of humanity, that weird mediaeval notion of making decisions with the heart rather than the head? Or is it all the same thing, given that it is humans who make the machines that make the decisions in our name?
Loveridge, D. (2016) ‘The digital and natural worlds: crisis or challenge?’, Int. J. Foresight and Innovation Policy, Vol. 11, Nos. 1/2/3, pp.148–166.