I sometimes wonder if there is an inverse relationship between the amount of attention that a ‘new’ IT/IS idea or breakthrough gets and the value that it actually generates – and in value I include both the social and financial impact.
Now, being of mature years, I can roughly chart the breakthrough history of the last half-century or so… (younger readers may need to look some of this up in a browser!)
On ‘Tomorrow’s World’ in the 1970s there was talk of the ‘Paperless Office’, yet when I started work in 1976 my employer, a bank with 300+ branches, had just decided that a ‘facsimile machine’ was not needed in every office (so if we wanted to send a fax we first had to send a letter…).
That bank was ‘online’ (every branch connected to the mainframe on a private network – yes, 50 years ago – an investment made in the 1960s), yet it was still using mechanical typewriters and had not begun to think about electronic versions. Word processors followed around eight years later, accompanied by the introduction of ‘personal computers’ for specialist activities – ‘but not everyone will need one’.
The growth in the volume of such machines was staggering – we had one at home from around 1988 – and they became quite common, with rudimentary spreadsheets (Lotus 1-2-3, anybody?), word-processing programmes and games.
The effect of these machines in the typical office environment was not to deliver the paperless office but to displace work from ‘the typing pool’ to the author – instead of ‘dictating a letter’ people now typed their own, often badly.
Far from releasing people from routine work, these machines relocated and amplified it, increasing rework (to correct errors and presentation), creating work (‘do another draft’), liberating ‘information’ and giving it a life independent of its authors.
Exponential growth in the availability and capability of integrated circuits, accompanied by a dramatic fall in their unit cost, drove a proliferation of ‘versions of the truth’. The real cost of liberating the production of ‘information’ is the creation of a Borgesian Library of alternative truths: uncountable copies of the same thing with only marginal differences, and a raft of meaningless copies.
Emerging from this was the need for ‘knowledge management’ – seemingly a means of capturing and propagating the essential information about an organisation while, perhaps, mainly selling more data storage to the IT department! ‘Knowledge’, stored by whatever means, is not the same as knowledge applied; more often it is used to restrict and constrain the organisation to a different – sometimes, but not always, better – version of ‘now’.
Smartphones emerged from 2007 and, coupled with substantial reductions in the cost of data storage and transmission, enabled the huge shift to online everything and an upsurge in ‘activity’, some of which must be productive, though it can be hard to tell. Watching skateboarding dogs and pandas falling off tree stumps is entertaining – but is it really progress?
The shift to ‘self-checkouts’, enabled by the same progress in applications of technology, has not reduced the workload but again displaced it, so that the customer does more of the work. Meanwhile… ‘self-checkout is not available today due to staff shortages’…
Somebody, somewhere, is laughing all the way to the bank!
What’s the point? Here we are in mid-2025, being told that AI will finally deliver that paperless office, eradicate mundane work, manage our knowledge, enable online providers to predict our needs and fulfil them before we realise them and, of course, make most of us redundant. The history of technology might beg to differ!
Big, powerful systems process phenomenal quantities of data to find and replicate patterns. Large Language Models learn to recognise regularities and key elements in data (documents, audio files) and both ‘summarise’ and replicate them remarkably well.
They save some time on certain activities (if anybody actually reads the minutes of the meeting), but they do not originate or ideate. What they certainly do is consume substantial amounts of energy in server farms, so perhaps they are displacing cost rather than saving it?
More importantly, as I have found recently, AI specialists are using machine-learning capability to generate solutions to problems which have already been solved. Replicating history reinforces rather than changes it, inhibiting innovation and development. Meanwhile, non-AI specialists are using the technology as a playground. It is fun to get the machine to take the notes, produce the summary of decisions and the minutes of the meeting, but is it adding value? Increasing productivity? Reducing waste? Enhancing human well-being? Or is it just providing an alternative means of being distracted from the tasks we should be focused on – the 2025 equivalent of using Excel for lottery numbers, Secret Santa and Fantasy Football?
I am no Luddite: I appreciate the power of the technology, the benefits it can bring, and the problems it might help us solve. However, we are too often chasing the trivial and the unimportant in pursuit of personal novelty rather than investing in and developing the real power of AI – and, in doing so, we risk condemning it to the box of cast-aside ideas whose potential to improve human well-being was never realised.
The trouble with waves is that collapse is inherent in their existence.