

In the words of science fiction writer Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic.” Certainly, in this analogy there are parallels to be drawn between developer and magician. Both sustain the suspension of disbelief through prodigious pedalling beneath the surface, each working with their own creative methods to hide the evidence of their workings and deliver a flawless performance.

However, it is precisely these efforts that engender mistrust between magician and audience, and, in modern times, between artificial intelligence (AI) and user. Trusting a person or entity you don’t understand requires a degree of blind faith, not only in their process but in their integrity. If we can’t fully comprehend the situation, how can we be sure we’re not being manipulated?

Science fiction’s aspersions aside, AI’s record boasts some whiter-than-white credentials. In the field of oncology, even in its infancy, AI has multiple applications and beneficial outcomes, from prognosis prediction to diagnosis to tracking tumour development. In IT, AI can identify malware with greater accuracy, improving computer security and data protection. Fraud detection in financial services is getting a shot in the arm from machine learning, as companies like PayPal use it to compare transactions and flag exchanges with fraudulent features. And as uses proliferate, so does uptake, with the global artificial intelligence market set to reach $267 billion by 2027, according to Gartner.

So with all these green shoots of positivity, why is AI not being heralded as Gen Z’s golden goose? The answer may be that it is being judged less on its benefits to the end user than on fears of how it will impact the employee – radical process revisions, potential redundancies, and an ‘intellectual emasculation’ as professionals find their expertise bested. However, these concerns tend to be disproportionate. In the examples above, it is the human user who identifies the need for AI, and who implements and interprets it. AI works alongside the employee, in the same way that other software and technologies do today.

An odd musing is that the reticence over AI is far more concerned with the ‘I’ than the ‘A’. Artificiality now courses through modern life like a river bursting its banks. Evolution has been usurped by artificial selection, healthcare miracles are humdrum, and we salivate at the sound of plastic. Conversely, intelligence is rare, and has been used throughout history to subdue, gatekeep and oppress. This may explain why we view a creation of superior intelligence with suspicion, stung by the allegories we have inherited.

However, AI of course has no inherent morality, so any perceived trust – or mistrust – pertains to its human creators. We can therefore put AI aside and instead question our trust in science, in business and in progress. On the whole, living standards have continually risen, society has become more inclusive and compassionate, and technology has been a driver of efficiency and comfort. With this in mind, while it pays to stay attentive to new initiatives, is it not reasonable to assume that the net gains of progress will outweigh any nefarious negatives?

It is human nature to fear what we can’t understand. AI represents one of the greatest scientific leaps ever seen, making our reticence understandable. However, in working alongside AI, we will see it position itself as co-worker, not competitor. And, by extension, in directing this insentient technology we will come to appreciate that it is – as ever – we who hold the moral responsibility, we who define the principles of AI.
