Opportunities, misgivings, and future uncertainties of Artificial Intelligence


While Artificial Intelligence (AI) systems like ChatGPT may struggle to create truly innovative ideas or relate to humans' pleasures, interests and concerns on a more profound level, they certainly have the ability to collect information and produce an essay highly passable as human-written, writes Alexander Bushroe.

The concept of artificial intelligence has long fascinated the human mind. Ever since the notion first entered the zeitgeist in the mid-20th century, countless theories, speculations, fantasies and panics have run rampant through discussions of its development and its future role in society and the world.

This perhaps began in earnest with what is often referred to as the “Golden Age of Science Fiction” in the 1930s and 1940s, spearheaded by authors like Arthur C. Clarke, Isaac Asimov, and others. Hypotheses and conjecture emerged regarding what the future of robotics and automated technology would look like and how humans would interact with it. Some supposed that it would solve all of humanity’s problems, and others foresaw a darker, more dystopian future.

The truth, as we have learned in the decades since and as becomes clearer with each passing year, lies somewhere in between.

In 1950, British mathematician Alan Turing, certainly one of the forefathers of computing and AI, postulated that if humans were able to make decisions and create output based on available information and inductive and deductive reasoning, then machines could have that same capability.

From this idea, a further cascade of science fiction works arose in the following decades, in literature as well as in film and television – Star Wars and The Jetsons, for example – that appealed to much broader audiences.

Of course, the technology itself was rapidly advancing at the same time. Without detailing the entire early history of computing, suffice it to say that its capabilities were becoming more robust, cost-effective and widely applicable during this period, a trend that continues today.

However, despite overarching optimism regarding AI and computing technology amongst both the scientific community and the public, there has always existed an undercurrent of skepticism, trepidation, and even fear. It takes time for us humans to adapt to new ways of life, and if they seem to have the potential to create an uncertain and uncontrollable – or exploitable – future, we often respond with apprehension and concern.

Look no further than classic dystopian novels of the era such as "Nineteen Eighty-Four" and "Brave New World" for evidence of that.

These fears, though, are certainly not unfounded. For all of AI's seemingly limitless potential to improve human life and grease the wheels of prosperity and luxury, that promise exists amid a minefield of pitfalls, both foreseeable and not, and both minor and drastic.

So, currently, when we discuss artificial intelligence, to what are we actually referring? Certainly not traffic light systems, personal computers or Roombas. In its simpler forms, we are generally alluding to more complex automated technology such as self-driving vehicles and the robotics used in manufacturing. The convenience, efficiency and, perhaps in the future, safety of these technologies hold a great deal of promise, and over time they are becoming more cost-effective options for businesses to "employ."

The painfully apparent downside here, though, is the displacement of people in the labor force by machines that, unlike human workers, don't require insurance, benefits or days off. The human cost is already manifesting itself and threatens to grow more severe as time passes.

In recent weeks, though, the release of the much-ballyhooed ChatGPT program has spooked many in other fields of employment and highlighted the fact that blue-collar workers might not be the only ones under threat.

To date, ChatGPT's ability to produce things like songs and speeches and to respond to human conversation has raised many eyebrows. AI image generators have also proved increasingly able to produce astonishing montages for use in a variety of spaces. But, examined in further detail, the output of these systems is constrained by the information that already exists as their input, as well as by the algorithms, however complex, with which they were programmed.

AI is very likely to increase efficiency in the near term by better compiling information and data and presenting it in a more practically useful way, surpassing human intuition and our brain's mathematical and organizational capabilities. Programs for storing and analyzing data have existed for decades, but more powerful AI may be better able to observe and express how to put that data to immediate practical use.

But while AI systems like ChatGPT may struggle to create truly innovative ideas or relate to humans' pleasures, interests and concerns on a more profound level, they certainly have the ability to collect information and produce an essay highly passable as human-written. This poses an obvious problem for academia: students can – and already do – submit work created by AI and represent it as their own, and teachers and professors are struggling to respond, as many of their tools for identifying plagiarism are ineffective against it.

So for many, artificial intelligence poses a present or imminent threat to their employment. For others, the peril is more distant and abstract; perhaps further across the horizon or only partial in nature. Others still may feel no danger whatsoever over any time scale that might be called the near-term.

Technology does, though, improve with time, and exponentially so. Even if an individual's or group's career is not under threat in the near future, or even within their lifetime, no industry or line of work is completely shielded from the specter of technological advancement over a sufficiently long timeframe.

Moreover, as the pace of technological development accelerates, generational cohorts, which are often defined by the technological paradigm of their times, have shortened. A span of 25 to 30 years no longer fits to describe an age group, as the life and educational experiences of children born ten years apart are now, in many ways, starkly different.

The oft-repeated suggestion that out-of-work ticket-takers and receptionists just "learn to code" comes to mind as a frivolous solution to the issue.

So what’s to be done?

Over the long term, perhaps in a half-century's time or longer, we as a society may have to have a difficult conversation about how the resources that AI produces and the revenue it generates are allocated. It seems unsustainable to suggest that the fruits of AI labor ought to be concentrated in the hands of a select few patent holders and asset controllers if automated technology truly does supplant the majority of the global workforce.

But that aside, in the short term it's unlikely that AI in the vein of ChatGPT will replace most jobs outright; more likely, it will increase efficiency and productivity. AI can produce simpler output but, at least for now, can only work with the existing information available to it and lacks the capacity to truly innovate. As it turns out, Clarke's prediction in his 1968 sci-fi novel "2001: A Space Odyssey" – that machines with intelligence matching or exceeding that of humans would exist by the book's eponymous year – was off by two decades and counting.

That isn’t to discount the issue, though, and there’s no silver bullet to fix it. We all must strive to develop our skills to best position ourselves in the ever-changing, unpredictable technological era.

At least we can take solace in the fact that the plots of “The Terminator” and “I, Robot” haven’t come to pass.

Yet.

This article is republished from Shine.
