
A Question of Intelligence

Is AI changing the world as we know it, and is that change for the better? While many people see only positives in the form of instant efficiencies and capabilities, others are not quite so hasty in their assessments.

By Eric Herman

Unless you’ve been living under the proverbial rock, you’re probably familiar with the terms AI (artificial intelligence) and ChatGPT, the platform that is changing the world as we’ve known it.  

Seemingly overnight, the technology is transforming the ways we process and utilize information, and in many cases it is threatening the livelihoods of people who have spent their working lives honing specialized skills. In just a few months, the subject has moved from the world of science fiction to what looks a lot like a new normal.

Fact is, AI is here to stay. For many people, that’s a good thing. After all, programs like ChatGPT, the now world-famous “chatbot,” can instantly turn seemingly anyone into an accomplished writer, computer programmer or graphic artist. Not only can you find facts and data of any kind in a matter of seconds, these remarkable platforms just as easily turn that information into usable forms.


What could possibly be wrong with that?

Many people are positively giddy about this new set of instant talents, and some are already making staffing decisions based on it. I know of two companies that have fired marketing and administrative staff in favor of AI, and I’ve personally lost two ongoing freelance gigs to the technology. One of my lost clients, a good friend, told me that she believes writers and editors will soon be entirely obsolete, if we’re not already. After all, “the machine” can do what we do in a fraction of the time, and does it for free.

Not everyone is quite so enthusiastic, and many of those voicing concern are the very people who stand to gain the most from the proliferation of AI technology. One consortium of “thought leaders,” the Center for AI Safety — a group that includes many top AI developers — is sounding a decidedly cautionary, if not outright alarmist, note. In a recent online statement the center said: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Warnings don’t come with language stronger than that! Is it a commonly held view? In a New York Times article, CAIS executive director Dan Hendrycks said: “There’s a very common misconception, even in the AI community, that there only are a handful of doomers. But, in fact, many people privately would express concerns about these things.”

The group further describes a host of specific concerns, including the generation and spread of misinformation and, most frightening of all, the “enfeeblement” of human thinking. “Enfeeblement can occur if important tasks are increasingly delegated to machines,” CAIS posted. “In this situation, humanity loses the ability to self-govern and becomes completely dependent on machines.”

The sheer scope of such statements is chilling, and frustrating given the vast spectrum of unknown outcomes. “The only thing I am sure of is that there is no way of knowing how many jobs will be replaced by generative AI,” Carl Benedikt Frey, future-of-work director at the Oxford Martin School, Oxford University, told BBC News. “What ChatGPT does, for example, is allow more people with average writing skills to produce essays and articles. Journalists will therefore face more competition, which would drive down wages, unless we see a very significant increase in the demand for such work.”


Frey goes on to draw an analogy to an industry that has already been transformed by a familiar form of AI. “Consider the introduction of GPS technology and platforms like Uber. Suddenly, knowing all the streets in London had much less value, and so incumbent drivers experienced large wage cuts in response, of around 10% according to our research. The result was lower wages, not fewer drivers. Over the next few years, generative AI is likely to have similar effects on a broader set of creative tasks.”

As a writer/editor, am I personally worried about my job? Not exactly, in that technology has constantly changed how I go about my business. When I started out in the mid-1980s, editorial professionals were still working on typewriters. Looking back, every stage of my career has been attended by technological advances, from word processing to email to online publishing platforms. Through it all, the one thing that technology could not replace was the ability to generate manuscripts. Clearly, that has changed.

I certainly do worry about how the technology will negatively impact younger people trying to build a career in editorial work. And, in a much broader sense, I’m terrified about the general dumbing down of society. After all, there remains value in actually knowing stuff, or at least I’d like to think so.

Until now, when you wrote about something, you had to know about it to some extent. That’s no longer the case. The same is true of numerous pursuits that involve advanced knowledge and know-how. Retaining information and skills is at risk of being vastly devalued. Who needs to study architecture, finance, or computer programming when AI can do it for you?

Experience has taught me that technology always finds its own level and usually favors those people who are best able to adapt to its presence. How that plays out for professionals in highly specialized niche markets like watershaping, now in the company of AI, depends on endless sets of variables.

The unknowns can be maddening, but one thing we know for sure is that AI has already insinuated itself into our lives, and odds are we’ll be contending with its presence and adapting from this point forward. What that ultimately means is anyone’s guess.

The above discussion was generated entirely by the brain and fingers of the author.
