Why the Grumpy Designer Isn’t Sold on the AI Hype Machine

It’s like clockwork. When a new technology emerges, you can count on a couple of things: some will hail it as society’s greatest achievement, while naysayers will bemoan its existence. Artificial intelligence (AI) simply continues the trend.

As ever, the reality is more nuanced. For every potential benefit the tech offers, there seems to be a matching drawback. And finding balanced expertise on the subject has been a challenge.

I’m personally on the fence. From what I’ve seen so far, AI can do some very useful things. And I think it can be incredibly helpful to web designers.

Yet some aspects give me great concern. In part because I feel like I’ve seen this movie before. An innovation that is predicted to change the world. Only it’s not necessarily the Utopian dream some are selling (see Zuckerberg, Musk, et al.).

With that, maybe we should take the hype regarding AI with a grain of salt. And, to keep up with that sassy Bing chatbot, add in a dash of grumpiness.

AI Solves Problems and Creates New Ones

It’s no surprise that some of the biggest proponents of AI are those who stand to gain the most. And it’s hard to blame them. The job of a salesperson is to accentuate the positives – not dwell on the negatives.

And tools like ChatGPT have several positive aspects. They’re capable of increasing efficiency and knocking down technical barriers. This benefits everyone from writers and medical researchers to auto mechanics.

Even better is that these tools don’t have to do everything for us. Something as small as a gentle nudge in the right direction can save us precious time.

Using web design as an example, a few brave souls (myself included) have used ChatGPT to generate a WordPress plugin. But it could just as well help write CSS or JavaScript. You might even use it to help explain a complex concept to a client. Imagine the burden this could lift off our shoulders.
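To make that "gentle nudge" concrete, here's a sketch of the kind of small, self-contained helper an AI assistant might draft in seconds. The function and its behavior are purely illustrative – not taken from any actual AI output – but a slug generator like this is exactly the sort of utility a WordPress theme or plugin often needs.

```javascript
// Hypothetical example of a small, AI-draftable helper: convert a
// post title into a URL-friendly slug.
function slugify(title) {
  return String(title)
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s-]/g, '') // drop punctuation and symbols
    .replace(/[\s-]+/g, '-')      // collapse whitespace and hyphens
    .replace(/^-+|-+$/g, '');     // trim stray leading/trailing hyphens
}

console.log(slugify('Why the Grumpy Designer Isn’t Sold on AI!'));
// → "why-the-grumpy-designer-isnt-sold-on-ai"
```

Even when the generated code isn't perfect, a starting point like this is often faster to review and refine than writing from scratch.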

On the other side of the coin, several potential negatives could also come into play:

Inaccurate Content and Built-in Biases

AI tools are built by humans. Therefore, they’re imperfect. They’re also subject to the inherent feelings, biases, and motivations of their creators.

In practice, each tool is only as good as the people who created it. The quality of the content they feed into the app makes all the difference.

So, what are the downsides? A familiar example demonstrates the risk: a social media company’s algorithm.

Let’s imagine a social media network that tilts its algorithm towards showing inflammatory or inaccurate information. That’s what users are going to see most often in their timelines.

Similarly, there’s no guarantee that an AI tool will provide the correct answers. And even if its creator is reputable, there still might be biased information being returned.

Large companies may use the opportunity to share their point of view alongside the facts. Or a tool may have inadvertently been fed content that perpetuates stereotypes.

Thus, AI isn’t immune to human fallibility.

Artificial intelligence is only as good as its creators.

Humans May Put Too Much Trust in AI

If an AI app produces an inaccurate response, there’s no guarantee that a user will catch on. Like so much of what’s written on the internet, some may simply believe what they read.

And this doesn’t just affect fact-based content like news or medical information. It could also find its way into the code these tools produce.

Take an AI-generated WordPress plugin, for example. You might test the results and find that it indeed works. That’s awesome! But how do you know that best practices were followed?

The code could contain a gaping security hole. Even worse, it might be hiding something malicious. Once deployed, this generated plugin could cause a whole lot of trouble.
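To show how subtle such a hole can be, here's a hypothetical sketch (not taken from any real AI output) of a comment renderer in plain JavaScript. The "generated" version interpolates user input straight into markup – a classic cross-site scripting flaw that passes a casual "it works" test – alongside a version that escapes input first.

```javascript
// Unsafe: the kind of snippet a code generator might plausibly produce.
// User input flows directly into HTML, so a malicious comment can
// inject script-capable markup.
function renderCommentUnsafe(author, text) {
  return `<li><strong>${author}</strong>: ${text}</li>`;
}

// Safer: escape the characters HTML treats as markup before rendering.
function escapeHTML(str) {
  return String(str)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

function renderComment(author, text) {
  return `<li><strong>${escapeHTML(author)}</strong>: ${escapeHTML(text)}</li>`;
}

const payload = '<img src=x onerror=alert(1)>';
console.log(renderCommentUnsafe('eve', payload)); // live markup slips through
console.log(renderComment('eve', payload));       // rendered as inert text
```

Both versions "work" on friendly input, which is exactly why a passing functional test says nothing about whether best practices were followed.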

Sure, a human coder might do the same thing. But that’s the point. Quality and accuracy aren’t assured in either case.

Information returned by AI tools can’t be guaranteed as accurate.

Creative Works Could Be Used Without Permission

It wasn’t so long ago that we saw a wave of legal threats from stock photography services. When an (allegedly) unlicensed version of a copyrighted image was found, the provider came down hard on the offending website owners.

AI is already providing those same lawyers with plenty of work. Getty Images, for example, filed a lawsuit against Stability AI, the company behind the Stable Diffusion art generator.

At first glance, this may appear to be a battle between corporations. But the impact could trickle down to end users as well. It’s fair to wonder about the risk of using AI-generated images on your website. Will it leave you open to legal threats?

Code is also a potential trap. Perhaps open-source materials are fine to use. But what if a tool indexes a snippet from a private repository? And how can an end user tell the difference?

Without processes in place that allow creators to opt in to feeding AI tools, we’ll never know where content comes from. That’s a risk.

The Stable Diffusion AI art generator is the subject of controversy.

There’s Always More Than Meets the Eye

Those touting the game-changing potential of AI aren’t necessarily wrong. There’s every reason to believe that this technology can reshape entire industries – if not the world.

But the skeptic in me feels like we’re only getting a partial story. After all, any tool that can be used to do good can also be used to harm. And the final judgment is truly in the eye of the beholder.

The bottom line is that individuals, organizations, and governments will use AI in ways that benefit them. Their interests won’t always align with everyone else’s. We must be smart enough to gauge their motives and determine whether they’re acceptable.

That’s worth keeping in mind as we’re inundated with grand proclamations about this nascent technology.