Travel Research Online

Learning to Live with Generative AI

Jim Cramer, the host of CNBC’s Mad Money, recently said: “I know there’s a lot of hype here, and in some individual cases it is overblown, but anybody who tells you that AI is pure hype, that person is only fooling herself.”

That seems to be a reasonable middle ground somewhere between utopia and oblivion. It’s a safe generalization. It would be absurd to write off all of AI because ChatGPT doesn’t quite live up to some of the predictions.

It does seem that for this particular wave of AI, the language models, the magic carpet is descending toward Earth a bit. People are beginning to understand it better: what it is capable of, how to use it, what to beware of, and its limitations.


A recent article in The Hill reported that “A Google Trends search from early August shows searches for ChatGPT have fallen by half since its peak in April. An accelerated adoption rate also seems to bring the ‘trough of disillusionment’ phase of the hype cycle just as quickly.”

The author, William Beutler, founder of the digital creative agency Beutler Ink, suggests that people are losing interest because they are discovering its limitations. “Early adopters of generative AI tools quickly became acquainted with its limitations, such as its lack of nuance and an inability to explain its decisions. Most perplexing is its tendency to ‘hallucinate’ data — a fancy way of saying it just makes things up. This realization came too late for two lawyers at Levidow, Levidow & Oberman, who in June were forced to pay $5,000 in fines after they submitted a ChatGPT-written brief that cited nonexistent cases.”

A report in Slate recently pointed out that, in its first year, ChatGPT has not yet “ignited the job apocalypse that so many predicted.”

The transformation is not as simple as the predictions were.

“Legal work, for instance, was supposedly squarely in AI’s sights,” said the article, “but law firms enthusiastically incorporating AI aren’t using it to replace lawyers. Allen & Overy, a firm that employs more than 3,000 lawyers worldwide, started working with a generative AI tool called Harvey last year and hasn’t replaced a single person with it.”

At first, ChatGPT was like a magic lamp. It was a black box with mysterious powers. How could it possibly do all that? It can write, like a human. It can converse with humans, like a human. It’s been said to be able to do almost anything. It’s easy to see that it can do a lot that no technology has done before, but learning how to employ its capabilities is a challenge.

I started experimenting with generative AI as soon as I discovered systems that were free to use. I experimented a lot with image-creating software such as Stable Diffusion, DALL-E, and others. It was mind-blowing what they could do.

I experimented methodically with the prompts, varying them slightly and comparing the results to try to get a feeling for what it was doing and how best to use it.

When ChatGPT was released, I experimented with it as well to see how generative AI works with words instead of images. I learned a lot hands-on. I read and listened to everything I could find about it.

Generative AI’s capacities with words are also incredible. The developers behind these systems deserve a lot of credit. It does appear, however, that as the novelty and surprise wear off, many of the more extreme speculations were overblown.

We tend to personify our machines and treat them like people. ChatGPT may talk like a human, but it is just an algorithm—not a sentient being.

Once, while cleaning up, I found a sneaker that had been pulled up into a reclining chair. The shoe had been caught in the underworkings of the chair for years. That became The Chair that Eats Shoes. I didn’t really believe that the chair had intentionally consumed the sneaker. But it was easy to think of it that way, with a face on it, like the cars in the animated feature Cars.

Cars have personalities or seem to because that’s how we are. “Please, oh please, start! I know it’s freezing, but please. Don’t strand me here!”

I read of an experiment that involved little electric cars with light sensors in front. They could be wired to steer away from light, or, by reversing the wiring, to steer toward the light. Human observers began to regard the ones that turned away from light as “shy” and the ones that steered into the light as “aggressive.” It was the simplest imaginable feedback device, but people ascribed human emotions and intentions to it.

When you’re looking at ChatGPT from outside, as an opaque box, it’s easy enough to imagine a person on the other side of that screen. But it’s really an algorithm trained to take your prompt and deliver the most probable selection of words, based on how frequently those words appear together in the colossal amounts of data scraped off the web.
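For readers who want to peek inside the box: the core idea of “most probable next word” can be illustrated with a toy sketch. This is a deliberately simplified word-frequency model, not how ChatGPT actually works internally (real systems use neural networks trained on vastly more data), and the tiny sample corpus here is invented for illustration.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word tends to follow which,
# then predict the most frequent follower. Real large language models
# do something far more sophisticated, but the underlying goal --
# picking the most probable next word -- is the same.

# Hypothetical miniature corpus, standing in for web-scale training data.
corpus = ("paris is a beautiful city and paris is full of art "
          "and the river seine runs through paris").split()

# Tally each word's observed followers.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def most_probable_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return next_words[word].most_common(1)[0][0]

print(most_probable_next("paris"))  # "is" follows "paris" most often here
```

Scaled up from a few dozen words to the better part of the internet, this averaging is what produces text that sounds plausible without any understanding behind it.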

Just as when your car or your laptop seems to be messing with you, it’s hard not to ascribe intention and sentience to the device.

I experimented a lot with ChatGPT. I tried it in different ways to try to learn what it could do, and how I might be able to use it. I learned some useful things from trying it, though I haven’t found much practical use for it personally.

Once, when I had just finished writing an article, I instructed ChatGPT to write one to the same specifications: a 1,200-word article about river cruising in France, including some advantages of seeing Paris that way.

The system coughed up a piece that was eerily on point. It wasn’t remotely what I had written, or would have written, but it was unsettlingly close to many articles and brochures I had read. ChatGPT had managed to clone the voice of those kinds of articles and marketing copy and put it into a structured article in English that sounded human.

It was disturbingly good, in the sense that it could easily replace writers doing that kind of writing. I had a boss once who insisted that his writers produce several articles a day, because each should take only 20 minutes. All you have to do is make a list of five things to do in Sydney, for example, and knock them out in 20 minutes. People don’t read more than a few paragraphs anyway, he said, according to the stats on websites. Why should you write 1,500 words when no one reads more than 700?

He insisted articles be done that way and he sure as hell wasn’t going to pay anyone for more than that. I couldn’t sustain myself doing that day to day. It was too boring. So, we parted ways. Now with the rise of ChatGPT, I’m glad I didn’t follow his instructions because that’s the kind of writing ChatGPT can do in the blink of an eye.

The algorithm will come up with the most probable words in relation to your prompt. It will give you the Great Mean, the average. There is no human spark there. Its version of creativity is a machine function, and that might include giving you false information.

ChatGPT showed me what not to do as a writer. I don’t want to write average, mediocre things that could easily be scraped together by an algorithm averaging all the articles and brochure copy about a given subject.

Now, more than ever, writing needs to have that spark of human life, imagination, and love. Same with graphics. The systems I experimented with produced some amazing images, but there was something strange and unsettling about them. It was hard to put my finger on what was so creepy. The way they portray human beings rarely looks real, or alive. Sometimes they have extra fingers or extra rows of teeth. They look like strangely soulless humanoid things and the different systems each have their own styles, which are recognizable. Same with the generative text. Some professors now say they are starting to recognize the use of ChatGPT in the work of students.

Yes, generative AI will be a powerful force in our world. It will change a lot of things in ways that no one can yet imagine. No developers intentionally built a program that would make up false stuff. We will see many more unintended results as these effects ripple through our world.

We’ll need to be alert, but the whole thing is coming down to earth a bit now. And that’s a good thing. We are beginning to understand it. We’ll need that, so that artificial intelligence doesn’t generate too much artificial stupidity.

 


David Cogswell is a freelance writer working remotely, from wherever he is at the moment. Born at the dead center of the United States during the last century, he has been incessantly moving and exploring for decades. His articles have appeared in the Chicago Tribune, the Los Angeles Times, Fortune, Fox News, Luxury Travel Magazine, Travel Weekly, Travel Market Report, Travel Agent Magazine, TravelPulse.com, Quirkycruise.com, and other publications. He is the author of four books and a contributor to several others. He was last seen somewhere in the Northeast US.
