
AI, Misnomers and Misinformation

Few events have generated as much hype and controversy as the introduction of ChatGPT last November. When its developer, OpenAI, provided open access to the generative AI program, the site set a record for the fastest-growing user base in history, reaching 100 million users in two months. At the same time, it generated a tsunami of wild speculation and panic.

It would be bad enough if artificial intelligence merely took over the jobs of travel professionals and virtually everyone else, as some predict. But since the arrival of ChatGPT, many of the pioneering developers of artificial intelligence have issued ominous warnings that AI may lead to the end of civilization itself. Some are predicting that it will lead to human extinction. Stephen Hawking, often called the smartest man in the world, warned that AI could be the “worst event in the history of our civilization.”


Hyperbole, anyone? It’s open season. Technological utopians are also shooting for the moon with their predictions.

Max Tegmark, professor of physics and an AI researcher at the Massachusetts Institute of Technology, wrote, “We can cure all diseases, stabilize our climate, eliminate poverty, etc. …”

Ray Kurzweil, computer scientist and inventor, said, “Everything’s going to improve. We will be able to cure cancer and heart disease, and so on… More intelligence will lead to better everything. We will have the possibility of everybody having a very good life.”

Arguably that possibility already exists. But increases in intelligence or technological capacity have not necessarily led toward “better everything” or to “everybody having a good life” before, and it’s not realistic to expect it to happen with AI.

I’ve been hearing these kinds of predictions all my life, only to see them collapse. When 1 percent of the world’s population holds as much wealth as the bottom half, we’re not even inching toward that utopian vision.

Much of the panic, as well as the utopian fantasizing, is based on imprecise language. The hype surrounding ChatGPT is rife with misnomers and misinformation.

There are always some in the media of any period who refuse to let facts get in the way of a good story. The end of civilization is too good a story to miss. What better clickbait could there be than to predict the end of everything that gives us support and comfort, everything we love and cherish?

The pile of misinformation and BS surrounding AI is so deep it’s hard to know where to begin to untangle it. Most of the hype is based on something that does not exist. You may have to skip to the fine print to discover this, but the fears of human extinction are based on artificial general intelligence, or AGI: a computer intelligence that can do anything humans can do, and do it better. AGI is not the AI technology designed for specific tasks, much of which has already proven enormously beneficial. That kind of general intelligence does not yet exist.

The use of the term “artificial intelligence” tends to be general and vague. Artificial intelligence refers to computer technology that produces results that seem to have been produced by humans.

ChatGPT certainly fits that bill. It passes the Turing test, the benchmark a computer meets when it can convince you that you are communicating with a human. ChatGPT may be able to convince you that it has human intelligence, but it is not thinking at all. It is merely an algorithm that is good at predicting the most likely series of words to come up in response to any given prompt. It is also astonishingly good at producing language that sounds like a person.

ChatGPT is a highly complex extension of the auto-complete software that tries to finish your sentences when you are writing in Gmail. It is predicting probabilities and, like auto-complete, it can provide inaccurate information. If it were human, you might say that it is making a mistake. Some AI developers say that it is “hallucinating,” a term for results that are vastly out of alignment with reality or that make no sense in the context of the given prompt.

Whatever you call it, you’d better not rely on ChatGPT for accurate information. It may give it to you, but it may not. What it can do, and how closely it emulates human intelligence, is amazing. But when you break it down, it’s searching vast amounts of data and giving you the most likely sequences of words that relate to your question.
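To make the “most likely next word” idea concrete, here is a minimal illustrative sketch in Python. It is a toy bigram model, not how ChatGPT actually works (ChatGPT uses a large neural network trained on enormous amounts of text), and the sample phrases and the pick_next helper are invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy "training" text -- invented for illustration only.
corpus = (
    "the flight was delayed . the flight was cancelled . "
    "the hotel was lovely . the hotel was booked ."
).split()

# Count which word tends to follow which (a simple bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def pick_next(word):
    """Return the word most often seen after `word` in the toy corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# "Autocomplete" one word at a time, always taking the most probable choice.
word, sentence = "the", ["the"]
for _ in range(3):
    word = pick_next(word)
    if word is None or word == ".":
        break
    sentence.append(word)

print(" ".join(sentence))  # most likely continuation, e.g. "the flight was delayed"
```

The toy always picks the statistically safest continuation. Scale that principle up by billions of parameters and you get fluent, plausible text, with no built-in guarantee that any of it is true.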

This will work for many functions, such as writing a routine email, just as auto-complete does. Any writing job that is done repeatedly and requires no originality or accurate information can be automated with ChatGPT. AI shines when applied to limited areas of inquiry and limited data sets.
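For readers who want to experiment with that routine-writing use case, here is a minimal sketch using OpenAI’s official Python library (the openai package, version 1.0 or later). The model name, the prompt wording, and the booking-confirmation scenario are assumptions chosen for illustration, not a recommendation.

```python
# Minimal sketch of drafting a routine client email with OpenAI's Python library
# (the `openai` package, v1.0+). Requires an OPENAI_API_KEY environment variable.
# The model choice, prompt, and scenario are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name; substitute whatever you have access to
    messages=[
        {"role": "system",
         "content": "You draft short, friendly emails for a travel agency."},
        {"role": "user",
         "content": ("Write a three-sentence confirmation email to a client whose "
                     "7-night Caribbean cruise departs Miami on March 14.")},
    ],
)

draft = response.choices[0].message.content
print(draft)  # Review and fact-check before sending; the model can get details wrong.
```

Even for a routine email like this, the caveat above stands: a human should review the draft for accuracy before it goes to a client.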

AGI

Artificial General Intelligence is a whole other order of intelligence. A reasonably good definition of AGI appears on the TechTarget site:

Artificial general intelligence (AGI) is the representation of generalized human cognitive abilities in software so that, faced with an unfamiliar task, the AGI system could find a solution. The intention of an AGI system is to perform any task that a human being is capable of.

This means computer intelligence that can do whatever humans can do, only better. This does not exist. It has been predicted to be imminent for decades, but the goal continues to be elusive. That’s not to say it can’t happen; at this point, however, it is still science fiction. The introduction of ChatGPT renewed predictions that AGI would be arriving shortly. But ChatGPT is not AGI.

This is one of those areas where history seems to periodically get wiped away and forgotten, like messages written in sand on the beach.

For example, in 1965, Herbert Simon predicted that “machines will be capable, within twenty years, of doing any work a man can do.”

In 1970, Marvin Minsky said, “In from three to eight years we will have a machine with the general intelligence of an average human being.” These claims represent the general beliefs of AI developers over the decades.

Whenever the date of a prediction arrives and the prediction hasn’t come true, the goal posts are moved. No one can say that it can’t happen, but until now it has not happened, in spite of countless predictions.

Computers already exceed human intelligence in many ways, and have for decades. Try to match the power of a computer at adding a column of figures and you can’t even come close. But general intelligence? That’s another matter. In that area, the predictions have always been over-optimistic. ChatGPT, impressive as it is, does not bring us to AGI. If a frog covers half of the remaining distance to a wall with every jump, it never reaches the wall.
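For anyone who wants to see the frog’s arithmetic spelled out, a quick sketch: each jump halves the remaining gap, so progress keeps growing but the gap never closes.

```python
# The frog starts 1 unit from the wall and covers half the remaining gap each jump.
remaining = 1.0
for jump in range(1, 11):
    remaining /= 2
    print(f"after jump {jump}: {1 - remaining:.6f} of the way, gap = {remaining:.6f}")
# The gap halves forever (1/2, 1/4, 1/8, ...) but never reaches zero.
```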

Now eight months into ChatGPT’s public life, its usage has dipped. The data analysis company Similarweb reported that ChatGPT’s traffic decreased by nearly 10 percent from May to June. Sensor Tower, another analytics firm, reported that ChatGPT’s iPhone app downloads have been steadily dropping since they peaked in early June.

Now Threads, the new app from Meta that emulates Twitter, has buried ChatGPT’s record for speed of new-user adoption. While ChatGPT took two months to reach 100 million users, Threads reached 70 million in two days.

This downward trend may be evidence that the novelty is wearing off, that people are reaching the limits of what they can do with ChatGPT and are drifting away from it.

I’ve experimented at length with generative AI to try to discover what I could do with it. It is fascinating, but after a while it became dull. The most probable results are not the most interesting.

It certainly will be useful in many ways. AGI it is not, however. Though it may take away some jobs, transform others, and enhance the powers of people performing many more, we are still nowhere near a computer that can do everything humans can do, only better. When it comes to making my travel plans, I will still prefer human counseling, with the help of AI, for a long time to come.

I saw a report recently that claimed that not only will AI do everything humans can do, but that “computers don’t make mistakes.” I remember when I believed that. It was a long time ago. Call them mistakes, errors, what you will, computers can produce disastrous outcomes.

I’m not laying odds on how long civilization will survive. But I don’t believe that ChatGPT will be what brings it down, at least not right away. Meanwhile, if your job requires human originality, you probably won’t be replaced by a computer or a robot for a long, long time.



David Cogswell is a freelance writer working remotely, from wherever he is at the moment. Born at the dead center of the United States during the last century, he has been incessantly moving and exploring for decades. His articles have appeared in the Chicago Tribune, the Los Angeles Times, Fortune, Fox News, Luxury Travel Magazine, Travel Weekly, Travel Market Report, Travel Agent Magazine, TravelPulse.com, Quirkycruise.com, and other publications. He is the author of four books and a contributor to several others. He was last seen somewhere in the Northeast US.
