LLM stands for Large Language Model. The best-known example of an LLM today is probably ChatGPT. Many people have heard ChatGPT and similar tools called “artificial intelligence.” For most of us, that term calls to mind things like Cortana from Halo, the shipboard computer Eddie from The Hitchhiker’s Guide to the Galaxy, or HAL from the movie 2001. These characters are intelligent in the same way people are: they can interpret information, draw logical inferences from it, and ultimately make informed decisions. This is VERY different from how ChatGPT and other LLMs work.
An LLM works by absorbing large amounts of data. In the case of ChatGPT, this data is pretty much everything ever posted on the internet. From this data, the LLM learns how certain words are commonly combined. It learns to mimic certain styles of writing. And it builds up a huge store of information. With these resources, it can do some amazing things, like respond to the request “Tell me a joke like Jerry Seinfeld” with,
Sure, here’s a joke that’s Seinfeld-esque:
“I don’t understand why people say ‘no pun intended’ after making a pun. If you’re not intending to make a pun, then what are you doing? Accidentally rhyming words? That’s not a pun, that’s just a coincidence. You don’t hear people say ‘no alliteration intended’ after using the same letter multiple times in a sentence. It’s like they’re apologizing for being clever.”
And yes, that is a real example from ChatGPT. It’s incredibly impressive, and it can be extremely useful in many cases.
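To make the “learns how certain words are commonly combined” idea concrete, here is a deliberately tiny sketch. It is not how ChatGPT actually works (real LLMs use neural networks trained on billions of examples, not word counts); it just shows the core idea of generating text by predicting a likely next word from patterns seen in training data. The corpus and function names are invented for illustration.

```python
from collections import defaultdict, Counter

# A toy "training corpus" (real LLMs train on vastly more text).
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def generate(start, length=8):
    """Generate text by repeatedly picking the most common next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # no known continuation, stop
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

Notice that the output is fluent-looking but meaningless: the model has no idea what a cat is, only which words tended to follow which in its data. That, at toy scale, is the gap between sounding intelligent and being intelligent.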
But, as with any tool, it is important to understand how the tool works and what its limitations are. So here are a few very important things to know about ChatGPT and other LLMs:
- These tools are not intelligent in the way humans are or the way advanced AIs in movies are. They cannot understand the information they have access to or provide.
- An LLM cannot tell the difference between true and false information. Remember that LLMs like ChatGPT have absorbed everything ever written online. That includes everything from reputable scholarly sites, but also everything from the darkest, scariest conspiracy theory sites. And the model has no concept of truth or fiction, so it cannot really tell the difference between a scholarly paper about COVID, a fictional story about COVID, and an outright conspiracy theory about COVID that no sane person would believe.
- ChatGPT’s main purpose is to write in a natural way, so it sounds more like a human. Because of this, it will sometimes “make up” information if it doesn’t know the right answer (a behavior often called “hallucination”).
- In the end, ChatGPT is simply regurgitating information it has found on other websites. In this sense, it is no different from a search engine like Google. The difference is that ChatGPT will not provide any context about which site the information it is providing comes from. So it is very hard for you, as a consumer, to know how much trust to put in anything you get from the service.
- Finally, anything you tell ChatGPT may be added to its trove of data. Researchers have already demonstrated that, by asking the right questions, it is possible to get ChatGPT to repeat pieces of the data it was trained on. These so-called “training data extraction attacks” can recover verbatim text sequences, personally identifiable information (PII), and other information from ChatGPT.
As the technology behind LLMs continues to advance, these limitations will change. New concerns will be uncovered and new ways to exploit the technology will be developed. For now, ChatGPT and tools like it should be considered interesting and potentially useful. But you should carefully research any information you get from an LLM before accepting it as true.