Large Language Models: A Short Introduction
And why you should care about LLMs
There’s an acronym you’ve probably heard non-stop for the past few years: LLM, which stands for Large Language Model.
In this article we’re going to take a brief look at what LLMs are, why they’re an extremely exciting piece of technology, and why they matter to you and me.
Note: in this article, we’ll use Large Language Model, LLM and model interchangeably.
What is an LLM
A Large Language Model, typically shortened to LLM since the full name is a bit of a tongue twister, is a mathematical model that generates text, for example by filling in the gap for the next word in a sentence [1].
For instance, when you feed it the sentence The quick brown fox jumps over the lazy ____, it doesn’t know for certain that the next word is dog. Instead, the model produces a list of possible next words, each with a probability of coming next in a sentence that starts with those exact words.
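To make this concrete, here is a minimal sketch of that next-word prediction step. It assumes the Hugging Face transformers and PyTorch packages and uses the small, publicly available GPT-2 model purely as an illustration; the exact words and probabilities will differ from model to model.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small pretrained language model and its tokenizer (illustrative choice).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

sentence = "The quick brown fox jumps over the lazy"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocabulary_size)

# The scores at the last position rank every token in the vocabulary as a
# candidate for the next word; softmax turns those scores into probabilities.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Print the five most likely next words and their probabilities.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")

Running this prints a handful of candidate next words with their probabilities, and for a reasonably trained model, " dog" should appear at or near the top of that list, which is exactly the kind of output sketched above.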
LLMs are so good at predicting the next word in a sentence because they are trained on an incredibly large amount of text, typically scraped from the Internet. So, if a model happens to be ingesting the text in this article: Hi