What is the use of model weights and parameters?
A model weight is used to decide what matters more when the model makes a decision.
That’s the only job of a weight.
Before we go further, one important clarification.
When people say weights or parameters, they are talking about the same thing.
They are just numbers learned by the model. Different words, same idea.
Now, let's look at an everyday example.
You are deciding:
Should I carry an umbrella?
You look at three things:
Is it raining?
Is it cloudy?
Is it windy?
But you don’t treat them equally.
Rain matters a lot, clouds matter a little, and wind barely matters.
So in your head, you automatically do this:
Rain → very important
Clouds → somewhat important
Wind → not very important
Those “importance levels” are weights.
They help you reach the right decision.
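The umbrella decision can be sketched in a few lines of Python. The weights here are made-up numbers chosen just to mirror the "importance levels" above:

```python
# Toy decision: a weighted sum of observations.
# The weights are invented for illustration -- rain matters most.
weights = {"rain": 0.8, "clouds": 0.15, "wind": 0.05}

def carry_umbrella(observations):
    """Return True if the weighted evidence crosses a threshold."""
    score = sum(weights[k] * observations[k] for k in weights)
    return score > 0.5

# Raining, but not cloudy or windy -> strong signal, carry it
print(carry_umbrella({"rain": 1, "clouds": 0, "wind": 0}))  # True

# Only windy -> weak signal, leave it at home
print(carry_umbrella({"rain": 0, "clouds": 0, "wind": 1}))  # False
```

Notice that changing the numbers changes the decision; the logic stays the same. That is all a weight does.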
Now the same idea in LLMs
An AI model constantly answers one question:
What should come next?
Example:
“I love ___”
Possible next words:
- you
- food
- rain
- bugs
The model assigns numbers to each option:
you → high number
food → medium number
rain → low number
bugs → very low number
The word with the highest number is chosen.
👉 That is the use of weights:
to score options and pick the most likely one.
A weight is not a rule or a sentence stored in memory.
It is just a number — a small nudge — that makes one word slightly more likely than another.
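That scoring-and-picking step can be sketched like this. The scores are invented; real models compute them from billions of weights, but the final step is the same idea:

```python
import math

# Hypothetical scores after "I love ___" (numbers invented for illustration)
scores = {"you": 9.1, "food": 6.3, "rain": 2.0, "bugs": 0.4}

# The word with the highest score wins
next_word = max(scores, key=scores.get)
print(next_word)  # you

# Real models first turn raw scores into probabilities (softmax)
total = sum(math.exp(s) for s in scores.values())
probs = {w: math.exp(s) / total for w, s in scores.items()}
```

A small nudge to one score can flip which word comes out on top, which is why each individual weight only needs to push "slightly".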
Why weights are necessary
Without weights:
- every word would be treated the same
- the model would guess randomly
- language would make no sense
With weights:
- common patterns matter more
- rare or wrong patterns matter less
- predictions improve over time
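You can see the "random guessing" point directly: with equal scores, every option gets the same probability, so picking among them is a pure guess. With learned scores, common patterns dominate. (Scores are invented for illustration.)

```python
import math

options = ["you", "food", "rain", "bugs"]

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    total = sum(math.exp(s) for s in scores.values())
    return {w: math.exp(s) / total for w, s in scores.items()}

# Without learned weights: every option scores the same -> 25% each, a coin flip
uniform = softmax({w: 0.0 for w in options})

# With learned weights: the common pattern gets almost all the probability
learned = softmax({"you": 9.1, "food": 6.3, "rain": 2.0, "bugs": 0.4})
print(round(uniform["you"], 2), round(learned["you"], 2))
```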
How weights get their correct values
At first, weights are random → wrong answers.
Each time the model is wrong:
- it compares its answer with the correct one
- slightly adjusts the weights
- tries again
After millions of examples:
useful connections get stronger weights and useless ones get weaker weights.
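That compare-and-adjust loop can be shown with a single weight. This is not how a real LLM is trained, just a minimal sketch of the idea: start random, measure the error, nudge the number, repeat.

```python
import random

# Learn one weight w so that prediction = w * x matches the target.
# The "correct" relationship here is target = 2 * x.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, correct answer)

w = random.uniform(-1.0, 1.0)  # random start -> wrong answers at first
lr = 0.05                      # how big each nudge is

for _ in range(1000):
    x, target = random.choice(examples)
    prediction = w * x
    error = prediction - target   # compare with the correct answer
    w -= lr * error * x           # slightly adjust the weight
    # ...and try again on the next example

print(round(w, 2))  # ends up very close to 2.0
```

No reasoning happened anywhere in that loop; the number simply drifted toward whatever value reduced the error.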

This habit of adjusting weights didn't come from reasoning or understanding.
It came from practice — seeing similar sentences millions of times before.
Here, learning doesn’t mean understanding like a human.
It simply means numbers being adjusted based on past patterns.
So, if you remember just one thing from this blog, let it be this:
A weight, also called a parameter, is a number the model learned because it helped reduce mistakes.
All the impressive behavior we see later emerges from those numbers.
With this, it becomes clear why people say a model has 7 billion parameters — it simply means there are 7 billion learned numbers shaping its behavior.
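The count is literal. For example, a single tiny fully connected layer with 3 inputs and 4 outputs already has 3 × 4 connection weights plus 4 biases:

```python
# Parameters of one small fully connected layer:
# one weight per input-output connection, plus one bias per output.
n_inputs, n_outputs = 3, 4
n_params = n_inputs * n_outputs + n_outputs
print(n_params)  # 16
```

A 7-billion-parameter model is the same bookkeeping repeated across many, much larger layers.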
So at its core, an LLM is not storing answers.
It is carrying a vast set of learned numbers that gently push it toward one word instead of another.
I hope I was able to explain this in a clear and simple way. Happy Learning!