In this article, I’ll show you how to use the Modelfile in Ollama to change how an existing LLM (Llama2) behaves when you interact with it. I’ll also show you how to save your newly customized model to your personal namespace on the Ollama server.
I know it can get a bit confusing with all the different ”llamas” flying around. Just remember: Ollama is the company that lets you download and locally run many different LLMs, while Llama2 is a particular LLM created by Meta, the owner of Facebook. Apart from this relationship, they aren’t connected in any other way.
If you’ve never heard of Ollama before, I recommend that you check out my article below, where I go into depth on what Ollama is and how to install it on your system.
What’s a modelfile?
In Ollama, a modelfile is a configuration file that defines the blueprint for creating and sharing models with Ollama. The modelfile contains information such as:
- Base Model Reference. All modelfiles must have a model that they use as the basis for any new model.
- Parameters. These specify things such as the temperature, top_k and top_p that should be applied to the new model. We’ll talk more about these later on.
- Template. This specifies the final prompt that will be passed to the LLM.
- System. This sets the system message, which determines how the model behaves overall.
There are other properties the modelfile can make use of, but we’ll only be using the ones above. There’s a link to the Ollama documentation at the end of the article if you want to find out more about this.
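To make this concrete, here is a minimal sketch of a modelfile that uses the four properties above. The parameter values, template and system prompt are illustrative choices, not recommendations:

```
# Base model reference: assumes Llama2 has already been pulled locally
FROM llama2

# Sampling parameters applied to the new model
PARAMETER temperature 0.9
PARAMETER top_k 50
PARAMETER top_p 0.9

# Template: how the final prompt passed to the LLM is assembled
TEMPLATE """{{ .System }}
User: {{ .Prompt }}"""

# System: steers the model's overall behaviour
SYSTEM "You are a helpful assistant that answers concisely."
```

Saving this to a file (for example `Modelfile`) is all that’s needed to build a new model from it later with `ollama create`.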
The base model
The first thing we need to do is identify an existing model so we can examine its properties and make the changes we want to it. For that, I’m going…
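One way to examine an existing model’s properties is to print its modelfile directly from the command line. The commands below assume Ollama is installed and the `llama2` model has already been pulled:

```
# List the models available locally
ollama list

# Print the modelfile that the llama2 model was built from
ollama show llama2 --modelfile
```

The output of the second command shows the FROM, TEMPLATE and other directives that Llama2 ships with, which makes a handy starting point for your own modelfile.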