Running LLMs locally is great for privacy and offline access.
My current setup uses Jan.ai and Ollama. Instructions for wiring the two together seem hard to find, so I figured I'd share how I did it.
Jan.ai works with OpenAI-compatible APIs, and Ollama exposes one, so you can hook them up by creating a new Engine.
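Before configuring Jan, it's worth confirming that Ollama's OpenAI-compatible endpoint is actually up. Here's a minimal Python sketch, assuming Ollama is running on its default port (11434) and mirrors the OpenAI `/v1/models` response shape:

```python
import json
import urllib.request

# Ollama serves an OpenAI-compatible API on port 11434 by default.
BASE_URL = "http://localhost:11434/v1"

# List the models Ollama has available locally; this is the same
# endpoint Jan will poll via the "Model List URL" setting below.
with urllib.request.urlopen(f"{BASE_URL}/models") as resp:
    models = json.load(resp)

for model in models["data"]:
    print(model["id"])
```

If this prints the models you've pulled with Ollama, the endpoint is ready and the setup below should work.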
- Open Jan
- Click the settings icon in the lower left corner.
- Under "General" select the "Engines" tab.
- Select "Install Engine" and fill out the following details, leaving any extra fields blank. Engine Name: Ollama Chat Completions URL: http://localhost:11434/v1/chat/completions Model List URL: http://localhost:11434/v1/models API Key: ollama
- Click "Install"
Now when you create a new thread, you should be able to select "Ollama" from the cloud tab and then pick one of your pre-downloaded models from the list.
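Under the hood, Jan is sending standard OpenAI-style chat completion requests to the URL you configured. Here's a sketch of the same call in Python; the model name ("llama3.2") is just a placeholder and should match a model you've actually pulled with Ollama:

```python
import json
import urllib.request

URL = "http://localhost:11434/v1/chat/completions"

# "llama3.2" is a placeholder; substitute any model you've pulled.
payload = {
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Say hello in five words."}],
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Send the request and print the assistant's reply.
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

print(reply["choices"][0]["message"]["content"])
```

If this returns a response, anything Jan does through the Engine should work too.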
Enjoy!