My Project: Building a Local Conversational Chatbot with Ollama and Multiple LLMs
In this project, I built a local conversational chatbot similar to ChatGPT using Ollama and several open-source large language models (LLMs). Below is a step-by-step outline of how I accomplished this:
1. Development Environment
I used Visual Studio Code (VSCode) as my development environment. I have over eight years of experience using VSCode, and it remains my go-to tool for writing and managing code.
2. Installing Ollama
I installed Ollama on my local computer. Ollama is a tool that hosts a library of open-source LLMs and lets you run them locally. I downloaded and installed the Ollama runtime directly onto my machine.
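On Linux, Ollama can be installed with the one-line script from the official site (macOS and Windows use a downloadable installer instead); a minimal sketch, assuming a Linux machine:

```shell
# Official install script URL (Linux); macOS/Windows use a GUI installer.
OLLAMA_INSTALL_URL="https://ollama.com/install.sh"
curl -fsSL "$OLLAMA_INSTALL_URL" | sh

# Verify the binary is on the PATH.
ollama --version
```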
3. Downloading Llama 3.2
To begin, I downloaded the Llama 3.2 model using the following command:
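The pull command looked like this (the bare `llama3.2` tag resolves to the default 3B variant):

```shell
# Pull the Llama 3.2 model (default tag; the 3B variant is ~2 GB).
MODEL="llama3.2"
ollama pull "$MODEL"
```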
Llama 3.2 is approximately 2.0 GB in size. This command ensures the latest version is pulled. I verified the installation using:
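The verification command:

```shell
# List all locally installed models, their IDs, and sizes.
ollama list
```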
This command displays all the models currently installed on my system.
4. Setting Up Ollama Web UI
To give my chatbot a ChatGPT-like interface, I installed Open WebUI (formerly known as Ollama WebUI). This required Docker, as the UI runs inside a Docker container. The instructions and two setup commands (one for CPU, one for GPU) are available on the Open WebUI GitHub page. Since my computer doesn’t have a GPU and runs on 16 GB of RAM, I used the CPU version.
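For reference, the CPU-only setup command from the project's GitHub README looked like this at the time of writing (image name, volume, and port mapping may have changed since):

```shell
# Run Open WebUI in Docker (CPU-only), exposing it on local port 3000
# and persisting chat data in a named volume.
HOST_PORT=3000
docker run -d -p "$HOST_PORT":8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

The `--add-host` flag lets the container reach the Ollama server running on the host machine.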
5. Running the Web UI
Before starting the Web UI, I ensured that Docker was running. Once started, the Web UI becomes accessible locally in the browser (with the default setup, at http://localhost:3000). This is where I could interact with my Llama 3.2 chatbot.
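Besides the browser UI, the same model can also be queried directly through Ollama's local REST API (served on port 11434 by default); a minimal sketch:

```shell
# Send a single non-streaming prompt to the local Ollama API.
API_URL="http://localhost:11434/api/generate"
curl "$API_URL" -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```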
6. Exposing the Application Online with Ngrok
To make my application accessible over the internet, I used Ngrok, a secure tunneling tool. I installed the free version and followed the setup instructions from the Ngrok website. Once configured, Ngrok provided an HTTPS domain that exposed my locally hosted chatbot to the world.
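After registering the authtoken per the setup instructions, exposing the UI is a single command. A sketch, assuming the Web UI runs on local port 3000:

```shell
# One-time setup: register the authtoken from the ngrok dashboard.
NGROK_AUTHTOKEN="paste-your-token-here"   # placeholder, not a real token
ngrok config add-authtoken "$NGROK_AUTHTOKEN"

# Tunnel the local Web UI port to a public HTTPS URL.
ngrok http 3000
```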
7. Hosting the Application Online
I created a simple one-page website on my hosting server and embedded the chatbot interface. You can access the online version of the chatbot here:
8. Expanding with Multiple LLMs
To enhance my application, I added support for multiple LLMs, transforming it into a multi-model AI platform. The additional models include:
- Phi (by Microsoft)
- DeepSeek-R1
- Mistral
- TinyLlama
Each of these models was downloaded and integrated using Ollama, enabling users to switch between different language models within the chatbot interface.
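The additional downloads follow the same pattern as Llama 3.2. The tags below are the current names in the Ollama library and are my assumption, since the post does not list the exact tags used:

```shell
# Pull each additional model from the Ollama library (tags assumed).
for MODEL in phi deepseek-r1 mistral tinyllama; do
  ollama pull "$MODEL"
done
```

Once pulled, every model shows up in the Web UI's model selector, so users can switch between them per conversation.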