Introduction
Welcome to today’s detailed tutorial on leveraging the power of Open WebUI to run AI models like ChatGPT on your local machine. This guide is part of our ongoing journey into artificial intelligence, focusing on making complex technology accessible and practical for developers and AI enthusiasts alike.
Open WebUI is a user-friendly graphical interface (GUI) that allows you to run AI models efficiently on your local setup. It supports integration with popular AI models such as LLaMA3, Gemma, GPT-4 and GPT-3, offering flexibility through API keys. This post will walk you through setting up the necessary environments using Anaconda and Docker, followed by employing Open WebUI for your AI projects.
Table of Contents
- Introduction
- Why Use Open WebUI?
- Preliminary Setup: Anaconda Installation
- Creating and Managing Anaconda Environments
- Integrating Open WebUI with Anaconda and Docker
- Running Open WebUI on Localhost
- Creating and Configuring a User Account
- Demonstrating Open WebUI Features
- Next Steps and Future Videos
- Conclusion
Why Use Open WebUI?
Open WebUI provides an intuitive platform to harness the capabilities of AI models locally, keeping your chats private on your own machine. It simplifies interacting with large language models by presenting options through a graphical interface rather than complex command-line operations. Additionally, it integrates seamlessly with various environments, including those set up through Anaconda and Docker, providing versatile and scalable solutions for your AI development needs.
Preliminary Setup: Anaconda Installation
Step 1: Downloading Anaconda
To start with, download Anaconda from the official Anaconda website.
- Navigate to the download section.
- Enter your email address as required and submit.
- Download the script suitable for your Linux system.
Step 2: Installing Anaconda
Once downloaded, the installation script will typically be located in your ‘Downloads’ folder.
Copy the Anaconda install script to the location you want to run it from; I recommend your home directory (/home/user).
cd ~/Downloads
ls
# Verify you've got the installer script, e.g., Anaconda3-2022.11-Linux-x86_64.sh
Make the script executable and run it:
chmod +x Anaconda3-2022.11-Linux-x86_64.sh
./Anaconda3-2022.11-Linux-x86_64.sh
Follow the on-screen instructions, pressing Enter to proceed through the license agreement. Type yes to accept the agreement, then follow the prompts to finalise the installation.
Step 3: Setting Up the Environment
After installing, you’ll need to initialise Conda:
conda init
Next, close your terminal and reopen it to activate the changes. The base environment will now be active by default.
Creating and Managing Anaconda Environments
Creating a specific environment for Open WebUI ensures that all necessary dependencies are contained within a dedicated space, preventing conflicts with other projects.
conda create -n openwebui python=3.11
conda activate openwebui
The full command listing is at the bottom of this page.
Verify the environment by listing the installed packages:
conda list
Keep this environment activated for all subsequent steps.
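If you later want to reproduce or clean up this environment, conda's built-in commands cover both. A quick reference sketch, saved as a script so nothing runs by accident:

```shell
# Common environment-management commands, collected as a reference script.
cat > manage-env.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
conda env list                          # show all environments
conda env export > environment.yml      # snapshot installed packages
conda deactivate                        # leave the environment
conda env remove -n openwebui           # delete it entirely (careful!)
EOF
bash -n manage-env.sh                    # syntax-check only; nothing executes
```

The exported environment.yml can recreate the environment elsewhere with conda env create -f environment.yml.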
Integrating Open WebUI with Anaconda and Docker
Step 1: Cloning the Repository
First, clone the Open WebUI repository from GitHub.
git clone https://github.com/open-webui/open-webui.git
cd open-webui
Step 2: Setting Up Node.js and NPM
Within the Anaconda environment, install Node.js and its package manager:
conda install -c conda-forge nodejs
npm install npm@latest -g
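Before building, it is worth confirming that node and npm resolve from inside the conda environment rather than a system-wide install. A small sanity-check sketch (version numbers will vary):

```shell
# Sanity-check script: both binaries should live inside the conda env.
cat > check-node.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
which node npm          # paths should point into your conda environment
node --version          # Open WebUI's frontend build expects a recent Node.js
npm --version
EOF
chmod +x check-node.sh
```

If which reports paths outside the conda environment, re-activate the environment and check your PATH.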
Step 3: Installing Project Dependencies
Navigate to the project directory and install dependencies:
npm i
npm run build
Step 4: Configuring Environment Variables
Copy the provided example environment file and adjust as needed:
cp .env.example .env
# Edit the .env file to include your specific configuration, such as API keys.
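For reference, a minimal .env might contain entries along these lines. The key names below are illustrative; check the .env.example in your clone for the exact variables your version uses:

```
# Illustrative .env entries -- confirm exact key names in .env.example
OLLAMA_BASE_URL='http://localhost:11434'
OPENAI_API_KEY='sk-...'
```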
Running Open WebUI on Localhost
Step 1: Starting the Backend Server
From the project root, change into the backend directory, install the Python dependencies, and start the server:
cd ./backend
pip install -r requirements.txt -U
bash start.sh
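Once start.sh is running, you can verify the backend responds before opening a browser. A small polling sketch (port 8080 is assumed, matching the next step):

```shell
# Poll the local server until it answers, or give up after ~30 seconds.
cat > check-server.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
for i in $(seq 1 30); do
  if curl -fsS http://localhost:8080 >/dev/null 2>&1; then
    echo "Open WebUI is up"
    exit 0
  fi
  sleep 1
done
echo "server did not respond" >&2
exit 1
EOF
chmod +x check-server.sh
```

Run ./check-server.sh in a second terminal while start.sh is running in the first.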
Step 2: Accessing Open WebUI
Open a browser and navigate to localhost:8080. You’ll see the Open WebUI sign-up page.
Creating and Configuring a User Account
Create a user account for accessing the interface. Fill out the required fields:
- Username:
testuser
- Email:
testuser@example.com
- Password: Choose a strong password
Once signed up, log in to access the Open WebUI dashboard.
Demonstrating Open WebUI Features
After logging in, navigate to the ‘Models’ section. Here, you can load and manage various AI models.
Example Query
Ask a simple question to the AI model to see it in action:
What is the best language model to use?
The AI will process the query and provide an answer based on the loaded model.
Performance Note
If running in a VirtualBox VM with limited resources, performance may be slower. In future videos, we will explore setups on more robust systems with dedicated GPUs.
Next Steps and Future Videos
Our journey doesn’t end here. In subsequent videos, we will:
- Deploy Open WebUI on higher-end hardware.
- Explore advanced configurations and usage scenarios.
- Integrate other AI models and APIs for enhanced functionality.
- Utilise admin panel settings for refined controls and customisation.
Conclusion
We’ve covered the essential steps for setting up and running Open WebUI on your local machine. By creating dedicated environments with Anaconda and leveraging Docker for containerised deployment, you now have a robust setup to explore the capabilities of AI models.
Stay tuned for our upcoming videos, where we delve deeper into optimising and expanding your AI development toolkit. If you found this guide helpful, don’t forget to like, share, and subscribe to our channel for more insights and tutorials.
Open WebUI: Full Command Listing
git clone https://github.com/open-webui/open-webui.git
cd open-webui/
conda create -n open-webui python=3.11
conda activate open-webui
conda install -c conda-forge nodejs
# Node.js 20 or newer is recommended for the frontend build
sudo apt install ffmpeg #FFmpeg (Fast Forward MPEG) is a powerful, open-source software suite for handling multimedia data
sudo apt install uvicorn #Uvicorn is a lightning-fast ASGI (Asynchronous Server Gateway Interface) server implementation, used primarily with Python web applications.
Copying required .env file
cp -RPp .env.example .env
Building Frontend Using Node
npm i
npm run build
If required: npm audit fix
Serving Frontend with the Backend
cd ./backend
pip install -r requirements.txt -U
bash start.sh
To update from Git, browse into the folder and pull the latest changes:
cd open-webui
conda activate open-webui
git pull origin main
Re-run the install and build process (npm install, npm run build, and pip install -r backend/requirements.txt -U). Deactivate conda all the way, reactivate it, then start Open WebUI as usual.
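The update steps can be collected into one small helper script. A sketch, assuming the repository lives at ~/open-webui and that you run it from inside the activated conda environment:

```shell
# Helper script to pull and rebuild Open WebUI.
# Assumes the repo is at ~/open-webui; run inside the activated conda env.
cat > update-openwebui.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
cd "$HOME/open-webui"
git pull origin main                          # fetch the latest code
npm install                                   # refresh frontend dependencies
npm run build                                 # rebuild the frontend
pip install -r backend/requirements.txt -U    # refresh backend dependencies
EOF
chmod +x update-openwebui.sh
```

Activating a conda environment from inside a non-interactive script is unreliable, which is why the script expects the environment to be active already.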