Open WebUI Main Page
Respect my work
Please do not share website content, slides, or notes with anyone who is NOT currently taking this course. This is my own intellectual work, and I hope you will respect that.
Open WebUI
Open WebUI is an extensible, feature-rich, and user-friendly self-hosted AI platform designed to operate entirely offline. It supports various LLM runners like Ollama and OpenAI-compatible APIs, with a built-in inference engine for RAG, making it a powerful AI deployment solution.
Documentation: link
Community: link
See the left menu for the various things you can do with Open WebUI, or use the table of contents below.
Table of Contents for Using Open WebUI
Accessing LLM models
- Accessing models using Method 1 - Local downloaded models (../computing/computing-openwebui-accessingmodel1.qmd)
- Accessing models using Method 2 - OpenAI API
- Accessing models using Method 3 - Groq fast AI inference (a connection sketch covering all three methods follows this list)
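All three methods end up looking almost identical from a script's point of view, because local runners such as Ollama, OpenAI itself, and Groq all expose an OpenAI-compatible chat-completions endpoint. The Python sketch below is not part of the course materials; it only illustrates that switching methods mostly means changing the base URL, API key, and model name. The specific model names, port, and environment-variable names are assumptions for illustration.

```python
# Minimal sketch (assumptions, not the course setup): the same OpenAI-compatible
# client talks to a local Ollama server, OpenAI, or Groq by swapping the base URL,
# API key, and model name.
import os

from openai import OpenAI  # pip install openai

BACKENDS = {
    # Method 1: a locally downloaded model served by Ollama (default port 11434)
    "local-ollama": {
        "base_url": "http://localhost:11434/v1",
        "api_key": "ollama",            # Ollama ignores the key, but the client requires one
        "model": "llama3.2",            # any model you have pulled locally
    },
    # Method 2: OpenAI's hosted API
    "openai": {
        "base_url": "https://api.openai.com/v1",
        "api_key": os.environ.get("OPENAI_API_KEY", ""),
        "model": "gpt-4o-mini",
    },
    # Method 3: Groq's fast inference endpoint (OpenAI-compatible)
    "groq": {
        "base_url": "https://api.groq.com/openai/v1",
        "api_key": os.environ.get("GROQ_API_KEY", ""),
        "model": "llama-3.1-8b-instant",
    },
}

def ask(backend_name: str, question: str) -> str:
    """Send one chat message to the chosen backend and return the reply text."""
    cfg = BACKENDS[backend_name]
    client = OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"])
    response = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("local-ollama", "In one sentence, what is Open WebUI?"))
```

This is also why Open WebUI can present all of these backends behind a single chat interface.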
Various features to add
- Adding Text-to-Speech (TTS) capability
- Adding Web search capability
- Adding Vision capability
- Adding Personal Knowledge to the model using RAG (a toy sketch of the RAG idea follows this list)
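To make the RAG item above less abstract, here is a toy, self-contained Python sketch of the retrieval-augmented generation idea: store your own notes, find the ones most similar to a question, and prepend them to the prompt before asking the model. This is only a conceptual illustration under simplified assumptions; Open WebUI's knowledge feature uses a real embedding model and a vector database rather than the crude word-count similarity here, and the example notes are made up.

```python
# Toy RAG sketch (illustrative assumptions only): retrieve the most relevant
# personal notes for a question and paste them into the prompt as context.
import math
from collections import Counter

NOTES = [
    "Office hours are held on Thursdays from 2 to 4 pm in Room 301.",
    "The final project proposal is due in week 10 and counts for 30% of the grade.",
    "Open WebUI lets you attach personal documents as a knowledge collection.",
]

def bag_of_words(text: str) -> Counter:
    """Very crude stand-in for an embedding model: lowercase word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k notes most similar to the question."""
    q = bag_of_words(question)
    ranked = sorted(NOTES, key=lambda note: cosine(q, bag_of_words(note)), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Augment the question with retrieved context before sending it to an LLM."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    print(build_prompt("When are office hours?"))
```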
Updating software
Creating/Sharing/Uploading LLM apps using Models