Install Ollama with the llama2 chat model with a GUI using Docker
Repository contents:

- 01-nvidia_driver_install.sh
- 01-nvidia_driver_install_v2.sh
- 02-ollama_install.sh
- 02-ollama_install_v2.sh
- 03-stable_diffusion_install.sh
- 03-stable_diffusion_install_v3.sh
- README.md
- docker-compose.yaml

README.md

The install uses ports 7860 and 8080. It installs Ollama with the llama2 chat model; run the scripts from your home directory. Docker and Open WebUI are installed to give the AI a GUI, reachable on port 8080 of the machine it is installed on. If the install fails because the NVIDIA RTX graphics card is not detected, restart the machine and try again.
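The Ollama-plus-GUI setup described above can be sketched as a Docker Compose file. This is a minimal sketch, not the repository's actual docker-compose.yaml: it assumes the official `ollama/ollama` and `ghcr.io/open-webui/open-webui` images, Ollama's default API port 11434, and the NVIDIA Container Toolkit being installed on the host.

```yaml
services:
  ollama:
    image: ollama/ollama                 # Ollama server (official image)
    volumes:
      - ollama:/root/.ollama             # persist pulled models across restarts
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia             # expose the NVIDIA GPU to the container
              count: all
              capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:main   # web GUI for Ollama
    ports:
      - "8080:8080"                      # GUI on port 8080 of the host machine
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434     # point the GUI at the Ollama service
    depends_on:
      - ollama

volumes:
  ollama:
```

After `docker compose up -d`, the llama2 model can be pulled into the running Ollama container with `docker exec -it <ollama-container-name> ollama pull llama2`, after which it appears in the Open WebUI model picker at http://localhost:8080.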