Install Ollama with the llama2 chat model with a GUI using Docker

README.md

These scripts install Ollama with the llama2 chat model in your home directory, then install Docker and Open WebUI to give the AI a GUI on port 8080 of the machine it is installed on. Ports 7860 and 8080 are used. If the install fails because the graphics card (NVIDIA RTX) is not detected, restart and try again.
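The repository's actual docker-compose.yaml is not reproduced here, but a minimal sketch of an Ollama + Open WebUI setup of this kind might look like the following. The service names, volume name, and GPU reservation shown are assumptions for illustration, not the repo's exact configuration:

```yaml
# Hypothetical sketch -- the repo's docker-compose.yaml may differ.
services:
  ollama:
    image: ollama/ollama              # official Ollama image
    volumes:
      - ollama:/root/.ollama          # persist pulled models (e.g. llama2)
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia          # needs the NVIDIA Container Toolkit
              count: all
              capabilities: [gpu]
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "8080:8080"                   # GUI on port 8080 of the host
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama

volumes:
  ollama:
```

After `docker compose up -d`, the chat model can be pulled into the running Ollama container with something like `docker exec -it <ollama-container> ollama pull llama2`, at which point the Open WebUI GUI should be reachable at http://localhost:8080.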