Install Ollama with the llama2 chat model and a GUI using Docker
Repository files: pre-install.sh, install1.sh, install2.sh, README.md


This setup uses ports 7860 and 8080. It installs Ollama with the llama2 chat model in your home directory, along with Docker and Open WebUI, which gives the AI a web GUI on port 8080 of the machine it is installed on. A sketch of the overall flow is shown below. If the install fails because the NVIDIA RTX graphics card is not detected, restart and try again.
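
The repository's own scripts (pre-install.sh, install1.sh, install2.sh) are authoritative and are not reproduced here; the following is only a minimal sketch of the procedure the README describes, assuming the official Ollama installer, the `ollama` CLI, and the published Open WebUI Docker image. Exact steps in the scripts may differ.

```bash
#!/usr/bin/env bash
# Sketch of the install flow described in this README (assumptions noted inline;
# the repo's pre-install.sh / install1.sh / install2.sh are the real install path).
set -e

cd "$HOME"   # install in your home directory, as the README instructs

# Install Ollama via the official installer and pull the llama2 chat model.
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama2

# Run Open WebUI in Docker so the model gets a web GUI on port 8080 of this machine.
# Assumes Docker (and, for GPU use, the NVIDIA drivers) are already set up,
# e.g. by pre-install.sh.
docker run -d \
  --name open-webui \
  --restart always \
  -p 8080:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main

# If this fails because the NVIDIA RTX card is not detected, reboot and re-run.
```

Once the container is running, the GUI should be reachable at http://localhost:8080 on the machine where it was installed.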