Running Stable Diffusion in Docker
Stable Diffusion is a deep learning, text-to-image model that lets programs and projects generate paintings and images from text descriptions.
Begin by registering for the TensorDock Marketplace and launching an instance with a GPU that has at least 10 GB of memory.
We will be using an Ubuntu instance for this tutorial. Select an external port that maps to internal port 22; this will allow SSHing into the instance.
Finish up your server by setting a secure username and password.
On the information page for your instance, you can find the IPv4 address and the command needed to SSH in from the command line. Run that command and enter your username and password when prompted; you will then have access to the GPU and can use it for Stable Diffusion.
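For example, the SSH command takes roughly the following form (the port and username below are placeholders; use the exact values shown on your instance's information page):
# Placeholders only -- substitute the host, external port, and username from your instance page
ssh -p <external_port> <username>@mass-a.tensordockmarketplace.com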
Docker comes preinstalled on all TensorDock instances; however, by adding a Docker networking configuration, you can make the external requests needed for Stable Diffusion. First, clone the following Git repo and cd into that directory.
git clone https://github.com/monatis/stable-diffusion-tf-docker.git && cd stable-diffusion-tf-docker
Then, we will add the daemon.json file and restart the Docker service.
sudo cp ./daemon.json /etc/docker/daemon.json
sudo systemctl restart docker.service
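To confirm that the daemon came back up cleanly with the new configuration, you can check the service status (a standard systemd check, not specific to this repo):
# Optional: verify the Docker daemon restarted without errors
sudo systemctl status docker.service --no-pager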
With Docker Compose, we can create a docker-compose.yml file that reads the public port for the GPU server from the PUBLIC_PORT environment variable. Set PUBLIC_PORT to the external port you chose earlier, then bring the service up:
export PUBLIC_PORT=<your port number here>
docker compose up -d
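Standard Docker Compose commands can be used to confirm the service started and to follow its logs while the model loads:
# Check that the container is running, then tail its logs
docker compose ps
docker compose logs -f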
Once it’s up and running, go to http://mass-a.tensordockmarketplace.com:<port number>/docs for the Swagger UI provided by FastAPI. Using the POST /generate endpoint, you can generate your image and receive its download ID. You can then download the image with GET /download/<download_id>.
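If you prefer the command line over the Swagger UI, the same endpoints can be called with curl. The request body below is only an assumption about the schema (a single prompt field); check the /docs page for the exact parameters the API expects:
# Hypothetical request body -- consult the Swagger UI at /docs for the real schema
curl -X POST "http://mass-a.tensordockmarketplace.com:<port number>/generate" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a watercolor painting of a mountain lake at sunset"}'
# Then fetch the finished image using the download ID returned above
curl -o output.png "http://mass-a.tensordockmarketplace.com:<port number>/download/<download_id>"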