# Deploy Reflex with Docker, GitHub Actions, and Neon Postgres
By Justin

This guide will help you deploy any given Reflex app to a Linode Virtual Machine using Docker, GitHub Actions, and Neon Postgres. Reflex is a framework for building full stack web apps in pure Python. We use the other tools for these primary reasons:
- Docker: It packages our runtime into a container that can be deployed nearly anywhere.
- GitHub Actions: This is how we will automate building our container and pushing it to Docker Hub.
- Neon Postgres: We need a reliable way to store all user data. What's more, we need a way to test new versions of our app without creating new databases. Neon can do both: manage our Postgres databases and unlock branching, so we can take a point-in-time snapshot of our database without touching production.
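For example, once your project exists, a branch can be created from the command line; the `neonctl` CLI here is an assumption (the web console works just as well):
```bash
# Hypothetical sketch: requires `npm install -g neonctl` and `neonctl auth`.
neonctl branches create --project-id <project_id> --name dev-snapshot
```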
## Account Sign Up
- GitHub Account
- Docker Hub Account
- Docker Desktop (optional, but recommended for local testing/building)
- Linode Account
- Neon Account
## Minimal Project Setup
### 1. Install Python 3.11+ via the guide for macOS or Windows
### 2. Create a new virtual environment
The install guides show you how to do this, but generally it's:
```bash
mkdir -p ~/dev/reflex-gpt
cd ~/dev/reflex-gpt
python3.11 -m venv venv
source venv/bin/activate
```
On Windows, the activation command is `.\venv\Scripts\activate`.
### 3. Install requirements
With the virtual environment activated, install the requirements for the project.
```bash
(venv)$ python -m pip install pip reflex psycopg2-binary python-decouple
```
Let's break down these packages:
- `pip`: The Python package installer.
- `reflex`: The framework we're using to build our app.
- `psycopg2-binary`: A pre-compiled build of the library for interacting with Postgres (the easiest to install). At the time of this writing, psycopg version 3 wasn't supported -- give it a try if you want: `pip uninstall psycopg2-binary && pip install "psycopg[binary]"`
- `python-decouple`: A library for loading the dotenv `.env` file into our Python code. Feel free to use any dotenv loader you prefer.
### 4. Create a new Reflex project
With the virtual environment activated, we can create a new Reflex project.
```bash
(venv)$ reflex init
```
Inside our reflex-gpt directory, the main new items are:
- `reflex_gpt/`: The primary Python module of your app, named after the parent directory (reflex-gpt).
- `reflex_gpt/reflex_gpt.py`: The main entry point for our app.
- `rxconfig.py`: The main configuration for your app.
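To give a feel for the moving pieces, here's a minimal sketch of what a Reflex entry point looks like (the file `reflex init` generates is more elaborate):
```python
import reflex as rx


class State(rx.State):
    """App state (vars and event handlers) lives here."""


def index() -> rx.Component:
    # The root page component, rendered at "/".
    return rx.text("Welcome to reflex-gpt!")


app = rx.App()
app.add_page(index)
```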
## Integrate Neon with Reflex
### 1. Create a Neon account at https://neon.tech/cfe
### 2. Create a New Project
- Login to the Neon Console
- Select Create a new project
- Name your project reflex-cfe
- Select a region near you physically
- Click Create project
### 3. Database Connection String
After the project is created, find the Database Connection String; it should resemble:
```bash
postgres://<username>:<password>@<project_id>.us-east-2.aws.neon.tech/<dbname>?options=project%3D<project_id>
```
In the root of your project, create a new dotenv file called .env and add the following:
```bash
DATABASE_URL=postgres://<username>:<password>@<project_id>.us-east-2.aws.neon.tech/<dbname>?options=project%3D<project_id>
```
This will be stored at ~/dev/reflex-gpt/.env if you followed the steps above.
### 4. Add Neon to Reflex
Now, using python-decouple, we will load the database connection string from the `.env` file into our Python code. Edit `rxconfig.py` and add the following:
```python
import reflex as rx
from decouple import config

DATABASE_URL = config("DATABASE_URL", default="postgres://localhost:5432/dbname")

# Reassigning `config` shadows decouple's helper, but we've
# already read DATABASE_URL above, so this is safe here.
config = rx.Config(
    app_name="reflex_gpt",
    db_url=DATABASE_URL,
)
```
python-decouple allows for a default value, which I left as a local Postgres database. I actually recommend having no default so that an error is thrown if the `.env` file is not set up properly or your runtime environment is missing the production DATABASE_URL value.
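For reference, a minimal sketch of that stricter, no-default form (python-decouple raises `UndefinedValueError` when a key is missing):
```python
from decouple import config

# No default: raises decouple.UndefinedValueError if DATABASE_URL
# is absent from both .env and the process environment.
DATABASE_URL = config("DATABASE_URL")
```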
### 5. Create a Reflex Model
Creating models is outside the scope of this guide. If you want to see some example code, consider:
- Our Reflex-GPT code that this guide was made for. The related course is available too.
- The Full Stack Python GitHub Repo and related course
- The Reflex documentation
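If you just want a feel for the shape of one, here's a minimal hedged sketch; the `ChatSession` name and its fields are hypothetical, not from this project:
```python
import reflex as rx


class ChatSession(rx.Model, table=True):
    # Hypothetical columns for illustration; `table=True`
    # tells Reflex (via SQLModel) to create a database table.
    title: str
    user_message: str
    response: str
```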
### 6. Run migrations
After you have a model created, you can run the migrations.
```bash
(venv)$ reflex db --help
(venv)$ reflex db init
(venv)$ reflex db makemigrations
(venv)$ reflex db migrate
```
At this point, you should see a new file in your alembic/ directory; this directory tracks all of your migration history.
Assuming it does, you're now ready to containerize your Reflex app and push it to Docker Hub.
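To double-check, you can list the generated migration scripts (assuming the default Alembic layout, they land in alembic/versions/):
```bash
ls alembic/versions/
```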
## Containerize your Reflex app
For our Reflex app, we are going to have a single Dockerfile so we can run both our frontend and backend using Docker Compose. The runtime commands will be:
- `reflex run --env prod --frontend-only`
- `reflex run --env prod --backend-only`
As you can see, this means we will eventually have two containers running on our Linode VM. In the long run, building a dedicated image for the frontend and another for the backend might be more efficient. That said, I found deployment to be more reliable when the two runtimes run in separate containers.
### 1. Create a Dockerfile
At the root of your project, create a new file called `Dockerfile.production` and add the following. Make sure you also have a `requirements.txt` (e.g. via `pip freeze > requirements.txt`), since the image installs from it:
```dockerfile
FROM python:3.11-slim

WORKDIR /app

ARG NODE_VERSION=20.x

# Install necessary tools, Node.js, and unzip
RUN apt-get update && apt-get install -y \
    curl \
    libpq-dev \
    gnupg \
    unzip \
    && curl -fsSL https://deb.nodesource.com/setup_${NODE_VERSION} | bash - \
    && apt-get install -y nodejs \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Verify Node.js installation
RUN node --version && npm --version

# Create reflex user
RUN adduser --disabled-password --home /app reflex

# Set up Python environment
RUN python -m venv /app/.venv
ENV PATH="/app/.venv/bin:$PATH"

# Copy the application files
COPY --chown=reflex:reflex . /app

# Move .build-env to .env if it exists
RUN if [ -f .build-env ]; then mv .build-env .env; fi

# Set permissions
RUN chown -R reflex:reflex /app

# Switch to reflex user
USER reflex

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Initialize Reflex
RUN reflex init

# Remove .env file after reflex init
RUN rm -f .env

# Ensure all environment variables are set
ENV PATH="/app/.venv/bin:/usr/local/bin:/usr/bin:/bin:$PATH"
ENV NODE_PATH="/usr/lib/node_modules"
ENV REFLEX_DB_URL="sqlite:///reflex.db"

# Needed until Reflex properly passes SIGTERM on backend.
STOPSIGNAL SIGKILL

# Always apply migrations before starting the backend.
CMD ["sh", "-c", "reflex db migrate && reflex run --env prod --backend-only"]
```
### 2. Create the Docker Compose file
Next to this Dockerfile, create another file called `compose.prod.yaml` and add the following:
```yaml
services:
  frontend:
    build:
      context: .
      dockerfile: Dockerfile.production
    image: codingforentrepreneurs/reflex-gpt:latest
    env_file:
      - .env
    ports:
      - 80:3000
    command: reflex run --env prod --frontend-only
  app:
    build:
      context: .
      dockerfile: Dockerfile.production
    image: codingforentrepreneurs/reflex-gpt:latest
    env_file:
      - .env
    ports:
      - 8000:8000
```
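With both files in place, you can try the full stack locally first (again assuming Docker Desktop and a populated .env):
```bash
docker compose -f compose.prod.yaml up --build -d
docker compose -f compose.prod.yaml ps
```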
### 3. Add a .dockerignore
This file works much like `.gitignore`, so feel free to base it on yours:
```
*.db
*.py[cod]
__pycache__/
assets/external/
.env
.env*
venv/
# Truncated to save space
```
Now that we have these files, it's time to push our code to GitHub.
## Push to GitHub
Now we have a bunch of files that we need to push to GitHub.
- alembic/
- reflex_gpt/
- rxconfig.py
- compose.prod.yaml
- Dockerfile.production
- .gitignore
- .dockerignore
Ensure that `.env` is not included, that your `.gitignore` mirrors this exact .gitignore file, and that `.dockerignore` also ignores `.env`.
### 1. Initialize the Repository
```bash
git init
git add --all
git commit -m "Initial project commit"
```
### 2. Create the Repository
- Create a new repository at GitHub called reflex-deploy.
- Add the remote origin to your local repository.
```bash
git remote add origin https://github.com/<your-username>/reflex-deploy.git
```
### 3. Push the Repository
```bash
git push -u origin main
```
If done correctly, your code should now be on GitHub.
## Create a Docker Hub Repository and Token
### 1. Create the repository
Create a new repository at Docker Hub called reflex-gpt and ensure that it is private (this is important for deploying to production). My DOCKERHUB_REPO is codingforentrepreneurs/reflex-gpt.
### 2. Create an access token
Create a Docker Hub token at Docker Hub:
- Navigate to and click on Personal Access Tokens
- Click Generate new token
- Give it a description like Reflex GPT GitHub Actions Workflow
- Give it Read + Write access permissions (you need write access to push your images)
- Click Generate
- Copy the token to your clipboard; you'll add it to the GitHub repo you created in the last section.
### 3. Add the token to your GitHub repo secrets
- Navigate to your repo on GitHub
- Click on Settings
- Click on Secrets and variables
- Click on Actions
- Click New repository secret
- Add DOCKERHUB_TOKEN as the name and the token you copied from Docker Hub as the value, then click Add secret
- Click New repository secret again
- Add DOCKERHUB_USERNAME as the name and your Docker Hub username as the value, then click Add secret
- Optionally, click New repository secret once more and add DOCKERHUB_REPO as the name and your Docker Hub repository name as the value, then click Add secret
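If you prefer the terminal, the GitHub CLI can set the same secrets (assuming `gh` is installed and authenticated against this repo):
```bash
gh secret set DOCKERHUB_TOKEN   # paste the token when prompted
gh secret set DOCKERHUB_USERNAME --body "<your-username>"
gh secret set DOCKERHUB_REPO --body "<your-username>/<repo-name>"
```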
## GitHub Actions Workflow to Build + Push Docker Image
In the root of your project, create a new file called .github/workflows/build.yaml (the period . in front of .github is required) and add the following:
```yaml
name: Build and Push Container

on:
  workflow_dispatch:
  push:
    branches: [ "main" ]
    paths:
      - "Dockerfile.production"
      - "compose.prod.yaml"
      - "requirements.txt"
      - "assets/**"
      - "reflex_gpt/**"
      - "rxconfig.py"
      - "alembic.ini"
      - "alembic/**"
      - ".github/workflows/build.yaml"

env:
  DOCKER_IMAGE: codingforentrepreneurs/reflex-gpt
  # uncomment if using the DOCKERHUB_REPO secret
  # DOCKER_IMAGE: ${{ secrets.DOCKERHUB_REPO }}

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # For Reflex to build a container,
      # injecting your environment variables at
      # container build time is often required.
      - name: Create build env file
        run: |
          cat << EOF > .build-env
          DATABASE_URL=${{ secrets.DATABASE_URL }}
          EOF
      - name: Build and push
        run: |
          docker build -f Dockerfile.production -t ${{ env.DOCKER_IMAGE }}:latest .
          docker tag ${{ env.DOCKER_IMAGE }}:latest ${{ env.DOCKER_IMAGE }}:${{ github.sha }}
          docker push ${{ env.DOCKER_IMAGE }} --all-tags
      - name: Remove build env file
        run: rm .build-env
```
This workflow expects a DATABASE_URL secret, so add your Neon connection string as a repository secret the same way you added the Docker Hub values. Then commit this code and push:
```bash
git add .github/workflows/build.yaml
git commit -m "Add GitHub Actions build workflow"
git push
```
After pushing, you should see the workflow run automatically on GitHub. If it errors, check the steps and ensure that all of the secrets are set properly.
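Because the workflow declares workflow_dispatch, you can also trigger it manually, e.g. with the GitHub CLI:
```bash
gh workflow run build.yaml
gh run watch
```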
## Deploy to a Linode Virtual Machine
Most of this is configuration and the process goes like this:
- Create a new disposable SSH key (i.e. do not use your everyday local SSH key)
- Add the public and private SSH Key on GitHub Actions
- Add the public SSH key on Linode and provision a Linode Virtual Machine
- Add the Ansible Playbook to your GitHub repository. Ansible will install Docker and run your Docker Compose file.
- Create a new GitHub Actions Workflow to deploy via your Ansible playbook.
- Verify the deployment
- Celebrate!
### Creating a new disposable SSH Key
This process can be simple if you understand SSH keys. Here's the command:
```bash
ssh-keygen -t rsa -b 4096 -C "your@email.com" -f ~/dev/reflex-gpt/reflex-gpt
```
The above command will create two files in your ~/dev/reflex-gpt directory:
- reflex-gpt (this is your private key)
- reflex-gpt.pub (this is your public key)
Be sure to add both files to your .gitignore file with:
```bash
echo "reflex-gpt" >> .gitignore
echo "reflex-gpt.pub" >> .gitignore
```
Or manually add reflex-gpt and reflex-gpt.pub to your .gitignore.
If you want a more in-depth understanding of how to use SSH keys, read the Using SSH & Creating SSH Keys blog post.
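You can verify the key pair and view its fingerprint with:
```bash
ssh-keygen -lf ~/dev/reflex-gpt/reflex-gpt.pub
```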
### Public + Private SSH Key on GitHub Actions
We want to have both the public and private keys on GitHub Actions. The reason is simple: GitHub Actions is going to perform all kinds of automations on our behalf. These keys were made to give GitHub Actions the permission to do so.
In your GitHub repository's Actions secrets, create two secrets:
- SSH_PUBLIC_KEY (the contents of ~/dev/reflex-gpt/reflex-gpt.pub)
- SSH_PRIVATE_KEY (the contents of ~/dev/reflex-gpt/reflex-gpt)
Add the correct key to the correct place. For now, keep a local copy of your public and private key in your project in case we need to troubleshoot.
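If you're using the GitHub CLI, you can load each secret straight from its key file:
```bash
gh secret set SSH_PUBLIC_KEY < ~/dev/reflex-gpt/reflex-gpt.pub
gh secret set SSH_PRIVATE_KEY < ~/dev/reflex-gpt/reflex-gpt
```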
### Add the Public SSH Key to Linode
- Create an Account on Linode (use https://linode.com/justin for a $100 credit)
- Login to Linode
- Navigate to your user account dropdown (top right corner)
- Click on SSH Keys
- Click Add SSH Key
- Under Label, add cfe-reflex-gpt
- Under SSH Public Key, copy and paste your public key (the contents of ~/dev/reflex-gpt/reflex-gpt.pub)
- Click Add Key
With this key added, we can now provision a Linode Virtual Machine.
### Provision a Linode Virtual Machine
- Login to Linode
- Click on Create > Linode
- Under OS, select Ubuntu 22.04 LTS
- Under Region, select one near you (e.g. US, Seattle, WA (us-sea))
- Under Linode Plan, select Shared > Nanode 1 GB
- Under Details, set what you like for Label.
- Under Security, set a root password (e.g. with openssl rand -base64 32)
- Under SSH Keys, click Add SSH Key and select the key you created above (e.g. cfe-reflex-gpt)
- All other defaults are fine, click Create Linode.
After a couple minutes, you will see your Linode Virtual Machine running. Click into its detail page (if you haven't already) and you will see the IP address (e.g. for `ssh root@<your-ip>`).
Test that you can connect to your Linode Virtual Machine with the following command:
```bash
ssh -i ~/dev/reflex-gpt/reflex-gpt root@<your-ip>
```
You should not be prompted for a password -- you should be connected to the Ubuntu virtual machine.
If you are prompted for a password, double check all of the steps and try again. Get in the habit of deleting virtual machines when you make mistakes; that's what they are there for and certainly what I do all the time.
If it doesn't connect right away, wait a couple minutes and try again.
Assuming you can connect, grab the IP address of your Linode Virtual Machine and add it to your GitHub Actions secrets as VIRTUAL_MACHINE_IP.
### Ansible Playbook
Ansible can be a bit tricky to learn and understand. For that reason, I will just give you a basic playbook that you can use to get started.
Ansible takes a YAML file and configures a virtual machine based on that file, the SSH keys we provide, and any hosts (IP addresses) we provide.
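For illustration, here's a sketch of what such a hosts (inventory) file looks like; our deploy workflow will generate one automatically later, and 203.0.113.10 is a placeholder IP:
```ini
# inventory.ini
[linode]
203.0.113.10 ansible_user=root
```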
Create the playbook in your project at devops/playbook.yaml and add the following:
```yaml
---
- hosts: linode
  become: true
  vars:
    project_dir: /app/
    remote_env_file: "{{ project_dir }}/.env"
    compose_file: "{{ project_dir }}/compose.prod.yaml"
    env_file: "{{ playbook_dir | dirname }}/.env"
  vars_files:
    - "{{ playbook_dir }}/vars.yaml"
  tasks:
    - name: Download Docker installation script
      get_url:
        url: https://get.docker.com
        dest: /tmp/get-docker.sh
        mode: '0755'

    - name: Run Docker installation script
      shell: /tmp/get-docker.sh
      args:
        creates: /usr/bin/docker

    - name: Ensure Docker service is running
      systemd:
        name: docker
        state: started
        enabled: yes

    - name: Create project directory
      file:
        path: "{{ project_dir }}"
        state: directory
        mode: '0755'

    - name: Copy .env file to server
      copy:
        src: "{{ env_file }}"
        dest: "{{ remote_env_file }}"
      no_log: false

    - name: Copy Docker Compose file to server
      copy:
        src: "{{ playbook_dir | dirname }}/compose.prod.yaml"
        dest: "{{ compose_file }}"

    - name: Login to Docker Hub
      shell: echo {{ dockerhub_token }} | docker login -u {{ dockerhub_username }} --password-stdin
      no_log: true

    - name: Pull latest Docker images
      shell:
        cmd: docker compose -f compose.prod.yaml pull
      args:
        chdir: "{{ project_dir }}"

    - name: Update Docker services
      shell:
        cmd: |
          services=$(docker compose -f compose.prod.yaml config --services)
          for service in $services; do
            if docker compose -f compose.prod.yaml ps --status running $service | grep -q $service; then
              echo "Updating running service: $service"
              docker compose -f compose.prod.yaml up -d --no-deps "$service"
            else
              echo "Starting service: $service"
              docker compose -f compose.prod.yaml up -d --no-deps "$service"
            fi
          done
      args:
        chdir: "{{ project_dir }}"

    - name: Remove orphaned containers
      shell:
        cmd: docker compose -f compose.prod.yaml up -d --remove-orphans
      args:
        chdir: "{{ project_dir }}"

    - name: Prune Docker system
      shell:
        cmd: docker system prune -f
      args:
        chdir: "{{ project_dir }}"
```
While this looks a lot like a GitHub Actions workflow, it is different: it gives us a set of step-by-step tasks that Ansible will run, as many times as needed, on our virtual machine(s).
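If you'd like to dry-run the playbook from your own machine before wiring it into CI, something like this works; it assumes a local ansible install, an inventory.ini like the sketch above, a devops/vars.yaml, and a .env next to the devops/ folder (note that shell tasks are skipped in check mode):
```bash
ansible-playbook -i inventory.ini devops/playbook.yaml \
  --private-key ~/dev/reflex-gpt/reflex-gpt --check
```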
Commit this playbook for the next step:
```bash
git add devops/playbook.yaml
git commit -m "Added Ansible playbook"
git push
```
### GitHub Actions Workflow to Deploy via Ansible
To run Ansible on GitHub Actions, we need the following:
- Ansible installed (e.g. pip install ansible or, as the workflow below does, apt install ansible)
- An inventory file to tell Ansible which virtual machine(s) to target
- Ansible playbooks (like the one from the previous section) and Ansible variable files (to reference our Docker-related items)
- Configuration that tells Ansible to use the inventory file and how to connect to the virtual machine(s) (e.g. a reference to our SSH private key)
With this in mind, create a new workflow file at .github/workflows/deploy.yaml and add the following:
```yaml
name: Deploy with Ansible

on:
  workflow_dispatch:

env:
  DOCKER_IMAGE: codingforentrepreneurs/reflex-gpt

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install Ansible
        run: |
          sudo apt update
          sudo apt install -y ansible
      - name: Set up SSH key
        uses: webfactory/ssh-agent@v0.9.0
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
      - name: Add host key to known hosts
        run: |
          mkdir -p ~/.ssh
          ssh-keyscan ${{ secrets.VIRTUAL_MACHINE_IP }} >> ~/.ssh/known_hosts
      - name: Create Ansible inventory
        run: |
          echo "[linode]" > inventory.ini
          echo "${{ secrets.VIRTUAL_MACHINE_IP }} ansible_user=root" >> inventory.ini
      - name: Create env file
        run: |
          cat << EOF > .env
          DATABASE_URL=${{ secrets.DATABASE_URL }}
          DOCKERHUB_TOKEN=${{ secrets.DOCKERHUB_TOKEN }}
          DOCKERHUB_USERNAME=${{ secrets.DOCKERHUB_USERNAME }}
          OPENAI_API_KEY=${{ secrets.OPENAI_API_KEY }}
          EOF
      - name: Create Ansible Vars file
        run: |
          cat << EOF > devops/vars.yaml
          dockerhub_token: ${{ secrets.DOCKERHUB_TOKEN }}
          dockerhub_username: ${{ secrets.DOCKERHUB_USERNAME }}
          EOF
      - name: Run Ansible playbook
        env:
          ANSIBLE_HOST_KEY_CHECKING: False
        run: |
          ansible-playbook -i inventory.ini devops/playbook.yaml
      - name: Remove vars file
        run: rm devops/vars.yaml && rm .env
        if: always()
```
If you go through each step of the workflow, you can see we are running Ansible with our new playbook, an inventory file, and environment variable secrets for Docker Compose and Docker Hub.
With this in place, we can now push the code to GitHub and trigger the workflow.
```bash
git add .github/workflows/deploy.yaml
git commit -m "Added Ansible deploy workflow"
git push
```
After a short duration, Ansible should complete with our application deployed.
If you see an error, consider opening a manual SSH session to your remote host and running:
```bash
cd /app
docker compose -f compose.prod.yaml ps
```
If you see an error with Docker Compose, double check the compose.prod.yaml file for any syntax errors.
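The service logs are usually the fastest way to surface the real error:
```bash
docker compose -f compose.prod.yaml logs app --tail 100
docker compose -f compose.prod.yaml logs frontend --tail 100
```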
## Conclusion
You have now successfully deployed a Reflex app to a Linode Virtual Machine using GitHub Actions and Docker. Congratulations!
If you find errors or have suggestions, please add them to GitHub Issues so everyone can benefit from the improvements. Keep in mind that the Reflex-GPT repo is the best place to find the final versions of all the code in this post.
Thank you.