Set Up Your GPU Node

Follow this simple guide to connect your GPU and start earning. Takes about 5-10 minutes.

⏱️ ~5-10 minutes
🔧 Beginner friendly
🐳 Docker based

Prerequisites

Before You Start

  • NVIDIA GPU with CUDA support (consumer or datacenter cards; 4GB+ VRAM recommended) - AMD support is coming soon
  • GPU drivers installed and up to date
  • Stable internet connection (wired preferred)
  • Server URL: http://gpuai.app:8000
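
Not sure your driver is ready? A quick sanity check (assuming an NVIDIA GPU, which is what the platform currently supports):

```shell
# Print GPU name and driver version if the NVIDIA driver is installed;
# otherwise point at the missing prerequisite.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
else
    echo "nvidia-smi not found: install or update your GPU drivers first"
fi
```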

🚀 Ready to Start?

No approval needed! Just follow the steps below to connect your GPU and start earning.

Step 1: Install Docker

🪟 Windows

Just install Docker Desktop like any other app:

  1. Download and install Docker Desktop for Windows (it's free)
  2. Open Docker Desktop and wait for it to start (the whale icon in your system tray)

Note: Docker will prompt you to enable WSL 2 during installation - just click "Yes". Restart if needed.

🐧 Linux (Ubuntu/Debian)

Run these commands in your terminal:

Install Docker
# Update packages and install Docker
sudo apt update && sudo apt upgrade -y
curl -fsSL https://get.docker.com | sh

# Add your user to the docker group
sudo usermod -aG docker $USER

# Log out and back in, then verify
docker --version

🍎 macOS

Just install Docker Desktop like any other app:

  1. Download and install Docker Desktop for Mac (it's free)
  2. Open Docker Desktop and wait for it to start (the whale icon in the menu bar)

That's it!
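
To confirm Docker is ready, open Terminal and run:

```shell
# Check the docker CLI is installed and the daemon is responding.
# Both lines print version info once Docker Desktop has finished starting.
docker --version || echo "Docker CLI not installed yet"
docker info --format '{{.ServerVersion}}' || echo "Docker daemon not running - open Docker Desktop"
```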

Step 2: Run the GPU Worker

🚀 Ready to Start

Your worker will automatically generate a unique ID when it starts. No setup needed!

API Base URL: http://gpuai.app:8000

Run This Command:

Start GPU Worker
docker run -d --gpus all \
  --name gpu-worker \
  --restart unless-stopped \
  -e API_BASE_URL=http://gpuai.app:8000 \
  michaelmanleyx/gpuai-worker:latest

✓ That's it! Your worker is now running and will automatically connect to the network. Check logs to see your worker ID and status.

Step 3: Verify It's Running

Check Worker Logs:

View Logs
docker logs gpu-worker

You should see your worker ID and "Connected to server" message. If you see errors, check the troubleshooting section below.
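
To watch the worker in real time rather than taking a one-off snapshot:

```shell
# Stream log lines as they arrive; Ctrl+C stops the stream, not the worker
docker logs -f gpu-worker
```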

Frequently Asked Questions

How much can I earn by supplying my GPU?

Earnings depend on your GPU model and uptime. A typical consumer GPU (e.g., RTX 3080) can earn $50-150/month running 24/7. Higher-end GPUs like A100s or H100s can earn significantly more. You earn based on actual compute time when jobs are processed.

What happens if my GPU goes offline or has downtime?

No problem! The platform automatically routes jobs to other available GPUs. Your worker will reconnect when it comes back online. There's no penalty for downtime - you simply won't earn during offline periods.

Can I pause or stop my worker at any time?

Yes, absolutely! Just stop the Docker container with docker stop gpu-worker. You can restart it whenever you want with docker start gpu-worker. There's no minimum commitment.
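
The full lifecycle from the answer above, as commands:

```shell
docker stop gpu-worker                  # pause: stops earning, frees the GPU
docker start gpu-worker                 # resume where you left off
docker ps -a --filter name=gpu-worker   # see whether it's currently running or stopped
```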

What GPUs are supported?

We support NVIDIA GPUs with CUDA capability. This includes consumer cards (RTX 20/30/40 series, GTX 1000 series) and datacenter GPUs (A100, H100, V100, T4, etc.). AMD GPU support is coming soon. Minimum 4GB VRAM recommended.
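
To check whether your card clears the recommended 4GB VRAM bar (NVIDIA only):

```shell
# List each GPU's model and total VRAM (requires the NVIDIA driver)
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader \
  || echo "nvidia-smi not found - install the NVIDIA driver first"
```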

How can I check the status of my worker?

Use docker logs gpu-worker to see recent activity. You can also check the control plane API docs at http://gpuai.app:8000/docs for worker status endpoints.

When and how do I get paid?

Payments are processed monthly via cryptocurrency (USDC/ETH) or PayPal. You need to reach a minimum threshold of $25 to request a payout. Detailed earnings tracking and payout requests will be available in the upcoming dashboard.

Is it safe to run this on my personal computer?

Yes! The worker runs in an isolated Docker container and only processes AI inference jobs. It doesn't have access to your files or personal data. All code is open source and auditable. However, running 24/7 may increase your electricity bill and wear on your GPU.

Can I run multiple GPUs on the same machine?

Yes! You can run multiple worker containers, each pinned to a different GPU with --gpus device=0, --gpus device=1, and so on. Give each container a unique name (e.g. --name gpu-worker-0); each worker generates its own ID automatically and earns independently.
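
As a sketch, assuming two NVIDIA GPUs at indices 0 and 1 (check your indices with nvidia-smi -L and adjust the loop):

```shell
# Start one worker container per GPU index, each with a unique name
for i in 0 1; do
  docker run -d --gpus "device=$i" \
    --name "gpu-worker-$i" \
    --restart unless-stopped \
    -e API_BASE_URL=http://gpuai.app:8000 \
    michaelmanleyx/gpuai-worker:latest
done
```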

How reliable is the platform? Will I always get jobs?

Job availability varies based on demand. During peak hours, your GPU will stay busy. During quieter periods, there may be idle time. The platform is in beta, and we're actively growing the client base. As usage increases, so will job availability and your earnings potential.

Troubleshooting

Docker says "command not found"

Docker isn't installed or isn't in your PATH. Go back to Step 1 and install Docker Desktop (or Docker Engine on Linux).

"Could not select device driver" or GPU not detected

Your GPU drivers or Docker GPU support isn't set up correctly. For NVIDIA GPUs, install the NVIDIA Container Toolkit.

Worker keeps disconnecting

Check your internet connection. Make sure your firewall allows outbound connections to gpuai.app:8000. If on a corporate network, you may need to whitelist the server.

Docker daemon not running / "Cannot connect to Docker daemon"

This means Docker Desktop/Engine isn't running. Fix by:

  • Windows/macOS: Open Docker Desktop application and wait for the whale icon to stop animating
  • Linux: Start Docker with sudo systemctl start docker
  • Verify it's running: docker ps should not show errors

"Cannot connect to http://gpuai.app:8000" or "Connection refused"

The worker can't reach the API server. Check:

  • Test API access: Run curl http://gpuai.app:8000/health
  • If that fails, try ping gpuai.app to check DNS resolution
  • Check firewall isn't blocking port 8000
  • If on a VPN or corporate network, you may need to whitelist gpuai.app
  • Make sure you're using http:// not https://

"docker: Error response from daemon: could not select device driver"

Docker can't access your GPU. This usually means the NVIDIA Container Toolkit isn't installed or configured:

  • NVIDIA GPUs: Install the NVIDIA Container Toolkit
  • On Ubuntu/Debian, run:
    curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
    curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
    sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
    sudo nvidia-ctk runtime configure --runtime=docker && sudo systemctl restart docker
  • Verify the GPU is visible: nvidia-smi should show your GPU
  • Test Docker GPU access: docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

"pull access denied" or "repository does not exist"

Docker can't find the worker image. Check:

  • The image name is spelled exactly: michaelmanleyx/gpuai-worker:latest
  • Pull the image explicitly before running:
    docker pull michaelmanleyx/gpuai-worker:latest
  • Then run the worker with the same command from Step 2

Worker starts but shows "No jobs available" constantly

This is normal! It means:

  • Your worker successfully connected to the server ✓
  • There are no inference jobs queued right now
  • Your worker is idle and waiting for work
  • You'll start earning when customers submit jobs

Job availability varies throughout the day. Keep your worker running to catch jobs when they come in!

How do I check if the backend API is accessible?

Test the backend connectivity with these commands:

# Check if API is reachable
curl http://gpuai.app:8000/health

# Should return: {"status":"healthy"}

# Check if you can reach the docs
curl -I http://gpuai.app:8000/docs

# Should return: HTTP/1.1 200 OK

If these fail, there may be a network issue or the backend is down. Check the troubleshooting steps above.

"Error: name is already in use by container"

A worker container with that name already exists. You can:

  • Remove the old container: docker rm gpu-worker
  • Stop and remove: docker stop gpu-worker && docker rm gpu-worker
  • Or just restart it: docker restart gpu-worker
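
If you want a clean slate in one step, force-remove the container (running or not) and relaunch with the Step 2 command:

```shell
# Force-remove the existing container, then start a fresh one
docker rm -f gpu-worker
docker run -d --gpus all \
  --name gpu-worker \
  --restart unless-stopped \
  -e API_BASE_URL=http://gpuai.app:8000 \
  michaelmanleyx/gpuai-worker:latest
```
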

Docker is using too much CPU/memory

You can limit resources with Docker flags:

docker run -d --gpus all \
  --name gpu-worker \
  --restart unless-stopped \
  --cpus="4.0" \
  --memory="8g" \
  -e API_BASE_URL=http://gpuai.app:8000 \
  michaelmanleyx/gpuai-worker:latest

This limits the worker to 4 CPU cores and 8GB RAM. Adjust as needed for your system.
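
To see what the worker is actually using before picking limits:

```shell
# One-shot snapshot of the container's CPU and memory usage
docker stats gpu-worker --no-stream
```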