Follow this simple guide to connect your GPU and start earning. Takes about 5-10 minutes.
No approval needed! Just follow the steps below to connect your GPU and start earning.
On Windows, just install Docker Desktop like any other app:
Note: Docker will prompt you to enable WSL 2 during installation - just click "Yes". Restart if needed.
On Linux, run these commands in your terminal:
# Update packages and install Docker
sudo apt update && sudo apt upgrade -y
curl -fsSL https://get.docker.com | sh
# Add your user to the docker group
sudo usermod -aG docker $USER
# Log out and back in, then verify
docker --version

On macOS, just install Docker Desktop like any other app:
After installing, open Docker Desktop and wait for it to start. That's it!
Your worker will automatically generate a unique ID when it starts. No setup needed!
API Base URL: http://gpuai.app:8000
docker run -d --gpus all \
--name gpu-worker \
--restart unless-stopped \
-e API_BASE_URL=http://gpuai.app:8000 \
michaelmanleyx/gpuai-worker:latest

✓ That's it! Your worker is now running and will automatically connect to the network. Check the logs to see your worker ID and status.
docker logs gpu-worker

You should see your worker ID and a "Connected to server" message. If you see errors, check the troubleshooting section below.
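If the container started but you're not sure it registered, the log check can be scripted. This is a sketch; the grep pattern is based on the "Connected to server" message above and may need adjusting to the worker's actual log wording:

```shell
# Name of the worker container started in the previous step
CONTAINER=gpu-worker

# Show the last 20 log lines (most recent status messages)
docker logs --tail 20 "$CONTAINER"

# Filter for the connection message; prints nothing if the worker
# has not connected yet (match is case-insensitive)
docker logs "$CONTAINER" 2>&1 | grep -i "connected to server"

# Follow logs live while debugging (Ctrl+C to stop):
# docker logs -f "$CONTAINER"
```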
Earnings depend on your GPU model and uptime. A typical consumer GPU (e.g., RTX 3080) can earn $50-150/month running 24/7. Higher-end GPUs like A100s or H100s can earn significantly more. You earn based on actual compute time when jobs are processed.
No problem! The platform automatically routes jobs to other available GPUs. Your worker will reconnect when it comes back online. There's no penalty for downtime - you simply won't earn during offline periods.
Yes, absolutely! Just stop the Docker container with docker stop gpu-worker. You can restart it whenever you want with docker start gpu-worker. There's no minimum commitment.
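The pause/resume cycle mentioned above looks like this in practice (the container name gpu-worker comes from the run command earlier in this guide):

```shell
CONTAINER=gpu-worker

# Pause earning: stops the container but keeps its configuration
docker stop "$CONTAINER"

# Resume later: restarts with the same settings, no re-setup needed
docker start "$CONTAINER"

# Check whether it's currently running
docker ps --filter "name=$CONTAINER"
```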
We support NVIDIA GPUs with CUDA capability. This includes consumer cards (RTX 20/30/40 series, GTX 1000 series) and datacenter GPUs (A100, H100, V100, T4, etc.). AMD GPU support is coming soon. Minimum 4GB VRAM recommended.
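To check whether your card qualifies (NVIDIA, CUDA-capable, 4GB+ VRAM), nvidia-smi can report the model and memory directly. This assumes NVIDIA drivers are already installed:

```shell
# Prints one line per GPU: model name and total memory,
# e.g. "NVIDIA GeForce RTX 3080, 10240 MiB"
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader

# Recommended minimum VRAM from the guide, in MiB (4 GB)
MIN_VRAM_MIB=4096
```

Compare the reported memory against the 4GB minimum to see if your card is a good fit.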
Use docker logs gpu-worker to see recent activity. You can also check the control plane API at gpuai.app:8000/docs for worker status endpoints.
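A minimal spot-check combining both suggestions above — recent logs plus a backend health ping (the /health endpoint and its response shape are the ones used in the connectivity test in this guide):

```shell
API_BASE_URL=http://gpuai.app:8000

# Recent worker activity
docker logs --tail 10 gpu-worker

# Control-plane health; per the connectivity section, this returns
# {"status":"healthy"} when the backend is up (5-second timeout)
curl -s -m 5 "$API_BASE_URL/health"
```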
Payments are processed monthly via cryptocurrency (USDC/ETH) or PayPal. You need to reach a minimum threshold of $25 to request a payout. Detailed earnings tracking and payout requests will be available in the upcoming dashboard.
Yes! The worker runs in an isolated Docker container and only processes AI inference jobs. It doesn't have access to your files or personal data. All code is open source and auditable. However, running 24/7 may increase your electricity bill and wear on your GPU.
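You can verify the isolation claim yourself: docker inspect shows exactly which host directories (if any) the container can see. A sketch:

```shell
CONTAINER=gpu-worker

# List the container's volume mounts; an empty result ([] or null)
# means the worker has no access to host files
docker inspect "$CONTAINER" --format '{{json .Mounts}}'

# List the environment passed in; API_BASE_URL should be the only
# custom value from the run command
docker inspect "$CONTAINER" --format '{{json .Config.Env}}'
```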
Yes! You can run multiple worker containers, each assigned to a different GPU using the --gpus device=0, --gpus device=1, etc. flags. Each worker container needs a unique name and will earn independently (worker IDs are generated automatically, as above).
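A sketch of launching one worker per GPU. The device indices match what nvidia-smi reports, and the container names (gpu-worker-0, gpu-worker-1) are just a naming convention, not something the platform requires:

```shell
# One worker pinned to each GPU, with a unique container name per worker
for GPU_ID in 0 1; do
  docker run -d --gpus "device=$GPU_ID" \
    --name "gpu-worker-$GPU_ID" \
    --restart unless-stopped \
    -e API_BASE_URL=http://gpuai.app:8000 \
    michaelmanleyx/gpuai-worker:latest
done
```

Extend the `0 1` list to cover however many GPUs nvidia-smi shows.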
Job availability varies based on demand. During peak hours, your GPU will stay busy. During quieter periods, there may be idle time. The platform is in beta, and we're actively growing the client base. As usage increases, so will job availability and your earnings potential.
Docker isn't installed or isn't in your PATH. Go back to Step 1 and install Docker Desktop (or Docker Engine on Linux).
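To confirm whether the problem is a missing install or just PATH, this quick check distinguishes the two:

```shell
# Is docker on PATH at all?
if command -v docker >/dev/null 2>&1; then
  DOCKER_STATUS="installed: $(docker --version)"
else
  DOCKER_STATUS="not found on PATH - install Docker or reopen your terminal"
fi
echo "$DOCKER_STATUS"
```

If Docker was just installed, opening a new terminal (or logging out and back in on Linux) is often enough to fix the PATH.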
Your GPU drivers or Docker GPU support isn't set up correctly. For NVIDIA GPUs, install the NVIDIA Container Toolkit.
Check your internet connection. Make sure your firewall allows outbound connections to gpuai.app:8000. If on a corporate network, you may need to whitelist the server.
This means Docker Desktop/Engine isn't running. Fix by:
On Windows/macOS, open Docker Desktop and wait for it to finish starting. On Linux, start the service:

sudo systemctl start docker

Then verify: docker ps should run without errors.

The worker can't reach the API server. Check:
Run curl http://gpuai.app:8000/health to confirm the API responds
Run ping gpuai.app to check DNS resolution
Make sure you're using http:// not https://

Docker can't access your GPU. This happens when:
# Install the NVIDIA Container Toolkit (replaces the deprecated nvidia-docker2 package)
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

Verify: nvidia-smi should show your GPU, and docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi should print the same table from inside a container.

Docker can't find the worker image. This means:
# The image is available on Docker Hub
docker pull michaelmanleyx/gpuai-worker:latest
This is normal! It means no jobs are currently available for your worker to pick up.
Job availability varies throughout the day. Keep your worker running to catch jobs when they come in!
Test the backend connectivity with these commands:
# Check if API is reachable
curl http://gpuai.app:8000/health
# Should return: {"status":"healthy"}
# Check if you can reach the docs
curl -I http://gpuai.app:8000/docs
# Should return: HTTP/1.1 200 OK

If these commands fail, there may be a network issue or the backend is down. Check the troubleshooting steps above.
A worker container with that name already exists. You can:
Remove the old container: docker rm gpu-worker
Stop it first if it's still running: docker stop gpu-worker && docker rm gpu-worker
Or simply restart the existing one: docker restart gpu-worker

You can limit resources with Docker flags:
docker run -d --gpus all \
--name gpu-worker \
--restart unless-stopped \
--cpus="4.0" \
--memory="8g" \
-e API_BASE_URL=http://gpuai.app:8000 \
michaelmanleyx/gpuai-worker:latest
This limits the worker to 4 CPU cores and 8GB RAM. Adjust as needed for your system.
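After starting the limited container, you can confirm the caps actually took effect — a sketch using docker stats and docker inspect:

```shell
# One-shot snapshot of CPU and memory usage vs. the configured limits
docker stats --no-stream gpu-worker

# Read the configured limits directly; --cpus="4.0" is stored as
# NanoCpus=4000000000 and --memory="8g" as Memory in bytes
docker inspect gpu-worker --format 'CPUs={{.HostConfig.NanoCpus}} Memory={{.HostConfig.Memory}}'

# 8g expressed in bytes, for comparison with the inspect output
EXPECTED_MEM_BYTES=$((8 * 1024 * 1024 * 1024))
echo "$EXPECTED_MEM_BYTES"
```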