Self-hosting: step-by-step guide
In Gonka, GPU nodes serve inference requests to the Qwen3-235B neural network and earn GNK tokens for each completed request.
Don't want to deal with servers?
You don't need your own equipment to earn GNK. Join a pool — invest from $30 and get GNK without technical skills.
Hardware Requirements
Network Node (network server)
- 16-core CPU (amd64)
- 64GB+ RAM
- 1TB NVMe SSD
- 100Mbps+ network
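A quick way to compare a host against these network-node minimums (a Linux-only sketch; it checks cores and RAM only, not disk or bandwidth):

```shell
# Compare this host against the network-node minimums above (Linux)
CORES=$(nproc)
RAM_GB=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 ))
[ "$CORES" -ge 16 ] || echo "warning: only $CORES cores, 16 recommended"
[ "$RAM_GB" -ge 64 ] || echo "warning: only ${RAM_GB}GB RAM, 64GB+ recommended"
echo "host has $CORES cores and ${RAM_GB}GB RAM"
```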
ML Node (compute node)
- MLNode 3.0.12+ minimum
- Go 1.22.8, Docker Desktop 4.37+, Java 19+
- NVIDIA GPU (newer generation than Tesla)
- Minimum 40GB VRAM per MLNode container
- For the current model (Qwen3-235B): 640GB VRAM total, minimum 2 ML nodes
- Reference configuration: 8xH200 per node
- CUDA 12.6-12.9, NVIDIA Container Toolkit
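To make the VRAM numbers concrete: a ceiling-division check of how many minimum-size (40GB) MLNode containers are needed to cover the model's 640GB total requirement (arithmetic only; the actual per-container allocation is set in your node configuration):

```shell
# How many 40GB-VRAM MLNode containers cover 640GB total?
TOTAL_VRAM_GB=640
VRAM_PER_CONTAINER_GB=40
# Ceiling division: (a + b - 1) / b
CONTAINERS=$(( (TOTAL_VRAM_GB + VRAM_PER_CONTAINER_GB - 1) / VRAM_PER_CONTAINER_GB ))
echo "At least $CONTAINERS containers of ${VRAM_PER_CONTAINER_GB}GB VRAM each"
# prints: At least 16 containers of 40GB VRAM each
```

With the reference configuration of 8xH200 per node, two nodes comfortably exceed this total, which matches the stated minimum of 2 ML nodes.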
Tip: No GPUs of your own? Rent bare-metal from GPU providers from ~$10,500/mo for 8xH100.
Setup Steps
01 Install inferenced CLI
Download the latest version (v0.2.10+) from GitHub releases.
# Download and install inferenced CLI
wget https://github.com/gonka-ai/gonka/releases/latest/download/inferenced-linux-amd64
chmod +x inferenced-linux-amd64
sudo mv inferenced-linux-amd64 /usr/local/bin/inferenced
02 Create a cold wallet key
The cold wallet is used to manage the node and receive rewards.
inferenced keys add cold-wallet
Save your mnemonic phrase in a safe place!
03 Clone the repository and configure
git clone https://github.com/gonka-ai/gonka.git
cd gonka
cp config.env.example config.env
# Edit config.env and enter your parameters
04 Download model weights
The current model is Qwen3-235B-A22B-Instruct-2507-FP8. Download it via the Hugging Face CLI.
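Before downloading, a back-of-envelope disk estimate (the assumption that FP8 weights take roughly 1 byte per parameter plus ~10% overhead is mine, not from the project docs):

```shell
# Rough disk estimate for an FP8 235B-parameter model
PARAMS_B=235                           # billions of parameters
EST_GB=$(( PARAMS_B * 110 / 100 ))     # ~1 byte/param + ~10% overhead
echo "Expect roughly ${EST_GB}GB of weights on disk"
# prints: Expect roughly 258GB of weights on disk
```

This fits within the 1TB NVMe SSD listed in the hardware requirements, but leaves little room on a smaller disk.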
pip install -U "huggingface_hub[cli]"
huggingface-cli download Qwen/Qwen3-235B-A22B-Instruct-2507-FP8
05 Start Docker services
docker compose pull
docker compose up -d
06 Register on-chain
Create an ML operational key and register the node on the network.
# Create operational key
inferenced keys add ml-operator
# Register node
inferenced tx register-ml-node --from cold-wallet
# Grant operator permissions
inferenced tx grant-ml-permissions --from cold-wallet --to ml-operator
07 Configure SSL and launch
Configure DNS (Cloudflare/AWS/GCP/Azure/DO/Hetzner), obtain an SSL certificate, launch the full network, and verify registration.
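The DNS and certificate part of this step has no commands in the guide; one possible sketch (certbot and `node.example.com` are assumptions, not the project's documented tooling, and any ACME client or provider-issued certificate works):

```shell
# Sketch of the DNS/SSL preparation (domain and tooling are hypothetical)
DOMAIN="node.example.com"  # replace with your real domain
# 1. In your DNS provider, point an A record for $DOMAIN at this host.
# 2. Obtain a certificate, e.g. with certbot in standalone mode (port 80 must be free):
#      sudo certbot certonly --standalone -d "$DOMAIN"
# 3. Confirm the record resolves before launching the full network:
getent hosts "$DOMAIN" || echo "DNS for $DOMAIN not propagated yet"
```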
# Check node status
inferenced status
# Verify registration
inferenced query node-info
Don't want to set it up yourself?
GPU hosting providers and pools can do everything for you:
GPU Rental
- Spheron — from ~$12,600/month, H100/H200/B200, 99.9% SLA
Mining pools
- Gonka.Top — from $100, 25% referral + 5% L2, monthly payouts
- GonkaPool.ai — Telegram bot, convenient entry
- Hashiro — from $100, daily payouts
- Mingles Cloud — from $30
- Gonka.Wallet — Telegram Mini App, one-click wallet, built-in pools
Best referral program: Gonka.Top (25% of profits). Lowest entry point: Mingles Cloud (from $30).
What Affects Profitability
- Weight: a base weight of 20% is granted automatically; the remaining 80% is activated by staking GNK as collateral.
- GPU performance: the more powerful the graphics card, the more neural network requests are processed per round.
- Network utilization: at 40-60% load the price is stable; above 60%, the price rises (more revenue per request).
- Fair rewards: the system automatically verifies each computation (<10ms) and distributes rewards fairly. During network updates there is a transition period (~5 hours) so that no one loses income.
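The weight mechanics above can be sketched numerically. This is a toy model: the assumption that the staked 80% activates linearly with collateral is mine, not taken from the protocol specification.

```shell
# Toy model: weight = 20% base + 80% * (staked / required), capped at 100%
weight() {
  awk -v s="$1" -v r="$2" 'BEGIN {
    f = s / r; if (f > 1) f = 1;
    printf "%.0f%%\n", (0.20 + 0.80 * f) * 100
  }'
}
weight 0 1000     # no stake: base only -> 20%
weight 500 1000   # half the required stake -> 60%
weight 1000 1000  # fully staked -> 100%
```

Under this sketch, an unstaked node earns at a fifth of the rate of a fully staked one with identical hardware, which is why staking GNK matters as much as GPU power.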