Basic Concepts
What is inference?
If you've used ChatGPT, you already know what inference is. Every time you type a question and get an answer from a neural network, that's inference. Let's look a little deeper.
Simple explanation
You type a question into ChatGPT and get an answer. That process is called inference: the neural network "thinks" and generates its answer word by word. Each request requires a powerful GPU, a graphics card that can run the network's computations quickly.
Inference in the Gonka network
When someone sends a request through the Gonka API, it is processed by one of the network's nodes, a GPU server connected to the network. The current model is Qwen3-235B (235 billion parameters). The host who owns that server earns GNK for each processed request, and the more powerful the GPU, the more requests a node can handle.
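To make the flow concrete, here is a minimal sketch of what a client-side inference request might look like. The endpoint URL, payload field names, and chat-style message format are assumptions for illustration, not the official Gonka API; consult the project's documentation for the real interface.

```python
import json

# NOTE: the endpoint URL and payload shape below are assumptions for
# illustration only; check the official Gonka docs for the real API
# URL, field names, and authentication scheme.
GONKA_API_URL = "https://api.gonka.example/v1/chat/completions"  # hypothetical
MODEL = "Qwen3-235B"  # current network model, per the text above

def build_inference_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build a chat-style inference payload (field names are assumed)."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# This payload would be POSTed as JSON to GONKA_API_URL; a GPU node
# picks it up, runs the model, and the node's host is paid in GNK.
payload = build_inference_request("What is inference?")
print(json.dumps(payload, indent=2))
```

The shape mirrors common chat-completion APIs, which is a reasonable guess for a network serving a chat model, but it remains a guess until checked against the official reference.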
Why inference is the basis of GNK value
The more people use AI through Gonka, the greater the demand for GPU power. Higher demand means higher network load, higher inference prices, and more earnings for hosts. Unlike Bitcoin, the value of GNK is tied to the real market for AI computation, not to an abstract mining "difficulty".
In short: inference is a request to a neural network. Every time someone uses AI through Gonka, hosts earn GNK, and more users mean more token value.
Want to learn more?
Learn how the GNK economy works, or start earning right now.