LangChain + Gonka AI - AI applications for pennies

LangChain is the most popular framework for building AI applications in Python and JavaScript. RAG pipelines, chains, agents, document processing - LangChain provides abstractions for all of them.

LangChain natively supports OpenAI-compatible APIs through the ChatOpenAI class. This means JoinGonka Gateway integrates in 3 lines of code - no additional packages or configuration required.

Result: a RAG system, chatbot, or AI agent running at $0.001 per 1M tokens instead of OpenAI's $2.50-15.

Quick Start: 3 lines of code

Minimal example — connecting LangChain to Gonka:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://gate.joingonka.ai/v1",
    api_key="jg-your-key",
    model="Qwen/Qwen3-235B-A22B-Instruct-2507-FP8",
)

response = llm.invoke("Explain what RAG is")
print(response.content)

That's it. Three lines of configuration - and your LangChain project runs through the decentralized Gonka network for pennies.
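The same client can also stream tokens as they are generated. A minimal sketch, assuming the same endpoint, key, and model as above (requires a live API key to run):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://gate.joingonka.ai/v1",
    api_key="jg-your-key",
    model="Qwen/Qwen3-235B-A22B-Instruct-2507-FP8",
)

# .stream() yields chunks as they arrive instead of waiting for the full reply
for chunk in llm.stream("Explain what RAG is"):
    print(chunk.content, end="", flush=True)
```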

Install the dependencies (the last three are only needed for the RAG example below):

pip install langchain langchain-openai langchain-community faiss-cpu sentence-transformers

Recommendation: explicitly set max_tokens=2048 - the maximum allowed through JoinGonka Gateway. Qwen3-235B's context window is 128K tokens - keep this in mind when choosing chunk_size in RAG pipelines.
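A quick back-of-the-envelope check of that token budget. The ~4 characters per token ratio is a rough assumption (actual tokenization varies by text and tokenizer), and the 500-token prompt overhead is an illustrative guess:

```python
# Rough context-budget check for a RAG pipeline
CONTEXT_WINDOW = 128_000  # Qwen3-235B context window, tokens
MAX_OUTPUT = 2_048        # JoinGonka Gateway max_tokens cap
CHARS_PER_TOKEN = 4       # rough heuristic - varies by text and tokenizer

def chunks_that_fit(chunk_size_chars: int, prompt_overhead_tokens: int = 500) -> int:
    """How many retrieved chunks of chunk_size_chars fit in the prompt budget."""
    budget = CONTEXT_WINDOW - MAX_OUTPUT - prompt_overhead_tokens
    tokens_per_chunk = chunk_size_chars // CHARS_PER_TOKEN
    return budget // tokens_per_chunk

print(chunks_that_fit(1000))  # chunk_size=1000 chars is ~250 tokens per chunk
```

In practice a retriever returns only the top-k chunks, so the budget is rarely the constraint - but it matters when k or chunk_size grows.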

Example: RAG pipeline with Gonka

RAG (Retrieval-Augmented Generation) is the most popular pattern for AI applications: load documents, split them into chunks, create embeddings, retrieve the relevant fragments, and generate a context-aware answer.

from langchain_openai import ChatOpenAI
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain.chains import RetrievalQA
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.document_loaders import TextLoader

# 1. LLM via Gonka
llm = ChatOpenAI(
    base_url="https://gate.joingonka.ai/v1",
    api_key="jg-your-key",
    model="Qwen/Qwen3-235B-A22B-Instruct-2507-FP8",
    streaming=True,
)

# 2. Load and index documents
loader = TextLoader("docs/my_document.txt")
docs = loader.load()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000)
chunks = splitter.split_documents(docs)

# 3. Vector store (local and free; HuggingFaceEmbeddings needs sentence-transformers)
embeddings = HuggingFaceEmbeddings()
vectorstore = FAISS.from_documents(chunks, embeddings)

# 4. RAG chain
qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=vectorstore.as_retriever(),
)

# 5. Query
result = qa.invoke("What is this document about?")
print(result["result"])

Cost: one RAG pipeline request (retrieval + generation) uses ~2-5K LLM tokens. Via Gonka that is $0.000002-0.000005; via OpenAI, roughly $0.005-0.075. A difference of three to four orders of magnitude.
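The arithmetic behind those numbers, using the per-1M-token prices quoted above ($0.001 for Gonka, $2.50-15 for OpenAI):

```python
def cost_usd(tokens: int, price_per_million_usd: float) -> float:
    """Cost of a request at a given per-1M-token price."""
    return tokens / 1_000_000 * price_per_million_usd

tokens = 5_000  # upper end of a typical RAG request

print(cost_usd(tokens, 0.001))  # Gonka
print(cost_usd(tokens, 2.50))   # OpenAI, low end of the quoted range
```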

For production systems processing thousands of requests per day, savings amount to tens of thousands of dollars per month.

Example: AI agent with tool calling

LangChain allows creating agents with tools. Qwen3-235B supports native tool calling - agents work reliably, without parsing textual responses.

from langchain_openai import ChatOpenAI
from langchain.agents import create_openai_tools_agent, AgentExecutor
from langchain.tools import tool
from langchain.prompts import ChatPromptTemplate

llm = ChatOpenAI(
    base_url="https://gate.joingonka.ai/v1",
    api_key="jg-your-key",
    model="Qwen/Qwen3-235B-A22B-Instruct-2507-FP8",
)

@tool
def calculator(expression: str) -> str:
    """Calculates a mathematical expression."""
    # Demo only: eval() on untrusted input is unsafe - use a math parser in production
    return str(eval(expression))

@tool
def search_web(query: str) -> str:
    """Searches for information on the internet."""
    # Stub for the example - plug in a real search API here
    return f"Search results for: {query}"

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_openai_tools_agent(llm, [calculator, search_web], prompt)
executor = AgentExecutor(agent=agent, tools=[calculator, search_web])

result = executor.invoke({"input": "How much is 2**10 * 3.14?"})
print(result["output"])

The agent calls calculator, receives the result, and composes the final answer. The whole cycle costs ~$0.00001 via Gonka versus $0.01-0.05 via OpenAI. For systems with thousands of users, that is a difference of tens of thousands of dollars.
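Chains, the third pattern mentioned alongside RAG and agents, connect the same way. A minimal LCEL sketch, assuming the same endpoint and key as in the examples above (requires a live API key to run):

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(
    base_url="https://gate.joingonka.ai/v1",
    api_key="jg-your-key",
    model="Qwen/Qwen3-235B-A22B-Instruct-2507-FP8",
)

# LCEL: pipe prompt -> model -> parser into a single runnable
chain = (
    ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
    | llm
    | StrOutputParser()
)

print(chain.invoke({"text": "LangChain routes OpenAI-compatible calls to Gonka."}))
```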

LangChain + Gonka = production-ready AI applications for pennies. RAG, agents, chains - all connected with 3 lines of ChatOpenAI configuration. $0.001/1M tokens, native tool calling, streaming.

Want to learn more?

Explore other sections or start earning GNK right now.

Get 10M free tokens →