AI / ML

On-Chain AI / ML

Running AI/ML inference directly within blockchain smart contracts or verified through on-chain proofs. Current limitations: compute budgets on blockchains are tiny compared to AI needs. Approaches include: off-chain inference with on-chain verification (ZK proofs of inference), optimistic verification, and simplified models (decision trees, linear models) that fit within compute limits.

ID: on-chain-ai
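The "simplified models" approach can be made concrete. Below is a minimal, hypothetical sketch (plain Python, not a real on-chain program) of linear-model inference using only integer fixed-point arithmetic — the style an on-chain implementation would need in order to avoid floating point and stay within a small compute budget. All weights and values are illustrative.

```python
# Illustrative sketch (NOT a real on-chain program): a linear model
# evaluated with integer-only fixed-point arithmetic, mirroring how a
# smart contract would avoid floats and keep the instruction count small.
# All weights/inputs are hypothetical and scaled by SCALE.

SCALE = 10_000  # fixed-point scale factor (4 decimal places)

def fixed_point_linear_inference(weights, inputs, bias):
    """Compute sign(w . x + b) using only integer operations."""
    acc = bias
    for w, x in zip(weights, inputs):
        acc += (w * x) // SCALE  # rescale after each fixed-point multiply
    return 1 if acc > 0 else 0

# Hypothetical pre-trained weights, already quantized to fixed point.
weights = [15_000, -3_000]   # represents 1.5, -0.3
inputs = [20_000, 50_000]    # represents 2.0, 5.0
bias = -10_000               # represents -1.0

print(fixed_point_linear_inference(weights, inputs, bias))  # prints 1
```

The same pattern (quantized weights, integer math, bounded loop) is what lets decision trees and linear models fit inside a blockchain's per-transaction compute limits.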

Mental model

Use the short analogy first to reason better about the term when it shows up in code, docs, or prompts.

Think of it as a piece of the context or inference stack used in agent- or LLM-powered products.

Technical context

Place the term within the Solana layer it lives in to reason about it more effectively.

LLMs, RAG, embeddings, inference, and agent-oriented primitives.

Why builders care

Turn the term from vocabulary into something operational for product and engineering.

This term quickly unlocks adjacent concepts, so it works best when you treat it as a connection point rather than an isolated definition.

AI handoff

Use this compact block when you want to give an agent or assistant grounded context without dumping the entire page.

On-Chain AI / ML (on-chain-ai)
Category: AI / ML
Definition: Running AI/ML inference directly within blockchain smart contracts or verified through on-chain proofs. Current limitations: compute budgets on blockchains are tiny compared to AI needs. Approaches include: off-chain inference with on-chain verification (ZK proofs of inference), optimistic verification, and simplified models (decision trees, linear models) that fit within compute limits.
Related: Zero-Knowledge Proofs (ZKP), AI × Blockchain Integration

Concept graph

See the term as part of a network, not as a dead-end definition.

These branches show which concepts this term touches directly and what lies one layer beyond them.

Branch

Zero-Knowledge Proofs (ZKP)

A zero-knowledge proof is a cryptographic protocol by which a prover convinces a verifier that a statement is true — for example, that a state transition is valid — without revealing any information beyond the truth of the statement itself, satisfying the properties of completeness, soundness, and zero-knowledge. In Solana's ecosystem, ZKPs are used by ZK Compression (via Groth16 SNARKs) to prove correct state transitions for compressed accounts without storing full account state on-chain, and by the Token-2022 Confidential Transfers extension (via ElGamal encryption and range proofs) to prove token balances are non-negative without revealing the actual amounts. Solana's BPF VM exposes the alt_bn128 elliptic curve syscall to make on-chain Groth16 proof verification computationally feasible within the 1.4M compute unit budget.
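To make the prover/verifier structure concrete, here is a toy Schnorr-style identification protocol — a classic sigma protocol with the completeness property described above. This is an illustration only: it is not Groth16, the parameters are not production-safe, and the choice of generator is an assumption for the sketch.

```python
# Toy Schnorr identification protocol over a small prime-order setting.
# Illustrates the commit/challenge/response shape of a ZK proof of
# knowledge of a discrete log. NOT Groth16, NOT secure parameters.
import secrets

p = 2**127 - 1   # Mersenne prime modulus (toy-sized for illustration)
g = 3            # assumed group element; the identity below holds for any g coprime to p
q = p - 1        # exponents are reduced mod p - 1 (Fermat's little theorem)

x = secrets.randbelow(q)     # prover's secret
y = pow(g, x, p)             # public key: y = g^x mod p

# 1. Commit: prover picks random r and sends t = g^r.
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2. Challenge: verifier sends random c.
c = secrets.randbelow(q)

# 3. Response: prover sends s = r + c*x mod q.
s = (r + c * x) % q

# 4. Verify: g^s == t * y^c (mod p). The verifier learns nothing about x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")
```

On Solana, the analogous verification step for Groth16 proofs runs over the alt_bn128 curve via a dedicated syscall, which is what keeps it inside the compute budget.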

Branch

AI × Blockchain Integration

The convergence of AI and blockchain technologies. Key patterns: AI agents executing on-chain transactions autonomously, blockchain providing verifiable compute receipts for AI inference, decentralized GPU networks for AI training, on-chain governance of AI model parameters, NFTs for AI-generated content provenance, and LLMs as smart contract development assistants.

Next concepts to explore

Continue the learning chain instead of stopping at a single definition.

These are the next concepts worth opening if you want this term to make more sense inside a real Solana workflow.

ZK Compression

Zero-Knowledge Proofs (ZKP)

AI / ML

AI × Blockchain Integration

AI / ML

Open-Source AI Models

AI models with publicly released weights that can be downloaded, modified, and self-hosted. Notable open models: Llama 3 (Meta), Mistral, Falcon, Gemma (Google), Phi (Microsoft). Open models enable privacy (data stays local), customization (fine-tuning), and cost control. Trade-off: generally less capable than frontier proprietary models but rapidly improving.

AI / ML

Nosana

A decentralized GPU compute marketplace built on Solana that connects GPU providers with users needing compute for AI inference workloads. Node operators supply idle GPU capacity and earn NOS tokens for completed jobs. Nosana focuses on cost-effective AI inference rather than training, using Solana for job coordination, payment settlement, and reputation tracking. It supports containerized workloads across consumer and enterprise GPUs.

Commonly confused with

Terms that are close in vocabulary, acronym, or conceptual neighborhood.

These entries are easy to mix up when you read quickly, prompt an LLM, or are entering a new Solana layer.

AI / ML · autonomous-on-chain-agent

Autonomous On-Chain Agent

An AI agent that holds its own blockchain wallet, autonomously signs transactions, and manages on-chain positions (DeFi yields, token trades, NFT operations) without human approval for each action. These agents combine LLM reasoning with blockchain tool use to monitor market conditions, execute strategies, and adapt to changing on-chain state. Key challenges include wallet security, transaction simulation, and defining behavioral guardrails to prevent loss of funds.
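As a sketch of what a behavioral guardrail might look like in practice, the following hypothetical pre-signing check caps per-transaction spend and rejects anything that fails a dry-run simulation. The types, field names, and limit are invented for illustration; no real agent SDK is assumed.

```python
# Hypothetical guardrail an autonomous agent might run before signing
# any transaction. All names and limits here are illustrative.
from dataclasses import dataclass

@dataclass
class ProposedTx:
    lamports_out: int    # total outflow the transaction would cause
    simulated_ok: bool   # result of a dry-run transaction simulation

MAX_LAMPORTS_PER_TX = 100_000_000  # 0.1 SOL, an assumed policy limit

def guardrail_approve(tx: ProposedTx) -> bool:
    """Return True only if the transaction passes every guardrail."""
    if not tx.simulated_ok:
        return False                       # never sign what fails simulation
    if tx.lamports_out > MAX_LAMPORTS_PER_TX:
        return False                       # enforce the spending cap
    return True

print(guardrail_approve(ProposedTx(lamports_out=50_000_000, simulated_ok=True)))   # True
print(guardrail_approve(ProposedTx(lamports_out=500_000_000, simulated_ok=True)))  # False
```

Real deployments layer more checks on top (allowlisted programs, slippage bounds, rate limits), but the shape — simulate, then gate the signature behind explicit policy — is the same.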

AI / ML · chain-of-thought

Chain-of-Thought (CoT)

A prompting technique or model-native capability where the LLM produces intermediate reasoning steps before arriving at a final answer, improving accuracy on multi-step problems. Originally a prompting strategy ('think step by step'), CoT is now built directly into reasoning models like o1 and DeepSeek-R1 as an internal process. When combining CoT with structured output, developers should place reasoning fields before answer fields to avoid bypassing the reasoning process.

Alias: CoT · Alias: Extended Thinking
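The field-ordering advice for CoT with structured output can be shown with a minimal sketch: because generation is autoregressive, emitting the reasoning field before the answer field forces the model to produce its reasoning tokens first. Field names here are illustrative.

```python
# Sketch of the field-ordering point for CoT + structured output.
# Schema/field names are illustrative, not from any specific API.
import json

# Good: reasoning precedes answer, so reasoning tokens are generated
# before the model commits to an answer.
good_schema = {
    "reasoning": "Step-by-step work shown before committing to an answer.",
    "answer": "42",
}

# Anti-pattern: answer first means any 'reasoning' is post-hoc
# justification of an answer already emitted.
bad_schema = {
    "answer": "42",
    "reasoning": "Rationalized after the fact.",
}

# JSON objects serialize in insertion order, which is what matters
# for the order tokens are generated in.
print(json.dumps(good_schema, indent=2))
assert list(good_schema) == ["reasoning", "answer"]
```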
Related terms

Follow the concepts that actually give this term context.

Glossary entries only become useful when they are connected. These links are the shortest path to adjacent ideas.

ZK Compression · zk-proofs

Zero-Knowledge Proofs (ZKP)

AI / ML · ai-blockchain-integration

AI × Blockchain Integration

More in this category

Stay in the same layer and keep building context.

These entries live alongside the current term and help the page feel like part of a larger graph, not a dead end.

AI / ML

LLM (Large Language Model)

A neural network trained on vast text corpora to understand and generate human language. LLMs (GPT-4, Claude, Llama, Gemini) use transformer architectures with billions of parameters. They power chatbots, code generation, summarization, and reasoning tasks. In blockchain development, LLMs assist with smart contract writing, audit review, documentation, and code explanation.

AI / ML

Transformer

The neural network architecture underlying modern LLMs, introduced in 'Attention Is All You Need' (2017). Transformers use self-attention mechanisms to process input sequences in parallel (unlike recurrent networks). Key components: multi-head attention, positional encoding, feedforward layers, and layer normalization. Variants include encoder-only (BERT), decoder-only (GPT), and encoder-decoder (T5).

AI / ML

Attention Mechanism

A neural network component that allows models to weigh the relevance of different parts of the input when producing output. Self-attention computes query-key-value dot products across all positions, enabling each token to 'attend' to every other token. Multi-head attention runs multiple attention functions in parallel. Attention is O(n²) in sequence length, driving context window research.
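The query-key-value computation described above can be sketched in a few lines of NumPy. This is a minimal single-head version with toy sizes; the n × n score matrix it builds is exactly where the O(n²) cost in sequence length comes from.

```python
# Minimal single-head self-attention sketch (toy sizes).
# Each token's output is a relevance-weighted mix of all tokens' values.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (n, d) token embeddings; returns (n, d) attended output."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # (n, n) scores: every token scored against every other token -> O(n^2)
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
n, d = 4, 8                     # 4 tokens, 8-dimensional embeddings
x = rng.normal(size=(n, d))
w_q, w_k, w_v = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # prints (4, 8)
```

Multi-head attention simply runs several independent copies of this function in parallel and concatenates the results.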

AI / ML

Foundation Model

A large AI model trained on broad data that can be adapted for many downstream tasks. Foundation models (GPT-4, Claude, Llama 3, Gemini) are pre-trained on internet-scale text/code and can be fine-tuned, prompted, or used via APIs for specific applications. The term emphasizes that one base model serves as the foundation for diverse use cases rather than training task-specific models.