AI / ML

On-Chain AI / ML

Running AI/ML inference directly within blockchain smart contracts or verified through on-chain proofs. Current limitations: compute budgets on blockchains are tiny compared to AI needs. Approaches include: off-chain inference with on-chain verification (ZK proofs of inference), optimistic verification, and simplified models (decision trees, linear models) that fit within compute limits.
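To make the "simplified models" approach concrete, here is a hedged sketch (not any specific on-chain program) of integer-only linear-model inference, the kind of computation small enough to fit an on-chain compute budget. The weights, features, and fixed-point scale factor are all hypothetical.

```rust
// Hedged sketch: integer-only linear-model inference. On-chain
// programs typically avoid floating point, so weights and features
// use a fixed-point scale (1000 here, a hypothetical choice).

const SCALE: i64 = 1000;

/// Returns (w · x) / SCALE + bias, all in fixed-point units.
fn linear_score(weights: &[i64], bias: i64, features: &[i64]) -> i64 {
    let dot: i64 = weights.iter().zip(features).map(|(w, x)| w * x).sum();
    dot / SCALE + bias
}

fn main() {
    // Hypothetical weights (0.5, -0.25, 1.0) and bias 0.1, scaled by 1000.
    let weights = [500, -250, 1000];
    let bias = 100;
    // Features (2.0, 4.0, 1.0), also scaled by 1000.
    let features = [2000, 4000, 1000];
    println!("{}", linear_score(&weights, bias, &features)); // prints 1100
}
```

A decision tree fits the same budget profile: a handful of integer comparisons per inference, no floating point, and a deterministic upper bound on compute units.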

ID: on-chain-ai


Mental model

Use the short analogy first to reason better about the term when it appears in code, docs, or prompts.

Think of this as a piece of the context or inference stack used in agent- or LLM-powered products.

Technical context

Place the term within the Solana layer where it lives to reason about it better.

LLMs, RAG, embeddings, inference, and agent-oriented primitives.

Why it matters to a builder

It turns the term from vocabulary into something operational for product and engineering.

This term unlocks adjacent concepts quickly, so it works best when you treat it as a connection point rather than an isolated definition.

AI handoff

Use this compact block when you want to give an agent or assistant solid context without dumping the entire page.

On-Chain AI / ML (on-chain-ai)
Category: AI / ML
Definition: Running AI/ML inference directly within blockchain smart contracts or verified through on-chain proofs. Current limitations: compute budgets on blockchains are tiny compared to AI needs. Approaches include: off-chain inference with on-chain verification (ZK proofs of inference), optimistic verification, and simplified models (decision trees, linear models) that fit within compute limits.
Related: Zero-Knowledge Proofs (ZKP), AI × Blockchain Integration

Concept graph

See the term as part of a network, not as an isolated definition.

These branches show which concepts this term touches directly and what exists one layer beyond them.

Branch

Zero-Knowledge Proofs (ZKP)

A zero-knowledge proof is a cryptographic protocol by which a prover convinces a verifier that a statement is true — for example, that a state transition is valid — without revealing any information beyond the truth of the statement itself, satisfying the properties of completeness, soundness, and zero-knowledge. In Solana's ecosystem, ZKPs are used by ZK Compression (via Groth16 SNARKs) to prove correct state transitions for compressed accounts without storing full account state on-chain, and by the Token-2022 Confidential Transfers extension (via ElGamal encryption and range proofs) to prove token balances are non-negative without revealing the actual amounts. Solana's BPF VM exposes the alt_bn128 elliptic curve syscall to make on-chain Groth16 proof verification computationally feasible within the 1.4M compute unit budget.

Branch

AI × Blockchain Integration

The convergence of AI and blockchain technologies. Key patterns: AI agents executing on-chain transactions autonomously, blockchain providing verifiable compute receipts for AI inference, decentralized GPU networks for AI training, on-chain governance of AI model parameters, NFTs for AI-generated content provenance, and LLMs as smart contract development assistants.

Next concepts to explore

Keep the learning chain moving instead of stopping at a single definition.

These are the next concepts worth opening if you want this term to make more sense inside a real Solana workflow.

ZK Compression

Zero-Knowledge Proofs (ZKP)


AI / ML

AI × Blockchain Integration


AI / ML

Open-Source AI Models

AI models with publicly released weights that can be downloaded, modified, and self-hosted. Notable open models: Llama 3 (Meta), Mistral, Falcon, Gemma (Google), Phi (Microsoft). Open models enable privacy (data stays local), customization (fine-tuning), and cost control. Trade-off: generally less capable than frontier proprietary models but rapidly improving.

AI / ML

Nosana

A decentralized GPU compute marketplace built on Solana that connects GPU providers with users needing compute for AI inference workloads. Node operators supply idle GPU capacity and earn NOS tokens for completed jobs. Nosana focuses on cost-effective AI inference rather than training, using Solana for job coordination, payment settlement, and reputation tracking. It supports containerized workloads across consumer and enterprise GPUs.

Commonly confused with

Terms that are close in vocabulary, acronym, or conceptual neighborhood.

These entries are easy to mix up when you read quickly, prompt an LLM, or are entering a new Solana layer.

AI / ML · autonomous-on-chain-agent

Autonomous On-Chain Agent

An AI agent that holds its own blockchain wallet, autonomously signs transactions, and manages on-chain positions (DeFi yields, token trades, NFT operations) without human approval for each action. These agents combine LLM reasoning with blockchain tool use to monitor market conditions, execute strategies, and adapt to changing on-chain state. Key challenges include wallet security, transaction simulation, and defining behavioral guardrails to prevent loss of funds.
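To make the "behavioral guardrails" point concrete, here is a hedged sketch of a pre-signing check such an agent could run. All names (`Guardrails`, `PlannedTx`, the program ID, the caps) are illustrative, not any real agent framework's API.

```rust
// Hedged sketch of a guardrail check run before an autonomous agent
// signs any transaction; limits and names are hypothetical.

struct Guardrails {
    max_lamports_per_tx: u64,      // hard per-transaction spend cap
    allowed_programs: Vec<String>, // allowlist of callable program IDs
}

struct PlannedTx {
    lamports: u64,
    program_id: String,
}

/// Returns Ok(()) only if the transaction passes every guardrail;
/// the agent would sign and submit only after this succeeds.
fn approve(g: &Guardrails, tx: &PlannedTx) -> Result<(), String> {
    if tx.lamports > g.max_lamports_per_tx {
        return Err(format!(
            "spend {} exceeds cap {}",
            tx.lamports, g.max_lamports_per_tx
        ));
    }
    if !g.allowed_programs.contains(&tx.program_id) {
        return Err(format!("program {} is not allowlisted", tx.program_id));
    }
    Ok(())
}

fn main() {
    let g = Guardrails {
        max_lamports_per_tx: 1_000_000,
        allowed_programs: vec!["ExampleProgram1111".to_string()],
    };
    let tx = PlannedTx {
        lamports: 5_000_000,
        program_id: "ExampleProgram1111".to_string(),
    };
    // Over the spend cap, so the agent refuses to sign.
    println!("{:?}", approve(&g, &tx));
}
```

In practice this check would sit between the LLM's proposed action and the wallet's signing step, alongside transaction simulation against current on-chain state.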

AI / ML · chain-of-thought

Chain-of-Thought (CoT)

A prompting technique or model-native capability where the LLM produces intermediate reasoning steps before arriving at a final answer, improving accuracy on multi-step problems. Originally a prompting strategy ('think step by step'), CoT is now built directly into reasoning models like o1 and DeepSeek-R1 as an internal process. When combining CoT with structured output, developers should place reasoning fields before answer fields to avoid bypassing the reasoning process.

Aliases: CoT, Extended Thinking
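The "reasoning fields before answer fields" advice can be sketched as a schema whose serialization order is fixed. The field names here are illustrative: because generation is autoregressive, emitting the reasoning key first forces the model to produce its chain of thought before committing to an answer.

```rust
// Hedged sketch of a structured-output schema where the reasoning
// field precedes the answer field; names are hypothetical.

struct CotResponse {
    reasoning: String, // generated first: the step-by-step work
    answer: String,    // generated last: conditioned on the reasoning
}

/// Serialize with keys in declaration order: reasoning, then answer.
fn to_json(r: &CotResponse) -> String {
    format!("{{\"reasoning\":{:?},\"answer\":{:?}}}", r.reasoning, r.answer)
}

fn main() {
    let r = CotResponse {
        reasoning: "2 apples + 3 apples = 5 apples".to_string(),
        answer: "5".to_string(),
    };
    println!("{}", to_json(&r));
}
```

Reversing the key order would let the model emit the answer immediately and then rationalize it, which defeats the purpose of chain-of-thought.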
Related terms

Follow the concepts that actually give this term context.

Glossary entries become useful when they are connected. These links are the shortest path to adjacent ideas.

ZK Compression · zk-proofs

Zero-Knowledge Proofs (ZKP)


AI / ML · ai-blockchain-integration

AI × Blockchain Integration


More in the category

Stay in the same layer and keep building context.

These entries live next to the current term and help the page feel like part of a broader knowledge graph rather than a dead end.

AI / ML

LLM (Large Language Model)

A neural network trained on vast text corpora to understand and generate human language. LLMs (GPT-4, Claude, Llama, Gemini) use transformer architectures with billions of parameters. They power chatbots, code generation, summarization, and reasoning tasks. In blockchain development, LLMs assist with smart contract writing, audit review, documentation, and code explanation.

AI / ML

Transformer

The neural network architecture underlying modern LLMs, introduced in 'Attention Is All You Need' (2017). Transformers use self-attention mechanisms to process input sequences in parallel (unlike recurrent networks). Key components: multi-head attention, positional encoding, feedforward layers, and layer normalization. Variants include encoder-only (BERT), decoder-only (GPT), and encoder-decoder (T5).

AI / ML

Attention Mechanism

A neural network component that allows models to weigh the relevance of different parts of the input when producing output. Self-attention computes query-key-value dot products across all positions, enabling each token to 'attend' to every other token. Multi-head attention runs multiple attention functions in parallel. Attention is O(n²) in sequence length, driving context window research.
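The query-key-value computation described above can be shown as a minimal, framework-free sketch: scaled dot-product attention for a single query over a handful of keys. Shapes and values are toy examples, not a production implementation.

```rust
// Hedged sketch of scaled dot-product attention for one query.
// attention(q, K, V) = softmax(q·K^T / sqrt(d)) · V

fn dot(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

/// Numerically stable softmax: subtract the max before exponentiating.
fn softmax(scores: &[f32]) -> Vec<f32> {
    let max = scores.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = scores.iter().map(|s| (s - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn attention(q: &[f32], keys: &[Vec<f32>], values: &[Vec<f32>]) -> Vec<f32> {
    let d = q.len() as f32;
    // One score per key: q·k / sqrt(d). Doing this for every query
    // against every key is where the O(n^2) cost comes from.
    let scores: Vec<f32> = keys.iter().map(|k| dot(q, k) / d.sqrt()).collect();
    let weights = softmax(&scores);
    // Output is the attention-weighted sum of the value vectors.
    let mut out = vec![0.0; values[0].len()];
    for (w, v) in weights.iter().zip(values) {
        for (o, x) in out.iter_mut().zip(v) {
            *o += w * x;
        }
    }
    out
}

fn main() {
    let q = vec![1.0, 0.0];
    let keys = vec![vec![1.0, 0.0], vec![0.0, 1.0]];
    let values = vec![vec![10.0, 0.0], vec![0.0, 10.0]];
    // q matches the first key, so the output leans toward values[0].
    println!("{:?}", attention(&q, &keys, &values));
}
```

Multi-head attention runs this same computation several times in parallel with different learned projections of q, K, and V, then concatenates the results.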

AI / ML

Foundation Model

A large AI model trained on broad data that can be adapted for many downstream tasks. Foundation models (GPT-4, Claude, Llama 3, Gemini) are pre-trained on internet-scale text/code and can be fine-tuned, prompted, or used via APIs for specific applications. The term emphasizes that one base model serves as the foundation for diverse use cases rather than training task-specific models.