AI / ML

Model Context Protocol (MCP)

An open standard introduced by Anthropic in November 2024 for connecting AI applications to external data sources, tools, and workflows via a unified protocol. Often described as 'USB-C for AI,' MCP eliminates the need for a custom integration per data source. OpenAI adopted it in March 2025, and Anthropic later donated it to the Linux Foundation's Agentic AI Foundation. MCP handles standardized tool/data connections, while agent frameworks handle orchestration.

ID: mcp · Alias: MCP

Quick read

Start with the shortest useful explanation before going deeper.

An open standard introduced by Anthropic in November 2024 for connecting AI applications to external data sources, tools, and workflows via a unified protocol. Often described as 'USB-C for AI,' MCP eliminates the need for a custom integration per data source. OpenAI adopted it in March 2025, and Anthropic later donated it to the Linux Foundation's Agentic AI Foundation. MCP handles standardized tool/data connections, while agent frameworks handle orchestration.
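MCP frames client-server communication as JSON-RPC 2.0 messages; the shape of a tool invocation can be sketched as below. The tool name and arguments are made up for illustration, not taken from any real server.

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP-style JSON-RPC 2.0 request invoking a server-exposed tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# A client asking a hypothetical weather server for a forecast:
msg = make_tool_call(1, "get_forecast", {"city": "Lisbon"})
parsed = json.loads(msg)
print(parsed["method"])          # tools/call
print(parsed["params"]["name"])  # get_forecast
```

The point of the standard is that every server speaks this same framing, so one client integration covers any number of data sources.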

Mental model

Use the short analogy first to reason better about the term when it shows up in code, docs, or prompts.

Think of this as a piece of the context or inference stack used in agent- or LLM-powered products.

Technical context

Place the term within the stack layer it lives in to reason about it better.

LLMs, RAG, embeddings, inference, and agent-oriented primitives.

Why it matters to a builder

Turns the term from vocabulary into something operational for product and engineering.

This term unlocks adjacent concepts quickly, so it works best when you treat it as a connection point rather than an isolated definition.

AI handoff

Use this compact block when you want to give an agent or assistant solid context without dumping the whole page.

Model Context Protocol (MCP) (mcp)
Category: AI / ML
Definition: An open standard introduced by Anthropic in November 2024 for connecting AI applications to external data sources, tools, and workflows via a unified protocol. Often described as 'USB-C for AI,' MCP eliminates the need for a custom integration per data source. OpenAI adopted it in March 2025, and Anthropic later donated it to the Linux Foundation's Agentic AI Foundation. MCP handles standardized tool/data connections, while agent frameworks handle orchestration.
Aliases: MCP
Related: AI Agent, Tool Use (Function Calling), LangChain / LangGraph

Concept graph

See the term as part of a network, not an isolated definition.

These branches show which concepts this term touches directly and what sits one layer beyond them.

Branch

AI Agent

An autonomous AI system that can plan, use tools, and take actions to accomplish goals. Agents use LLMs as the reasoning core and have access to tools (APIs, code execution, web browsing, database queries). In blockchain: agents can analyze smart contracts, execute transactions, monitor DeFi positions, and automate trading strategies. Frameworks: LangChain, CrewAI, Claude Agent SDK.
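The plan-act-observe loop behind that definition can be sketched with a stub standing in for the LLM reasoning core. Every name here (`decide_next_step`, `search_docs`) is illustrative, not part of any real framework.

```python
# Minimal agent loop: the "model" decides between calling a tool and answering.
# A real agent would query an LLM here; decide_next_step is a hard-coded stub.

def search_docs(query: str) -> str:
    # Stand-in for a real tool (API call, DB query, code execution...)
    return f"3 results for '{query}'"

TOOLS = {"search_docs": search_docs}

def decide_next_step(goal: str, observations: list[str]) -> dict:
    # Stub policy: gather one observation, then produce a final answer.
    if not observations:
        return {"action": "tool", "name": "search_docs", "input": goal}
    return {"action": "final", "answer": f"Answer based on: {observations[-1]}"}

def run_agent(goal: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        step = decide_next_step(goal, observations)
        if step["action"] == "final":
            return step["answer"]
        observations.append(TOOLS[step["name"]](step["input"]))  # act + observe
    return "Step budget exhausted."

print(run_agent("anchor PDA derivation"))
```

Frameworks like LangChain or the Claude Agent SDK wrap this loop with real model calls, tool registries, and error handling, but the control flow is the same.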

Branch

Tool Use (Function Calling)

An LLM capability where the model generates structured calls to external tools/functions rather than just text. The model decides which tool to invoke and with what parameters. Examples: calling an API, executing code, querying a database, or reading a file. Tool use enables agents to interact with the real world. Claude, GPT-4, and Gemini support native tool use.
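In practice the developer advertises a tool schema, the model emits a structured call instead of free text, and the application dispatches it. The schema below loosely follows the JSON-Schema style used by the Claude/GPT-4 function-calling APIs, but the tool name and return value are invented for this sketch.

```python
import json

# Tool schema advertised to the model (illustrative, JSON-Schema-shaped).
GET_BALANCE_SCHEMA = {
    "name": "get_balance",
    "description": "Return the SOL balance for a wallet address.",
    "input_schema": {
        "type": "object",
        "properties": {"address": {"type": "string"}},
        "required": ["address"],
    },
}

def get_balance(address: str) -> float:
    return 12.5  # stand-in for an actual RPC call

# What the model emits instead of prose: a structured call with parameters.
model_output = json.loads(
    '{"tool": "get_balance", "arguments": {"address": "ExampleAddr111"}}'
)

registry = {"get_balance": get_balance}
result = registry[model_output["tool"]](**model_output["arguments"])
print(result)  # 12.5
```

The model only chooses the tool and fills the parameters; executing the call and returning the result to the model is always the application's job.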

Branch

LangChain / LangGraph

LangChain is a popular open-source framework for building LLM-powered applications, providing abstractions for chains, tools, memory, and retrieval. LangGraph extends it with a graph-based runtime for building stateful, multi-step agent workflows with precise control over execution flow, state persistence, and error recovery. LangGraph is the production-grade choice for complex agentic applications requiring fine-grained state management.
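The graph-runtime idea can be illustrated without the library: nodes are functions that update shared state, and edges pick the next node. This is a toy sketch of the concept, not the LangGraph API.

```python
# Toy graph runtime: nodes map state -> state, a routing table picks the next
# node. LangGraph's StateGraph generalizes this with conditional edges, state
# persistence, and error recovery; this sketch only shows the control flow.

def retrieve(state: dict) -> dict:
    return {**state, "docs": ["doc1", "doc2"]}

def generate(state: dict) -> dict:
    return {**state, "answer": f"answer using {len(state['docs'])} docs"}

NODES = {"retrieve": retrieve, "generate": generate}
EDGES = {"retrieve": "generate", "generate": None}  # None = end of graph

def run_graph(entry: str, state: dict) -> dict:
    node = entry
    while node is not None:
        state = NODES[node](state)  # each step carries the full state forward
        node = EDGES[node]          # static edge here; could be conditional
    return state

final = run_graph("retrieve", {"question": "what is a PDA?"})
print(final["answer"])  # answer using 2 docs
```

Making the state explicit at every step is what enables checkpointing and resuming mid-workflow, which is where LangGraph earns its keep over a plain chain.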

Next concepts to explore

Keep the learning chain moving instead of stopping at a single definition.

These are the next concepts worth opening if you want this term to make more sense inside a real Solana workflow.

AI / ML · AI Agent

AI / ML · Tool Use (Function Calling)

AI / ML · LangChain / LangGraph

AI / ML

Multimodal AI

AI models that can process and generate multiple data types: text, images, audio, video, and code. Modern multimodal models (GPT-4V, Claude, Gemini) can analyze screenshots of dApp UIs, read code from images, generate diagrams, and understand charts. In blockchain development, multimodal capabilities help analyze transaction visualizations, audit UI screenshots, and process documentation with images.

Commonly confused with

Nearby terms in vocabulary, acronym, or conceptual neighborhood.

These entries are easy to mix up when you read quickly, prompt an LLM, or are entering a new Solana layer.

AI / ML · reasoning-model

Reasoning Model

A class of LLMs trained with reinforcement learning to generate step-by-step internal chain-of-thought before producing a final answer, enabling stronger performance on complex math, coding, and logic tasks. Pioneered by OpenAI's o1 (September 2024) and followed by o3, DeepSeek-R1, and Claude's extended thinking mode. Unlike standard LLMs that answer directly, reasoning models produce a variable-length internal CoT, allowing controllable compute at inference time.

Aliases: Thinking Model, o1

AI / ML · context-window

Context Window

The maximum amount of text (measured in tokens) an LLM can process in a single interaction. Larger windows enable processing more code/documentation at once. Sizes vary: GPT-4 (128K tokens), Claude (200K tokens), Gemini (1M+ tokens). One token ≈ 4 characters in English. Context window limits affect how much codebase an AI can analyze in a single request.
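The '1 token ≈ 4 characters' rule of thumb gives a quick budget check before sending a codebase to a model. This is only a heuristic for English text, not a real tokenizer, and the function names here are invented.

```python
def estimate_tokens(text: str) -> int:
    """Rough English-text estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_context(text: str, window_tokens: int, reserved_for_output: int = 4_096) -> bool:
    """Check whether the input leaves room for the model's reply."""
    return estimate_tokens(text) + reserved_for_output <= window_tokens

source = "x" * 800_000  # ~200K estimated tokens of code/docs
print(fits_context(source, 128_000))    # 128K-token window: False
print(fits_context(source, 1_000_000))  # 1M-token window: True
```

For real budgeting, use the provider's tokenizer (token counts differ by model and are much higher for code and non-English text than this heuristic suggests).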

AI / ML · diffusion-model

Diffusion Model

A generative AI architecture that creates images, video, or audio by learning to reverse a noise-adding process—starting from pure noise and iteratively denoising to produce coherent output. Diffusion models power leading image generators (Stable Diffusion, DALL-E 3, Midjourney) and video generators (Sora). Key variants include latent diffusion (operating in compressed space) and diffusion transformers (DiT).
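The noise-adding process and its reversal can be written down directly: with a cumulative noise-schedule value ᾱ_t, a sample is noised as x_t = √ᾱ_t·x₀ + √(1−ᾱ_t)·ε. A trained model predicts ε from x_t; this toy sketch cheats by reusing the true ε to show that the forward process is exactly invertible given that prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8,))   # clean "image" (toy 1-D signal)
eps = rng.normal(size=(8,))  # Gaussian noise

alpha_bar = 0.3              # cumulative schedule value at some step t
x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps  # forward process

# A trained denoiser predicts eps from (x_t, t); we reuse the true eps here
# purely to show the algebra of the reverse step.
x0_hat = (x_t - np.sqrt(1 - alpha_bar) * eps) / np.sqrt(alpha_bar)

print(np.allclose(x0_hat, x0))  # True
```

Real samplers start from pure noise and apply many small denoising steps instead of one exact inversion, but each step rests on this same relationship.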

Aliases: Latent Diffusion, DiT

Related terms

Follow the concepts that actually give this term context.

Glossary entries become useful when they are connected. These links are the shortest path to adjacent ideas.

AI / ML · agent-ai

AI Agent

AI / ML · tool-use

Tool Use (Function Calling)

AI / ML · langchain

LangChain / LangGraph

More in this category

Stay in the same layer and keep building context.

These entries live alongside the current term and help the page feel like part of a broader knowledge graph rather than a dead end.

AI / ML

LLM (Large Language Model)

A neural network trained on vast text corpora to understand and generate human language. LLMs (GPT-4, Claude, Llama, Gemini) use transformer architectures with billions of parameters. They power chatbots, code generation, summarization, and reasoning tasks. In blockchain development, LLMs assist with smart contract writing, audit review, documentation, and code explanation.

AI / ML

Transformer

The neural network architecture underlying modern LLMs, introduced in 'Attention Is All You Need' (2017). Transformers use self-attention mechanisms to process input sequences in parallel (unlike recurrent networks). Key components: multi-head attention, positional encoding, feedforward layers, and layer normalization. Variants include encoder-only (BERT), decoder-only (GPT), and encoder-decoder (T5).
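One of the listed components, the sinusoidal positional encoding from the original paper, is small enough to write out: position p gets PE(p, 2i) = sin(p / 10000^(2i/d)) and PE(p, 2i+1) = cos(p / 10000^(2i/d)). A minimal numpy sketch:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """PE[p, 2i] = sin(p / 10000^(2i/d)); PE[p, 2i+1] = cos(p / 10000^(2i/d))."""
    positions = np.arange(seq_len)[:, None]   # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]  # (1, d_model/2)
    angles = positions / np.power(10_000, dims / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)  # even dimensions
    pe[:, 1::2] = np.cos(angles)  # odd dimensions
    return pe

pe = sinusoidal_positional_encoding(seq_len=16, d_model=8)
print(pe.shape)  # (16, 8)
```

Because attention itself is order-blind, this matrix is added to the token embeddings so the model can tell position 0 from position 15.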

AI / ML

Attention Mechanism

A neural network component that allows models to weigh the relevance of different parts of the input when producing output. Self-attention computes query-key-value dot products across all positions, enabling each token to 'attend' to every other token. Multi-head attention runs multiple attention functions in parallel. Attention is O(n²) in sequence length, driving context window research.
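The query-key-value computation in that description is, per head, softmax(QKᵀ/√d)V. A single-head, unmasked sketch in numpy with random weights:

```python
import numpy as np

def self_attention(x: np.ndarray, Wq, Wk, Wv) -> np.ndarray:
    """Single-head self-attention: softmax(Q K^T / sqrt(d)) V."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # (n, n): the O(n^2) cost
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V                            # weighted mix of value vectors

rng = np.random.default_rng(0)
n, d = 5, 4  # 5 tokens, model dimension 4
x = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)  # (5, 4)
```

The (n, n) score matrix is where the quadratic sequence-length cost mentioned above comes from; multi-head attention runs several such functions in parallel on split dimensions.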

AI / ML

Foundation Model

A large AI model trained on broad data that can be adapted for many downstream tasks. Foundation models (GPT-4, Claude, Llama 3, Gemini) are pre-trained on internet-scale text/code and can be fine-tuned, prompted, or used via APIs for specific applications. The term emphasizes that one base model serves as the foundation for diverse use cases rather than training task-specific models.