AI / ML

Synthetic Data (AI Training)

Artificially generated training data produced by LLMs or other AI models, used to augment or replace human-annotated datasets. Techniques include prompt-based generation, retrieval-augmented pipelines, and iterative self-refinement. Synthetic data cuts costs from $5–20 per human-annotated preference example to under $0.01 per sample, and it became central to post-training pipelines in 2024–2025.
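As a rough sketch of the prompt-based generation step, the pipeline below expands a few seed questions into deduplicated (prompt, completion) pairs. The `llm_complete` stub and the paraphrase template are illustrative assumptions, not a specific vendor API:

```python
import hashlib

def llm_complete(prompt: str) -> str:
    # Stand-in for a real LLM API call -- any completion endpoint works here.
    return f"answer to: {prompt}"

def generate_synthetic_pairs(seed_questions, n_variants=2):
    """Prompt-based generation: expand seed questions into (prompt, completion) pairs."""
    seen, dataset = set(), []
    for q in seed_questions:
        for i in range(n_variants):
            prompt = f"Paraphrase #{i} of: {q}"
            answer = llm_complete(prompt)
            # Deduplicate by content hash -- a cheap filter against repeated samples.
            key = hashlib.sha256(answer.encode()).hexdigest()
            if key not in seen:
                seen.add(key)
                dataset.append({"prompt": prompt, "completion": answer})
    return dataset

pairs = generate_synthetic_pairs(["What is a PDA?", "What is rent on Solana?"])
print(len(pairs))  # 4
```

Real pipelines add quality filters (reward-model scoring, heuristics) between generation and the final dataset; the hash-based dedup here only catches exact repeats.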

ID: synthetic-data
Alias: AI-Generated Training Data


Mental model

Start with the short analogy to reason better about the term when it appears in code, docs, or prompts.

Think of this as a piece of the context or inference stack used in agent- or LLM-powered products.

Technical context

Place the term within the Solana layer it lives in to reason about it better.

LLMs, RAG, embeddings, inference, and agent-oriented primitives.

Why it matters to a builder

Turns the term from vocabulary into something operational for product and engineering.

This term unlocks adjacent concepts quickly, so it works best when you treat it as a connection point rather than an isolated definition.

AI handoff

Use this compact block when you want to give an agent or assistant solid context without dumping the whole page.

Synthetic Data (AI Training) (synthetic-data)
Category: AI / ML
Definition: Artificially generated training data produced by LLMs or other AI models, used to augment or replace human-annotated datasets. Techniques include prompt-based generation, retrieval-augmented pipelines, and iterative self-refinement. Synthetic data cuts costs from $5–20 per human-annotated preference example to under $0.01 per sample, and it became central to post-training pipelines in 2024–2025.
Aliases: AI-Generated Training Data
Related: Knowledge Distillation, DPO (Direct Preference Optimization), Fine-Tuning

Concept graph

See the term as part of a network, not as an isolated definition.

These branches show which concepts this term touches directly and what exists one layer beyond them.

Branch

Knowledge Distillation

A technique for transferring capabilities from a large 'teacher' model to a smaller 'student' model, typically by having the teacher generate a synthetic dataset that the student is fine-tuned on. Distilled models can match or exceed teacher performance on specific tasks while being much cheaper to deploy. Common in 2024-2025 for creating efficient specialized models.
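Classic logit-matching distillation (one common variant alongside the synthetic-dataset approach described above) is small enough to sketch in plain Python. The temperature value and toy logits below are arbitrary:

```python
import math

def softmax(logits, T=1.0):
    """Softmax over a logit list, optionally softened by temperature T."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in soft-label distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (T ** 2) * kl

# Identical logits -> zero loss; diverging logits -> positive loss.
print(distillation_loss([2.0, 0.5], [2.0, 0.5]))       # 0.0
print(distillation_loss([2.0, 0.5], [0.5, 2.0]) > 0)   # True
```

A higher temperature softens the teacher distribution so the student also learns the relative ranking of wrong answers, not just the argmax.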

Branch

DPO (Direct Preference Optimization)

A simplified alternative to RLHF that aligns LLM outputs with human preferences without training a separate reward model or using reinforcement learning. DPO directly optimizes a policy using pairs of preferred and dispreferred outputs, making it computationally cheaper and more stable than RLHF's multi-stage pipeline. Widely adopted in 2024-2025 for fine-tuning open-source models.
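The core DPO objective is compact enough to write out directly. A minimal sketch for a single preference pair, with illustrative log-probabilities and `beta` value:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one (preferred, dispreferred) pair.
    logp_w / logp_l are summed token log-probs of the winning / losing
    completion under the policy; ref_logp_* are the same under the
    frozen reference model."""
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    # -log(sigmoid(beta * margin)): loss shrinks as the policy favors
    # the preferred completion more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# A policy that upweights the chosen answer relative to the reference
# gets a lower loss than one that hasn't moved at all.
print(dpo_loss(-5.0, -9.0, -6.0, -6.0) < dpo_loss(-6.0, -6.0, -6.0, -6.0))  # True
```

Because the reference log-probs are fixed, the whole pipeline reduces to ordinary supervised gradient descent on this loss, which is what makes DPO cheaper and more stable than RLHF.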

Branch

Fine-Tuning

The process of further training a pre-trained model on a specialized dataset to improve performance on specific tasks. Fine-tuning adapts a foundation model's weights using domain-specific data (e.g., Solana documentation, smart contract code). Techniques include full fine-tuning, LoRA (Low-Rank Adaptation), and QLoRA. Fine-tuned models can outperform general models on narrow tasks.
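A toy sketch of the LoRA idea in plain Python (no training loop, illustrative shapes): the frozen weight W is augmented by a scaled low-rank product, and zero-initializing B means the adapted model starts out identical to the base model:

```python
def matmul(A, B):
    """Naive list-of-lists matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lora_effective_weight(W, A, B, alpha=16, r=2):
    """LoRA: effective weight = frozen W + (alpha / r) * B @ A.
    B is d x r and A is r x k; only A and B are trained, so the
    trainable parameter count scales with r, not with d * k."""
    delta = matmul(B, A)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen pretrained weight (2x2)
B = [[0.0], [0.0]]             # d x r, zero-initialized -> no change at step 0
A = [[0.5, 0.5]]               # r x k
print(lora_effective_weight(W, A, B, alpha=16, r=1) == W)  # True
```

QLoRA applies the same low-rank update on top of a quantized (e.g. 4-bit) base model, trading a little precision for much lower memory.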

Next concepts to explore

Keep the learning chain moving instead of stopping at a single definition.

These are the next concepts worth opening if you want this term to make more sense inside a real Solana workflow.

AI / ML

Knowledge Distillation


AI / ML

DPO (Direct Preference Optimization)


AI / ML

Fine-Tuning


AI / ML

System Prompt

A persistent, developer-controlled instruction set provided to an LLM that defines its role, behavior, tone, constraints, and capabilities for a given application. Unlike user prompts that change per interaction, the system prompt remains constant and is sent via a separate 'system' role parameter in the API. System prompts establish application-wide behavior including brand voice, output formatting, safety constraints, and tool-use rules.
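A minimal sketch of how the separate system role typically appears in a chat-completion payload. The field names follow the common OpenAI-style schema, which is an assumption here, not a universal standard:

```python
# The system role carries the persistent, developer-controlled instructions;
# user messages change per interaction while the system prompt stays constant.
def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages(
    "You are a Solana glossary assistant. Answer concisely and cite glossary terms.",
    "What is synthetic data?",
)
print(msgs[0]["role"])  # system
```

In practice the application re-sends the same system message with every API call, so brand voice, formatting rules, and safety constraints apply uniformly across user turns.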

Commonly confused with

Nearby terms by vocabulary, acronym, or conceptual neighborhood.

These entries are easy to mix up when you read quickly, prompt an LLM, or are entering a new Solana layer.

AI / ML · training

Training (ML)

The process of optimizing a model's parameters by exposing it to data and adjusting weights to minimize a loss function. Pre-training on large datasets creates foundation models. Training LLMs requires massive compute (thousands of GPUs, weeks/months). Training data quality, diversity, and size directly impact model capabilities. Distinguished from fine-tuning (smaller scale, specific domain).
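The optimize-weights-to-minimize-a-loss pattern can be shown at toy scale. A minimal gradient-descent sketch (hand-derived gradient, arbitrary learning rate) fitting y = w·x, the same loop LLM training runs at vastly larger scale:

```python
# Fit y = w * x to data by minimizing mean squared error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relationship: y = 2x
w, lr = 0.0, 0.05

for _ in range(200):
    # dL/dw for L = mean((w*x - y)^2) is mean(2 * (w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step against the gradient

print(round(w, 3))  # 2.0
```

Pre-training an LLM is this loop with billions of parameters, a cross-entropy loss over tokens, and thousands of GPUs instead of one scalar weight.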

Related terms

Follow the concepts that actually give this term context.

Glossary entries become useful when they are connected. These links are the shortest path to adjacent ideas.

AI / ML · distillation

Knowledge Distillation


AI / ML · dpo

DPO (Direct Preference Optimization)


AI / ML · fine-tuning

Fine-Tuning


More in this category

Stay in the same layer and keep building context.

These entries live alongside the current term and help the page feel like part of a broader knowledge graph rather than a dead end.

AI / ML

LLM (Large Language Model)

A neural network trained on vast text corpora to understand and generate human language. LLMs (GPT-4, Claude, Llama, Gemini) use transformer architectures with billions of parameters. They power chatbots, code generation, summarization, and reasoning tasks. In blockchain development, LLMs assist with smart contract writing, audit review, documentation, and code explanation.

AI / ML

Transformer

The neural network architecture underlying modern LLMs, introduced in 'Attention Is All You Need' (2017). Transformers use self-attention mechanisms to process input sequences in parallel (unlike recurrent networks). Key components: multi-head attention, positional encoding, feedforward layers, and layer normalization. Variants include encoder-only (BERT), decoder-only (GPT), and encoder-decoder (T5).

AI / ML

Attention Mechanism

A neural network component that allows models to weigh the relevance of different parts of the input when producing output. Self-attention computes query-key-value dot products across all positions, enabling each token to 'attend' to every other token. Multi-head attention runs multiple attention functions in parallel. Attention is O(n²) in sequence length, driving context window research.
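A single-head scaled dot-product attention sketch in plain Python. The explicit loop over query positions makes the O(n²) cost visible; the 2x2 inputs are toy values for illustration only:

```python
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(Q, K, V):
    """Single-head scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    Every query scores against every key, hence O(n^2) in sequence length."""
    d = len(Q[0])
    out = []
    for q in Q:  # for each query position...
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)  # ...attend over every key position
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = K = V = [[1.0, 0.0], [0.0, 1.0]]
out = self_attention(Q, K, V)
print(len(out), len(out[0]))  # 2 2
```

Multi-head attention runs several copies of this function with separate learned projections of Q, K, and V, then concatenates the outputs.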

AI / ML

Foundation Model

A large AI model trained on broad data that can be adapted for many downstream tasks. Foundation models (GPT-4, Claude, Llama 3, Gemini) are pre-trained on internet-scale text/code and can be fine-tuned, prompted, or used via APIs for specific applications. The term emphasizes that one base model serves as the foundation for diverse use cases rather than training task-specific models.