AI / ML

Quantization (AI)

A model compression technique that reduces weight precision (e.g., from 16-bit to 4-bit) to decrease model size and inference cost while preserving most of the model's quality. Three dominant formats in 2024-2025: GGUF (flexible CPU/GPU format for llama.cpp), GPTQ (GPU-optimized post-training quantization), and AWQ (activation-aware weight quantization). All three typically keep quality within ~6% of the full-precision baseline at 4-bit.

ID: quantization · Aliases: GGUF, GPTQ, AWQ
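
To make the size math concrete, here is a minimal, self-contained sketch of block-wise symmetric 4-bit quantization in Python/NumPy. It illustrates only the core round-to-grid idea; the block size and symmetric scheme are simplifying assumptions, not the actual GGUF, GPTQ, or AWQ algorithms (which add zero points, error-compensating rounding, or activation-aware scaling).

import numpy as np

def quantize_int4(weights, block_size=32):
    # Symmetric per-block int4: one float scale per block, weights stored as integers in [-8, 7].
    w = weights.reshape(-1, block_size)
    scales = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scales[scales == 0] = 1.0  # avoid division by zero on all-zero blocks
    q = np.clip(np.round(w / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize_int4(q, scales):
    return (q.astype(np.float32) * scales).reshape(-1)

w = np.random.randn(1024).astype(np.float32)
q, scales = quantize_int4(w)
w_hat = dequantize_int4(q, scales)
print("mean abs error:", np.abs(w - w_hat).mean())
# Storage: ~4 bits per weight plus one scale per 32-weight block,
# versus 16 bits per weight at fp16 -- roughly a 4x size reduction.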

Quick read

Start with the shortest useful explanation before going deeper.

A model compression technique that reduces weight precision (e.g., from 16-bit to 4-bit) to decrease model size and inference cost while preserving most of the model's quality. Three dominant formats in 2024-2025: GGUF (flexible CPU/GPU format for llama.cpp), GPTQ (GPU-optimized post-training quantization), and AWQ (activation-aware weight quantization). All three typically keep quality within ~6% of the full-precision baseline at 4-bit.

Mental model

Use the short analogy first to reason better about the term when it shows up in code, docs, or prompts.

Think of this as one piece of the context and inference stack used in agent- and LLM-based products.

Technical context

Place the term within the Solana layer it lives in to reason about it better.

LLMs, RAG, embeddings, inference, and agent-oriented primitives.

Why it matters to a builder

Turns the term from vocabulary into something operational for product and engineering.

This term unlocks adjacent concepts quickly, so it works best when you treat it as a connection point rather than an isolated definition.

AI handoff

Use this compact block when you want to give an agent or assistant solid context without dumping the whole page.

Quantization (AI) (quantization)
Category: AI / ML
Definition: A model compression technique that reduces weight precision (e.g., from 16-bit to 4-bit) to decrease model size and inference cost while preserving most of the model's quality. Three dominant formats in 2024-2025: GGUF (flexible CPU/GPU format for llama.cpp), GPTQ (GPU-optimized post-training quantization), and AWQ (activation-aware weight quantization). All three typically keep quality within ~6% of the full-precision baseline at 4-bit.
Aliases: GGUF, GPTQ, AWQ
Related: Inference, Open-Source AI Models, Knowledge Distillation

Concept graph

See the term as part of a network, not as an isolated definition.

These branches show which concepts this term touches directly and what exists one layer beyond them.

Branch

Inference

The process of running a trained model on new inputs to generate predictions or outputs. Inference is the 'using' phase (vs. training). Inference cost depends on model size, input/output token count, and hardware (GPUs/TPUs). API providers (Anthropic, OpenAI) charge per token for inference. On-device inference (llama.cpp, GGUF) runs locally without API calls.
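
Since providers bill input and output tokens separately, a back-of-the-envelope cost estimate is just two multiplications. A minimal sketch (the rates below are hypothetical placeholders, not any provider's actual pricing):

def inference_cost_usd(input_tokens, output_tokens, usd_per_m_input, usd_per_m_output):
    # Providers typically price per million tokens, with output costing more than input.
    return (input_tokens / 1e6) * usd_per_m_input + (output_tokens / 1e6) * usd_per_m_output

# Hypothetical rates: $3 per 1M input tokens, $15 per 1M output tokens.
print(inference_cost_usd(50_000, 4_000, 3.0, 15.0))  # 0.21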

Branch

Open-Source AI Models

AI models with publicly released weights that can be downloaded, modified, and self-hosted. Notable open models: Llama 3 (Meta), Mistral, Falcon, Gemma (Google), Phi (Microsoft). Open models enable privacy (data stays local), customization (fine-tuning), and cost control. Trade-off: generally less capable than frontier proprietary models but rapidly improving.
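
As a minimal sketch of self-hosting, this is roughly what loading and prompting an open-weights checkpoint looks like with the Hugging Face transformers library (the model id is illustrative, and device_map="auto" assumes the accelerate package is installed):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # any open-weights checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain weight quantization in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))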

Branch

Knowledge Distillation

A technique for transferring capabilities from a large 'teacher' model to a smaller 'student' model, typically by having the teacher generate a synthetic dataset that the student is fine-tuned on. Distilled models can match or exceed teacher performance on specific tasks while being much cheaper to deploy. Common in 2024-2025 for creating efficient specialized models.
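
The definition above describes dataset-style distillation; the other classic variant trains the student to match the teacher's softened output distribution directly. A minimal self-contained PyTorch sketch of that logit-matching loss (random tensors stand in for real teacher and student forward passes; the temperature value is illustrative):

import torch
import torch.nn.functional as F

batch, vocab = 4, 32_000
teacher_logits = torch.randn(batch, vocab)                      # from the frozen teacher
student_logits = torch.randn(batch, vocab, requires_grad=True)  # from the trainable student

T = 2.0  # temperature softens both distributions so the student sees the teacher's full ranking
loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * (T * T)  # the T^2 factor keeps gradient magnitudes comparable across temperatures
loss.backward()
print(float(loss))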

Next concepts to explore

Keep the learning chain moving instead of stopping at a single definition.

These are the next concepts worth opening if you want this term to make more sense within a real Solana workflow.

AI / ML

Inference

The process of running a trained model on new inputs to generate predictions or outputs. Inference is the 'using' phase (vs. training). Inference cost depends on model size, input/output token count, and hardware (GPUs/TPUs). API providers (Anthropic, OpenAI) charge per token for inference. On-device inference (llama.cpp, GGUF) runs locally without API calls.

AI / ML

Open-Source AI Models

AI models with publicly released weights that can be downloaded, modified, and self-hosted. Notable open models: Llama 3 (Meta), Mistral, Falcon, Gemma (Google), Phi (Microsoft). Open models enable privacy (data stays local), customization (fine-tuning), and cost control. Trade-off: generally less capable than frontier proprietary models but rapidly improving.

AI / ML

Knowledge Distillation

A technique for transferring capabilities from a large 'teacher' model to a smaller 'student' model, typically by having the teacher generate a synthetic dataset that the student is fine-tuned on. Distilled models can match or exceed teacher performance on specific tasks while being much cheaper to deploy. Common in 2024-2025 for creating efficient specialized models.

AI / ML

RAG (Retrieval-Augmented Generation)

An AI architecture that combines LLMs with external knowledge retrieval. Instead of relying solely on training data, RAG systems retrieve relevant documents from a knowledge base (using embeddings and vector search), then provide them as context to the LLM. RAG reduces hallucinations and enables up-to-date responses. Useful for blockchain documentation bots and developer assistants.
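
As a minimal sketch of the retrieve-then-prompt loop (NumPy only; the embed function is a hypothetical stand-in that returns pseudo-random unit vectors just so the script runs, so its rankings are not semantically meaningful -- a real system would call an embedding model here):

import numpy as np

def embed(text):
    # Hypothetical stand-in: deterministic-within-a-run pseudo-random unit vector per text.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

docs = [
    "Quantization reduces weight precision to shrink models.",
    "GGUF is a file format used by llama.cpp.",
    "Rent-exempt accounts on Solana hold a minimum balance.",
]
doc_vecs = np.stack([embed(d) for d in docs])

query = "What file format does llama.cpp use?"
scores = doc_vecs @ embed(query)                       # cosine similarity (unit vectors)
top_k = [docs[i] for i in np.argsort(scores)[::-1][:2]]

# The retrieved passages become grounded context for the LLM call.
prompt = "Context:\n" + "\n".join(top_k) + f"\n\nQuestion: {query}\nAnswer using only the context."
print(prompt)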

Related terms

Follow the concepts that actually give this term context.

Glossary entries become useful when they are connected. These links are the shortest path to adjacent ideas.

AI / ML · inference

Inference

The process of running a trained model on new inputs to generate predictions or outputs. Inference is the 'using' phase (vs. training). Inference cost depends on model size, input/output token count, and hardware (GPUs/TPUs). API providers (Anthropic, OpenAI) charge per token for inference. On-device inference (llama.cpp, GGUF) runs locally without API calls.

AI / ML · open-source-ai

Open-Source AI Models

AI models with publicly released weights that can be downloaded, modified, and self-hosted. Notable open models: Llama 3 (Meta), Mistral, Falcon, Gemma (Google), Phi (Microsoft). Open models enable privacy (data stays local), customization (fine-tuning), and cost control. Trade-off: generally less capable than frontier proprietary models but rapidly improving.

AI / ML · distillation

Knowledge Distillation

A technique for transferring capabilities from a large 'teacher' model to a smaller 'student' model, typically by having the teacher generate a synthetic dataset that the student is fine-tuned on. Distilled models can match or exceed teacher performance on specific tasks while being much cheaper to deploy. Common in 2024-2025 for creating efficient specialized models.

More in this category

Stay in the same layer and keep building context.

These entries live alongside the current term and help the page feel like part of a broader knowledge graph rather than a dead end.

AI / ML

LLM (Large Language Model)

A neural network trained on vast text corpora to understand and generate human language. LLMs (GPT-4, Claude, Llama, Gemini) use transformer architectures with billions of parameters. They power chatbots, code generation, summarization, and reasoning tasks. In blockchain development, LLMs assist with smart contract writing, audit review, documentation, and code explanation.

AI / ML

Transformer

The neural network architecture underlying modern LLMs, introduced in 'Attention Is All You Need' (2017). Transformers use self-attention mechanisms to process input sequences in parallel (unlike recurrent networks). Key components: multi-head attention, positional encoding, feedforward layers, and layer normalization. Variants include encoder-only (BERT), decoder-only (GPT), and encoder-decoder (T5).

AI / ML

Attention Mechanism

A neural network component that allows models to weigh the relevance of different parts of the input when producing output. Self-attention computes query-key-value dot products across all positions, enabling each token to 'attend' to every other token. Multi-head attention runs multiple attention functions in parallel. Attention is O(n²) in sequence length, driving context window research.
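
A minimal self-contained NumPy sketch of single-head scaled dot-product attention (toy dimensions; real implementations add learned Q/K/V projections, causal masking, and multiple heads):

import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

seq_len, d_k = 8, 16                 # the score matrix is seq_len x seq_len, hence O(n^2)
rng = np.random.default_rng(0)
Q = rng.standard_normal((seq_len, d_k))
K = rng.standard_normal((seq_len, d_k))
V = rng.standard_normal((seq_len, d_k))

scores = Q @ K.T / np.sqrt(d_k)      # every position scores every other position
weights = softmax(scores)            # each row sums to 1: how much this token attends to each
output = weights @ V                 # weighted mix of value vectors
print(output.shape)                  # (8, 16)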

AI / ML

Foundation Model

A large AI model trained on broad data that can be adapted for many downstream tasks. Foundation models (GPT-4, Claude, Llama 3, Gemini) are pre-trained on internet-scale text/code and can be fine-tuned, prompted, or used via APIs for specific applications. The term emphasizes that one base model serves as the foundation for diverse use cases rather than training task-specific models.