AI / ML

State Space Model (Mamba)

An alternative to the Transformer architecture that processes sequences with linear O(n) complexity instead of quadratic O(n²) attention, enabling efficient handling of very long sequences. Mamba introduced selective state spaces where the model dynamically filters information based on content. Hybrid architectures like Jamba combine SSM efficiency with attention's retrieval strength.

ID: state-space-model · Aliases: SSM, Mamba
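
To make the linear-time claim concrete, here is a minimal Python/NumPy sketch of a selective state-space scan. All weight names and shapes are illustrative assumptions; this shows the recurrence idea, not Mamba's actual hardware-aware kernel.

import numpy as np

def selective_ssm(x, d_state=16, seed=0):
    """Toy selective SSM scan: one state update per token, O(n) overall.

    x: (seq_len, d_model). Returns y with the same shape.
    Illustrative sketch only, not the real Mamba implementation.
    """
    rng = np.random.default_rng(seed)
    seq_len, d_model = x.shape
    # Projections that make A, B, C depend on the current token:
    # this input dependence is the "selective" part.
    W_a = rng.normal(0, 0.1, (d_model, d_state))
    W_b = rng.normal(0, 0.1, (d_model, d_state))
    W_c = rng.normal(0, 0.1, (d_model, d_state))
    W_out = rng.normal(0, 0.1, (d_state, d_model))

    h = np.zeros(d_state)              # fixed-size state, independent of n
    y = np.zeros_like(x)
    for t in range(seq_len):           # single pass over the sequence
        a = 1.0 / (1.0 + np.exp(-(x[t] @ W_a)))  # decay gate in (0, 1)
        h = a * h + x[t] @ W_b         # keep or forget, then write
        y[t] = ((x[t] @ W_c) * h) @ W_out  # content-dependent read
    return y

y = selective_ssm(np.random.default_rng(1).normal(size=(512, 32)))
print(y.shape)  # (512, 32); cost and memory grow linearly with length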


Mental model

Use the short analogy first to reason better about the term when it appears in code, docs, or prompts.

Think of it as a piece of the context or inference stack used in products with agents or LLMs.

Technical context

Place the term within the Solana layer it lives in so you can reason about it better.

LLMs, RAG, embeddings, inference, and agent-oriented primitives.

Why builders care about this

Turn the term from vocabulary into something operational for product and engineering.

This term unlocks adjacent concepts quickly, so it works best when you treat it as a connection point, not an isolated definition.

AI handoff

Use this compact block when you want to give grounded context to an agent or assistant without dumping the whole page.

State Space Model (Mamba) (state-space-model)
Category: AI / ML
Definition: An alternative to the Transformer architecture that processes sequences with linear O(n) complexity instead of quadratic O(n²) attention, enabling efficient handling of very long sequences. Mamba introduced selective state spaces where the model dynamically filters information based on content. Hybrid architectures like Jamba combine SSM efficiency with attention's retrieval strength.
Aliases: SSM, Mamba
Related: Transformer, Mixture of Experts (MoE), Attention Mechanism

Concept graph

See the term as part of a network, not as a dead-end definition.

These branches show which concepts this term touches directly and what lies one layer beyond them.

Branch

Transformer

The neural network architecture underlying modern LLMs, introduced in 'Attention Is All You Need' (2017). Transformers use self-attention mechanisms to process input sequences in parallel (unlike recurrent networks). Key components: multi-head attention, positional encoding, feedforward layers, and layer normalization. Variants include encoder-only (BERT), decoder-only (GPT), and encoder-decoder (T5).
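
As a rough illustration of how those components stack, here is a single-head, decoder-style block in NumPy. It is a sketch under simplifying assumptions (one head, random weights, no positional encoding or dropout), not a faithful reimplementation of any production model.

import numpy as np

rng = np.random.default_rng(0)
d = 64                                 # model width
x = rng.normal(size=(10, d))           # 10 tokens, already embedded

def layer_norm(z, eps=1e-5):
    return (z - z.mean(-1, keepdims=True)) / (z.std(-1, keepdims=True) + eps)

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

# Illustrative random weights; a real model learns all of these.
Wq, Wk, Wv, Wo = (rng.normal(0, d ** -0.5, (d, d)) for _ in range(4))
W1 = rng.normal(0, d ** -0.5, (d, 4 * d))
W2 = rng.normal(0, d ** -0.5, (4 * d, d))

def block(x):
    # Self-attention sublayer with residual connection.
    h = layer_norm(x)
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    scores = q @ k.T / np.sqrt(d)      # (n, n): all pairs of positions
    mask = np.triu(np.full(scores.shape, -np.inf), k=1)  # causal (decoder-only)
    x = x + softmax(scores + mask) @ v @ Wo
    # Position-wise feedforward sublayer with residual connection.
    h = layer_norm(x)
    return x + np.maximum(h @ W1, 0.0) @ W2  # ReLU MLP

print(block(x).shape)  # (10, 64); real models stack dozens of these blocks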

Branch

Mixture of Experts (MoE)

A neural network architecture that routes each input to a subset of specialized 'expert' sub-networks rather than activating all parameters, dramatically improving efficiency. Only a fraction of total parameters are active per token (e.g., DeepSeek-V3 has 671B total but ~37B active). MoE enables training much larger models at manageable compute costs. Used in production models like Mixtral, Jamba, and DeepSeek-V3.
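
A hedged sketch of the routing idea in NumPy (toy sizes, experts reduced to plain linear maps, names invented for illustration): a gate scores the experts per token, only the top-k actually run, and their outputs are mixed by the gate weights.

import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 32, 8, 2

# Each "expert" is just a small linear map in this toy version.
experts = [rng.normal(0, d ** -0.5, (d, d)) for _ in range(n_experts)]
W_gate = rng.normal(0, d ** -0.5, (d, n_experts))

def moe_layer(x):
    """Route each token to its top-k experts; the rest stay inactive."""
    out = np.zeros_like(x)
    logits = x @ W_gate                          # (n_tokens, n_experts)
    for i, token in enumerate(x):
        top = np.argsort(logits[i])[-top_k:]     # experts chosen for this token
        w = np.exp(logits[i][top])
        w /= w.sum()                             # softmax over the chosen experts
        # Only top_k of n_experts matmuls run per token: most parameters
        # stay inactive, which is where the efficiency comes from.
        out[i] = sum(wi * (token @ experts[e]) for wi, e in zip(w, top))
    return out

print(moe_layer(rng.normal(size=(4, d))).shape)  # (4, 32)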

Branch

Attention Mechanism

A neural network component that allows models to weigh the relevance of different parts of the input when producing output. Self-attention computes query-key-value dot products across all positions, enabling each token to 'attend' to every other token. Multi-head attention runs multiple attention functions in parallel. Attention is O(n²) in sequence length, driving context window research.
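
To see where the O(n²) comes from, here is the core computation in NumPy (shapes chosen for illustration): the score matrix has one entry per pair of positions, so its memory and compute grow quadratically with sequence length.

import numpy as np

def self_attention(q, k, v):
    """q, k, v: (n, d_k). The (n, n) score matrix is the quadratic term."""
    n, d_k = q.shape
    scores = q @ k.T / np.sqrt(d_k)                 # n^2 query-key dot products
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v            # each output token mixes all value vectors

rng = np.random.default_rng(0)
n, d_k = 1024, 64
q, k, v = (rng.normal(size=(n, d_k)) for _ in range(3))
print(self_attention(q, k, v).shape)  # (1024, 64); doubling n quadruples the score matrix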

Next concepts to explore

Continue the learning chain instead of stopping at a single definition.

These are the next concepts worth opening if you want this term to make more sense inside a real Solana workflow.

AI / ML

Transformer

AI / ML

Mixture of Experts (MoE)

AI / ML

Attention Mechanism

AI / ML

Structured Output

An LLM capability that constrains model output to conform to a predefined schema, typically JSON or XML, enabling reliable programmatic consumption of responses. Supported natively by OpenAI, Anthropic, and others via API parameters that enforce valid JSON matching a JSON Schema. Research shows strict format constraints can degrade reasoning quality, so a best practice is to include a 'reasoning' field before answer fields in the schema.
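
As an illustration of the reasoning-first pattern described above, here is a JSON Schema sketch checked with the jsonschema package. The field names are hypothetical, and the API parameter that enforces such a schema differs by provider, so treat this as the shape of the idea rather than any specific vendor's interface.

import json
from jsonschema import validate  # pip install jsonschema

# Hypothetical schema: "reasoning" comes before the answer fields so the
# model can think before committing, per the best practice above.
schema = {
    "type": "object",
    "properties": {
        "reasoning": {"type": "string"},   # free-form working-out first
        "answer": {"type": "string"},      # the field your code consumes
        "confidence": {"type": "number", "minimum": 0, "maximum": 1},
    },
    "required": ["reasoning", "answer"],
    "additionalProperties": False,
}

# A response constrained to the schema parses and validates reliably:
raw = '{"reasoning": "Mamba scans once per token.", "answer": "O(n)", "confidence": 0.9}'
data = json.loads(raw)
validate(instance=data, schema=schema)  # raises ValidationError on mismatch
print(data["answer"])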

Commonly confused with

Terms that are close in vocabulary, acronym, or conceptual neighborhood.

These entries are easy to mix up when you read fast, prompt an LLM, or are ramping up on a new Solana layer.

AI / ML · reasoning-model

Reasoning Model

A class of LLMs trained with reinforcement learning to generate step-by-step internal chain-of-thought before producing a final answer, enabling stronger performance on complex math, coding, and logic tasks. Pioneered by OpenAI's o1 (September 2024) and followed by o3, DeepSeek-R1, and Claude's extended thinking mode. Unlike standard LLMs that answer directly, reasoning models produce a variable-length internal CoT, allowing controllable compute at inference time.

Aliases: Thinking Model, o1
AI / ML · diffusion-model

Diffusion Model

A generative AI architecture that creates images, video, or audio by learning to reverse a noise-adding process—starting from pure noise and iteratively denoising to produce coherent output. Diffusion models power leading image generators (Stable Diffusion, DALL-E 3, Midjourney) and video generators (Sora). Key variants include latent diffusion (operating in compressed space) and diffusion transformers (DiT).

Aliases: Latent Diffusion, DiT
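
A toy sketch of the iterative denoising loop in NumPy, with a stand-in noise predictor. Real samplers use a trained network and carefully derived schedules, so every constant and function name here is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)
T = 50
betas = np.linspace(1e-4, 0.02, T)      # noise schedule (illustrative values)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x, t):
    """Stand-in for the trained denoiser eps_theta(x, t); NOT a real model."""
    return 0.1 * x

x = rng.normal(size=(8, 8))             # start from pure noise
for t in reversed(range(T)):            # walk the noise process backwards
    eps = predict_noise(x, t)
    # DDPM-style mean update: strip out the predicted noise component.
    x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:
        x += np.sqrt(betas[t]) * rng.normal(size=x.shape)  # sampling noise
print(x.shape)  # with a real denoiser, x would now be a coherent sample
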
AI / ML · foundation-model

Foundation Model

A large AI model trained on broad data that can be adapted for many downstream tasks. Foundation models (GPT-4, Claude, Llama 3, Gemini) are pre-trained on internet-scale text/code and can be fine-tuned, prompted, or used via APIs for specific applications. The term emphasizes that one base model serves as the foundation for diverse use cases rather than training task-specific models.

Related terms

Follow the concepts that actually give this term context.

Glossary entries only become useful when they are connected. These links are the shortest path to adjacent ideas.

AI / ML · transformer

Transformer

AI / ML · mixture-of-experts

Mixture of Experts (MoE)

AI / ML · attention-mechanism

Attention Mechanism

More in this category

Stay in the same layer and keep building context.

These entries live alongside the current term and help the page feel like part of a larger graph, not a dead end.

AI / ML

LLM (Large Language Model)

A neural network trained on vast text corpora to understand and generate human language. LLMs (GPT-4, Claude, Llama, Gemini) use transformer architectures with billions of parameters. They power chatbots, code generation, summarization, and reasoning tasks. In blockchain development, LLMs assist with smart contract writing, audit review, documentation, and code explanation.

AI / ML

Transformer

AI / ML

Attention Mechanism

AI / ML

Foundation Model