Case study: BART

Definition

BART (Bidirectional and Auto-Regressive Transformers) is a transformer encoder-decoder model from Meta (Facebook AI). It is pretrained with denoising objectives (e.g. token masking and deletion, text infilling, sentence permutation) and fine-tuned for summarization, translation, and other conditional generation tasks.

BART represents an earlier generation of large sequence-to-sequence models; Google’s Gemini and other modern LLMs build on different architectures (decoder-only, multimodal) but share the goal of strong text understanding and generation. Typical use cases are summarization, question answering, and conditional text generation, where the encoder-decoder structure is beneficial.
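The denoising pretraining setup can be sketched in a few lines of Python. This is a simplified illustration of the noising transforms named above, not the actual pretraining code: real BART masks contiguous spans (text infilling) and uses tuned corruption rates, while the 15%/10% rates and the `<mask>` string below are illustrative.

```python
import random

def corrupt_tokens(tokens, seed=0):
    """Apply two BART-style noising transforms at the token level:
    replace some tokens with <mask>, and delete others outright.
    The rates (15% mask, 10% delete) are illustrative."""
    rng = random.Random(seed)
    noised = []
    for tok in tokens:
        r = rng.random()
        if r < 0.15:
            noised.append("<mask>")  # token masking
        elif r < 0.25:
            pass                     # token deletion
        else:
            noised.append(tok)       # keep the token unchanged
    return noised

def permute_sentences(sentences, seed=0):
    """Sentence permutation: shuffle sentence order; during
    reconstruction the model must restore the original order."""
    rng = random.Random(seed)
    shuffled = list(sentences)
    rng.shuffle(shuffled)
    return shuffled
```

The training pair is then (corrupted input, original text): the encoder reads the noised sequence and the decoder is trained to reproduce the clean one.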

How it works

  • Encoder: a BERT-like bidirectional encoder processes the source sequence.
  • Decoder: a causal (auto-regressive) decoder attends to the encoder output and to previous decoder positions to generate the target.
  • Pretraining: corrupt the input (mask, delete, permute) and train the model to reconstruct the original; this denoising objective learns robust representations.
  • Fine-tuning: add a task-specific head or use the sequence output for summarization (e.g. CNN/DailyMail), translation, or QA.
  • Inference: encode the source once, then decode token by token.
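The inference step above can be sketched generically. `encode` and `next_token` below are hypothetical stand-ins for the real encoder and decoder; the point is the control flow: run the encoder once, then extend the target auto-regressively until an end-of-sequence token appears.

```python
def greedy_decode(encode, next_token, bos="<s>", eos="</s>", max_len=50):
    """Generic encoder-decoder greedy decoding.
    encode():             run the encoder once over the source.
    next_token(mem, tgt): one decoder step -- pick the next token
                          from the encoder memory and target prefix."""
    memory = encode()           # encoder output, computed a single time
    target = [bos]
    for _ in range(max_len):
        tok = next_token(memory, target)
        target.append(tok)
        if tok == eos:          # stop once the decoder emits end-of-sequence
            break
    return target
```

A toy "copy the source" model shows the loop in action; a real model would replace `next_token` with a decoder forward pass plus argmax (or beam search):

```python
source = ["a", "b", "c"]

def encode():
    return source  # stand-in for the encoder's hidden states

def next_token(memory, target):
    i = len(target) - 1
    return memory[i] if i < len(memory) else "</s>"

greedy_decode(encode, next_token)  # ["<s>", "a", "b", "c", "</s>"]
```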

Use cases

BART-style encoder-decoder models fit conditional generation and understanding tasks with a clear source and target.

  • Document and dialogue summarization
  • Conditional generation (e.g. sentence completion, data-to-text)
  • Fine-tuning for domain-specific NLU and generation
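Fine-tuning for these tasks follows the standard seq2seq teacher-forcing layout: the decoder input is the target shifted one position right, and the labels are the unshifted target. A minimal sketch of preparing one example (the token IDs and the `start_id` default are illustrative, not BART's actual vocabulary):

```python
def make_seq2seq_batch(source_ids, target_ids, start_id=0):
    """Prepare one training example for teacher-forced fine-tuning.
    The decoder sees the target shifted right, so at step t it
    predicts labels[t] from decoder positions < t plus the encoder
    output over source_ids."""
    encoder_input = list(source_ids)                    # encoder reads the source as-is
    decoder_input = [start_id] + list(target_ids[:-1])  # target shifted right
    labels = list(target_ids)                           # what the decoder must predict
    return encoder_input, decoder_input, labels
```

The cross-entropy loss at position t then compares the decoder's prediction against `labels[t]`, which is exactly the token the decoder will have to generate on its own at inference time.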
