Agent frameworks overview
A comprehensive overview of the AI agent framework landscape, covering single-agent, multi-agent, graph-based, and native approaches, with a guide on how to choose the right framework.
Introductory content, no prior AI knowledge needed
The tools and actions available in an agent's context — their types and schemas, and how agents select which tool to use.
Systems that perceive, reason, and act toward goals.
AI for perception, planning, and control in robotics.
Ethical principles and governance for AI.
Core concepts in artificial intelligence and machine learning.
Ensuring AI systems are robust, aligned, and safe.
Anthropic as a developer platform — Claude model family, Messages API, tool use, extended thinking, prompt caching, and long context.
Agent-first IDE for autonomous execution and vibe coding.
Sources and mitigation of bias in ML systems.
How ChatGPT and conversational LLMs work.
Anthropic's instruction-following LLM with long context and safety.
Text-to-image generation with diffusion and language.
Google's multimodal LLM family with native multimodal and scale tiers.
xAI's LLM with real-time knowledge and reasoning.
Anthropic's agentic AI coding assistant available as CLI, VS Code/JetBrains extension, and web app — capable of autonomous multi-step task execution across your entire codebase.
Project-level and global instruction files that customize Claude Code's behavior — what they are, where they live, how they are loaded, and how to write effective ones.
AI for images and video.
AI-powered code editor and pair-programming tool.
Deep neural networks and representation learning.
Dense vector representations for text and retrieval.
Making AI decisions interpretable and explainable.
Learning from very few examples.
AI pair programmer for code completion and generation.
Google's multimodal AI platform — the Gemini model family, AI Studio, and Vertex AI integration for enterprise-grade generative AI.
Getting started with AI Summary Hub and an overview of AI fields.
AI IDE with spec-driven development and agent hooks from prototype to production.
What LLMs are, how they are trained and used.
Introduction to machine learning — supervised, unsupervised, and reinforcement learning.
How max tokens, stop sequences, and repetition penalties control the length, boundaries, and quality of LLM-generated text.
Overview of MLOps, why it matters, and how it bridges machine learning and production engineering.
Overview of AI model providers — API-based, open-weights, and hybrid approaches.
AI for understanding and generating human language.
Introduction to artificial neural networks and their building blocks.
OpenAI as a developer platform — GPT-4o, o1/o3 reasoning, DALL-E, Whisper, API features, function calling, and SDKs.
Deep learning framework with dynamic computation graphs.
Learning from rewards and sequential decision-making.
Converting speech to text and related audio tasks.
System messages, role prompting, and contextual prompting are foundational techniques for steering LLM behavior — establishing persistent instructions, personas, and background knowledge before the conversation begins.
How temperature, Top-K, and Top-P sampling parameters control randomness and creativity in LLM outputs.
Deep learning framework by Google.
Reusing pretrained models for new tasks.
Transformer architecture and self-attention mechanisms.
Iterative, AI-assisted coding driven by intent and quick feedback.
Performing tasks without task-specific training examples.