Beginner
Direct Model Interface - The Simplest AI Integration Pattern
The foundational pattern: user input goes to a model API, and the model's response comes back. When this is enough and when you need something more.
API - Application Programming Interface
What an API is, REST vs GraphQL vs gRPC, authentication patterns, rate limiting, and how AI services are accessed through standardized API …
Binary and Number Systems in Computing
How computers represent all data in base-2 (binary), why transistors make this fundamental, and how number systems connect to AI model …
Cost Optimization (Well-Architected Pillar)
The Well-Architected pillar covering right-sizing, reserved capacity, spot instances, and cost allocation - and how it applies to AI …
Data Structures for AI Applications
Arrays, hash maps, trees, graphs, queues, and vector stores - how the choice of data structure shapes the performance of AI pipelines.
Floating-Point Arithmetic and Model Precision
IEEE 754, FP32, FP16, BF16, and INT8 - how number precision determines model size, inference speed, and accuracy tradeoffs in AI deployment.
Object-Oriented Programming (OOP)
Classes, objects, inheritance, encapsulation, and polymorphism - how OOP concepts apply directly to AI frameworks like CrewAI and Pydantic.
Operational Excellence (Well-Architected Pillar)
The Well-Architected pillar covering runbooks, automation, observability, incident response, and continuous improvement - and how it applies …
Performance Efficiency (Well-Architected Pillar)
The Well-Architected pillar covering compute selection, storage, database, and networking choices - and how it applies to AI workloads …
Reliability (Well-Architected Pillar)
The Well-Architected pillar covering fault tolerance, disaster recovery, health checks, and scaling - and how it applies to AI workloads …
Security (Well-Architected Pillar)
The Well-Architected pillar covering IAM, encryption, network security, and detection - and how it applies to AI workloads including …
Sustainability (Well-Architected Pillar)
The Well-Architected pillar added in 2021 covering efficient resource usage, managed services, and data lifecycle management - and how it …
Amazon Polly - Text-to-Speech for Applications
Using Amazon Polly to generate natural-sounding speech from text in AI applications, with SSML control and neural voice options.
Amazon S3 - Object Storage for AI Pipelines
How Amazon S3 functions as the storage backbone for AI data pipelines: ingest, staging, output, and lifecycle management.
Amazon Translate - Neural Machine Translation
Using Amazon Translate for real-time and batch document translation in multilingual AI applications.
CI/CD - Continuous Integration and Continuous Delivery
What CI/CD is, why it matters for AI projects, the tools involved, and the AI-specific considerations that extend standard pipelines.
GitHub Actions - CI/CD for AI Projects
GitHub Actions workflow syntax, Hugo deployment pattern, Python testing pipelines, Docker builds, Terraform plan/apply, and model evaluation …
Open Practice Library
What the Open Practice Library is, its key practices for AI projects, and how it structures discovery and delivery for teams building …
Shared Responsibility Model
What the shared responsibility model is, how AWS, Azure, and GCP divide security duties, and special considerations for AI and ML workloads.
About This Wiki
What the AI Solutions Wiki is, who it is for, and how the content is organized.
Agentic AI
What makes AI agentic vs assistive, autonomous task execution, tool use, planning capabilities, and current limitations.
AI Agents - Autonomous Task Execution
What AI agents are, how they differ from simple LLM calls, the key design patterns, and what makes agents fail in production.
AI for Small Businesses - Where to Start
Low-cost AI tools, quick wins in email automation and document processing, and guidance on when to invest in custom solutions.
AI Spark: AI-Assisted Document Review for Legal Teams
How AI can reduce contract review time by surfacing non-standard clauses, missing provisions, and high-risk language - a practical build …
AI Spark: Auto-Classify and Route Incoming Emails
Use AI to classify incoming emails by type, urgency, and intent, then route them to the right team or workflow automatically.
AI Spark: Automate Invoice Processing in 3 Steps
A practical AI spark for automating invoice data extraction - the problem, the approach, and a three-step build path.
AI Spark: Never Write Meeting Notes Again
Automate meeting summaries and action item extraction using transcription and LLM post-processing - a practical three-step build.
Amazon Bedrock - Enterprise AI Foundation
A comprehensive reference for Amazon Bedrock: available models, key features, use cases, and pricing patterns for enterprise teams.
Budgeting an AI Project - What It Really Costs
A practical cost breakdown for enterprise AI projects - from prototype to production - covering model inference, infrastructure, data, …
Claude by Anthropic - Enterprise AI Assistant
What makes Claude useful for enterprise applications, model tiers, key strengths, access options including through Amazon Bedrock, and …
Claude vs GPT - Choosing an Enterprise LLM
A practical comparison of Anthropic Claude and OpenAI GPT for enterprise applications - capability differences, access options, compliance …
Computer Vision
What computer vision is, how it works in AI applications, and how Amazon Rekognition, Azure Computer Vision, and GCP Vision AI compare.
Container Registry
What container registries are, how ECR, Docker Hub, Azure ACR, and GCP Artifact Registry compare, and patterns for AI workload container …
Daily AI Sparks - One Automation Idea Per Day
How the Daily AI Sparks series works and how to use short automation ideas to find your first AI quick win.
FFmpeg - Video Processing Swiss Army Knife
Using FFmpeg in AWS Lambda layers and EC2 for video processing in AI pipelines, including common operations and integration with Rekognition …
Foundation Models
What foundation models are, how they differ from task-specific models, the major model families, and the practical implications for …
Getting Started with Amazon Bedrock for Enterprise AI
A practical introduction to Amazon Bedrock: what it is, which models are available, how pricing works, and how to get your first use case …
How to Choose Your First AI Use Case
A practical framework for selecting the right first AI use case - prioritizing for quick wins, avoiding common traps, and setting up for a …
How to Get AWS Funding for Your AI Project
A practical guide to AWS PoC funding (up to 10,000 EUR) and migration funding (up to 400,000 EUR) - eligibility, application process, and …
Hugo - Static Site Generator
Using Hugo to build fast, maintainable documentation sites and AI solution landing pages, with GitHub Pages and Amplify deployment.
Inference - Running AI Models in Production
What inference means in AI context, the key operational parameters that matter (latency, throughput, cost), and the main deployment options …
Infrastructure as Code (IaC)
What Infrastructure as Code is, and how Terraform, AWS CDK, and CloudFormation compare for managing AI project infrastructure.
Knowledge Base (AI)
What an AI knowledge base is, how it differs from a traditional knowledge base, vector stores, and RAG integration.
LLM - Large Language Model
What large language models are, how they work at a high level, key characteristics, and what they can and cannot do reliably.
Model Cards - AI Transparency Documentation
What model cards document, why they matter for AI governance, and how to create one.
Prompt Engineering
What prompt engineering is, why it matters in enterprise AI applications, and the most effective techniques for getting reliable outputs …
RAG - Retrieval Augmented Generation
What RAG is, how it works, when to use it, and the common implementation pitfalls that reduce retrieval quality.
Serverless Computing
What serverless computing means, how Lambda, Fargate, and Step Functions fit AI workloads, and when serverless is and is not the right …
Speech-to-Text (STT)
What speech-to-text technology is, how Amazon Transcribe, Azure Speech, and GCP Speech-to-Text compare, and key features like speaker …
Text-to-Speech (TTS)
What text-to-speech technology is, how Amazon Polly, Azure Speech, and GCP Text-to-Speech compare, and key features like neural voices and …
Tokenization in AI
What tokens are, how different models tokenize text, and why token count matters for cost and context limits.
Using Notion as an AI Backend - Databases, APIs, and Automation
Notion API for structured data, MCP integration, and using Notion databases as knowledge stores for AI agents. When it works and when to …
Time Complexity and Big-O Notation
An introduction to Big-O notation and how it describes the asymptotic behavior of algorithms.