
Chapter 2: VSCode and AI Integration

Maximizing AI Effectiveness for DevOps Engineers

Part of: The DevOps Engineer's Guide to Effective AI Usage


Table of Contents

  1. Executive Summary
  2. Part 1: Why VSCode for AI-Enhanced DevOps
  3. Part 2: AI Extension Landscape – Options & Comparison
  4. Part 3: GitHub Copilot – Setup & Best Practices
  5. Part 4: Continue.dev – Open-Source Flexibility
  6. Part 5: Multi-Model Strategy – Qwen, ChatGPT, DeepSeek
  7. Part 6: Token Optimization & Usage Modes
  8. Part 7: DevOps-Specific VSCode Workflows
  9. Part 8: Security & Compliance Considerations
  10. Part 9: Quick Reference & Troubleshooting
  11. Appendix: Configuration Templates

1. Executive Summary

Why This Chapter Exists

VSCode is the most popular IDE for DevOps engineers (60%+ market share). Integrating AI effectively into your VSCode workflow can deliver:

  • 3-5x faster code generation – Infrastructure as Code, scripts, pipelines
  • 50% fewer context switches – AI assistance directly in your editor
  • Better code quality – Real-time suggestions, security scanning, compliance checks
  • Reduced cognitive load – AI handles boilerplate, you focus on architecture

What You'll Learn

| Section | What You'll Gain | Time to Apply |
| --- | --- | --- |
| Part 1: Why VSCode | Understand the AI-IDE ecosystem | 15 minutes reading |
| Part 2: Extension Landscape | Choose the right AI extensions for your needs | 30 minutes setup |
| Part 3: GitHub Copilot | Master the most popular AI coding assistant | 1 hour setup + practice |
| Part 4: Continue.dev | Open-source flexibility with multiple models | 1 hour setup |
| Part 5: Multi-Model Strategy | Use different models for different tasks | Ongoing optimization |
| Part 6: Token Optimization | Reduce costs, improve response quality | Immediate savings |
| Part 7: DevOps Workflows | DevOps-specific VSCode + AI patterns | Use immediately |
| Part 8: Security & Compliance | Keep your code secure when using AI | Critical for production |
| Part 9: Quick Reference | One-page checklist for daily use | Bookmark this |

How This Chapter Builds on Chapter 1

┌─────────────────────────────────────────────────────────────┐
│ CHAPTER 1 → CHAPTER 2 CONNECTION                          │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ Chapter 1 (Introduction):                                  │
│ • Understanding AI paradigms (symbolic vs. data-driven)    │
│ • Prompt engineering framework                             │
│ • DevOps-specific prompt patterns                          │
│                                                             │
│ Chapter 2 (VSCode Integration):                            │
│ • Applying Chapter 1 concepts IN your editor               │
│ • Reducing friction between thinking and coding            │
│ • Making AI assistance seamless and contextual             │
│                                                             │
│ Connection:                                                │
│ Chapter 1 teaches you WHAT to prompt                       │
│ Chapter 2 teaches you WHERE and HOW to prompt efficiently  │
│                                                             │
└─────────────────────────────────────────────────────────────┘

Quick Start: 30-Minute Setup

If you want to start using AI in VSCode immediately:

□ Minute 1-5: Install Continue.dev extension
□ Minute 6-10: Configure API keys (OpenAI, Anthropic, or local models)
□ Minute 11-20: Configure custom prompts for DevOps tasks
□ Minute 21-30: Test with a simple Terraform or bash script task

You'll be productive in 30 minutes. Read the rest for optimization.
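
The first quick-start step can be scripted from a terminal. A minimal sketch, assuming the VSCode `code` CLI is on your PATH and that `Continue.continue` is still the current Marketplace identifier:

```shell
# Quick-start sketch: install Continue.dev and prepare its config directory.
# Assumes the VSCode `code` CLI is installed and on PATH.
quickstart() {
  code --install-extension Continue.continue   # Minute 1-5: Continue.dev
  mkdir -p ~/.continue                         # config.json lives here
}
```

Run `quickstart`, then continue with the API-key and custom-prompt steps above.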

2. Part 1: Why VSCode for AI-Enhanced DevOps

2.1 The DevOps Engineer's Editor Landscape

┌─────────────────────────────────────────────────────────────┐
│ IDE MARKET SHARE FOR DEVOPS ENGINEERS (2024)              │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ VSCode: ████████████████████████████████ 60%+             │
│ IntelliJ: ████████ 15%                                     │
│ Vim/Neovim: ██████ 10%                                     │
│ Other: ████████ 15%                                        │
│                                                             │
│ Why VSCode Dominates DevOps:                               │
│ • Extensive extension ecosystem (including AI)             │
│ • Lightweight, fast startup                                │
│ • Excellent terminal integration                           │
│ • Strong remote development support                        │
│ • Free and open-source                                     │
│                                                             │
└─────────────────────────────────────────────────────────────┘

2.2 Why AI Integration Matters for DevOps

| DevOps Task | Without AI | With AI in VSCode | Improvement |
| --- | --- | --- | --- |
| Write Terraform | 30-60 minutes per module | 10-20 minutes | 3x faster |
| Write bash scripts | 20-40 minutes | 5-15 minutes | 3-4x faster |
| Debug CI/CD pipelines | 1-2 hours | 20-40 minutes | 2-3x faster |
| Write tests | 40-80 minutes | 15-30 minutes | 2-3x faster |
| Write documentation | 60-120 minutes | 20-40 minutes | 3x faster |
| Security review | 30-60 minutes | 10-20 minutes | 3x faster |

Key Insight: AI in VSCode reduces context switching – you don't leave your editor to ask AI questions. This alone saves 15-30 minutes per session.

2.3 The AI-Enhanced DevOps Workflow

┌─────────────────────────────────────────────────────────────┐
│ TRADITIONAL DEVOPS WORKFLOW                               │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ 1. Think about task                                        │
│ 2. Open browser → AI chat                                  │
│ 3. Copy prompt → Wait for response                         │
│ 4. Copy response → Paste in VSCode                         │
│ 5. Adapt to your context                                   │
│ 6. Test and iterate                                        │
│ 7. Repeat steps 2-6 for each question                      │
│                                                             │
│ Context switches: 4-6 per task                             │
│ Time lost to switching: 15-30 minutes per session          │
│                                                             │
└─────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────┐
│ AI-ENHANCED DEVOPS WORKFLOW (VSCode Integrated)           │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ 1. Think about task                                        │
│ 2. Open AI panel IN VSCode (Ctrl+L or Cmd+L)              │
│ 3. Ask question → Get response inline                      │
│ 4. Accept/reject suggestions directly in code              │
│ 5. Test and iterate (same context)                         │
│ 6. Repeat steps 3-5 for each question                      │
│                                                             │
│ Context switches: 0-1 per task                             │
│ Time lost to switching: 2-5 minutes per session            │
│                                                             │
└─────────────────────────────────────────────────────────────┘

2.4 Types of AI Assistance in VSCode

| Type | What It Does | Best For | Example Extensions |
| --- | --- | --- | --- |
| Inline Completion | Suggests code as you type | Boilerplate, common patterns | GitHub Copilot, Codeium |
| Chat Panel | Conversational AI assistance | Complex tasks, explanations | Continue, Copilot Chat |
| Code Review | Analyzes existing code | Security, best practices | CodeRabbit, SonarLint |
| Documentation | Generates docs from code | README, comments | Documint, AI Comment |
| Terminal Assistant | Helps with shell commands | Complex commands, debugging | ShellAI, Warp |

Recommendation: Start with Chat Panel (Continue.dev) + Inline Completion (Copilot or Codeium) for maximum coverage.


3. Part 2: AI Extension Landscape – Options & Comparison

3.1 Extension Comparison Matrix

| Extension | Cost | Models Supported | Best For | DevOps Fit |
| --- | --- | --- | --- | --- |
| GitHub Copilot | $10/month | OpenAI (GPT-4), Anthropic | Inline completion, general coding | ⭐⭐⭐⭐ |
| Continue.dev | Free (open-source) | Any (OpenAI, Anthropic, Qwen, DeepSeek, local) | Chat, multi-model flexibility | ⭐⭐⭐⭐⭐ |
| Codeium | Free tier available | Proprietary + custom | Inline completion, free alternative | ⭐⭐⭐⭐ |
| Tabnine | Free tier + paid | Proprietary + custom | Inline completion, privacy-focused | ⭐⭐⭐ |
| Cursor | Free tier + paid | OpenAI, Anthropic, custom | Full AI-first editor experience | ⭐⭐⭐⭐ |
| Sourcegraph Cody | Free tier + paid | OpenAI, Anthropic | Code search + AI assistance | ⭐⭐⭐ |
| Amazon Q Developer | Free tier + paid | Amazon Bedrock | AWS-specific DevOps tasks | ⭐⭐⭐⭐ (for AWS shops) |

3.2 Recommendation by Use Case

┌─────────────────────────────────────────────────────────────┐
│ CHOOSE YOUR AI EXTENSION STACK                            │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ [Budget-Conscious / Maximum Flexibility]                  │
│ • Primary: Continue.dev (free, multi-model)               │
│ • Inline: Codeium (free tier)                              │
│ • Total cost: $0/month                                     │
│                                                             │
│ [Best Overall Experience]                                  │
│ • Primary: Continue.dev (multi-model chat)                │
│ • Inline: GitHub Copilot (best inline suggestions)         │
│ • Total cost: $10/month                                    │
│                                                             │
│ [AWS-Heavy Shop]                                           │
│ • Primary: Amazon Q Developer (AWS integration)            │
│ • Secondary: Continue.dev (for non-AWS tasks)              │
│ • Total cost: $19/month (Q Developer Pro)                 │
│                                                             │
│ [Privacy-Focused / On-Premise]                            │
│ • Primary: Continue.dev + local models (Ollama)           │
│ • Inline: Tabnine (local mode)                             │
│ • Total cost: $0/month (hardware costs only)              │
│                                                             │
└─────────────────────────────────────────────────────────────┘

Continue.dev is the most flexible AI extension for VSCode, especially for DevOps engineers:

✅ Advantages:
• Open-source (transparent, auditable)
• Supports ANY model (OpenAI, Anthropic, Qwen, DeepSeek, local)
• Customizable prompts (perfect for DevOps templates from Chapter 1)
• Free (no subscription required)
• Active community and frequent updates
• Can switch models per task (cost optimization)

⚠️ Considerations:
• Requires more initial setup than Copilot
• You manage API keys and costs
• Inline completion not as polished as Copilot (use Codeium for inline)

Recommendation: Use Continue.dev as your primary chat interface + GitHub Copilot or Codeium for inline completion.


4. Part 3: GitHub Copilot – Setup & Best Practices

4.1 Installation & Setup

# Step 1: Install GitHub Copilot Extension
# In VSCode: Extensions → Search "GitHub Copilot" → Install

# Step 2: Sign in to GitHub
# Click Copilot icon → Sign in to GitHub

# Step 3: Verify activation
# Open any file → Start typing → Should see ghost text suggestions

# Step 4: Configure settings (settings.json)
{
  "github.copilot.enable": {
    "*": true,
    "plaintext": false,
    "markdown": false,
    "scminput": false
  },
  "github.copilot.editor.enableAutoCompletions": true,
  "github.copilot.editor.enableTabAutocompletion": true
}

4.2 Usage Modes – Reduce Token Usage

GitHub Copilot has different modes. Use the right mode for the task to reduce token usage and costs.

| Mode | When to Use | Token Usage | Example |
| --- | --- | --- | --- |
| Inline Completion | Writing code, boilerplate | Low (per-line) | Typing a Terraform resource |
| Chat Panel | Complex questions, explanations | Medium (conversation) | "How do I structure this module?" |
| Copilot Edits | Refactoring existing code | Medium (diff-based) | "Add error handling to this function" |
| Copilot Workspace | Large-scale changes | High (full context) | "Refactor this entire module" |

Cost Optimization Tip: Use Inline Completion for 80% of tasks, Chat Panel for 20% of complex tasks.

4.3 DevOps-Specific Copilot Prompts

# Terraform Module Generation
# Type this as a comment, then accept Copilot suggestion:

# Create an S3 bucket with encryption, versioning, and lifecycle policy
# - Bucket name: ${var.bucket_name}
# - Encryption: AES-256
# - Versioning: enabled
# - Lifecycle: transition to IA after 30 days, expire after 365 days

# Bash Script Template
# Type this as a comment, then accept Copilot suggestion:

# Write a bash script to backup PostgreSQL
# - Check disk space first (require 2x DB size)
# - Use pg_dump with --format=custom
# - Verify backup with pg_restore --list
# - Alert on failure via Slack webhook
# - Delete backups older than 7 days

# CI/CD Pipeline
# Type this as a comment, then accept Copilot suggestion:

# Create GitHub Actions workflow for Terraform
# - Run on push to main
# - Run terraform fmt, validate, plan
# - Require approval for apply
# - Post plan results to Slack

4.4 Copilot Best Practices for DevOps

✅ DO:
• Use comments to guide Copilot (more specific = better suggestions)
• Review ALL suggestions before accepting (Copilot can hallucinate)
• Use for boilerplate, not complex logic
• Combine with Chapter 1 prompt patterns for complex tasks

❌ DON'T:
• Accept suggestions without review (security risk)
• Use for secrets or credentials (Copilot may leak them)
• Rely on Copilot for compliance-critical code without validation
• Use Copilot Chat for sensitive infrastructure details (data privacy)

4.5 Troubleshooting Copilot

| Issue | Likely Cause | Fix |
| --- | --- | --- |
| No suggestions appearing | Extension not activated | Check Copilot icon → Sign in |
| Suggestions are generic | Comments too vague | Add more specific context in comments |
| Suggestions violate constraints | Copilot doesn't know your rules | State constraints in comments before typing |
| Slow suggestions | Network or API issues | Check VSCode Output panel → Copilot |
| Suggestions include secrets | Training-data leakage | NEVER type secrets; use variables |

5. Part 4: Continue.dev – Open-Source Flexibility

5.1 Why Continue.dev for DevOps

Continue.dev is the most flexible AI extension for VSCode, perfect for DevOps engineers who need:

  • Multi-model support – Use Qwen for code, DeepSeek for reasoning, ChatGPT for explanations
  • Custom prompts – Embed Chapter 1 prompt templates directly
  • Cost control – Switch models based on task complexity
  • Privacy – Option to use local models (Ollama)
  • Open-source – Audit the code, contribute improvements

5.2 Installation & Setup

# Step 1: Install Continue Extension
# In VSCode: Extensions → Search "Continue" → Install

# Step 2: Open Continue Panel
# Ctrl+L (Windows/Linux) or Cmd+L (Mac)

# Step 3: Configure Models (config.json)
# Click gear icon → Open config.json

# Step 4: Add Your Models (see Section 5.3)

5.3 Configuration – Multi-Model Setup

File: ~/.continue/config.json

{
  "models": [
    {
      "title": "Qwen-2.5-Coder",
      "provider": "openai",
      "model": "qwen-2.5-coder",
      "apiKey": "YOUR_QWEN_API_KEY",
      "apiBase": "https://dashscope.aliyuncs.com/compatible-mode/v1",
      "completionOptions": {
        "temperature": 0.7,
        "maxTokens": 4096
      }
    },
    {
      "title": "DeepSeek-V3",
      "provider": "openai",
      "model": "deepseek-chat",
      "apiKey": "YOUR_DEEPSEEK_API_KEY",
      "apiBase": "https://api.deepseek.com/v1",
      "completionOptions": {
        "temperature": 0.7,
        "maxTokens": 8192
      }
    },
    {
      "title": "GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "YOUR_OPENAI_API_KEY",
      "completionOptions": {
        "temperature": 0.7,
        "maxTokens": 4096
      }
    },
    {
      "title": "Claude-3.5-Sonnet",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-20241022",
      "apiKey": "YOUR_ANTHROPIC_API_KEY",
      "completionOptions": {
        "temperature": 0.7,
        "maxTokens": 4096
      }
    },
    {
      "title": "Ollama (Local)",
      "provider": "ollama",
      "model": "codellama",
      "apiBase": "http://localhost:11434",
      "completionOptions": {
        "temperature": 0.7,
        "maxTokens": 4096
      }
    }
  ],
  "tabAutocompleteModel": {
    "title": "Qwen-2.5-Coder",
    "provider": "openai",
    "model": "qwen-2.5-coder",
    "apiKey": "YOUR_QWEN_API_KEY",
    "apiBase": "https://dashscope.aliyuncs.com/compatible-mode/v1"
  },
  "customCommands": [
    {
      "name": "terraform",
      "prompt": "{{{ input }}}\n\nGenerate Terraform code following these rules:\n1. Use our standard tagging convention\n2. Follow security best practices\n3. Include variables and outputs\n4. Add comments explaining each resource",
      "description": "Generate Terraform with our standards"
    },
    {
      "name": "bash",
      "prompt": "{{{ input }}}\n\nWrite a bash script following these rules:\n1. Use set -euo pipefail\n2. Add error handling for all commands\n3. Log all operations with timestamps\n4. Include usage instructions at top",
      "description": "Generate bash script with best practices"
    },
    {
      "name": "review",
      "prompt": "{{{ input }}}\n\nReview this code for:\n1. Security vulnerabilities\n2. Compliance gaps (HIPAA, SOC2)\n3. Error handling gaps\n4. Performance issues\n5. Edge cases missed\n\nList all issues, then suggest fixes.",
      "description": "Security and compliance review"
    }
  ]
}
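
A stray comma or bracket in config.json will silently break the extension, so it is worth validating the file before reloading VSCode. A minimal sketch using Python's stdlib JSON parser (assumes `python3` is available; any JSON linter works equally well):

```shell
# Check that a Continue config file is still valid JSON (sketch)
validate_config() {
  python3 -m json.tool "$1" >/dev/null 2>&1
}
```

Usage: `validate_config ~/.continue/config.json && echo "config OK"`.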

5.4 Using Continue.dev – DevOps Workflows

Workflow 1: Terraform Module Generation

Step 1: Open Continue Panel (Ctrl+L / Cmd+L)
Step 2: Select Model: Qwen-2.5-Coder (best for code)
Step 3: Type prompt using Chapter 1 template:

"Generate Terraform module for RDS PostgreSQL.

SYMBOLIC CONSTRAINTS:
1. publicly_accessible = false
2. backup_retention_period = 7
3. storage_encrypted = true
4. Multi-AZ = true

DATA-DRIVEN PATTERNS:
- Use our tagging convention: {app, env, owner}
- Follow Terraform best practices

CONTEXT:
- Engine: PostgreSQL 14.9
- Instance: db.t3.medium
- Environment: production

OUTPUT:
- Terraform HCL with comments
- Include variables.tf, outputs.tf"

Step 4: Review output against constraints
Step 5: Accept or iterate

Workflow 2: Bash Script with Error Handling

Step 1: Open Continue Panel
Step 2: Select Model: DeepSeek-V3 (best for reasoning)
Step 3: Use custom command: /bash
Step 4: Type: "Backup PostgreSQL database to S3"
Step 5: Review output, test in staging

Workflow 3: Security Review

Step 1: Select code in VSCode
Step 2: Open Continue Panel
Step 3: Use custom command: /review
Step 4: AI reviews selected code
Step 5: Fix issues identified

5.5 Continue.dev Best Practices for DevOps

✅ DO:
• Configure multiple models for different tasks
• Create custom commands for common DevOps tasks
• Use Chapter 1 prompt templates in custom commands
• Switch models based on task complexity (cost optimization)
• Review ALL AI output before deploying to production

❌ DON'T:
• Store API keys in config.json (use environment variables)
• Use AI for secrets or credentials
• Rely on AI for compliance-critical code without validation
• Use cloud models for sensitive infrastructure details (use local Ollama)

5.6 Environment Variable Setup (Security)

Don't hardcode API keys in config.json!

# Step 1: Set environment variables
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export QWEN_API_KEY="sk-..."
export DEEPSEEK_API_KEY="sk-..."

# Step 2: Reference in config.json
{
  "models": [
    {
      "title": "GPT-4o",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "${OPENAI_API_KEY}"
    }
  ]
}

# Step 3: Make the variables persistent
# Add the exports above to ~/.zshrc or ~/.bashrc, then restart VSCode
# from a terminal so the new environment is inherited (a GUI-launched
# VSCode may not pick up shell-only exports)
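
To catch a missing key before the extension fails mid-session, a small guard can run before launching VSCode. A sketch (the variable list mirrors the exports above; `check_keys` is a hypothetical helper, not part of Continue.dev):

```shell
# Fail fast if any expected API key variable is unset or empty (sketch)
check_keys() {
  local v val missing=0
  for v in OPENAI_API_KEY ANTHROPIC_API_KEY QWEN_API_KEY DEEPSEEK_API_KEY; do
    eval "val=\${$v:-}"        # indirect lookup, safe under `set -u`
    if [ -z "$val" ]; then
      echo "missing: $v" >&2
      missing=1
    fi
  done
  return "$missing"
}
```

Usage: `check_keys && code .`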

6. Part 5: Multi-Model Strategy – Qwen, ChatGPT, DeepSeek

6.1 Why Use Multiple Models?

Different models excel at different tasks. Using the right model for each task improves quality and reduces costs.

┌─────────────────────────────────────────────────────────────┐
│ MODEL STRENGTHS FOR DEVOPS TASKS                          │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ [Qwen-2.5-Coder]                                           │
│ • Strengths: Code generation, especially Python/Bash       │
│ • Cost: Low ($0.50-1.00 per 1M tokens)                     │
│ • Best for: Terraform, scripts, CI/CD pipelines            │
│                                                             │
│ [DeepSeek-V3]                                              │
│ • Strengths: Reasoning, complex logic, explanations        │
│ • Cost: Very Low ($0.27-1.10 per 1M tokens)                │
│ • Best for: Architecture design, debugging, documentation  │
│                                                             │
│ [GPT-4o]                                                   │
│ • Strengths: General purpose, well-rounded                 │
│ • Cost: Medium ($2.50-10.00 per 1M tokens)                 │
│ • Best for: Complex tasks requiring broad knowledge        │
│                                                             │
│ [Claude-3.5-Sonnet]                                        │
│ • Strengths: Long context, nuanced understanding           │
│ • Cost: Medium-High ($3.00-15.00 per 1M tokens)            │
│ • Best for: Large codebases, detailed reviews              │
│                                                             │
│ [Ollama (Local)]                                           │
│ • Strengths: Privacy, no API costs                         │
│ • Cost: $0 (hardware costs only)                           │
│ • Best for: Sensitive code, offline work                   │
│                                                             │
└─────────────────────────────────────────────────────────────┘

6.2 Model Selection Matrix

| Task Type | Recommended Model | Why | Cost per Task |
| --- | --- | --- | --- |
| Terraform generation | Qwen-2.5-Coder | Best code quality for IaC | ~$0.01-0.05 |
| Bash/Python scripts | Qwen-2.5-Coder | Strong scripting patterns | ~$0.01-0.03 |
| Architecture design | DeepSeek-V3 | Best reasoning for complex systems | ~$0.05-0.15 |
| Debugging | DeepSeek-V3 | Strong analytical reasoning | ~$0.03-0.10 |
| Documentation | DeepSeek-V3 or GPT-4o | Clear explanations | ~$0.02-0.08 |
| Security review | Claude-3.5-Sonnet | Best at nuanced security analysis | ~$0.05-0.20 |
| Sensitive code | Ollama (Local) | No data leaves your machine | $0 |
| Quick questions | Qwen-2.5-Coder | Fast, cheap, good enough | ~$0.001-0.01 |

6.3 Cost Optimization Strategy

┌─────────────────────────────────────────────────────────────┐
│ COST OPTIMIZATION HIERARCHY                               │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ Tier 1: Local Models (Ollama)                             │
│ • Use for: Sensitive code, iterative development           │
│ • Cost: $0 (after hardware investment)                     │
│ • Setup: Install Ollama → Pull model → Configure Continue  │
│                                                             │
│ Tier 2: Budget Models (Qwen, DeepSeek)                    │
│ • Use for: 80% of daily tasks (code generation, scripts)   │
│ • Cost: $5-20/month for heavy usage                        │
│ • Setup: Get API keys → Configure in Continue              │
│                                                             │
│ Tier 3: Premium Models (GPT-4, Claude)                    │
│ • Use for: 20% of complex tasks (architecture, security)   │
│ • Cost: $10-50/month for heavy usage                       │
│ • Setup: Get API keys → Configure in Continue              │
│                                                             │
│ Strategy:                                                  │
│ • Default to Tier 2 for most tasks                         │
│ • Escalate to Tier 3 only when needed                      │
│ • Use Tier 1 for sensitive/iterative work                  │
│                                                             │
└─────────────────────────────────────────────────────────────┘

6.4 Configuring Model Switching in Continue

File: ~/.continue/config.json

{
  "models": [
    {
      "title": "🔵 Qwen (Default)",
      "provider": "openai",
      "model": "qwen-2.5-coder",
      "apiKey": "${QWEN_API_KEY}",
      "apiBase": "https://dashscope.aliyuncs.com/compatible-mode/v1",
      "default": true
    },
    {
      "title": "🟢 DeepSeek (Reasoning)",
      "provider": "openai",
      "model": "deepseek-chat",
      "apiKey": "${DEEPSEEK_API_KEY}",
      "apiBase": "https://api.deepseek.com/v1"
    },
    {
      "title": "🟣 GPT-4o (Complex)",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "${OPENAI_API_KEY}"
    },
    {
      "title": "🟠 Claude (Review)",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-20241022",
      "apiKey": "${ANTHROPIC_API_KEY}"
    },
    {
      "title": "🔴 Ollama (Local/Privacy)",
      "provider": "ollama",
      "model": "codellama",
      "apiBase": "http://localhost:11434"
    }
  ],
  "tabAutocompleteModel": {
    "title": "🔵 Qwen (Default)",
    "provider": "openai",
    "model": "qwen-2.5-coder",
    "apiKey": "${QWEN_API_KEY}",
    "apiBase": "https://dashscope.aliyuncs.com/compatible-mode/v1"
  }
}

Usage: Click model dropdown in Continue panel → Select model for current task.

6.5 API Key Management (Security)

# Recommended: Use environment variables
# Add to ~/.zshrc or ~/.bashrc:

export QWEN_API_KEY="sk-..."
export DEEPSEEK_API_KEY="sk-..."
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."

# Reload shell:
source ~/.zshrc

# Verify in VSCode:
# Open config.json → Keys should show as ${VARIABLE_NAME}
# Restart VSCode to load environment variables

Never commit API keys to git! Add to .gitignore:

# .gitignore
.env
*.key
*.secret
config.local.json
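
A .gitignore entry does not protect files that are already tracked, so a quick grep for key-shaped strings before committing adds a second line of defense. A sketch (the regex is an assumption tuned to the `sk-` prefixes used above; adjust it for your providers):

```shell
# Scan files for strings that look like API keys (sketch; pattern assumes
# OpenAI/Anthropic-style "sk-" prefixes -- tune for your providers)
scan_for_keys() {
  grep -nE 'sk-(ant-)?[A-Za-z0-9_-]{16,}' "$@"
}
```

Run it over staged files (for example from a pre-commit hook) and block the commit on any match.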

7. Part 6: Token Optimization & Usage Modes

7.1 Understanding Token Usage

┌─────────────────────────────────────────────────────────────┐
│ TOKEN USAGE BREAKDOWN                                     │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ What Counts as Tokens:                                     │
│ • Your prompt (input tokens)                               │
│ • AI response (output tokens)                              │
│ • Context sent to AI (code, files, conversation history)   │
│                                                             │
│ Typical Token Counts:                                      │
│ • Simple question: 100-500 tokens                          │
│ • Code generation: 500-2,000 tokens                        │
│ • Complex architecture: 2,000-10,000 tokens                │
│ • Full codebase review: 10,000-100,000+ tokens             │
│                                                             │
│ Cost Examples (at $1/1M tokens):                          │
│ • Simple question: $0.0001-0.0005                          │
│ • Code generation: $0.0005-0.002                           │
│ • Complex architecture: $0.002-0.01                        │
│ • Full codebase review: $0.01-0.10                         │
│                                                             │
└─────────────────────────────────────────────────────────────┘
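
The token counts in the box above can be approximated before sending a prompt. A common rule of thumb is roughly 4 characters per token for English text; the sketch below uses that approximation (real tokenizers vary by model, so treat the numbers as estimates):

```shell
# Rough token estimate: ~4 characters per token (rule of thumb; actual
# tokenization varies by model)
estimate_tokens() {
  local chars
  chars=$(printf '%s' "$1" | wc -c)
  echo $(( (chars + 3) / 4 ))
}

# Dollar cost for a token count at a given $-per-1M-token rate
cost_usd() {
  awk -v t="$1" -v rate="$2" 'BEGIN { printf "%.6f\n", t * rate / 1000000 }'
}
```

For example, `cost_usd "$(estimate_tokens "$prompt")" 1` prices a prompt at the $1/1M-token rate used in the box above.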

7.2 Usage Modes – Reduce Token Consumption

Continue.dev and other AI extensions support different usage modes that affect token usage.

| Mode | What It Does | Token Usage | When to Use |
| --- | --- | --- | --- |
| Planning Mode | AI thinks before responding | Medium (extra thinking tokens) | Complex tasks, architecture |
| Direct Mode | AI responds immediately | Low (no thinking overhead) | Simple tasks, quick questions |
| Context-Aware | AI reads open files | Medium-High (file context) | When file context is needed |
| Minimal Context | AI only sees your prompt | Low (prompt only) | When privacy matters |
| Streaming | AI streams response as generated | Same total, better UX | Most tasks |

7.3 Token Optimization Techniques

Technique 1: Use Planning Mode Selectively

# ❌ Always Using Planning Mode:
[Every prompt triggers thinking overhead]
Cost: +20-30% tokens per request

# ✅ Selective Planning Mode:
# Simple tasks → Direct Mode
"Write a bash script to list files"

# Complex tasks → Planning Mode
"Design a CI/CD pipeline for microservices with these constraints: ..."

In Continue.dev: Add to config.json:

{
  "completionOptions": {
    "usePlanningMode": false
  }
}

Technique 2: Minimize Context Sent to AI

# ❌ Sending Entire File:
[Select entire 500-line file → Ask AI to review]
Tokens: ~2,000-5,000

# ✅ Sending Only Relevant Section:
[Select 50-line function → Ask AI to review]
Tokens: ~200-500
Savings: 80-90%

Best Practice: Select only the code AI needs to see.

Technique 3: Use Concise Prompts

# ❌ Verbose Prompt:
"I was wondering if you could maybe possibly help me write a Terraform
script for creating an S3 bucket? I need it to have encryption and
versioning enabled, and I was thinking maybe we could also add a
lifecycle policy to transition objects to infrequent access after
30 days and then expire them after 365 days. Also, I'd like to make
sure that public access is blocked because we're dealing with
sensitive data. Oh, and can you add proper tagging as well?"

Tokens: ~200

# ✅ Concise Prompt:
"Generate Terraform for S3 bucket:
- Encryption: AES-256
- Versioning: enabled
- Lifecycle: IA after 30 days, expire after 365 days
- Public access: blocked
- Tags: {app, env, owner}"

Tokens: ~50
Savings: 75%

Technique 4: Cache Reusable Prompts

# ❌ Regenerating Same Prompts:
[Ask AI to generate same Terraform pattern repeatedly]
Cost: Pay tokens every time

# ✅ Cache Reusable Prompts:
# Save effective prompts as snippets in VSCode
# Use Chapter 1 templates from Appendix

# VSCode Snippets Setup:
# Command Palette → "Configure User Snippets" → terraform
# (snippets live in your VSCode user profile; the exact path varies by OS)
{
  "Terraform S3 Bucket": {
    "prefix": "tf-s3",
    "body": [
      "Generate Terraform for S3 bucket:",
      "- Encryption: AES-256",
      "- Versioning: enabled",
      "- Lifecycle: IA after 30 days, expire after 365 days",
      "- Public access: blocked",
      "- Tags: {app, env, owner}"
    ]
  }
}

Technique 5: Batch Similar Questions

# ❌ Multiple Separate Questions:
Q1: "How do I create an S3 bucket in Terraform?"
Q2: "How do I create an EC2 instance in Terraform?"
Q3: "How do I create an RDS instance in Terraform?"

Tokens: 3x prompt overhead

# ✅ Single Batched Question:
"Generate Terraform for:
1. S3 bucket with encryption and versioning
2. EC2 instance with security group
3. RDS PostgreSQL with backup retention

Use our standard tagging convention for all."

Tokens: 1x prompt overhead
Savings: 50-60%

7.4 Token Usage Monitoring

Track your token usage to optimize costs:

// Continue.dev config.json
{
  "analytics": {
    "enabled": true,
    "provider": "local"
  }
}

Review weekly:

  • Total tokens used
  • Most expensive tasks
  • Opportunities for optimization

Target: Keep monthly AI costs under $20-50 for individual DevOps engineer.


8. Part 7: DevOps-Specific VSCode Workflows

8.1 Workflow 1: Infrastructure as Code Development

┌─────────────────────────────────────────────────────────────┐
│ TERRAFORM DEVELOPMENT WORKFLOW                            │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ Step 1: Plan with AI                                       │
│ • Open Continue Panel (Ctrl+L)                             │
│ • Select Model: DeepSeek-V3 (reasoning)                    │
│ • Prompt: "Design Terraform structure for [infrastructure]"│
│ • Review architecture suggestions                          │
│                                                             │
│ Step 2: Generate Code                                      │
│ • Select Model: Qwen-2.5-Coder (code)                      │
│ • Use custom command: /terraform                           │
│ • Prompt: "Generate module for [resource]"                 │
│ • Accept/reject suggestions                                │
│                                                             │
│ Step 3: Review with AI                                     │
│ • Select code → Continue Panel                             │
│ • Use custom command: /review                              │
│ • Fix issues identified                                    │
│                                                             │
│ Step 4: Validate                                           │
│ • Run: terraform fmt, validate, plan                       │
│ • Fix any errors                                           │
│                                                             │
│ Total Time: 30-60 minutes (vs. 2-4 hours manually)        │
│                                                             │
└─────────────────────────────────────────────────────────────┘
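
Step 4 can also be wired into VSCode itself; a minimal `.vscode/tasks.json` sketch that runs the validation commands from the Command Palette (assumes `terraform` is on your PATH):

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "terraform: fmt + validate",
      "type": "shell",
      "command": "terraform fmt -check && terraform validate",
      "group": "test",
      "problemMatcher": []
    }
  ]
}
```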

8.2 Workflow 2: Script Development & Debugging

┌─────────────────────────────────────────────────────────────┐
│ BASH/PYTHON SCRIPT WORKFLOW                               │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ Step 1: Define Requirements                                │
│ • Open Continue Panel                                      │
│ • Prompt: "Write bash script for [task] with these rules:  │
│   1. [constraint 1]                                        │
│   2. [constraint 2]                                        │
│   3. [constraint 3]"                                       │
│                                                             │
│ Step 2: Generate Script                                    │
│ • Select Model: Qwen-2.5-Coder                             │
│ • Use custom command: /bash                                │
│ • Review output against constraints                        │
│                                                             │
│ Step 3: Debug with AI                                      │
│ • Run script → Capture error                               │
│ • Paste error to Continue Panel                            │
│ • Prompt: "Fix this error: [error message]"                │
│ • Apply fix, retest                                        │
│                                                             │
│ Step 4: Add Tests                                          │
│ • Prompt: "Generate test cases for this script"            │
│ • Run tests, fix failures                                  │
│                                                             │
│ Total Time: 15-30 minutes (vs. 1-2 hours manually)        │
│                                                             │
└─────────────────────────────────────────────────────────────┘
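
The constraints baked into the /bash custom command look like this in practice; a minimal skeleton, where the disk-check task is a hypothetical stand-in for your own script body:

```shell
#!/usr/bin/env bash
# Minimal skeleton following the /bash command rules (hypothetical disk-check task).
# Usage: ./check_disk.sh [target-dir]   (defaults to /tmp)
set -euo pipefail

# Log every operation with a UTC timestamp.
log() { printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$*"; }

# Under `set -e`, record the failing line before the script exits.
trap 'log "ERROR: command failed at line $LINENO"' ERR

target="${1:-/tmp}"
log "checking disk usage under $target"
du -sh "$target" >/dev/null 2>&1 || log "WARN: could not read $target"
log "done"
```

Testing instructions for a script like this: run it against a readable and an unreadable directory and confirm both paths produce timestamped log lines rather than silent failures.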

8.3 Workflow 3: CI/CD Pipeline Development

┌─────────────────────────────────────────────────────────────┐
│ CI/CD PIPELINE WORKFLOW                                   │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ Step 1: Design Pipeline                                    │
│ • Open Continue Panel                                      │
│ • Select Model: DeepSeek-V3                                │
│ • Prompt: "Design CI/CD pipeline for [app type] with:      │
│   - Security scans                                         │
│   - Approval gates                                         │
│   - Rollback capability"                                   │
│                                                             │
│ Step 2: Generate Pipeline                                  │
│ • Select Model: Qwen-2.5-Coder                             │
│ • Prompt: "Generate GitHub Actions YAML for above design"  │
│ • Review against security requirements                     │
│                                                             │
│ Step 3: Security Review                                    │
│ • Select pipeline code                                     │
│ • Use custom command: /review                              │
│ • Fix security issues identified                           │
│                                                             │
│ Step 4: Test Pipeline                                      │
│ • Push to feature branch                                   │
│ • Monitor pipeline execution                               │
│ • Fix failures with AI assistance                          │
│                                                             │
│ Total Time: 45-90 minutes (vs. 3-6 hours manually)        │
│                                                             │
└─────────────────────────────────────────────────────────────┘
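
The output of Step 2 typically has the following shape; a minimal sketch assuming GitHub Actions, tfsec for the security scan, and a hypothetical `production` environment configured with required reviewers as the approval gate (`deploy.sh` is also hypothetical):

```yaml
name: ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: IaC security scan
        uses: aquasecurity/tfsec-action@v1.0.0
  deploy:
    needs: scan
    runs-on: ubuntu-latest
    environment: production   # approval gate: required reviewers on this environment
    steps:
      - uses: actions/checkout@v4
      - name: Deploy
        run: ./deploy.sh      # hypothetical deploy script in the repo
```

Rollback is usually a separate job or a re-run of `deploy` against the previous tag; ask the reasoning model to design that path explicitly rather than assuming the generator adds it.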

8.4 Workflow 4: Documentation Generation

┌─────────────────────────────────────────────────────────────┐
│ DOCUMENTATION WORKFLOW                                    │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ Step 1: Generate from Code                                 │
│ • Select code file                                         │
│ • Open Continue Panel                                      │
│ • Prompt: "Generate README for this Terraform module"      │
│ • Include: usage, inputs, outputs, examples                │
│                                                             │
│ Step 2: Add Architecture Diagram                           │
│ • Prompt: "Generate Mermaid diagram for this architecture" │
│ • Review and refine diagram                                │
│                                                             │
│ Step 3: Compliance Documentation                           │
│ • Prompt: "Document compliance controls for HIPAA"         │
│ • Review against compliance requirements                   │
│                                                             │
│ Step 4: Review & Publish                                   │
│ • Use custom command: /review                              │
│ • Fix issues, commit to repo                               │
│                                                             │
│ Total Time: 30-60 minutes (vs. 2-4 hours manually)        │
│                                                             │
└─────────────────────────────────────────────────────────────┘
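
A diagram like the one Step 2 asks for might look like this hypothetical three-tier sketch in Mermaid:

```mermaid
flowchart LR
    U[Users] --> LB[Load Balancer]
    LB --> EC2[EC2 App Tier]
    EC2 --> RDS[(RDS PostgreSQL)]
    EC2 --> S3[(S3 Bucket)]
```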

8.5 Workflow 5: Security & Compliance Review

┌─────────────────────────────────────────────────────────────┐
│ SECURITY REVIEW WORKFLOW                                  │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│ Step 1: Initial Scan                                       │
│ • Select code/files to review                              │
│ • Open Continue Panel                                      │
│ • Select Model: Claude-3.5-Sonnet (security strength)      │
│ • Use custom command: /review                              │
│                                                             │
│ Step 2: Review Findings                                    │
│ • AI lists security issues                                 │
│ • Prioritize by severity                                   │
│ • Fix critical issues first                                │
│                                                             │
│ Step 3: Compliance Check                                   │
│ • Prompt: "Review for HIPAA compliance gaps"               │
│ • Address identified gaps                                  │
│                                                             │
│ Step 4: Final Validation                                   │
│ • Run security scanning tools (tfsec, checkov, etc.)       │
│ • Fix any remaining issues                                 │
│ • Document security controls                               │
│                                                             │
│ Total Time: 60-90 minutes (vs. 4-8 hours manually)        │
│                                                             │
└─────────────────────────────────────────────────────────────┘

8.6 VSCode Extensions for DevOps + AI

| Extension | Purpose | AI Integration | Recommended |
|---|---|---|---|
| Continue | AI chat assistant | Multi-model | ✅ Essential |
| GitHub Copilot | Inline code completion | OpenAI | ✅ Recommended |
| Codeium | Free inline completion | Proprietary | ✅ Good free alternative |
| HashiCorp Terraform | Official Terraform language support | None | ✅ Essential for IaC |
| YAML | YAML language support | None | ✅ Essential for K8s/CI |
| Docker | Docker language support | None | ✅ Recommended |
| GitLens | Git integration | None | ✅ Essential |
| Remote - SSH | Remote development | None | ✅ Essential for DevOps |
| Kubernetes | K8s language support | None | ✅ Recommended for K8s |

Minimum Viable Setup: Continue + Terraform + YAML + GitLens

Recommended Setup: Continue + Copilot + Terraform + YAML + GitLens + Remote-SSH


9. Part 8: Security & Compliance Considerations

9.1 Security Risks of AI in VSCode

| Risk | Description | Mitigation |
|---|---|---|
| Code Leakage | AI may send your code to external servers | Use local models (Ollama) for sensitive code |
| Secret Exposure | AI might suggest or leak secrets | Never type secrets; use environment variables |
| Hallucinated Dependencies | AI may suggest non-existent or malicious packages | Verify all dependencies before adding |
| Compliance Violations | AI may not know your compliance requirements | Add compliance constraints to prompts |
| Intellectual Property | AI trained on public code may reproduce licensed code | Review AI output for license compliance |
| Prompt Injection | Malicious prompts could extract sensitive info | Don't share sensitive context with AI |

9.2 Security Best Practices

✅ DO:
• Use environment variables for API keys (never hardcode)
• Use local models (Ollama) for sensitive infrastructure code
• Review ALL AI output before committing
• Add security constraints to prompts explicitly
• Use .gitignore to exclude AI config with keys
• Enable AI usage analytics to monitor for anomalies
• Use corporate-approved AI providers only

❌ DON'T:
• Type secrets, passwords, or credentials in AI chat
• Send sensitive infrastructure details to cloud AI models
• Accept AI suggestions without security review
• Commit API keys or config files with keys to git
• Use AI for compliance-critical code without validation
• Share proprietary algorithms with public AI models

9.3 Compliance Checklist for AI-Generated Code

□ SECURITY REVIEW:
  □ No hardcoded secrets or credentials
  □ Input validation implemented
  □ Least-privilege principles followed
  □ No obvious vulnerabilities (check with tfsec, checkov, etc.)

□ COMPLIANCE REVIEW:
  □ Audit logging implemented (if required)
  □ Data retention policies followed
  □ PHI/PII appropriately handled
  □ Industry-specific compliance met (HIPAA, SOC2, etc.)

□ DATA PRIVACY:
  □ No sensitive data sent to cloud AI models
  □ Local models used for sensitive code
  □ API keys stored securely (environment variables)

□ INTELLECTUAL PROPERTY:
  □ AI output reviewed for license compliance
  □ No copyleft code inadvertently included
  □ Proprietary algorithms not shared with AI

9.4 Corporate AI Policy Template

# AI Usage Policy for DevOps Team

## Approved Tools:
- Continue.dev (with approved models)
- GitHub Copilot (enterprise)
- Ollama (local models)

## Approved Models:
- Qwen-2.5-Coder (code generation)
- DeepSeek-V3 (reasoning)
- Ollama Codellama (sensitive code)

## Prohibited:
- Sending secrets to any AI model
- Using unapproved AI tools for production code
- Committing AI config with API keys to git

## Review Requirements:
- All AI-generated code must be reviewed before merge
- Security scan required for all AI-generated infrastructure code
- Compliance review for regulated workloads

## Monitoring:
- AI usage analytics enabled
- Monthly review of AI costs
- Quarterly security audit of AI-generated code

10. Part 9: Quick Reference & Troubleshooting

10.1 Quick Setup Checklist

□ Install VSCode (latest version)
□ Install Continue extension
□ Install GitHub Copilot or Codeium extension
□ Get API keys (Qwen, DeepSeek, OpenAI, Anthropic)
□ Configure config.json with models
□ Set environment variables for API keys
□ Create custom commands for DevOps tasks
□ Test with simple Terraform or bash script
□ Review security settings
□ Share config with team (without API keys)

10.2 Common Issues & Fixes

| Issue | Likely Cause | Fix |
|---|---|---|
| Continue panel not opening | Extension not activated | Reload VSCode, check extension status |
| No model suggestions | API key invalid | Verify API key in environment variables |
| Slow responses | Network or API issues | Check API status, try a different model |
| AI output violates constraints | Prompt not clear enough | Add explicit constraints to the prompt |
| High token usage | Sending too much context | Select only relevant code, use concise prompts |
| API key errors | Key not loaded | Restart VSCode after setting env vars |
| Model not responding | API rate limit | Wait or switch to a different model |

10.3 Keyboard Shortcuts Reference

| Action | Windows/Linux | Mac |
|---|---|---|
| Open Continue Panel | Ctrl+L | Cmd+L |
| Accept Inline Suggestion | Tab | Tab |
| Reject Inline Suggestion | Esc | Esc |
| Open Command Palette | Ctrl+Shift+P | Cmd+Shift+P |
| Toggle AI Panel | Ctrl+Shift+L | Cmd+Shift+L |
| Quick Chat | Ctrl+I | Cmd+I |

10.4 Cost Tracking Template

# Monthly AI Cost Tracking

## Model Usage:
- Qwen: $X.XX (X million tokens)
- DeepSeek: $X.XX (X million tokens)
- GPT-4o: $X.XX (X million tokens)
- Claude: $X.XX (X million tokens)
- Ollama: $0.00 (local)

## Total: $XX.XX

## Optimization Opportunities:
- [ ] Shift more tasks to Qwen (cheaper)
- [ ] Use Ollama for sensitive code
- [ ] Batch similar questions
- [ ] Cache reusable prompts

## Target: <$50/month for an individual DevOps engineer

10.5 Performance Optimization Checklist

□ Use Qwen for 80% of code generation tasks
□ Use DeepSeek for complex reasoning
□ Use Ollama for sensitive/iterative work
□ Enable planning mode only for complex tasks
□ Minimize context sent to AI
□ Use concise prompts
□ Cache reusable prompts as snippets
□ Batch similar questions
□ Monitor token usage weekly
□ Review and optimize monthly

11. Appendix: Configuration Templates

11.1 Complete config.json Template

{
  "models": [
    {
      "title": "🔵 Qwen-2.5-Coder (Default)",
      "provider": "openai",
      "model": "qwen-2.5-coder",
      "apiKey": "${QWEN_API_KEY}",
      "apiBase": "https://dashscope.aliyuncs.com/compatible-mode/v1",
      "completionOptions": {
        "temperature": 0.7,
        "maxTokens": 4096
      },
      "default": true
    },
    {
      "title": "🟢 DeepSeek-V3 (Reasoning)",
      "provider": "openai",
      "model": "deepseek-chat",
      "apiKey": "${DEEPSEEK_API_KEY}",
      "apiBase": "https://api.deepseek.com/v1",
      "completionOptions": {
        "temperature": 0.7,
        "maxTokens": 8192
      }
    },
    {
      "title": "🟣 GPT-4o (Complex)",
      "provider": "openai",
      "model": "gpt-4o",
      "apiKey": "${OPENAI_API_KEY}",
      "completionOptions": {
        "temperature": 0.7,
        "maxTokens": 4096
      }
    },
    {
      "title": "🟠 Claude-3.5-Sonnet (Review)",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-20241022",
      "apiKey": "${ANTHROPIC_API_KEY}",
      "completionOptions": {
        "temperature": 0.7,
        "maxTokens": 4096
      }
    },
    {
      "title": "🔴 Ollama Codellama (Local/Privacy)",
      "provider": "ollama",
      "model": "codellama",
      "apiBase": "http://localhost:11434",
      "completionOptions": {
        "temperature": 0.7,
        "maxTokens": 4096
      }
    }
  ],
  "tabAutocompleteModel": {
    "title": "🔵 Qwen-2.5-Coder (Default)",
    "provider": "openai",
    "model": "qwen-2.5-coder",
    "apiKey": "${QWEN_API_KEY}",
    "apiBase": "https://dashscope.aliyuncs.com/compatible-mode/v1"
  },
  "customCommands": [
    {
      "name": "terraform",
      "prompt": "{{{ input }}}\n\nGenerate Terraform code following these rules:\n1. Use our standard tagging convention: {app, env, owner, cost_center}\n2. Follow security best practices (encryption, least privilege)\n3. Include variables.tf and outputs.tf\n4. Add comments explaining each resource\n5. Validate with terraform fmt and validate",
      "description": "Generate Terraform with our standards"
    },
    {
      "name": "bash",
      "prompt": "{{{ input }}}\n\nWrite a bash script following these rules:\n1. Use set -euo pipefail\n2. Add error handling for all commands\n3. Log all operations with timestamps\n4. Include usage instructions at top\n5. Include testing instructions",
      "description": "Generate bash script with best practices"
    },
    {
      "name": "review",
      "prompt": "{{{ input }}}\n\nReview this code for:\n1. Security vulnerabilities (hardcoded secrets, injection, etc.)\n2. Compliance gaps (HIPAA, SOC2, audit logging)\n3. Error handling gaps\n4. Performance issues\n5. Edge cases missed\n\nList all issues with severity (Critical/High/Medium/Low), then suggest fixes.",
      "description": "Security and compliance review"
    },
    {
      "name": "document",
      "prompt": "{{{ input }}}\n\nGenerate documentation for this code:\n1. README with usage examples\n2. Architecture diagram (Mermaid format)\n3. Input/output specifications\n4. Compliance controls documented\n5. Troubleshooting section",
      "description": "Generate comprehensive documentation"
    },
    {
      "name": "test",
      "prompt": "{{{ input }}}\n\nGenerate test cases for this code:\n1. Unit tests for all functions\n2. Integration tests for key workflows\n3. Security tests (OWASP Top 10)\n4. Performance tests for critical paths\n5. Edge case tests\n\nInclude test execution instructions.",
      "description": "Generate comprehensive test suite"
    }
  ],
  "analytics": {
    "enabled": true,
    "provider": "local"
  },
  "completionOptions": {
    "temperature": 0.7,
    "maxTokens": 4096,
    "usePlanningMode": false
  }
}

11.2 Environment Variables Template

# ~/.zshrc or ~/.bashrc

# AI API Keys (NEVER commit these to git)
export QWEN_API_KEY="sk-..."
export DEEPSEEK_API_KEY="sk-..."
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."

# Optional: Local model settings
export OLLAMA_HOST="http://localhost:11434"

# Reload after adding:
# source ~/.zshrc

11.3 .gitignore Template

# AI Configuration
.env
*.key
*.secret
config.local.json
continue/config.local.json

# VSCode
.vscode/settings.json
.vscode/tasks.json

# API Keys
api_keys.txt
secrets.txt
*.pem
*.key

# Logs
*.log
logs/

# OS
.DS_Store
Thumbs.db

11.4 VSCode Snippets for AI Prompts

File (Linux; path varies by OS): ~/.config/Code/User/snippets/ai-prompts.code-snippets
Create via Preferences → Configure User Snippets; global user snippet files use the .code-snippets extension so they work across languages.

{
  "Terraform Module": {
    "prefix": "ai-tf",
    "body": [
      "Generate Terraform module for ${1:resource}.",
      "",
      "SYMBOLIC CONSTRAINTS:",
      "1. ${2:security constraint}",
      "2. ${3:compliance requirement}",
      "3. ${4:performance requirement}",
      "",
      "DATA-DRIVEN PATTERNS:",
      "- Use our tagging convention: {app, env, owner}",
      "- Follow Terraform best practices",
      "",
      "CONTEXT:",
      "- Environment: ${5:production}",
      "- Region: ${6:ap-southeast-2}",
      "",
      "OUTPUT:",
      "- Terraform HCL with comments",
      "- Include variables.tf, outputs.tf",
      "- Include README.md with usage"
    ],
    "description": "AI prompt template for Terraform"
  },
  "Bash Script": {
    "prefix": "ai-bash",
    "body": [
      "Write bash script for ${1:purpose}.",
      "",
      "SYMBOLIC CONSTRAINTS:",
      "1. ${2:error handling requirement}",
      "2. ${3:logging requirement}",
      "3. ${4:security requirement}",
      "",
      "DATA-DRIVEN PATTERNS:",
      "- Use set -euo pipefail",
      "- Follow our script style",
      "",
      "CONTEXT:",
      "- Runtime: ${5:where script runs}",
      "- Users: ${6:who runs this}",
      "",
      "OUTPUT:",
      "- Bash script with comments",
      "- Include usage instructions",
      "- Include testing instructions"
    ],
    "description": "AI prompt template for bash scripts"
  },
  "Security Review": {
    "prefix": "ai-review",
    "body": [
      "Review this code for:",
      "",
      "SECURITY:",
      "- Hardcoded secrets or credentials",
      "- Input validation gaps",
      "- Obvious vulnerabilities",
      "",
      "COMPLIANCE:",
      "- Audit logging (if required)",
      "- PHI/PII handling",
      "- Retention policies",
      "",
      "RELIABILITY:",
      "- Error handling comprehensiveness",
      "- Retry mechanisms",
      "- Edge cases",
      "",
      "OUTPUT:",
      "- List all issues with severity",
      "- Suggest fixes for each",
      "- Regenerate with fixes"
    ],
    "description": "AI prompt template for security review"
  }
}

Chapter Summary

Key Takeaways

✅ VSCode + AI = 3-5x faster DevOps work
✅ Continue.dev = Most flexible AI extension for DevOps
✅ Multi-model strategy = Better quality + lower costs
✅ Token optimization = Significant cost savings
✅ Security first = Never send secrets to AI
✅ Custom commands = Embed Chapter 1 prompt templates

Next Steps

□ Today: Install Continue + configure one model
□ This Week: Set up multi-model configuration
□ This Month: Create custom commands for your workflows
□ Ongoing: Monitor costs, optimize prompts, share with team

Connection to Chapter 3

In Chapter 3, we'll cover AI for Infrastructure as Code – applying everything from Chapters 1-2 to Terraform, CloudFormation, and Pulumi with specific patterns, templates, and best practices.


Document Version: 1.0 (Chapter 2)
Part of: The DevOps Engineer's Guide to Effective AI Usage
Last Updated: [Current Date]
Prepared By: [Your Name]


This document is confidential and intended solely for your personal use and book development. Share with your team as appropriate.