3 - Automate Your Workflow: Claude CLI Bash Scripting Guide

November 06, 2025

Introduction

What if Claude could run your morning standup report automatically? Or generate documentation while you sleep? Or analyze Zimbra mail server logs and alert you to issues before users complain?

That's the power of Claude CLI automation ⚡.

Interactive mode is great for hands-on work, but automation is where Claude CLI becomes a force multiplier. In this guide, you'll learn how to script Claude CLI with bash, integrate it into your workflows, and build automation that runs while you focus on harder problems.

We'll cover both general-purpose development automation and specialized examples for Zimbra mail server administration, showing you how Claude CLI adapts to any tech stack.

By the end, you'll have a toolkit of automation patterns you can adapt to your own workflows, whether you're managing React apps, Node.js APIs, or Zimbra collaboration servers.

Let's automate everything.


Print Mode Fundamentals

Print mode (-p or --print) is Claude CLI's automation interface. It processes one request, outputs clean text, and exitsβ€”perfect for scripts.

Understanding Print Mode

| Mode | Usage | Session State | Output | Best For |
|------|-------|---------------|--------|----------|
| 🎭 Interactive | claude | ✅ Yes | Formatted + colors | Exploratory, debugging |
| 📄 Print | claude -p "query" | ❌ No | Clean text | Automation, scripts |

Basic Syntax Patterns

# Simple query
claude -p "your question here"

# With piped input
cat file.txt | claude -p "analyze this"

# Output to file
claude -p "generate a README" > README.md

# Chaining with other commands
claude -p "extract email addresses" | sort | uniq

When to Use Print Mode

✅ Use print mode for: Automation scripts, CI/CD pipelines, cron jobs, data processing, git hooks

❌ Use interactive mode for: Exploratory work, multi-turn conversations, complex debugging

Figure 1: Claude CLI automation workflow - Input sources (Git diffs, logs, source files) are processed through claude -p and transformed into actionable outputs (commits, documentation, alerts)

Output Formats

| Format | Flag | Use Case |
|--------|------|----------|
| 📝 Text | (default) | Human-readable output |
| 📋 JSON | --output-format json | Parsing with jq |
| 📑 Stream JSON | --output-format stream-json | Real-time processing |

# Default text output
claude -p "explain async/await"

# JSON for parsing
claude -p "list functions" --output-format json | jq '.[]'

Exit Codes

Print mode uses standard exit codes for scripting:

claude -p "validate this configuration"
echo $?  # 0 = success, non-zero = error

# Use in conditionals (note: this tests whether the CLI call
# succeeded, not whether Claude judged the code valid)
if claude -p "is this code valid?"; then
    echo "Code is valid"
else
    echo "Code has issues"
fi
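Worth noting: the exit code only tells you the CLI call succeeded, not what Claude actually answered. To branch on the answer itself, ask for a fixed token and compare strings. A minimal sketch, with a stub function standing in for the real claude call:

```bash
#!/bin/bash
# Stub standing in for:
# claude -p "Is this code valid? Reply with exactly VALID or INVALID"
ask_claude() { echo "INVALID"; }

ANSWER=$(ask_claude)
if [ "$ANSWER" = "VALID" ]; then
    echo "Code is valid"
else
    echo "Code has issues"
fi
```

Asking for an exact token makes the string comparison reliable; free-form answers like "Yes, it looks valid" are much harder to test for.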

💡 Quick Memory Challenge

Test your understanding (30 seconds):

  1. What flag enables print mode? → _____
  2. Does print mode maintain session state? → ____
  3. Which output format works with jq? → _______
<details> <summary>Reveal Answers</summary>
  1. -p or --print
  2. No (single request/response)
  3. --output-format json
</details>

Basic Bash Integration

Let's start with simple one-liners and build up to complex automation.

One-Line Power Tools

Generate git commit message:

git diff --staged | claude -p "Generate a conventional commit message for these changes"

Explain error logs:

tail -n 50 /var/log/app.log | claude -p "Explain the most recent errors"

Quick code review:

git diff main | claude -p "Review for bugs and best practices"

Generate documentation:

cat src/api.js | claude -p "Generate API documentation in markdown"

Capturing Output to Variables

Store Claude's responses in bash variables:

#!/bin/bash

# Generate commit message
COMMIT_MSG=$(git diff --staged | claude -p "Generate conventional commit message")

# Use the captured output
git commit -m "$COMMIT_MSG"

echo "Committed with message: $COMMIT_MSG"
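One guard worth adding before the commit step: if nothing is staged, Claude receives an empty diff and will happily invent a message. A small check prevents that (a sketch; the `require_staged` helper name is my own):

```bash
#!/bin/bash
# Refuse to generate a commit message when the staged diff is empty
require_staged() {
    if git diff --staged --quiet; then
        echo "Nothing staged - aborting"
        return 1
    fi
}

# Usage:
# require_staged && git diff --staged | claude -p "Generate conventional commit message"
```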

Real example - Auto-documenting functions:

#!/bin/bash

# Generate JSDoc for a function
FUNCTION_CODE=$(cat src/utils.js)
DOCS=$(echo "$FUNCTION_CODE" | claude -p "Generate JSDoc comments for these functions")

# Write to file
echo "$DOCS" > docs/utils.md
echo "Documentation generated: docs/utils.md"

Piping Data to Claude

Process CSV data:

cat sales-data.csv | claude -p "Who were the top 5 salespeople this month?"

Analyze JSON responses:

curl https://api.example.com/users | \
  claude -p "Identify users with incomplete profiles"

Filter and analyze logs:

grep ERROR /var/log/app.log | \
  claude -p "Group errors by type and suggest fixes"

Combining with Other CLI Tools

With jq for JSON processing:

cat data.json | \
  claude -p "Extract user emails as a JSON array; output only the JSON" | \
  jq -r '.[]' | \
  sort | uniq
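One practical wrinkle: the model sometimes wraps JSON answers in markdown code fences, which breaks jq. A tiny filter makes the pipeline robust (shown here on canned text standing in for Claude's output):

```bash
#!/bin/bash
# Drop markdown fence lines so jq receives bare JSON
strip_fences() { grep -v '^```'; }

# Canned sample standing in for real model output
RAW=$(printf '%s\n' '```json' '["alice@example.com","bob@example.com"]' '```')

echo "$RAW" | strip_fences
# → ["alice@example.com","bob@example.com"]
```

In a real pipeline: `claude -p "..." | strip_fences | jq -r '.[]'`.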

With grep for filtering:

claude -p "List all TODO comments in this codebase" | \
  grep "HIGH PRIORITY"

With xargs for batch operations:

find . -name "*.js" | \
  xargs -I {} sh -c 'cat "{}" | claude -p "Count the functions in this file"'
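A caveat on find | xargs: filenames containing spaces or newlines break default word splitting. Null-delimited streaming is the safe variant; the sketch below uses wc -l as a stand-in for the per-file claude call:

```bash
#!/bin/bash
# Null-delimited find/xargs survives spaces in filenames
dir=$(mktemp -d)
printf 'function a() {}\n' > "$dir/my file.js"

# Stand-in for: xargs -0 -I {} sh -c 'cat "{}" | claude -p "..."'
find "$dir" -name "*.js" -print0 | xargs -0 -I {} wc -l "{}"
# prints the line count with the spaced filename intact
```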

Environment Variables & Configuration

Configure Claude CLI for automation scripts.

Essential Environment Variables

| Variable | Purpose | Example | Required? |
|----------|---------|---------|-----------|
| 🔑 ANTHROPIC_API_KEY | Authentication | sk-ant-xxxxx | ✅ Yes |
| 🤖 CLAUDE_MODEL | Model selection | claude-sonnet-4-5 | ❌ Optional |
| 📏 CLAUDE_MAX_TOKENS | Response limit | 4096 | ❌ Optional |
| ⏱️ CLAUDE_TIMEOUT | Request timeout | 300000 (ms) | ❌ Optional |

API Key Management

Set API key via environment variable:

export ANTHROPIC_API_KEY="your-api-key-here"
claude -p "test query"

For automation scripts:

#!/bin/bash

# Load from secure location
source ~/.secrets/claude_api_key

claude -p "your automation task"

Using .env files (recommended):

# .env file
ANTHROPIC_API_KEY=sk-ant-xxxxx
CLAUDE_MODEL=claude-sonnet-4-5

# Load in script
#!/bin/bash
set -a
source .env
set +a

claude -p "your task here"

Important: Never commit API keys to git! Add .env to .gitignore.

Configuration File

Claude CLI reads configuration from ~/.config/claude/config.json:

{
  "model": "claude-sonnet-4-5",
  "maxTokens": 4096,
  "temperature": 0.7,
  "timeout": 300000
}

Override in scripts:

# Use specific model
claude -p "task" --model claude-opus-4

# Set max tokens
claude -p "large task" --max-tokens 8000

Setting Defaults

Create a wrapper script with your defaults:

#!/bin/bash
# ~/bin/claude-auto

# Set defaults for automation
export CLAUDE_MODEL="claude-sonnet-4-5"
export CLAUDE_MAX_TOKENS=4096

# Join all arguments into a single prompt
claude -p "$*"

Usage:

claude-auto "your automated query"

Advanced Scripting Patterns

Let's build sophisticated automation workflows.

The 4 Essential Automation Patterns

| Pattern | Use Case | Complexity | Value |
|---------|----------|------------|-------|
| 🔍 Code Review | PR automation | Medium | High |
| 📚 Documentation | Auto-generate docs | Low | High |
| 📊 Log Analysis | Error monitoring | Medium | Critical |
| 🔄 Batch Processing | Multi-file tasks | High | Medium |

Pattern 1: Automated Code Review

#!/bin/bash
# review-pr.sh - Automated pull request review

set -e  # Exit on error

PR_NUMBER=$1

if [ -z "$PR_NUMBER" ]; then
    echo "Usage: $0 <pr-number>"
    exit 1
fi

echo "Reviewing PR #$PR_NUMBER..."

# Fetch PR diff
gh pr diff $PR_NUMBER > /tmp/pr-${PR_NUMBER}.diff

# Get Claude's review
REVIEW=$(cat /tmp/pr-${PR_NUMBER}.diff | claude -p "
Perform a comprehensive code review:
1. Identify potential bugs
2. Check for security issues
3. Suggest performance improvements
4. Verify best practices
5. Rate overall code quality (1-10)

Format as markdown with sections.
")

# Post as PR comment
echo "$REVIEW" | gh pr comment $PR_NUMBER --body-file -

echo "✓ Review posted to PR #$PR_NUMBER"

# Cleanup
rm /tmp/pr-${PR_NUMBER}.diff

Usage:

./review-pr.sh 123

Pattern 2: Documentation Generator

#!/bin/bash
# generate-docs.sh - Auto-generate project documentation

PROJECT_DIR=${1:-.}
OUTPUT_DIR="docs/auto-generated"

mkdir -p "$OUTPUT_DIR"

echo "Generating documentation for: $PROJECT_DIR"

# Generate README
cat "$PROJECT_DIR/package.json" "$PROJECT_DIR"/src/*.js | \
  claude -p "Generate a comprehensive README.md with:
- Project overview
- Installation instructions
- API documentation
- Usage examples
Format in markdown" > "$OUTPUT_DIR/README.md"

echo "✓ README.md generated"

# Generate API docs for each file
for file in "$PROJECT_DIR"/src/*.js; do
    filename=$(basename "$file" .js)

    cat "$file" | claude -p "Generate API documentation for this module.
Include all functions, parameters, return values, and examples." \
    > "$OUTPUT_DIR/${filename}.md"

    echo "✓ Documentation for ${filename}.js generated"
done

echo "Documentation generated in: $OUTPUT_DIR"
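One edge case in loops like the one above: if the glob matches nothing, bash iterates once over the literal pattern string (src/*.js). Enabling nullglob makes an empty match expand to zero iterations instead:

```bash
#!/bin/bash
shopt -s nullglob   # unmatched globs expand to nothing, not themselves

count=0
for file in ./no-such-dir/*.js; do
    count=$((count + 1))
done
echo "files processed: $count"
# → files processed: 0
```

Without nullglob, the loop body would run once with file set to the literal string "./no-such-dir/*.js" and the cat inside would fail.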

Pattern 3: Error Log Analyzer

#!/bin/bash
# analyze-errors.sh - Analyze and categorize error logs

LOG_FILE=${1:-/var/log/app.log}
LINES=${2:-500}

echo "Analyzing last $LINES lines of $LOG_FILE..."

# Extract and analyze errors
tail -n "$LINES" "$LOG_FILE" | \
  grep -E "(ERROR|FATAL|Exception)" | \
  claude -p "Analyze these error logs:

1. Group errors by type/category
2. Count frequency of each error type
3. Identify the top 3 most critical issues
4. Suggest fixes for each issue
5. Recommend monitoring alerts

Output as structured markdown with severity levels.
" > error-analysis.txt

echo "✓ Analysis saved to: error-analysis.txt"

# Display summary
cat error-analysis.txt

Pattern 4: Batch File Processor

#!/bin/bash
# batch-refactor.sh - Refactor multiple files in parallel

PATTERN=$1
REPLACEMENT=$2

if [ -z "$PATTERN" ] || [ -z "$REPLACEMENT" ]; then
    echo "Usage: $0 <pattern> <replacement>"
    exit 1
fi

echo "Refactoring: $PATTERN → $REPLACEMENT"

# Process files in parallel (max 4 concurrent)
find src -name "*.js" | xargs -P 4 -I {} bash -c '
    FILE="{}"
    echo "Processing: $FILE"

    cat "$FILE" | claude -p "Refactor this code:
Replace all instances of '"'$PATTERN'"' with '"'$REPLACEMENT'"'
Maintain functionality and code style.
Return only the refactored code." > "${FILE}.tmp"

    # Only replace if Claude succeeded
    if [ $? -eq 0 ] && [ -s "${FILE}.tmp" ]; then
        mv "${FILE}.tmp" "$FILE"
        echo "✓ Refactored: $FILE"
    else
        rm -f "${FILE}.tmp"
        echo "✗ Failed: $FILE"
    fi
'

echo "Refactoring complete!"

Real-World Automation Examples

Let's look at production-ready automation scripts.

Example 1: Morning Standup Report Generator

#!/bin/bash
# standup-report.sh - Generate daily standup summary

AUTHOR=$(git config user.email)
YESTERDAY=$(date -d "yesterday" +%Y-%m-%d)  # GNU date; on macOS use: date -v-1d +%Y-%m-%d
TODAY=$(date +%Y-%m-%d)

echo "Generating standup report for $YESTERDAY..."

# Get git activity
GIT_LOG=$(git log --since="$YESTERDAY 00:00" \
                  --until="$YESTERDAY 23:59" \
                  --author="$AUTHOR" \
                  --pretty=format:"%h - %s" \
                  --no-merges)

# Get modified files
FILES_CHANGED=$(git log --since="$YESTERDAY 00:00" \
                        --until="$YESTERDAY 23:59" \
                        --author="$AUTHOR" \
                        --name-only --pretty=format: | \
                sort | uniq)

# Generate standup report
REPORT=$(claude -p "$(cat <<EOF
Based on this git activity from yesterday, create a standup report:

## Git Activity:
$GIT_LOG

## Files Changed:
$FILES_CHANGED

Generate a standup report with:
1. **Yesterday**: What I accomplished (bullet points, focus on value delivered)
2. **Today**: What I plan to work on (inferred from current branch/PRs)
3. **Blockers**: Any potential issues (none if not obvious)

Keep it concise and professional.
EOF
)")

# Output report
echo "========================================"
echo "Standup Report - $TODAY"
echo "========================================"
echo "$REPORT"
echo "========================================"

# Optionally send to Slack
# curl -X POST -H 'Content-type: application/json' \
#   --data "{\"text\":\"$REPORT\"}" \
#   $SLACK_WEBHOOK_URL

Run daily with cron:

# Run at 8:30 AM every weekday
30 8 * * 1-5 /home/user/scripts/standup-report.sh
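If a run ever takes longer than the cron interval, two copies can end up hitting the API at once. On Linux, flock from util-linux serializes runs cheaply (a sketch; the lock file path is arbitrary):

```bash
#!/bin/bash
# Abort if a previous run still holds the lock; the lock
# is released automatically when this script exits
exec 200>/tmp/standup-report.lock
if ! flock -n 200; then
    echo "Previous run still in progress, skipping"
    exit 0
fi

echo "lock acquired, running report..."
# ... rest of standup-report.sh goes here
```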

Example 2: Automated PR Description Generator

#!/bin/bash
# create-pr.sh - Create PR with AI-generated description

BRANCH=$(git branch --show-current)
BASE_BRANCH=${1:-main}

echo "Creating PR: $BRANCH → $BASE_BRANCH"

# Get changes diff
DIFF=$(git diff "$BASE_BRANCH"...HEAD)

# Get commit messages
COMMITS=$(git log "$BASE_BRANCH"..HEAD --pretty=format:"- %s")

# Generate PR description
PR_DESC=$(claude -p "$(cat <<EOF
Based on these changes, create a comprehensive PR description:

## Commits:
$COMMITS

## Diff Summary:
$(echo "$DIFF" | head -n 200)

Generate a PR description with:
# Summary
Brief overview of changes

## Changes Made
- Bullet points of key changes
- Focus on what and why

## Testing
- How to test these changes
- Test scenarios covered

## Notes
- Any breaking changes
- Deployment considerations
- Follow-up work needed

Keep it clear, concise, and informative.
EOF
)")

# Create PR
gh pr create \
  --base "$BASE_BRANCH" \
  --head "$BRANCH" \
  --title "$BRANCH" \
  --body "$PR_DESC"

echo "✓ PR created with AI-generated description"

Example 3: Continuous Documentation Updater

#!/bin/bash
# update-docs.sh - Keep docs in sync with code changes

# Run on pre-commit or in CI/CD

echo "Checking if documentation needs updates..."

# Get recently modified files
MODIFIED_FILES=$(git diff --name-only HEAD~1..HEAD | grep "src/.*\.js$")

if [ -z "$MODIFIED_FILES" ]; then
    echo "No source files modified, skipping docs update"
    exit 0
fi

echo "Modified files detected, updating documentation..."

# For each modified file, update its documentation
for FILE in $MODIFIED_FILES; do
    DOC_FILE="docs/$(basename "$FILE" .js).md"

    # Generate updated documentation
    cat "$FILE" | claude -p "Generate updated documentation for this module.
Include: API reference, usage examples, recent changes." > "$DOC_FILE.tmp"

    # Check if docs actually changed
    if ! diff -q "$DOC_FILE" "$DOC_FILE.tmp" > /dev/null 2>&1; then
        mv "$DOC_FILE.tmp" "$DOC_FILE"
        git add "$DOC_FILE"
        echo "✓ Updated: $DOC_FILE"
    else
        rm "$DOC_FILE.tmp"
        echo "- No changes: $DOC_FILE"
    fi
done

# Commit documentation updates
if git diff --staged --quiet; then
    echo "No documentation updates needed"
else
    git commit -m "docs: auto-update documentation for modified modules

🤖 Generated with Claude CLI"
    echo "✓ Documentation committed"
fi

Zimbra Administration Automation

Claude CLI can automate Zimbra collaboration server administration tasks.

Zimbra Automation Overview

| Task | Automation Value | Script Complexity |
|------|------------------|-------------------|
| 📊 Account Audits | High (manual = 2hr) | Low |
| 👥 User Provisioning | Critical (error-prone) | Medium |
| 📋 Log Monitoring | High (proactive alerts) | Medium |
| ✉️ Signature Management | Medium (time-saving) | Low |

Figure 2: Zimbra administration automation patterns - Three key workflows (Audit Accounts, Monitor Logs, Provision Users) leverage Claude CLI to transform Zimbra server data into actionable reports, alerts, and automated provisioning

Example 1: Zimbra Account Audit Report

#!/bin/bash
# zimbra-audit.sh - Generate comprehensive account audit report

echo "Generating Zimbra account audit report..."

# Get all accounts (first 100 for this report)
ACCOUNTS=$(su - zimbra -c "zmprov -l gaa" | head -n 100)

# Get detailed account info, reusing the list instead of querying twice
ACCOUNT_DATA=$(echo "$ACCOUNTS" | while read -r EMAIL; do
    echo "=== $EMAIL ==="
    su - zimbra -c "zmprov ga $EMAIL zimbraAccountStatus zimbraMailQuota zimbraMailQuotaUsed zimbraLastLogonTimestamp"
done)

# Generate audit report with Claude
REPORT=$(echo "$ACCOUNT_DATA" | claude -p "Analyze this Zimbra account data and create an audit report:

1. **Summary Statistics**
   - Total accounts
   - Active vs inactive accounts
   - Accounts over quota

2. **Inactive Accounts**
   - List accounts not logged in for >90 days
   - Recommend action (disable/delete)

3. **Quota Analysis**
   - Accounts using >90% quota
   - Average quota usage
   - Recommendations for quota adjustments

4. **Security Concerns**
   - Locked accounts
   - Accounts with unusual status

Format as professional markdown report.
")

# Save and display report
REPORT_FILE="zimbra-audit-$(date +%Y%m%d).md"
echo "$REPORT" > "$REPORT_FILE"

echo "✓ Audit report generated: $REPORT_FILE"
cat "$REPORT_FILE"

Example 2: Bulk User Provisioning with Validation

#!/bin/bash
# zimbra-bulk-provision.sh - Provision users from CSV with AI validation

CSV_FILE=$1

if [ ! -f "$CSV_FILE" ]; then
    echo "Usage: $0 <users.csv>"
    exit 1
fi

echo "Validating user data with Claude..."

# Validate CSV format
VALIDATION=$(cat "$CSV_FILE" | claude -p "Validate this Zimbra user provisioning CSV:

Check for:
1. Valid email formats
2. Duplicate entries
3. Missing required fields (name, email, password, quota)
4. Invalid characters in usernames
5. Quota values are reasonable

Return:
- 'ALL_CHECKS_PASSED' if no issues
- List of specific errors if problems found
")

if [[ "$VALIDATION" != *"ALL_CHECKS_PASSED"* ]]; then
    echo "Validation failed:"
    echo "$VALIDATION"
    exit 1
fi

echo "✓ Validation passed"

# Generate zmprov commands
PROVISION_SCRIPT=$(cat "$CSV_FILE" | claude -p "Convert this CSV to Zimbra zmprov commands:

For each user, generate:
zmprov ca <email> <password> displayName '<name>' zimbraMailQuota <quota>

Include error handling and logging.
Output as bash script.")

# Save script
echo "$PROVISION_SCRIPT" > /tmp/zimbra-provision.sh
chmod +x /tmp/zimbra-provision.sh

echo "Generated provisioning script: /tmp/zimbra-provision.sh"
echo "Review and run: sudo -u zimbra /tmp/zimbra-provision.sh"

Example 3: Zimbra Log Analysis and Alerting

#!/bin/bash
# zimbra-log-monitor.sh - Monitor Zimbra logs for issues

LOG_FILE="/opt/zimbra/log/mailbox.log"
ALERT_THRESHOLD=10

echo "Monitoring Zimbra logs for anomalies..."

# Analyze recent log entries
ANALYSIS=$(tail -n 1000 "$LOG_FILE" | claude -p "Analyze these Zimbra mailbox logs:

1. Identify error patterns:
   - Authentication failures
   - Database connection issues
   - LMTP delivery failures
   - Memory/performance warnings

2. Count occurrences of each issue

3. Severity assessment (LOW/MEDIUM/HIGH/CRITICAL)

4. Recommended actions for each issue type

Output as structured JSON:
{
  \"issues\": [
    {
      \"type\": \"auth_failure\",
      \"count\": 15,
      \"severity\": \"HIGH\",
      \"action\": \"Check LDAP connectivity\"
    }
  ]
}
" --output-format json)

# Parse JSON and check thresholds
CRITICAL_COUNT=$(echo "$ANALYSIS" | jq '[.issues[] | select(.severity=="CRITICAL")] | length')

if [ "$CRITICAL_COUNT" -gt 0 ]; then
    echo "⚠️  CRITICAL ISSUES DETECTED!"
    echo "$ANALYSIS" | jq -r '.issues[] | select(.severity=="CRITICAL") | "- \(.type): \(.count) occurrences - \(.action)"'

    # Send alert (example: email)
    echo "$ANALYSIS" | jq -r '.issues[] | select(.severity=="CRITICAL")' | \
    mail -s "Zimbra Critical Alert" admin@example.com
fi

# Save full analysis
echo "$ANALYSIS" | jq . > "zimbra-analysis-$(date +%Y%m%d-%H%M).json"
echo "✓ Analysis saved"

Example 4: Zimbra Signature Manager

#!/bin/bash
# zimbra-update-signatures.sh - Update email signatures for all users

SIGNATURE_TEMPLATE=$1

if [ ! -f "$SIGNATURE_TEMPLATE" ]; then
    echo "Usage: $0 <signature-template.html>"
    exit 1
fi

# Get template
TEMPLATE=$(cat "$SIGNATURE_TEMPLATE")

# Get all users
USERS=$(su - zimbra -c "zmprov -l gaa")

echo "Updating signatures for $(echo "$USERS" | wc -l) users..."

echo "$USERS" | while read EMAIL; do
    # Get user details
    USER_INFO=$(su - zimbra -c "zmprov ga $EMAIL givenName sn title telephoneNumber")

    # Generate personalized signature
    SIGNATURE=$(claude -p "$(cat <<EOF
Generate a personalized email signature using this template and user data:

Template:
$TEMPLATE

User Data:
$USER_INFO
Email: $EMAIL

Replace placeholders with actual user data:
- {{name}} = Full name
- {{title}} = Job title
- {{phone}} = Phone number
- {{email}} = Email address

Output only the final HTML signature.
EOF
)")

    # Apply signature
    su - zimbra -c "zmprov ma $EMAIL zimbraPrefMailSignature '$SIGNATURE'"

    echo "✓ Updated signature for: $EMAIL"
done

echo "✓ All signatures updated"

Error Handling & Best Practices

Build robust automation scripts.

Core Error Handling Patterns

| Pattern | Purpose | When to Use |
|---------|---------|-------------|
| 🛡️ Exit code checking | Detect failures | Every script |
| 🔄 Retry logic | Handle rate limits | API-heavy scripts |
| 📝 Logging | Debugging & audit | Production scripts |
| 🔒 Input validation | Prevent injection | User input scripts |

Exit Code Checking

#!/bin/bash
set -e  # Exit on any error

# Check Claude CLI availability
if ! command -v claude &> /dev/null; then
    echo "Error: Claude CLI not found"
    exit 1
fi

# Run Claude with error handling
if OUTPUT=$(claude -p "your query"); then
    echo "Success: $OUTPUT"
else
    EXIT_CODE=$?   # capture before any other command overwrites it
    echo "Error: Claude CLI failed (exit code: $EXIT_CODE)"
    exit 1
fi

Retry Logic for Rate Limits

#!/bin/bash

# Retry function with exponential backoff
retry_claude() {
    local max_attempts=5
    local attempt=1
    local delay=2

    while [ $attempt -le $max_attempts ]; do
        echo "Attempt $attempt of $max_attempts..."

        if OUTPUT=$(claude -p "$1" 2>&1); then
            echo "$OUTPUT"
            return 0
        fi

        # Check if rate limited
        if echo "$OUTPUT" | grep -q "rate.*limit"; then
            echo "Rate limited, waiting ${delay}s..."
            sleep $delay
            delay=$((delay * 2))  # Exponential backoff
            attempt=$((attempt + 1))
        else
            echo "Error (not rate limit): $OUTPUT"
            return 1
        fi
    done

    echo "Max retries exceeded"
    return 1
}

# Usage
retry_claude "your automated query"

Logging and Monitoring

#!/bin/bash

# Logging function
log() {
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1" | tee -a automation.log
}

log "Starting automation script..."

# Track execution time
START_TIME=$(date +%s)

# Your automation
OUTPUT=$(claude -p "your task" 2>&1)
EXIT_CODE=$?

END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))

if [ $EXIT_CODE -eq 0 ]; then
    log "✓ Success (${DURATION}s)"
    log "Output: $OUTPUT"
else
    log "✗ Failed (${DURATION}s)"
    log "Error: $OUTPUT"
fi

# Send metrics to monitoring system
# curl -X POST https://metrics.example.com/log \
#   -d "script=claude-auto&duration=$DURATION&status=$EXIT_CODE"

Security Best Practices

| Practice | Risk Level If Ignored | Implementation Difficulty |
|----------|----------------------|---------------------------|
| 🔒 API key protection | Critical | Easy |
| ✅ Input validation | High | Medium |
| 🧹 Output sanitization | Medium | Easy |
| 🚫 Secret scanning | Critical | Easy |

1. API Key Protection:

# ✗ BAD: Key hardcoded in the script (ends up in git history)
export ANTHROPIC_API_KEY="sk-ant-xxxxx"
claude -p "task"

# ✓ GOOD: Load from secure location
source ~/.secrets/claude_api_key
claude -p "task"

# ✓ BETTER: Use secret management
API_KEY=$(vault kv get -field=key secret/claude)
export ANTHROPIC_API_KEY="$API_KEY"

2. Input Validation:

#!/bin/bash

# Validate user input before passing to Claude
USER_INPUT=$1

if [[ ! "$USER_INPUT" =~ ^[a-zA-Z0-9._-]+$ ]]; then
    echo "Error: Invalid input format"
    exit 1
fi

claude -p "process: $USER_INPUT"

3. Output Sanitization:

# Sanitize Claude output before using in commands
CLAUDE_OUTPUT=$(claude -p "get filename")

# Remove potentially dangerous characters
SAFE_FILENAME=$(echo "$CLAUDE_OUTPUT" | tr -cd '[:alnum:]._-')

# Use safely
cp "source.txt" "$SAFE_FILENAME"
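The table above lists secret scanning, but the examples stop at sanitization. A crude staged-changes scan can be a few lines of grep (a sketch; the patterns are illustrative, and dedicated tools such as gitleaks catch far more):

```bash
#!/bin/bash
# Patterns that commonly indicate leaked credentials (illustrative only)
PATTERNS='sk-ant-[A-Za-z0-9_-]+|AKIA[0-9A-Z]{16}|BEGIN [A-Z ]*PRIVATE KEY'

scan() {
    # Fails (returns 1) if stdin matches any secret pattern
    if grep -E -q "$PATTERNS"; then
        echo "possible secret detected"
        return 1
    fi
    echo "clean"
}

# Pre-commit hook usage: git diff --staged | scan || exit 1
printf 'ANTHROPIC_API_KEY=sk-ant-abc123\n' | scan
# → possible secret detected
printf 'console.log("hello")\n' | scan
# → clean
```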

Resource Management

#!/bin/bash

# Limit concurrent Claude calls
MAX_PARALLEL=4

# Process files with limited parallelism
find . -name "*.js" | \
  xargs -P "$MAX_PARALLEL" -I {} bash -c '
    echo "Processing: {}"
    cat "{}" | claude -p "Analyze this code" > "{}.analysis"
  '

FAQ

How do I use Claude CLI in cron jobs?

Ensure environment variables are set in your crontab:

# Edit crontab
crontab -e

# Add environment variables
ANTHROPIC_API_KEY=your-key-here
PATH=/usr/local/bin:/usr/bin:/bin

# Schedule your script
0 9 * * * /home/user/scripts/daily-report.sh >> /var/log/claude-auto.log 2>&1

Pro Tip: Use absolute paths in cron scripts and redirect output for debugging.

Can I run Claude CLI in CI/CD pipelines?

Yes! Claude CLI works great in CI/CD:

GitHub Actions:

- name: AI Code Review
  env:
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
  run: |
    git diff main | claude -p "Review for bugs and best practices" > review.txt
    cat review.txt

GitLab CI:

ai_review:
  script:
    - claude -p "Review code changes" > review.txt
  variables:
    ANTHROPIC_API_KEY: $CI_ANTHROPIC_API_KEY

How do I handle rate limits in scripts?

Implement retry logic with exponential backoff (see Error Handling section).

Monitor usage:

# Surface any quota/limit warnings from the CLI output
claude -p "test" 2>&1 | grep -i "quota\|limit"

Add delays between calls:

for file in *.js; do
    claude -p "analyze $file"
    sleep 2  # Delay between requests
done
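The sleep-in-a-loop approach can be wrapped into a reusable throttle that enforces a minimum gap between calls (a sketch; echo stands in for the claude call in the demo):

```bash
#!/bin/bash
MIN_GAP=2      # minimum seconds between API calls
LAST_CALL=0

throttled() {
    local now gap
    now=$(date +%s)
    gap=$(( now - LAST_CALL ))
    if [ "$gap" -lt "$MIN_GAP" ]; then
        sleep $(( MIN_GAP - gap ))
    fi
    LAST_CALL=$(date +%s)
    "$@"
}

throttled echo "first call"    # runs immediately
throttled echo "second call"   # waits until the gap has elapsed
```

In a real script you would call `throttled claude -p "..."` so every invocation is paced automatically.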

How do I pass sensitive data to Claude securely?

Best practices:

  1. Redact before sending:
# Remove sensitive data
cat config.json | \
  sed 's/"password":.*/"password": "REDACTED"/' | \
  claude -p "validate this config"
  1. Use local processing:
# Process locally, only send summaries
grep -v "password\|secret\|token" file.txt | \
  claude -p "analyze this"
  1. Environment-specific configs:
# Never send production credentials
if [ "$ENV" = "production" ]; then
    echo "Error: Cannot process production data"
    exit 1
fi
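The redaction one-liner generalizes into a reusable filter (a sketch; extend the sed alternation with your own field names):

```bash
#!/bin/bash
# Mask common secret-bearing JSON fields before piping data anywhere
redact() {
    sed -E 's/"(password|secret|token|api_key)"[[:space:]]*:[[:space:]]*"[^"]*"/"\1": "REDACTED"/g'
}

echo '{"user": "alice", "password": "hunter2"}' | redact
# → {"user": "alice", "password": "REDACTED"}
```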

Can I use Claude CLI with Zimbra's zmprov in parallel?

Yes, but be careful with Zimbra's LDAP load:

#!/bin/bash

# Limit concurrent operations
MAX_PARALLEL=2

# Get users and process in batches
su - zimbra -c "zmprov -l gaa" | \
  xargs -P "$MAX_PARALLEL" -I {} bash -c '
    USER_DATA=$(su - zimbra -c "zmprov ga {}")
    ANALYSIS=$(echo "$USER_DATA" | claude -p "suggest optimizations")
    echo "{}: $ANALYSIS"
  '

Warning: Too many parallel zmprov calls can overload LDAP. Start with MAX_PARALLEL=2.

What's the best way to store API keys for automation?

| Method | Security | Complexity | Recommendation |
|--------|----------|------------|----------------|
| 🏆 Secret manager (Vault) | Excellent | High | Production systems |
| 🥈 Encrypted files (GPG) | Good | Medium | Team projects |
| 🥉 File permissions (chmod 600) | Fair | Low | Personal scripts |
| 🚫 Hardcoded/Git | Never | - | Never use |

Options ranked by security:

  1. Secret Management (Best):
API_KEY=$(vault kv get -field=key secret/claude)
  1. Encrypted Files:
# Encrypt: gpg -c ~/.secrets/claude_key
# Decrypt and use:
export ANTHROPIC_API_KEY=$(gpg -d ~/.secrets/claude_key.gpg)
  1. File Permissions (Minimum):
# Store in file with restricted permissions
chmod 600 ~/.secrets/claude_api_key
source ~/.secrets/claude_api_key

Never: Commit keys to git, hardcode in scripts, or use world-readable files.


Conclusion

You've just unlocked Claude CLI's automation superpowers ⚡. You now know:

| Category | What You Learned | Impact |
|----------|------------------|--------|
| 📄 Print Mode | Non-interactive automation interface | Foundation |
| 🔧 Bash Integration | Piping, variables, CLI composition | Productivity x2 |
| ⚙️ Configuration | Environment setup & security | Reliability |
| 🎯 Patterns | 4 essential automation patterns | Reusability |
| 🌟 Real Examples | Production-ready scripts | Immediate value |
| 📧 Zimbra | Mail server automation | Domain expertise |
| 🛡️ Best Practices | Error handling & security | Production-ready |

Automation transforms Claude CLI from a helpful tool into an autonomous team member that works 24/7.

What's Next?

In the next post, "Claude CLI Custom Commands: Build Your Personal AI Toolkit", you'll learn:

  • Creating reusable slash commands
  • Building a custom command library
  • Template systems for common tasks
  • Team collaboration with shared commands
  • Organizing commands for maximum productivity

Interactive mode is conversational. Automation is powerful. But custom commands? That's where you build your own AI superpowers.

Challenge: Build Your First Automation

Before moving on, create one automation script:

| Level | Challenge | Time | Value |
|-------|-----------|------|-------|
| 🌱 Beginner | Auto-generate commit messages | 5 min | Daily use |
| ⚙️ Intermediate | Daily standup report | 30 min | Weekly use |
| 🏗️ Advanced | Zimbra account audits | 1 hr | Monthly use |

Beginner: Auto-generate commit messages

git diff --staged | claude -p "conventional commit message"
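To make the beginner challenge a one-word command, you can wrap it in a git alias (a config fragment; the alias name aicommit is my own choice, and the prompt is adjustable):

```bash
git config --global alias.aicommit \
  '!f() { msg=$(git diff --staged | claude -p "Generate a conventional commit message"); git commit -m "$msg"; }; f'

# Then: git add -p && git aicommit
```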

Intermediate: Daily standup report generator

# Use the example from this post, customize for your workflow

Advanced: Zimbra account audit automation

# Adapt the Zimbra examples for your server

Share what you built! Automation is more fun when we learn from each other.

Resources

Next in Series: Claude CLI Custom Commands: Build Your Personal AI Toolkit →


Ready to automate everything? Share this guide with your team.

Have automation ideas? Drop them in the commentsβ€”I love seeing creative use cases!


Last Updated: November 2025 | Claude Code Version 2.x | Part 3 of 8 in the Claude CLI Zero to Hero Series