
Privacy Deep Dive

Detailed technical analysis of privacy implications.

When you use AI coding tools, they typically send:

  • Current file content
  • Recent file history
  • Codebase embeddings (if indexed)
  • File paths and structure
  • Your prompts
  • Git diff (sometimes)
  • Terminal output (sometimes)
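Taken together, that context can be substantial. A minimal sketch of what a single request payload might carry (the field names below are made up for illustration; every real tool uses its own schema):

```shell
# Illustrative sketch of the context a cloud-backed AI coding tool might
# bundle into one completion request. Field names are hypothetical.
cat > /tmp/ai-request-sketch.json <<'EOF'
{
  "prompt": "fix the null check in parseConfig",
  "current_file": { "path": "src/config.ts", "content": "..." },
  "recent_files": ["src/index.ts", "src/utils.ts"],
  "repo_structure": ["src/", "tests/", "package.json"],
  "git_diff": "--- a/src/config.ts\n+++ b/src/config.ts",
  "terminal_output": "npm test: 2 failing"
}
EOF

# Each quoted field name is data that leaves your machine
grep -o '"[a-z_]*":' /tmp/ai-request-sketch.json
```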
| Jurisdiction | Legal Framework | Key Implication |
| --- | --- | --- |
| US | CLOUD Act, FISA 702 | Government can compel disclosure |
| EU | GDPR | Stronger user protections |
| China | PIPL, Cybersecurity Law | Data localization requirements |

The CLOUD Act allows the US government to compel disclosure of data stored by US companies anywhere in the world, potentially without notifying the user. In practice:

  • Data stored by US company = subject to US law
  • Applies even if servers are in EU
  • Enterprise agreements may provide some protection
What you can verify yourself:

  • Network destinations (where data goes)
  • Payload contents (what’s sent)
  • Local file access (what’s read)

What you must take on trust:

  • Server-side retention (must trust vendor)
  • Training exclusion (must trust vendor)
  • Government access (no visibility)
```shell
# Monitor outbound connections
sudo lsof -i -n | grep cursor

# Use mitmproxy for traffic inspection
mitmproxy --mode regular

# Check request headers for privacy mode
# Look for: x-ghost-mode: true
```
| Practice | Recommendation |
| --- | --- |
| Storage | System keychain, not env files |
| Rotation | Quarterly minimum |
| Scope | Separate keys for AI vs production |
| Git | Pre-commit hooks with gitleaks |
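The Git row can be wired up as a pre-commit hook. A sketch, assuming gitleaks v8 (whose `protect --staged` subcommand scans only staged changes) and that you run this from the repository root:

```shell
# Install a pre-commit hook that scans staged changes with gitleaks
# before each commit (assumes gitleaks v8 is on PATH)
hooks_dir=".git/hooks"
mkdir -p "$hooks_dir"
cat > "$hooks_dir/pre-commit" <<'EOF'
#!/bin/sh
# Block the commit if gitleaks finds a potential secret in staged changes
exec gitleaks protect --staged --redact
EOF
chmod +x "$hooks_dir/pre-commit"
```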
```shell
# macOS Keychain
security add-generic-password -a "$USER" -s "openai-api-key" -w "sk-..."
export OPENAI_API_KEY=$(security find-generic-password -s "openai-api-key" -w)

# Linux (GNOME Keyring)
secret-tool store --label="OpenAI" service openai username apikey
export OPENAI_API_KEY=$(secret-tool lookup service openai username apikey)
```
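Quarterly rotation is easier to keep up when something nags you. A minimal sketch using a sidecar timestamp file (the file path and the 90-day threshold are arbitrary choices, not a standard):

```shell
# Record when a key was last rotated, and warn once it gets old
mark_rotated() {
  date +%s > "$HOME/.openai-key-rotated"
}

check_rotation() {
  # Missing file counts as "never rotated"
  last=$(cat "$HOME/.openai-key-rotated" 2>/dev/null || echo 0)
  age_days=$(( ( $(date +%s) - last ) / 86400 ))
  if [ "$age_days" -gt 90 ]; then
    echo "API key is ${age_days} days old: rotate it"
  fi
}
```

Call `mark_rotated` whenever you regenerate the key, and run `check_rotation` from your shell profile so the warning shows up in every new terminal.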
For low-sensitivity work:

  • Use any tool with privacy mode
  • Basic exclusions for secrets

For moderately sensitive work:

  • Use tools with zero retention
  • Comprehensive exclusions
  • Review before each session

For highly sensitive work:

  • Self-hosted or local only
  • Air-gapped where possible
  • Formal security review
| Incident | Date | Lesson |
| --- | --- | --- |
| Anthropic training default ON | Aug 2024 | Check defaults |
| Package hallucination attacks | Ongoing | Verify dependencies |
| Prompt injection demos | Ongoing | Limit agent permissions |
A practical starting checklist:

  1. Enable privacy mode immediately
  2. Create comprehensive exclusions
  3. Use BYOK where possible
  4. Prefer local models for sensitive work
  5. Review diffs before committing
  6. Rotate keys regularly
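For step 2, an exclusion file might look like the following. The patterns are illustrative and the filename depends on your tool (Cursor, for example, reads `.cursorignore`; check your tool's documentation for its mechanism):

```
# Keep secrets and sensitive material out of AI context
.env*
*.pem
*.key
secrets/
*.tfstate
customer-data/
```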