Coding

Comprehensive Codebase Architecture & Security Audit

Ask AI to audit your codebase for architecture quality, scalability, security, performance, and maintainability with concrete, prioritized recommendations.

Tags: code review, architecture audit, secure coding, scalability, performance, maintainability, devops, owasp, threat modeling, observability, testing strategy, refactoring roadmap, monolith vs microservices, dependency management, documentation

Prompt

You are a senior software architect and security engineer. Audit the provided codebase holistically and produce a clear, actionable report. Focus on architecture quality, simplification opportunities, scalability, security, performance, maintainability, developer experience, and compliance.

=== WHAT TO DO ===
1) Map the system
   - Identify modules, layers, boundaries, and dependencies (internal and third-party).
   - Detect architecture style(s): layered/hexagonal/CQRS/DDD/event-driven, etc.
   - Highlight cross-cutting concerns: authn/authz, config, logging, error handling, caching.
   - Note deployment topology and data flow (requests, events, batch jobs).

2) Evaluate architecture quality & simplification
   - Assess cohesion, coupling, stability, and modularity.
   - Flag anti-patterns: god classes, shared mutable state, circular deps, anemic domain models, over-engineering, YAGNI violations.
   - Propose concrete simplifications that do not reduce functionality (e.g., flatten layers, merge duplicate services, remove dead code, extract a shared module, standardize patterns).
   - Compare alternatives (e.g., consolidated monolith vs microservices) with trade-offs.
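To make the circular-dependency check in the mapping step concrete, here is one possible sketch in Python. The module graph below is invented for illustration; a real audit would build it from import or build metadata.

```python
def find_cycles(graph):
    """Return unique dependency cycles found by depth-first search."""
    seen, cycles = set(), []

    def dfs(node, stack):
        if node in stack:
            cycle = stack[stack.index(node):]
            if frozenset(cycle) not in seen:
                seen.add(frozenset(cycle))
                cycles.append(cycle + [node])
            return
        for dep in graph.get(node, []):
            dfs(dep, stack + [node])

    for start in graph:
        dfs(start, [])
    return cycles

# Hypothetical module graph: orders -> billing -> users -> orders
deps = {"orders": ["billing"], "billing": ["users"], "users": ["orders"]}
print(find_cycles(deps))  # [['orders', 'billing', 'users', 'orders']]
```

In practice, dedicated tooling (dependency-graph plugins for the build system) does this at scale; the sketch only shows the shape of the evidence a finding should cite.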

3) Scalability & resiliency
   - Check statelessness, horizontal scaling, backpressure, load shedding.
   - Review caching (local/distributed), queue usage, database indices/queries, connection pooling, read/write split, sharding/partitioning.
   - Verify idempotency, retries with jitter, circuit breakers, timeouts, bulkheads.
   - Suggest capacity planning and bottleneck mitigations.
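The retry guidance above can be sketched as exponential backoff with full jitter. The attempt counts and delays here are illustrative defaults, not recommendations for any specific system.

```python
import random
import time

def retry_with_jitter(operation, attempts=4, base_delay=0.1, max_delay=2.0):
    """Retry a flaky operation with capped exponential backoff plus full jitter."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            # Full jitter: sleep a random amount up to the capped exponential delay.
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(random.uniform(0, delay))
```

Full jitter spreads retries randomly across the backoff window, which helps avoid synchronized retry storms after a shared dependency recovers.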

4) Security & compliance
   - Map against OWASP Top 10 and CWE categories; include threat modeling (STRIDE) and a risk matrix (Severity × Likelihood).
   - Review authn/authz (RBAC/ABAC), session management, token handling (JWT/opaque), CSRF/CORS, input validation, output encoding, SSRF protections.
   - Inspect secrets management, key rotation, TLS, certificate pinning, secure headers, dependency and supply-chain risks (SCA), SBOM & license compliance (SPDX).
   - Data protections: encryption at rest/in transit, PII handling, retention, audit trails; note GDPR/CCPA/HIPAA implications where relevant.
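As one example of a mechanical check behind the secure-headers item, a sketch that diffs a response's headers against a minimal baseline. The baseline set is a common starting point, not a complete policy.

```python
# A minimal hardening baseline; real policies should be tuned per application.
REQUIRED_HEADERS = {
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
}

def missing_secure_headers(headers):
    """Return required headers absent from a response, compared case-insensitively."""
    present = {name.title() for name in headers}
    return sorted(REQUIRED_HEADERS - present)

print(missing_secure_headers({
    "content-security-policy": "default-src 'self'",
    "x-frame-options": "DENY",
}))  # ['Strict-Transport-Security', 'X-Content-Type-Options']
```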

5) Performance & reliability
   - Identify hot paths and N+1 queries; analyze algorithmic complexity and memory usage.
   - Recommend profiling strategies, representative benchmarks, and caching/DB/query optimizations.
   - Verify graceful shutdown, startup probes, health checks, readiness, and disaster recovery.
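The N+1 pattern and its batched fix can be sketched abstractly; the `fetch_*` callables below stand in for real data-access calls and are invented for illustration.

```python
# N+1: one query for the parent rows, then one query per child lookup.
def authors_n_plus_one(posts, fetch_author_by_id):
    return [fetch_author_by_id(p["author_id"]) for p in posts]

# Batched fix: collect the keys and issue a single IN-style query.
def authors_batched(posts, fetch_authors_by_ids):
    ids = {p["author_id"] for p in posts}
    by_id = fetch_authors_by_ids(ids)  # one round trip instead of N
    return [by_id[p["author_id"]] for p in posts]
```

A finding in this area should quote the offending loop, count the queries per request, and estimate the round-trip cost saved by batching.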

6) Testing & quality
   - Assess the test pyramid (unit/integration/e2e), coverage, meaningful assertions, determinism, flake rate.
   - Evaluate contract tests for services/APIs, fixture hygiene, and CI/CD gates (lint, SAST, SCA, tests).
   - Suggest quick wins to strengthen quality without large rewrites.
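A minimal consumer-side contract check might look like this sketch; the field names and types are invented for illustration.

```python
# The fields and types this consumer depends on from a provider response.
CONTRACT = {"id": int, "email": str, "active": bool}

def violates_contract(payload, contract=CONTRACT):
    """Return a list of contract violations in a provider payload."""
    problems = []
    for field, expected in contract.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected):
            problems.append(f"{field}: expected {expected.__name__}")
    return problems

print(violates_contract({"id": 1, "email": "a@b.example", "active": True}))  # []
```

Running such checks in CI on both sides of a service boundary catches breaking API changes before an integration environment does.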

7) Observability & operations
   - Review logs, metrics, tracing; identify SLIs/SLOs and error budgets.
   - Standardize structured logging, correlation IDs, trace propagation, dashboards, and alerts.
   - Evaluate CI/CD pipelines, trunk-based vs GitFlow, review policies, environment parity, IaC (Terraform/CloudFormation), and runtime policies (OPA/Kyverno).
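The structured-logging and correlation-ID guidance can be illustrated with a small sketch; the field names follow a common convention rather than any standard.

```python
import json
import logging
import uuid

def log_event(logger, correlation_id, event, **fields):
    """Emit one JSON log line carrying a correlation ID; returns the line."""
    line = json.dumps({"correlation_id": correlation_id, "event": event, **fields})
    logger.info(line)
    return line

logger = logging.getLogger("audit-demo")
cid = str(uuid.uuid4())  # in practice, taken from the incoming request header
log_event(logger, cid, "checkout.started", cart_items=3)
```

Because every line carries the same `correlation_id`, a single request can be followed across services by filtering logs on that one field.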

8) Documentation & developer experience
   - Check READMEs, architectural decision records (ADRs), runbooks, onboarding docs, and code comments.
   - Suggest templates, contribution guidelines, and code style/formatting standards.

=== REQUIRED OUTPUT FORMAT ===
Produce the following sections in Markdown:

1. **Executive Summary**
   - Overall health scorecards (Architecture, Security, Scalability, Performance, Maintainability, DevEx) on a 0–10 scale with brief rationale.
   - Top 5 risks and Top 5 quick wins.

2. **Architecture Map**
   - Textual system diagram: modules/services → dependencies → data stores → external systems.
   - Layering overview and boundaries; call out circular dependencies.
   - Table of modules with purpose, owners (if known), LOC, primary language.

3. **Findings & Evidence**
   - Bullet list of issues grouped by area (Architecture, Security, Scalability, Performance, Testing, Observability, Docs).
   - For each finding: severity (Critical/High/Med/Low), likelihood, affected paths/files, short code excerpts or references, and impact.

4. **Simplification Proposals**
   - Specific refactors with before/after descriptions, scope, effort (S/M/L), risk, and expected impact.
   - Include at least one option to reduce moving parts while preserving behavior.

5. **Security Report**
   - OWASP/CWE mapping, threat model summary, secret scans, dependency risks, SBOM/license notes, hardening checklist.

6. **Scalability & Reliability Plan**
   - Bottlenecks, capacity estimates, required caching/indexing, queuing strategies, resilience patterns, and rollback/DR notes.

7. **Performance & Data Layer Review**
   - Query/index audit, hot endpoints, payload sizes, CPU/memory hotspots, schema and migration health.

8. **Testing & CI/CD**
   - Coverage summary, flaky test hotspots, test debt, recommended gates and tooling.

9. **Observability & Ops**
   - Current vs target SLIs/SLOs, logging/metrics/tracing gaps, alert runbooks, on-call hygiene.

10. **Prioritized Roadmap**
    - Table with Item, Area, Impact, Effort, Owner, Dependencies, ETA buckets (0–30, 31–60, 61–90 days).

11. **Open Questions & Assumptions**
    - Clarifications needed from maintainers; list assumptions made due to missing data.

=== CODEBASE-SIZE RUBRICS ===
- **Small (<50k LOC)**: bias toward simplification, consolidate services, emphasize test/observability setup.
- **Medium (50k–250k LOC)**: emphasize modular boundaries, SLOs, and CI/CD gates.
- **Large (>250k LOC or >20 services)**: emphasize domain boundaries, contracts, platform tooling, risk-based sequencing.

=== CHECKLISTS TO APPLY ===
- Architecture: layering, boundaries, dependency direction, ADRs.
- Security: OWASP Top 10, secret management, SCA/SBOM, least privilege, secure defaults.
- Scalability: statelessness, cache strategy, DB indices, queues, idempotency.
- Reliability: timeouts, retries, circuit breakers, graceful shutdown, health probes.
- Performance: profiling, N+1, payload bloat, algorithmic hot spots.
- Testing: pyramid balance, coverage, flaky tests, contract tests.
- Observability: structured logs, metrics, traces, SLOs, dashboards, alerts.
- DevEx: local dev reproducibility, linters/formatters, pre-commit hooks, docs.

=== DELIVERABLE QUALITY BAR ===
- Be objective and cite concrete files/paths and code references.
- Prefer minimal viable changes with high leverage.
- Provide alternatives with trade-offs and recommend one.
- Use tables for prioritization; keep recommendations atomic and testable.
- If repository access is limited, explain exactly what else you need and why.

=== REQUEST ANY MISSING ARTIFACTS ===
If not provided, ask for:
- `repo URL/zip`, languages & frameworks, primary runtime config, IaC files, CI/CD config, env samples, DB schema/migrations, API specs, observed logs/metrics/traces, test reports, SBOM/dependency manifests, threat model/ADRs, and target NFRs.

Begin by briefly summarizing the codebase (as detected) and listing assumptions, then continue with the structured report.

Format the final deliverable as a single, well-structured Markdown document (`audit-report.md`). Use proper Markdown headers, bullet points, tables, and code blocks where relevant. Ensure the file is self-contained and readable in any markdown viewer.
Written by Sam Holstein (@msamholstein_6ead51)

AI consultant and software creator helping businesses and creators harness artificial intelligence through practical solutions and innovative products. Creator of BestPromptIdeas.com.

