Security Testing · AppSec · DAST · Penetration Testing

Web Application Security Testing: A Complete Guide for Development Teams

A practical guide to web application security testing — covering DAST, SAST, penetration testing, and how to build a security testing program that actually works for your team.

Mythos Security Team·January 20, 2025·7 min read

Security testing is one of the most consistently under-invested practices in software development. Most teams know they should do it — but without a clear framework, it becomes ad-hoc, inconsistent, and ultimately ineffective.

This guide gives you a practical framework for web application security testing, covering the major methodologies, when to use each, and how to integrate them into a development workflow that doesn't slow your team down.

The Security Testing Landscape

Security testing for web applications falls into several broad categories, each with different strengths and appropriate use cases.

Static Application Security Testing (SAST)

SAST analyzes source code without executing the application. It can find:

  • Hard-coded secrets and credentials
  • Known-dangerous function calls (e.g., eval(), exec())
  • Input/output flows that could lead to injection vulnerabilities
  • Insecure cryptographic algorithm usage
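As a concrete illustration, here is a minimal (hypothetical) snippet showing two patterns a SAST tool would flag, alongside safer alternatives:

```python
import ast
import os

# Flagged by SAST: hard-coded credential committed to source
API_KEY = "sk_live_abc123"  # finding: hardcoded-secret

def unsafe_parse(user_input: str):
    # Flagged by SAST: eval() executes arbitrary code from untrusted input
    return eval(user_input)  # finding: dangerous-eval

def safe_parse(user_input: str):
    # Safer alternative: ast.literal_eval only accepts Python literals
    # and raises ValueError on anything executable
    return ast.literal_eval(user_input)

def get_api_key() -> str:
    # Safer alternative: read the secret from the environment at runtime
    return os.environ["API_KEY"]
```

Note that SAST flags the pattern, not the exploit — `unsafe_parse` is reported even if no untrusted input currently reaches it, which is part of why triage is still required.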

When to use: Continuous — run in IDE and CI/CD on every commit. SAST is cheapest when it catches issues before they leave the developer's local environment.

Limitations: High false positive rate. Doesn't understand runtime behavior or application context. Can't find configuration issues or business logic flaws.

Dynamic Application Security Testing (DAST)

DAST tests the running application — sending requests and analyzing responses for vulnerability indicators. It can find:

  • Injection vulnerabilities (SQL, XSS, command injection)
  • Authentication and session management issues
  • Security misconfigurations
  • Information disclosure
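At its core, a DAST scanner's loop is: inject a marker payload, then check whether the response reflects it in a dangerous form. A simplified, illustrative version of the reflected-XSS check (real scanners use many payload variants and context-aware parsing):

```python
import html

# Marker payload a scanner might inject into a query parameter
XSS_PROBE = '<script>alert("dast-probe-1337")</script>'

def reflects_unescaped(response_body: str, probe: str = XSS_PROBE) -> bool:
    """Return True if the probe appears verbatim (unescaped) in the
    response body — a common indicator of reflected XSS."""
    return probe in response_body

# A vulnerable page echoes the parameter directly into HTML...
vulnerable_page = f"<p>You searched for: {XSS_PROBE}</p>"
# ...while a safe page escapes it first
safe_page = f"<p>You searched for: {html.escape(XSS_PROBE)}</p>"
```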

When to use: Staging environments, pre-release. Increasingly viable in CI/CD with lightweight, scoped scans.

Limitations: Can't see inside the application — limited to what's observable from the outside. Coverage depends on how much of the application the scanner can reach.

Software Composition Analysis (SCA)

SCA scans your dependency tree against known vulnerability databases (NVD, OSV, GitHub Advisory). It finds:

  • Dependencies with known CVEs
  • Outdated packages with security patches available
  • License risks

When to use: Continuous — run on every dependency change. Block deployments with critical CVEs.
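The "block deployments with critical CVEs" rule amounts to a severity gate over the scanner's output. A sketch, assuming advisories arrive as records with a `severity` field (the field names and severity labels here are illustrative):

```python
# Rank severities so thresholds can be compared numerically
SEVERITY_RANK = {"low": 0, "moderate": 1, "high": 2, "critical": 3}

def should_block(advisories, threshold: str = "critical") -> bool:
    """Fail the pipeline if any advisory meets or exceeds the threshold."""
    limit = SEVERITY_RANK[threshold]
    return any(SEVERITY_RANK[a["severity"]] >= limit for a in advisories)

findings = [
    {"package": "lodash", "severity": "moderate"},
    {"package": "minimist", "severity": "critical"},
]
```

In CI, `should_block` returning True would translate to a non-zero exit code, which is exactly what flags like `npm audit --audit-level` implement for you.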

Interactive Application Security Testing (IAST)

IAST instruments the application at runtime to observe data flows from a security perspective. Highly accurate because it sees the actual code path — but requires instrumentation and adds overhead.

When to use: Development and staging environments where performance overhead is acceptable.

Penetration Testing

Manual testing by security experts who attempt to exploit vulnerabilities — combining tools, creativity, and deep security knowledge. Finds complex business logic flaws, multi-step attack chains, and issues that automated tools consistently miss.

When to use: Pre-launch, annually, after major architectural changes, and before handling regulated data (PCI, HIPAA, SOC 2).

Think of penetration testing as a depth tool and automated scanning as a breadth tool. You need both: automated scanning continuously catches the 80% of vulnerabilities that have patterns, while penetration testing finds the 20% that require human reasoning.

Building a Security Testing Program

Layer 1: Developer-Integrated Testing

Security testing that developers run as part of their normal workflow:

  • IDE plugins: Snyk, Semgrep, or similar for real-time SAST feedback
  • Pre-commit hooks: Block commits that introduce hardcoded secrets or known-dangerous patterns
  • Code review: Security-focused review checklist as part of your PR process

Goal: Catch the easy, well-defined issues before code is merged.
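A pre-commit secret check can be as small as a few regexes run over the staged diff. A simplified sketch — the patterns are illustrative and far from exhaustive; dedicated tools ship hundreds of rules plus entropy analysis:

```python
import re

# Illustrative patterns only — real secret scanners cover far more
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{12,}"),
]

def find_secrets(text: str):
    """Return (pattern index, matched text) pairs for suspected secrets,
    so the hook can print what it blocked and why."""
    hits = []
    for i, pattern in enumerate(SECRET_PATTERNS):
        for match in pattern.finditer(text):
            hits.append((i, match.group(0)))
    return hits
```

Wired into a pre-commit hook, a non-empty result aborts the commit with the offending matches printed for the developer.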

Layer 2: CI/CD Pipeline Security Gates

Automated testing that runs on every PR and deployment:

# Example GitHub Actions security workflow
name: Security
on: [push, pull_request]

jobs:
  sast:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Semgrep
        uses: semgrep/semgrep-action@v1
        with:
          config: p/owasp-top-ten

  dependency-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Audit dependencies
        run: npm audit --audit-level=high

  dast:
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - name: Run Mythos Scan
        uses: mythos-scanner/action@v1
        with:
          target: ${{ secrets.STAGING_URL }}
          api-key: ${{ secrets.MYTHOS_API_KEY }}

Goal: Block deployments with critical security issues. Fast enough to not be a bottleneck (target: under 5 minutes for SAST + dependency scan).

Layer 3: Environment-Level Scanning

Regular, comprehensive scans of staging and production:

  • Pre-release scans: Full DAST scan of staging before every major release
  • Scheduled scans: Weekly or monthly full scans of production for drift detection
  • Post-deployment checks: Lightweight smoke test scan after each production deployment

Goal: Catch configuration drift, detect new attack surfaces introduced by changes, and verify staging parity with production.
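A post-deployment smoke check can start with something as simple as verifying security headers on the live site. A minimal sketch that evaluates a response's headers — the required set shown is a common baseline, not a complete policy, and a real check would also cover TLS and auth flows:

```python
# Common baseline headers to assert after each deployment (illustrative set)
REQUIRED_HEADERS = {
    "strict-transport-security",
    "x-content-type-options",
    "content-security-policy",
    "x-frame-options",
}

def missing_security_headers(headers: dict) -> set:
    """Return the required security headers absent from a response,
    compared case-insensitively (HTTP header names are case-insensitive)."""
    present = {name.lower() for name in headers}
    return REQUIRED_HEADERS - present
```

A non-empty result after a deploy is a cheap, fast signal that a reverse-proxy or CDN config change silently dropped a header.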

Layer 4: Continuous Monitoring

Ongoing visibility into your security posture:

  • WAF logs: Analyze for attack patterns and active exploitation attempts
  • Dependency monitoring: Alerts when new CVEs affect dependencies in your stack
  • Certificate and configuration monitoring: TLS, security headers, open ports

Goal: Real-time awareness of emerging threats affecting your deployed applications.
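Certificate monitoring, for instance, reduces to comparing the certificate's expiry against a warning window. A stdlib-only sketch (the fetch step requires network access; the alerting threshold is your choice):

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """Parse an OpenSSL-style notAfter string, e.g. 'Jun  1 12:00:00 2030 GMT',
    and return the number of days remaining (negative if already expired)."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

def fetch_not_after(host: str, port: int = 443) -> str:
    """Fetch the leaf certificate's notAfter field for a host."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]
```

Run on a schedule, `days_until_expiry(fetch_not_after(host)) < 30` becomes a pageable alert before an expired certificate becomes an outage.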

Layer 5: Annual Penetration Testing

External security assessment by qualified testers:

  • Full-scope penetration test covering web, API, and infrastructure
  • Threat modeling review
  • Social engineering assessment (if in scope)
  • Findings remediation verification

Goal: Depth coverage of issues that automation misses. Required by most compliance frameworks.

Security Testing by Application Phase

During Design

  • Threat modeling: identify assets, threats, and mitigations before writing code
  • Security architecture review: validate authentication, authorization, and data flow decisions
  • Security user stories: "As an attacker, I want to... so that..." to define test cases

During Development

  • SAST in IDE: continuous feedback on code being written
  • Security-focused code review: checklist-based review of security-sensitive changes
  • Unit tests for security controls: test your auth middleware, input validation, etc.
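An input-validation control deserves the same unit tests as any business logic — exercising both the allow and the deny path. A sketch, assuming a simple username validator (the rules shown are illustrative):

```python
import re

# Allow-list: 3-32 word characters, nothing else
USERNAME_RE = re.compile(r"^[a-zA-Z0-9_]{3,32}$")

def validate_username(value: str) -> bool:
    """Accept only 3-32 word characters — injection payloads are
    rejected by construction rather than by pattern-matching attacks."""
    return bool(USERNAME_RE.fullmatch(value))

def test_validate_username():
    # Allow path
    assert validate_username("alice_99")
    # Deny paths — including classic attack payloads
    assert not validate_username("a")                         # too short
    assert not validate_username("x" * 33)                    # too long
    assert not validate_username("admin'; DROP TABLE users")  # SQLi shape
    assert not validate_username("<script>alert(1)</script>") # XSS shape
```

Tests like these double as regression coverage: once a bypass is found and fixed, its payload becomes a permanent deny-path assertion.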

Pre-Release

  • DAST scan of staging environment (comprehensive, authenticated)
  • Dependency audit with manual review of critical vulnerabilities
  • Security configuration review: headers, TLS, cloud storage permissions
  • Regression test for previously found vulnerabilities

In Production

  • Ongoing dependency monitoring
  • Scheduled DAST for drift detection
  • WAF monitoring and tuning
  • Incident response readiness (can you detect exploitation when it happens?)

Common Pitfalls

Only testing at release: Security issues compound. A critical vulnerability introduced today may go undetected for months if you only test before major releases. Continuous, automated testing is essential.

Treating tool output as ground truth: Both SAST and DAST produce false positives. All findings require human triage. Automate the detection; don't automate the decision.

Skipping authenticated scanning: Many DAST tools are configured to scan only unauthenticated surfaces. A significant portion of your application — and its highest-value functionality — is only accessible when authenticated. Always configure authenticated scanning.

Ignoring third-party integrations: Your application's security surface extends to every API and third-party service it integrates with. Your security testing should assess how you handle data from those integrations — you can't control their security, but you can control how you ingest and process their data.

Testing without a threat model: Security testing without a threat model is unfocused. What are you most afraid of? Data breach? Account takeover? Service disruption? Your testing priorities should reflect your actual threat landscape.

The highest-ROI security testing activity for most teams is authenticated DAST scanning, run automatically before every release, with findings integrated into your issue tracker. If you do nothing else on this list, do that.

Metrics for Security Testing Programs

Track these to know if your program is working:

  • Mean time to detect (MTTD): How long between introducing a vulnerability and finding it?
  • Mean time to remediate (MTTR): How long from finding to fix?
  • False positive rate: Are your tools producing signal or noise?
  • Coverage: What percentage of your application is tested?
  • Reopen rate: Are "fixed" vulnerabilities coming back?
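MTTD and MTTR fall out directly from timestamps you likely already track in your issue tracker. A sketch over illustrative finding records (field names are assumptions):

```python
from datetime import datetime

def mean_days(findings, start_key, end_key):
    """Average gap in days between two timestamps across findings;
    records missing the end timestamp (still open) are excluded."""
    gaps = [
        (datetime.fromisoformat(f[end_key]) - datetime.fromisoformat(f[start_key])).days
        for f in findings
        if f.get(end_key)
    ]
    return sum(gaps) / len(gaps) if gaps else None

findings = [
    {"introduced": "2025-01-01", "detected": "2025-01-11", "fixed": "2025-01-16"},
    {"introduced": "2025-01-05", "detected": "2025-01-25", "fixed": None},
]

mttd = mean_days(findings, "introduced", "detected")  # mean of 10 and 20 days
mttr = mean_days(findings, "detected", "fixed")       # only the fixed finding counts
```

"Introduced" timestamps are the hard part in practice — `git blame` on the vulnerable line is a common approximation.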

Security testing is not a one-time event or a checkbox for compliance. It is an ongoing practice that becomes more effective as it is embedded in how your team works — not layered on top as an afterthought.

The tools and frameworks in this guide give you the building blocks. The discipline to use them consistently is what separates teams that stay secure from teams that learn about breaches from their users.
