How to Write Bug Bounty Reports That Get Paid (2026)

Published: April 12, 2026 · Reading time: 12 minutes · Category: Bug Bounty


Finding a bug is only half the job. The other half is convincing someone to pay you for it. Every experienced bug bounty hunter has a story about a valid vulnerability that got closed as "informative" because the report was unclear, lacked proof of concept, or undersold the impact. On the flip side, well-written reports get triaged faster, paid at higher severity, and build your reputation with security teams.

This guide covers exactly how to write bug bounty reports that get triaged quickly and paid at the right level — with real examples of what works and what gets your report sent to the bottom of the pile.

Key Takeaways

  • A clear title with vulnerability type + component + impact gets your report triaged 2-3x faster
  • Step-by-step reproduction instructions are non-negotiable — if the triager can't reproduce it, you don't get paid
  • Impact statements should describe what an attacker can actually do, not just what's theoretically possible
  • Screenshots and HTTP request/response pairs are more convincing than paragraphs of explanation
  • The #1 reason valid bugs get closed as informative: the reporter failed to demonstrate real-world impact

Why Report Quality Matters More Than Bug Quality

Here's a truth that surprises new hunters: a medium-severity bug with an excellent report often pays more than a high-severity bug with a bad report.

Security teams at companies running bug bounty programs receive hundreds of reports per month. Triagers spend an average of 5-10 minutes on initial assessment. If your report is confusing, missing reproduction steps, or doesn't clearly explain the impact, it goes to the bottom of the queue — or gets closed outright.

The math is simple:

  • Clear report → triager reproduces in 5 minutes → validated → paid at correct severity
  • Unclear report → triager asks for clarification → you respond 2 days later → context is lost → closed as "needs more info" or downgraded

Your report is a sales document. You're selling the idea that this bug is real, exploitable, and worth fixing. Treat it that way.

Anatomy of a Report That Gets Paid

Every effective bug bounty report has these components:

  1. Title — Vulnerability type + affected component + impact (one line)
  2. Summary — What the bug is and why it matters (2-3 sentences)
  3. Severity — Your CVSS assessment with justification
  4. Steps to Reproduce — Numbered steps anyone can follow
  5. Proof of Concept — Screenshots, HTTP requests, video, or code
  6. Impact Statement — What an attacker can actually achieve
  7. Remediation Suggestion — Brief fix recommendation (optional but helpful)

That's it. No life story about how you found it. No 3-paragraph introduction about OWASP. No "I've been hacking since I was 12." Just the facts, organized for fast consumption.

Writing Titles That Get Triaged Fast

Your title is the first thing a triager sees. A good title tells them exactly what they're dealing with before they read a single line of the report.

Bad titles:

  • "XSS found" — Where? What kind? What impact?
  • "Security vulnerability in your website" — This tells the triager nothing
  • "Critical bug!!!" — Exclamation marks don't make bugs more critical

Good titles:

  • "Stored XSS in /profile/bio endpoint allows session hijacking via crafted SVG upload"
  • "IDOR on /api/v2/users/{id}/documents allows any authenticated user to download other users' private documents"
  • "SQL injection in search parameter on /products allows extraction of user credentials table"

The formula: [Vulnerability Type] in [Component/Endpoint] allows [Impact]
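The formula is mechanical enough to script. A throwaway helper like the one below (the function name is just for illustration) can keep your titles consistent across reports:

```python
def report_title(vuln_type: str, component: str, impact: str) -> str:
    """Apply the formula: [Vulnerability Type] in [Component/Endpoint] allows [Impact]."""
    return f"{vuln_type} in {component} allows {impact}"

# Example (the good titles above follow the same shape):
# report_title("Stored XSS", "/profile/bio endpoint",
#              "session hijacking via crafted SVG upload")
```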

The Vulnerability Description

Keep this to 2-4 sentences. Answer three questions:

  1. What is the vulnerability?
  2. Where does it exist?
  3. Why is it a security issue?

Example:

The /api/v2/users/{id}/documents endpoint does not verify that the authenticated user owns the requested document. By changing the id parameter, any authenticated user can download documents belonging to other users, including private tax forms and identity documents. This is an Insecure Direct Object Reference (IDOR) vulnerability.

Notice what's not here: no explanation of what IDOR is (the triager knows), no history of how you found it, no speculation about other endpoints that might be affected. Just the facts.

Step-by-Step Reproduction

This is the most important section of your report. If the triager can't reproduce the bug, you don't get paid. Period.

Rules for reproduction steps:

  • Number every step — "First... then... after that..." is harder to follow than "1. 2. 3."
  • Include exact values — URLs, parameters, payloads, headers. Don't say "inject a payload" — show the exact payload
  • State prerequisites — "Requires two accounts: one attacker (free tier), one victim (any tier)"
  • Specify the environment — Browser, OS, any extensions that need to be disabled

Example:

Prerequisites: Two registered accounts (Account A = attacker, Account B = victim). Account B must have at least one uploaded document.

  1. Log in as Account B. Navigate to /documents and upload any file. Note the document URL: /api/v2/users/5432/documents/789
  2. Log out. Log in as Account A (user ID: 1234)
  3. Send the following request (using Burp Suite or curl):
    GET /api/v2/users/5432/documents/789 HTTP/1.1
    Host: example.com
    Authorization: Bearer [Account_A_token]
    Cookie: session=[Account_A_session]
  4. Observe: the response returns Account B's document with a 200 OK status, despite Account A having no authorization to access it
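If the program allows scripted testing, the request in step 3 can also be reproduced with a few lines of Python instead of Burp. This is a minimal sketch: the URL and token are the placeholders from the steps above, and it should only be run against your own test accounts, within program scope:

```python
# Hypothetical PoC for the IDOR described above. The URL and token
# values are placeholders from the report; run only against accounts
# you control, within the program's scope.
import urllib.error
import urllib.request

VICTIM_DOC_URL = "https://example.com/api/v2/users/5432/documents/789"
ATTACKER_TOKEN = "[Account_A_token]"  # Account A's own bearer token

def is_broken_access_control(status_code: int) -> bool:
    """Any 2xx response to a cross-account request demonstrates the IDOR."""
    return 200 <= status_code < 300

def reproduce() -> None:
    req = urllib.request.Request(
        VICTIM_DOC_URL,
        headers={"Authorization": f"Bearer {ATTACKER_TOKEN}"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            if is_broken_access_control(resp.status):
                print("VULNERABLE: got", resp.status, "for another user's document")
    except urllib.error.HTTPError as err:
        print("Access denied as expected:", err.code)
```

Attaching a script like this alongside the numbered steps lets the triager reproduce the finding with zero tool setup.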

Demonstrating Impact

This is where most reports fail. Hunters describe the vulnerability but don't explain why anyone should care.

Weak impact statement: "An attacker could access other users' data."

Strong impact statement: "An attacker can enumerate all user IDs (they're sequential) and download every document uploaded to the platform. Based on the document types observed (tax forms, government IDs, bank statements), this exposes PII for all users. With approximately 50,000 users on the platform (per the company's public metrics), this represents a significant data breach risk."

The difference: specificity. The strong version tells the triager exactly what's at risk, how many users are affected, and why this matters to the business.

Impact amplifiers to include when relevant:

  • Number of affected users (estimate if exact count isn't available)
  • Type of data exposed (PII, credentials, financial data)
  • Whether the attack is automatable (can an attacker script mass exploitation?)
  • Whether authentication is required (unauthenticated bugs are almost always higher severity)
  • Regulatory implications (GDPR, HIPAA, PCI-DSS if applicable)
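The automatability question is often easiest to answer with a short script. The sketch below shows why sequential IDs matter: walking the ID space is a simple loop. The URL pattern, document ID, and delay are illustrative assumptions; never run mass enumeration without explicit program permission, and only against accounts you control:

```python
# Illustrative sketch only: demonstrates that sequential user IDs make
# the attack scriptable. BASE, doc_id, and the delay are assumptions.
import time
import urllib.error
import urllib.request

BASE = "https://example.com/api/v2/users/{uid}/documents/{doc}"

def candidate_urls(start_uid: int, end_uid: int, doc_id: int = 1):
    """Sequential user IDs reduce the target space to a simple range."""
    for uid in range(start_uid, end_uid):
        yield BASE.format(uid=uid, doc=doc_id)

def crawl(token: str, start: int = 1, end: int = 100, delay: float = 1.0) -> None:
    for url in candidate_urls(start, end):
        req = urllib.request.Request(
            url, headers={"Authorization": f"Bearer {token}"}
        )
        try:
            with urllib.request.urlopen(req) as resp:
                if resp.status == 200:
                    print("accessible:", url)
        except urllib.error.HTTPError:
            pass  # 403/404 means the endpoint behaved correctly for this ID
        time.sleep(delay)  # stay polite and within program rate limits
```

A few lines like this in your impact section turn "theoretically automatable" into "here is the loop an attacker would run."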

Getting Severity Right

Most platforms use CVSS 3.1 for severity scoring. Here's a practical guide:

| Severity | CVSS Range | Typical Examples |
|----------|------------|------------------|
| Critical | 9.0-10.0 | RCE, authentication bypass, full database access, admin account takeover |
| High | 7.0-8.9 | Stored XSS with session hijacking, IDOR on sensitive data, SSRF to internal services |
| Medium | 4.0-6.9 | Reflected XSS, CSRF on non-critical actions, information disclosure of non-sensitive data |
| Low | 0.1-3.9 | Self-XSS, missing security headers, verbose error messages |
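If you track your own findings, this score-to-rating mapping is easy to encode as a small helper (a convenience sketch for your own notes, not any platform's official API):

```python
def cvss_rating(score: float) -> str:
    """Map a CVSS 3.1 base score to its qualitative severity rating."""
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    if score > 0.0:
        return "Low"
    return "None"
```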

Pro tip: Don't inflate severity. Triagers see hundreds of reports marked "Critical" that are clearly medium. Accurate severity assessment builds trust and gets your future reports taken more seriously. If anything, slightly underselling and letting the triager upgrade is better than overselling and getting downgraded.

7 Mistakes That Get Reports Closed

  1. No reproduction steps — "I found XSS on your site" with a screenshot of an alert box but no steps to reproduce it. The triager can't verify what they can't reproduce.
  2. Theoretical impact only — "An attacker could potentially..." without demonstrating that the attack actually works. Show, don't tell.
  3. Duplicate without checking — Not searching for existing reports before submitting. Most platforms show you disclosed vulnerabilities. Check them.
  4. Out of scope — Not reading the program policy. Testing against staging environments, third-party services, or explicitly excluded domains wastes everyone's time.
  5. Scanner output as a report — Pasting raw Burp Suite or Nuclei output without analysis. Scanners produce false positives. Your job is to verify and explain, not forward automated output.
  6. Missing proof of concept — Describing a vulnerability without evidence it's exploitable. Include screenshots, HTTP request/response pairs, or a video walkthrough.
  7. Reporting best practice violations as vulnerabilities — Missing HSTS header, cookie without SameSite flag, or server version disclosure are not vulnerabilities on most programs. Check the program's policy on "informational" findings.

Report Templates You Can Use Today

Template: Web Vulnerability Report

## Title
[Vuln Type] in [Endpoint/Component] allows [Impact]

## Summary
[2-3 sentences: what, where, why it matters]

## Severity
[CVSS score] — [Critical/High/Medium/Low]
[One sentence justifying the rating]

## Steps to Reproduce
Prerequisites: [accounts needed, tools, browser]
1. [Step 1 with exact URL/payload]
2. [Step 2]
3. [Step 3]
4. Observe: [what happens that shouldn't]

## Proof of Concept
[Screenshots, HTTP requests/responses, video link]

## Impact
[What an attacker can achieve. Be specific about data types,
user count, and whether the attack is automatable.]

## Suggested Fix
[1-2 sentences on remediation]

Template: API Vulnerability Report

## Title
[Vuln Type] in [API Endpoint] — [HTTP Method] [Path]

## Summary
The [endpoint] fails to [validate/authorize/sanitize] [what],
allowing [who] to [do what].

## Severity
[CVSS score] — [Rating]

## Affected Endpoint
- Method: [GET/POST/PUT/DELETE]
- Path: [/api/v2/resource/{id}]
- Authentication: [Required/Not required]

## Steps to Reproduce
Prerequisites: [setup needed]
1. Authenticate as [role] and capture the bearer token
2. Send the following request:
   ```
   [Full HTTP request with headers]
   ```
3. Observe the response:
   ```
   [Relevant response showing the vulnerability]
   ```

## Impact
[Specific impact with data types and scope]

## Suggested Fix
[Brief remediation — input validation, authorization check, etc.]

Frequently Asked Questions

How long should a bug bounty report be?

A good bug bounty report is as long as it needs to be and no longer. Most effective reports are 200-500 words plus a clear proof of concept. The goal is to give the triager everything they need to reproduce and assess the bug in under 5 minutes.

What is the most common reason bug bounty reports get closed as informative?

The most common reason is failing to demonstrate real security impact. Reports that describe theoretical vulnerabilities without showing how an attacker could exploit them to cause harm are routinely closed as informative or N/A.

Should I include remediation advice in my bug bounty report?

Yes, but keep it brief. A one or two sentence fix recommendation shows you understand the root cause and helps the development team. Don't write a full remediation plan — the security team knows their codebase better than you do.

How do I estimate severity for a bug bounty report?

Use the CVSS 3.1 calculator as a starting point, but focus on business impact. Ask: what's the worst thing an attacker could do with this? Data theft, account takeover, and financial loss are critical. Information disclosure and denial of service are typically medium to high depending on scope.

Can I use AI to write my bug bounty reports?

AI tools like GPT-4 and Claude can help draft and polish reports, but never submit an AI-generated report without verifying every technical detail. Triagers can spot generic AI output, and inaccurate technical claims will get your report closed and hurt your reputation.
