Bug Bounty Automation in 2026: How to Scale Your Hunting With Custom Scripts and Tools
Key Takeaways
- Automate recon first: subdomain enumeration, alive checking, and known-CVE scanning are the highest-ROI automation targets
- The subfinder → httpx → nuclei pipeline is the foundation most top hunters build on
- Continuous monitoring with diff-based alerts finds bugs that one-time scans miss; new assets are where fresh vulnerabilities live
- Custom nuclei templates are your competitive edge; generic templates find what everyone else finds
- Automation handles the boring parts so you can focus on creative exploitation and report writing
Why Automate Your Bug Bounty Hunting?
If you've been hunting manually for a few months, you've probably noticed the pattern: most of your time goes to recon, not exploitation. You're running the same subdomain enumeration commands, checking the same ports, fingerprinting the same technologies, over and over, across every new program.
That's the work that should be automated. Not because automation finds better bugs (it doesn't), but because it frees you to spend your limited hunting hours on the creative work that actually pays: chaining findings, testing business logic, and writing reports that get triaged quickly.
The math is simple. A manual recon pass on a medium-scope program takes 2-4 hours. An automated pipeline does it in 10-15 minutes. If you're hunting across 5 programs, that's 10-20 hours saved per week, time you can redirect to actually finding and exploiting vulnerabilities.
The Core Automation Pipeline
Every bug bounty automation setup starts with the same three stages: discover, filter, scan. Here's the minimal pipeline that most top hunters run:
Stage 1: Subdomain Discovery
Use multiple sources to maximize coverage. Subdomain enumeration tools like subfinder aggregate results from certificate transparency logs, DNS datasets, and search engines:
```shell
subfinder -d target.com -all -silent | sort -u > subs.txt
```
For better coverage, combine multiple tools and deduplicate:
```shell
cat targets.txt | while read -r domain; do
  subfinder -d "$domain" -silent
  amass enum -passive -d "$domain" 2>/dev/null
  findomain -t "$domain" -q
done | sort -u > all-subs.txt
```
Stage 2: Alive Checking and Fingerprinting
Not every subdomain resolves to a live web server. Filter down to what's actually running:
```shell
cat all-subs.txt | httpx -silent -status-code -title -tech-detect -o alive.txt
```
This gives you live hosts with their HTTP status codes, page titles, and detected technologies, all in one pass.
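Once you have that output, you can slice it by technology to prioritize manual review. The snippet below assumes httpx's default one-line format (URL followed by bracketed status, title, and tech); the sample data is illustrative, not real scan output:

```shell
# Fake httpx output for illustration; real runs produce one line per live host
cat > alive.txt <<'EOF'
https://app.target.com [200] [Login] [Nginx,PHP]
https://blog.target.com [200] [Blog] [WordPress,Apache]
https://old.target.com [404] [Not Found] [Nginx]
EOF

# Pull out only the hosts running a technology you specialize in
grep -i 'wordpress' alive.txt | awk '{print $1}' > wordpress-hosts.txt
cat wordpress-hosts.txt
```

The same pattern works for any tech string httpx reports: grep the fingerprint, cut the URL column, and feed the result to your next tool.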
Stage 3: Vulnerability Scanning
Run nuclei against your alive hosts with templates matched to the technologies you found:
```shell
cat alive.txt | nuclei -t cves/ -t exposures/ -t misconfiguration/ -severity medium,high,critical -o findings.txt
```
For a deeper dive into writing your own templates, see our nuclei template writing guide.
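One way to match templates to the technologies you found is to convert httpx's tech column into nuclei `-tags` values. The extraction below assumes the bracketed-tech output format shown earlier; the final nuclei command is left commented, since template tags vary by version, so treat it as a sketch:

```shell
# Sample httpx output (illustrative); the last bracket group is detected tech
cat > alive.txt <<'EOF'
https://blog.target.com [200] [Blog] [WordPress,Apache]
https://api.target.com [200] [API] [Spring]
EOF

# Extract the last [...] group, lowercase it, and build a comma-separated tag list
TAGS=$(sed -n 's/.*\[\([^][]*\)\]$/\1/p' alive.txt \
  | tr ',' '\n' | tr 'A-Z' 'a-z' | sort -u | paste -sd, -)
echo "$TAGS"

# Then scan only with templates tagged for those technologies, e.g.:
# nuclei -l alive.txt -tags "$TAGS" -severity medium,high,critical
```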
Continuous Monitoring: Where the Real Bugs Are
One-time scans find what's already there. Continuous monitoring finds what's new, and new assets are where fresh vulnerabilities live. When a company deploys a new subdomain, it often hasn't gone through the same security hardening as their main properties.
The simplest approach: run your recon pipeline on a cron schedule and diff the results.
```shell
#!/bin/bash
# daily-recon.sh - run via cron: 0 6 * * * /path/to/daily-recon.sh
TARGETS="$HOME/bounty/targets.txt"
OUTDIR="$HOME/bounty/recon/$(date +%Y-%m-%d)"
PREV="$HOME/bounty/recon/latest"

mkdir -p "$OUTDIR"

# Discover
cat "$TARGETS" | while read -r d; do subfinder -d "$d" -silent; done | sort -u > "$OUTDIR/subs.txt"

# Diff against previous run
if [ -d "$PREV" ]; then
  comm -13 "$PREV/subs.txt" "$OUTDIR/subs.txt" > "$OUTDIR/new-subs.txt"
  NEW_COUNT=$(wc -l < "$OUTDIR/new-subs.txt")
  if [ "$NEW_COUNT" -gt 0 ]; then
    echo "$NEW_COUNT new subdomains found" | notify -silent
    cat "$OUTDIR/new-subs.txt" | httpx -silent | nuclei -t cves/ -severity high,critical | notify -silent
  fi
fi

# Update latest symlink
ln -sfn "$OUTDIR" "$PREV"
```
The notify tool (from ProjectDiscovery) sends alerts to Slack, Discord, Telegram, or email. Configure it once and you'll get pinged whenever your pipeline finds something new.
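notify reads its destinations from a provider config file (by default under ~/.config/notify/provider-config.yaml). The exact keys have changed across versions, so treat this Slack example as a sketch and check the current notify documentation; the webhook URL is a placeholder:

```yaml
slack:
  - id: "recon-alerts"
    slack_channel: "recon"
    slack_username: "notify"
    slack_webhook_url: "https://hooks.slack.com/services/XXXX/XXXX/XXXX"
```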
Custom Nuclei Templates: Your Competitive Edge
Generic nuclei templates find what everyone else finds. The hunters who consistently earn top payouts write custom templates for patterns they've seen before.
Common patterns worth templating:
- Exposed admin panels: /admin, /wp-admin, /administrator with default credentials or no auth
- Debug endpoints: /debug, /trace, /actuator, /graphql with introspection enabled
- API versioning gaps: /api/v1/ endpoints that bypass /api/v2/ security controls
- Technology-specific misconfigs: Laravel debug mode, Django DEBUG=True, Spring Boot actuator exposure
- Information disclosure: .env files, .git directories, backup files, source maps
A simple custom template for finding exposed .env files:
```yaml
id: exposed-env-file

info:
  name: Exposed .env File
  severity: high
  description: Application .env file is publicly accessible
  tags: exposure,config

requests:
  - method: GET
    path:
      - "{{BaseURL}}/.env"
    matchers-condition: and
    matchers:
      - type: word
        words:
          - "DB_PASSWORD"
          - "APP_KEY"
          - "SECRET"
        condition: or
      - type: status
        status:
          - 200
```
Build a library of these over time. Each template you write is a reusable detector that runs across every program you hunt on.
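A directory-per-category layout keeps that library manageable as it grows. Everything below is illustrative (names and layout are one suggestion, not a nuclei convention); the commented invocation shows how you would point a scan at the whole tree:

```shell
# Illustrative layout for a personal template library
mkdir -p custom-templates/exposures custom-templates/panels custom-templates/misconfigs

# Drop each new template into its category (file body elided here)
cat > custom-templates/exposures/exposed-env-file.yaml <<'EOF'
id: exposed-env-file
EOF

# Quick sanity check: how many detectors do you have?
find custom-templates -name '*.yaml' | wc -l

# Run the whole library against your live hosts:
# nuclei -l alive.txt -t custom-templates/ -o custom-findings.txt
```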
Scaling Across Multiple Programs
Once your pipeline works for one target, scaling to many is straightforward. Maintain a structured target list:
```text
# targets.txt - one root domain per line, grouped by program

# HackerOne - Program A
target-a.com
assets.target-a.com

# Bugcrowd - Program B
target-b.com
*.target-b.io
```
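Comment lines and wildcard entries will confuse tools that expect bare domains, so normalize the file before each run. This sketch strips comments, blank lines, and leading "*." wildcards:

```shell
# Illustrative targets file in the grouped format above
cat > targets.txt <<'EOF'
# HackerOne - Program A
target-a.com
assets.target-a.com
# Bugcrowd - Program B
target-b.com
*.target-b.io
EOF

# Strip comments/blanks and wildcard prefixes, then deduplicate
grep -vE '^(#|$)' targets.txt | sed 's/^\*\.//' | sort -u > roots.txt
wc -l < roots.txt
```

Feed roots.txt, not the raw targets file, to your discovery loop.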
For hunters working across many programs, consider organizing by platform. Our platform comparison guide covers the differences between HackerOne, Bugcrowd, Intigriti, and YesWeHack scopes and rules.
Python for Custom Logic
Bash pipelines handle 80% of automation needs. For the other 20% (custom parsing, API interaction, conditional logic), Python is the right tool:
```python
#!/usr/bin/env python3
"""Check for subdomain takeover candidates."""
import json
import subprocess
import sys

def check_cnames(subs_file):
    """Find subdomains with dangling CNAMEs pointing at third-party services."""
    takeover_candidates = []
    with open(subs_file) as f:
        for sub in f:
            sub = sub.strip()
            if not sub:
                continue
            try:
                result = subprocess.run(
                    ["dig", "+short", "CNAME", sub],
                    capture_output=True, text=True, timeout=5
                )
                cname = result.stdout.strip()
                if cname and any(svc in cname for svc in [
                    "amazonaws.com", "azurewebsites.net",
                    "herokuapp.com", "github.io",
                    "shopify.com", "fastly.net"
                ]):
                    # Check whether the CNAME target still resolves
                    nxcheck = subprocess.run(
                        ["dig", "+short", cname],
                        capture_output=True, text=True, timeout=5
                    )
                    if not nxcheck.stdout.strip():
                        takeover_candidates.append({
                            "subdomain": sub,
                            "cname": cname,
                            "status": "NXDOMAIN - possible takeover"
                        })
            except subprocess.TimeoutExpired:
                continue
    return takeover_candidates

if __name__ == "__main__":
    print(json.dumps(check_cnames(sys.argv[1]), indent=2))
```
What NOT to Automate
Automation has limits. These parts of bug bounty hunting still require human judgment:
- Business logic testing: IDOR, privilege escalation, payment bypass. These require understanding how the application is supposed to work.
- Report writing: a well-written report with clear reproduction steps and impact assessment is the difference between a $500 and a $5,000 payout. See our report writing guide.
- Scope interpretation: automated tools don't understand program rules. Always verify scope manually before scanning.
- Chaining vulnerabilities: the highest-value bugs come from combining multiple low-severity findings into a critical chain. This is creative work.
Responsible Automation: Rules of Engagement
Automated scanning can cause real damage if done carelessly. Follow these rules:
- Read the program rules first: some programs explicitly prohibit automated scanning
- Respect rate limits: use `-rate-limit` flags on all tools; 10-50 requests/second is a safe default
- Don't scan out of scope: validate your target list against the program's scope before every run
- Log everything: keep records of what you scanned, when, and what you found; this protects you if questions arise
- Test on your own infrastructure first: before running a new script against a live target, test it against a lab environment. Our recon workflow guide covers setting this up
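ProjectDiscovery tools take a rate-limit flag directly, but ad-hoc scripts (curl loops, custom checkers) need their own throttle. A minimal sketch: derive a per-request delay from a requests-per-second budget and sleep between requests. The URLs are placeholders and the actual request is commented out:

```shell
# Cap a request loop at RATE requests/second by sleeping between requests
RATE=10
DELAY=$(awk -v r="$RATE" 'BEGIN { printf "%.3f", 1/r }')

for url in https://example.com/a https://example.com/b; do
  # curl -s -o /dev/null "$url"   # real request; commented out in this sketch
  sleep "$DELAY"
done
echo "used ${DELAY}s delay per request"
```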
Getting Started: Your First Automated Session
If you're new to automation, start small. Pick one program you're already familiar with and automate just the recon phase:
- Install subfinder, httpx, and nuclei (all available via `go install` or binary releases)
- Run the basic pipeline: `subfinder -d target.com | httpx -silent | nuclei -t cves/`
- Review the output manually: understand what each tool found and why
- Save the output and run again tomorrow: diff the results to see what changed
- Gradually add more tools and custom templates as you learn what works for your targets
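The day-over-day diff in step 4 is one `comm` call; it expects sorted input, and lines unique to the newer file are your new assets. Sample data shown for illustration:

```shell
# Two days of (sorted) subdomain output; day 2 picked up one new host
printf 'a.target.com\nb.target.com\n' | sort > day1.txt
printf 'a.target.com\nb.target.com\nc.target.com\n' | sort > day2.txt

# -13 suppresses lines unique to day1 and lines common to both
comm -13 day1.txt day2.txt > new.txt
cat new.txt
```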
For a complete walkthrough of the recon phase, see our recon workflow guide. For the full hunting methodology from recon to payout, check our methodology framework.
Recommended Tool Stack (2026)
| Category | Tool | Purpose |
|---|---|---|
| Subdomain Discovery | subfinder, amass, findomain | Passive subdomain enumeration from multiple sources |
| Alive Checking | httpx | HTTP probing with tech detection |
| Port Scanning | naabu, masscan | Fast port discovery across large ranges |
| Vulnerability Scanning | nuclei | Template-based scanning for known CVEs and misconfigs |
| Screenshots | gowitness, aquatone | Visual recon for manual review |
| Notifications | notify | Route findings to Slack/Discord/Telegram |
| Fuzzing | ffuf, feroxbuster | Directory and parameter discovery |
| Subdomain Takeover | subjack, nuclei takeover templates | Detect dangling DNS records |
For AI-assisted hunting workflows that complement automation, see our AI tools for bug bounty hunting guide.
FAQ
Is automating bug bounty hunting allowed?
Yes, most programs allow automation as long as you stay within scope, respect rate limits, and don't cause service disruption. Always read the program's rules before running automated tools. Some programs explicitly prohibit automated scanning; skip those or hunt manually.
What programming language is best for bug bounty automation?
Bash for gluing tools together, Python for custom logic and API interaction, and Go for performance-critical tools. Most top hunters use a combination. Start with Bash pipelines wrapping existing tools, then graduate to Python when you need custom parsing or logic.
How much of bug bounty hunting can be automated?
Recon can be 90%+ automated. Vulnerability scanning can be 60-70% automated for known vulnerability classes. But the creative part, chaining findings into exploitable bugs and writing quality reports, still requires human judgment. The best hunters automate the boring parts so they can spend more time on the creative parts.
What tools should I automate first?
Start with recon: subfinder + httpx + nuclei is the classic pipeline. Automate subdomain discovery, alive checking, and known-CVE scanning first. Then add technology fingerprinting, screenshot capture, and custom nuclei templates for your specialty.