Bug Bounty Recon Workflow in 2026: From Scope to First Finding
The difference between hunters who find critical bugs and hunters who find duplicates is usually recon. Not skill, not luck: recon. The hunter who maps the attack surface more thoroughly finds the assets everyone else missed.
This is the recon workflow I use. It's not the only way, but it's systematic, repeatable, and it works.
Step 1: Read the Scope (Seriously)
Before running any tools, read the program's scope document completely. Look for:
- In-scope domains: the exact domains and wildcards you're allowed to test
- Out-of-scope assets: third-party services, specific subdomains, or functionality that's excluded
- Testing restrictions: rate limits, no automated scanning, no DoS testing
- Reward structure: what severity levels pay, what vulnerability types are prioritized
The scope tells you where to focus. A wildcard scope (`*.example.com`) means subdomain enumeration is critical. A single-domain scope means you should go deep on that one application.
Step 2: Subdomain Enumeration
For wildcard scopes, subdomain enumeration is the highest-value recon step. Forgotten subdomains (staging servers, old APIs, internal tools accidentally exposed) are where critical bugs live.
Passive Enumeration (No Target Contact)
Start passive. These sources don't touch the target at all:
- Certificate Transparency logs: `crt.sh` shows every SSL certificate issued for a domain. Run: `curl -s "https://crt.sh/?q=%25.example.com&output=json" | jq -r '.[].name_value' | sort -u`
- DNS datasets: SecurityTrails, VirusTotal, and Shodan have historical DNS records
- Search engines: `site:example.com` in Google reveals indexed subdomains
- GitHub/GitLab: search for the domain in public repos. Developers leak internal hostnames in config files, CI/CD pipelines, and documentation.
Active Enumeration (Touches Target DNS)
After passive, run active enumeration:
- DNS brute-force โ tools like
amass,subfinder, ormassdnswith a good wordlist - DNS zone transfer โ unlikely to work, but always try:
dig axfr example.com @ns1.example.com - Permutation scanning โ tools like
altdnsgenerate permutations of known subdomains (e.g., ifapi.example.comexists, tryapi-v2.example.com,api-staging.example.com)
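The permutation step can be sketched in plain shell: a simplified version of what `altdns` does, with an illustrative two-subdomain input and a hand-picked suffix list.

```shell
# Known subdomains from earlier steps (illustrative sample).
printf 'api.example.com\nadmin.example.com\n' > known.txt

# Splice common environment/version suffixes into the first label:
# api.example.com -> api-dev.example.com, api-staging.example.com, ...
for sub in $(cat known.txt); do
  label="${sub%%.*}"   # first label, e.g. "api"
  rest="${sub#*.}"     # remainder, e.g. "example.com"
  for suffix in dev staging v2 test; do
    echo "${label}-${suffix}.${rest}"
  done
done | sort -u > candidates.txt
```

Feed `candidates.txt` to a resolver like `massdns` to see which permutations actually exist.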
Combine results: merge all subdomain lists, deduplicate, and sort. A typical target yields 50-500 unique subdomains.
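The merge itself is a one-liner; the two input files below stand in for whatever your passive and active steps produced.

```shell
# Illustrative outputs from passive and active enumeration.
printf 'api.example.com\nwww.example.com\n' > passive.txt
printf 'WWW.example.com\ndev.example.com\n' > active.txt

# Merge, lowercase, deduplicate, and sort in one pass.
cat passive.txt active.txt | tr 'A-Z' 'a-z' | sort -u > subdomains.txt
```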
Step 3: Alive Check and Port Scan
Not all discovered subdomains are alive. Filter to only live hosts:
- HTTP probe โ
httpxorhttprobechecks which subdomains respond on ports 80/443 - Extended port scan โ for high-value targets, scan common web ports (8080, 8443, 3000, 4443, 9090) with
nmapormasscan - Screenshot โ tools like
gowitnessoraquatonetake screenshots of every live host. Visual review catches things automated tools miss โ login pages, admin panels, error pages that leak information.
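Filtering the probe output is a short text-processing step. This sketch assumes `httpx`-style `URL [status]` lines; the sample is hand-written, not real scan output.

```shell
# Hand-written sample in the shape of httpx status-code output.
cat > probe.txt <<'EOF'
https://www.example.com [200]
https://old.example.com [404]
https://admin.example.com [403]
EOF

# Drop dead hosts, but keep 403s in a separate review list:
# a 403 usually means the host exists behind an access control worth probing.
grep -v '\[404\]' probe.txt | awk '{print $1}' > alive.txt
grep '\[403\]' probe.txt | awk '{print $1}' > review.txt
```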
Step 4: Technology Fingerprinting
For each live host, identify the technology stack. This determines what vulnerabilities to test for.
- Web server: Apache, Nginx, IIS, Caddy (check the `Server` header)
- Application framework: Next.js, React, Angular, Django, Rails, Spring (check response headers, HTML source, JavaScript files)
- CMS: WordPress, Drupal, Joomla (check `/wp-admin`, `/wp-login.php`, generator meta tags)
- WAF: Cloudflare, AWS WAF, Akamai (check response headers, cookie names, error pages)
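In practice you fingerprint from saved responses (`curl -si https://host`); this sketch greps the three quickest signals out of a fabricated response.

```shell
# Fabricated response (headers plus HTML) as `curl -si` would print it.
cat > response.txt <<'EOF'
HTTP/1.1 200 OK
Server: nginx/1.24.0
X-Powered-By: PHP/8.2

<html><head><meta name="generator" content="WordPress 6.4" /></head></html>
EOF

# Web server, framework header, and CMS generator tag in one pass.
grep -i '^Server:' response.txt
grep -i '^X-Powered-By:' response.txt
grep -io 'generator" content="[^"]*' response.txt
```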
Tools: Wappalyzer (browser extension), Nuclei (tech-detection templates), and `whatweb`. SecurityClaw's nextjs-recon and tech-stack-cve-scanner skills automate this for specific stacks; see our new skills coverage.
Why this matters: if you find a Next.js app, test for `_next/data` exposure and environment variable leaks. If you find WordPress, test for plugin vulnerabilities. If you find an API gateway, test its CORS configuration. The tech stack determines the attack surface.
Step 5: Endpoint Discovery
Now find every endpoint on each live host:
Crawling
- Spider the application with ZAP or Burp to discover linked pages
- Check `robots.txt` and `sitemap.xml` for disclosed paths
- Look for API documentation endpoints (`/api-docs`, `/swagger`, `/graphql`)
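`robots.txt` entries feed straight into the endpoint list; a sketch against a fabricated file (in practice, fetch it with `curl -s https://target/robots.txt`).

```shell
# Fabricated robots.txt.
cat > robots.txt <<'EOF'
User-agent: *
Disallow: /admin/
Disallow: /api/internal/
Allow: /
EOF

# Disallow lines are paths the owner did not want indexed:
# exactly the paths worth adding to the endpoint list.
awk '/^Disallow:/ {print $2}' robots.txt > disclosed-paths.txt
```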
Directory Brute-Force
- Use `ffuf` or `gobuster` with a targeted wordlist
- Match the wordlist to the tech stack: use a WordPress wordlist for WordPress, a Node.js wordlist for Node apps
- Check for backup files: `.bak`, `.old`, `.swp`, and `~` suffixes
- Check for exposed config: `.env`, `.git/config`, `web.config`, `application.yml`
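Backup-file candidates can be generated from paths you have already found; the two input paths here are illustrative, and the output is meant to be fed to `ffuf` or `gobuster`.

```shell
# Paths discovered by crawling (illustrative).
printf '/index.php\n/config.php\n' > paths.txt

# Emit the backup-suffix variant of every known file.
while read -r p; do
  for s in .bak .old .swp '~'; do
    echo "${p}${s}"
  done
done < paths.txt > backup-candidates.txt
```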
Wayback Machine
- Check `web.archive.org` for historical URLs; endpoints that were removed may still be accessible
- Tool: `waybackurls` extracts all archived URLs for a domain
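Raw `waybackurls` output needs a quick post-processing pass to become useful: unique paths to re-test, and parameter names to fuzz. The sample URLs below are fabricated.

```shell
# Fabricated sample in the shape of waybackurls output.
cat > wayback.txt <<'EOF'
https://example.com/search?q=test&page=2
https://example.com/search?q=other
https://example.com/old-api/v1/users
EOF

# Unique paths with query strings stripped: removed endpoints to re-test.
sed 's/?.*//' wayback.txt | sort -u > wayback-paths.txt

# Unique parameter names: candidates for injection testing.
grep -o '?[^ ]*' wayback.txt | tr '?&' '\n\n' | cut -d= -f1 | grep . | sort -u > params.txt
```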
Step 6: JavaScript Analysis
Modern web applications ship their logic in JavaScript bundles. These bundles contain:
- API endpoints: hardcoded paths like `/api/v2/users` and `/internal/admin`
- API keys and tokens: developers accidentally ship secrets in client-side code
- Hidden functionality: admin features, debug endpoints, feature flags
- Internal hostnames: references to staging servers, internal APIs, microservice names
How to analyze:
- Download all `.js` files from the application
- Search for patterns: `/api/`, `Authorization`, `Bearer`, `apiKey`, `secret`, `internal`
- Use tools like `LinkFinder` to extract URLs from JavaScript
- Beautify minified code with `js-beautify` for manual review
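The pattern search itself is a grep over the downloaded bundles. The bundle below is a fabricated two-line snippet; real bundles are minified and far larger.

```shell
# Fabricated bundle snippet.
cat > bundle.js <<'EOF'
const base="/api/v2/users";fetch(base,{headers:{Authorization:"Bearer abc123"}});
const internal="https://staging-api.internal.example.com";
EOF

# Endpoints, auth material, and internal hostnames in three passes.
grep -oE '"/api/[^"]*"' bundle.js
grep -oE 'Bearer [A-Za-z0-9._-]+' bundle.js
grep -oE 'https?://[A-Za-z0-9.-]*internal[A-Za-z0-9.-]*' bundle.js
```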
SecurityClaw's js_analyzer and js-bundle-recon skills automate this: they extract endpoints, secrets, and internal references from JavaScript bundles.
Step 7: Prioritize Targets
You now have a list of live hosts, their tech stacks, and their endpoints. Don't test everything equally. Prioritize:
High Priority (Test First)
- Admin panels and login pages: authentication bypass, default credentials, brute-force
- API endpoints with authentication: IDOR, broken access control, mass assignment
- File upload functionality: unrestricted upload, path traversal, command injection via filename
- Search and filter parameters: SQL injection, XSS (these accept user input and often lack sanitization)
- Forgotten/staging subdomains: often have weaker security controls than production
Medium Priority
- User profile and settings pages: stored XSS, IDOR on profile data
- Payment and checkout flows: business logic flaws, price manipulation
- OAuth/SSO integration points: redirect URI manipulation, token leakage
Lower Priority
- Static content pages: limited attack surface
- Well-known third-party integrations: Stripe checkout, Google Maps embeds (bugs here are usually the third party's responsibility)
Automating the Workflow
The recon steps above can be chained into a single automated pipeline. Here's the flow:
1. `subfinder` + `amass` + crt.sh -> merged subdomain list
2. `httpx` -> alive hosts with status codes
3. `nuclei` (tech-detect templates) -> tech stack per host
4. `ffuf` -> endpoint discovery per host
5. `waybackurls` -> historical endpoints
6. `LinkFinder` -> JS-extracted endpoints
7. Merge all endpoints -> deduplicated target list
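A minimal sketch of that chain as a shell function, assuming `subfinder`, `httpx`, and `waybackurls` are installed (the flags shown are the common ones; check your installed versions). Steps 3, 4, and 6 are omitted to keep the sketch short.

```shell
# Minimal recon pipeline: subdomains -> alive hosts -> merged target list.
recon() {
  domain="$1"
  subfinder -d "$domain" -silent | sort -u > subs.txt   # step 1: enumerate
  httpx -silent < subs.txt > alive.txt                  # step 2: probe
  echo "$domain" | waybackurls > wayback.txt            # step 5: history
  cat alive.txt wayback.txt | sort -u > targets.txt     # step 7: merge
}
```

Run it as `recon example.com`; `targets.txt` is the deduplicated list to carry into testing.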
Tools like `reconftw` and `axiom` automate this entire chain. For CI/CD integration, see our guide on automating ZAP in GitHub Actions; the same pattern works for recon tools.
Common Recon Mistakes
- Skipping passive recon: jumping straight to active scanning misses CT-log subdomains, GitHub leaks, and Wayback Machine endpoints.
- Using only default wordlists: generic wordlists miss application-specific paths. Build custom wordlists from the target's JavaScript, documentation, and error messages.
- Not checking non-standard ports: many interesting services run on 8080, 8443, 3000, or custom ports. A scan of only ports 80/443 misses them.
- Ignoring JavaScript bundles: the richest source of hidden endpoints and secrets in modern web apps.
- Testing before understanding scope: testing out-of-scope assets gets you banned from the program. Read the scope first.
Bottom Line
Recon is the foundation. A thorough recon workflow (subdomain enumeration, alive checking, tech fingerprinting, endpoint discovery, JavaScript analysis) gives you a complete map of the attack surface. From there, use the security testing checklist to systematically test what you've found.
The hunters who find the best bugs aren't necessarily the most skilled testers. They're the ones who found the assets that nobody else was looking at.