FestIn - S3 Bucket Weakness Discovery

The S3 bucket finder and content discoverer

What is FestIn

FestIn is a tool for discovering open S3 buckets, starting from a domain.

It performs many tests and collects information from:

  • DNS
  • Web Pages (Crawler)
  • S3 buckets themselves (e.g. S3 redirections)
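The DNS source is worth illustrating: a CNAME that points at S3 leaks the bucket name directly. A minimal sketch (the CNAME value here is hypothetical, not real output):

```shell
# Hypothetical CNAME answer, e.g. from `dig +short CNAME assets.mydomain.com`
cname="assets-mydomain.s3.amazonaws.com."

# Strip the S3 suffix to recover the bucket name
bucket="${cname%.s3.amazonaws.com.}"
echo "$bucket"
```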

Why FestIn

There are a lot of S3 tools for enumerating and discovering S3 buckets. Some of them are great, but none offers a complete set of features like FestIn does.


Main features

  • Various techniques for finding buckets: crawling, DNS crawling, and S3 response analysis.
  • Proxy support for tunneling requests.
  • AWS credentials are not needed.
  • Works with any S3-compatible provider, not only AWS.
  • Allows configuring custom DNS servers.
  • Integrated high-performance HTTP crawler.
  • Recursive search with feedback between the three engines: a domain found by the DNS crawler is sent to the S3 analyzer and the HTTP crawler, and vice versa.
  • Works in 'watching' mode, listening for new domains in real time.
  • Saves all discovered domains to a separate file for further analysis.
  • Can download bucket objects and put them into a Full-Text Search Engine (Redis Search) automatically, indexing object content to allow powerful searches later.
  • Can limit the search to specific domain(s).


Using Python

Python 3.8 or above is needed!
$ pip install festin
$ festin -h

Using Docker

$ docker run --rm -it cr0hn/festin -h

Full options

$ festin -h
                   [--tor] [--debug] [--no-print] [-q] [--index] [--index-server INDEX_SERVER] [-dn] [-ds DNS_RESOLVER]
                   [domains [domains ...]]

Festin - the powered S3 bucket finder and content discover

positional arguments:
  domains

optional arguments:
  -h, --help            show this help message and exit
  --version             show version
  -f FILE_DOMAINS, --file-domains FILE_DOMAINS
                        file with domains
  -w, --watch           watch for new domains in the file given with '-f'
  -c CONCURRENCY, --concurrency CONCURRENCY
                        max concurrency

HTTP Probes:
  --no-links            extract web site links
  -T HTTP_TIMEOUT, --http-timeout HTTP_TIMEOUT
                        set timeout for http connections
  -M HTTP_MAX_RECURSION, --http-max-recursion HTTP_MAX_RECURSION
                        maximum recursion when following links
  -dr DOMAIN_REGEX, --domain-regex DOMAIN_REGEX
                        only follow domains that match this regex

Results:
  -rr RESULT_FILE, --result-file RESULT_FILE
                        results file
  -rd DISCOVERED_DOMAINS, --discovered-domains DISCOVERED_DOMAINS
                        file name for storing new domains discovered after applying filters
  -ra RAW_DISCOVERED_DOMAINS, --raw-discovered-domains RAW_DISCOVERED_DOMAINS
                        file name for storing any domain, without filters

Connectivity:
  --tor                 use Tor as proxy

Display options:
  --debug               enable debug mode
  --no-print            don't print results on screen
  -q, --quiet           use quiet mode

Redis Search:
  --index               download and index documents into Redis
  --index-server INDEX_SERVER
                        Redis Search server. Default: redis://localhost:6379

DNS options:
  -dn, --no-dnsdiscover
                        do not follow DNS CNAMEs
  -ds DNS_RESOLVER, --dns-resolver DNS_RESOLVER
                        comma-separated custom domain name servers


Configure search domains

By default, FestIn accepts a start domain as a command line parameter:

> festin mydomain.com

But you can also set up an external file with a list of domains:

> cat domains.txt
> festin -f domains.txt 


FestIn performs many tests for each domain, and the tests run concurrently. By default, concurrency is set to 5. If you want to increase the number of concurrent tests, set the -c option:
> festin -c 10 mydomain.com
Be careful with the number of concurrent tests, or alarms could be raised at some web sites.

HTTP Crawling configuration

FestIn embeds a small crawler to discover links to S3 buckets. The crawler accepts these options:
  • Timeout (-T): configures a timeout for HTTP connections. If the website of the domain you want to analyze is slow, we recommend increasing this value. The default timeout is 5 seconds.
  • Maximum recursion (-M): sets a limit on crawling recursion; otherwise FestIn would try to scan the whole internet. The default value is 3, meaning the crawler follows at most three links in a chain: domain -> [link] -> [link] -> [link] -> maximum recursion reached, stop.
  • Limit domains (-dr): set this option to limit the crawler to domains that match this regex.
  • Black list (-B): configures a file of blacklisted words. Each domain that matches some word in the black list will be skipped.
  • White list (-W): configures a file of whitelisted words. Each domain that DOESN'T match some word in the white list will be skipped.
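The black list semantics can be sketched with plain grep (a sketch of the filtering idea, not FestIn's actual code; the domains and words are made up):

```shell
# Hypothetical list of discovered domains and a blacklist of words
printf 'cdn.mydomain.com\nphotos.mydomain.com\napi.mydomain.com\n' > found.txt
printf 'cdn\nphotos\n' > blacklist.txt

# Domains matching any blacklisted word are skipped; the rest survive
grep -v -f blacklist.txt found.txt
```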


> echo "cdn" > blacklist.txt
> echo "photos" >> blacklist.txt
> festin -T 20 -M 8 -B blacklist.txt -dr .*mydomain.* mydomain.com
BE CAREFUL: -dr (or --domain-regex) only accepts valid POSIX regexes: *.mydomain.* is not a valid POSIX regex, while .*mydomain.* is.
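You can sanity-check a pattern before passing it to -dr with grep -E, which also speaks POSIX extended regexes:

```shell
# .*mydomain.* is a valid POSIX extended regex:
# it matches any string containing "mydomain"
echo "assets.mydomain.com" | grep -E '.*mydomain.*'
```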

Manage results


runs it discover a lot of useful information. Not only about S3 buckets, also for other probes we could do. For example:

After we use

we can use discovered information (domains, links, resources, other buckets...) as input of other tools, like nmap.

For above reason

has 3 different modes to store discovered information and we can combine them:
  • FestIn
    result file (
    ): this file contains one JSON per line with buckets found by them. Each JSON includes: origin domain, bucket name and the list of objects for the bucket.
  • Filtered discovered domains file (
    ): this file contains one domain per line. These domains are discovered by the crawler, dns or S3 probes but only are stored these domains that matches with user and internal filters.
  • Raw discovered domains file (
    ): this file contains all domains, one per line, discovered by
    without any filter. This option is useful for post-processing and analyzing.
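Because the -rr file is one JSON object per line, it is easy to post-process. A sketch, assuming hypothetical field names (FestIn's actual JSON keys may differ):

```shell
# Fake result line with assumed field names -- not real FestIn output
echo '{"domain":"mydomain.com","bucket_name":"assets","objects":["a.txt","b.txt"]}' > festin.results

# Pull out the bucket name from each result line
grep -o '"bucket_name":"[^"]*"' festin.results | cut -d'"' -f4
```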


> festin -rr festin.results -rd discovered-domains.txt -ra raw-domains.txt mydomain.txt 

And, chaining with Nmap:

> festin -rd domains.txt mydomain.com && nmap -Pn -A -iL domains.txt -oN nmap-domains.txt 

Proxy usage

FestIn embeds the --tor option. To use this parameter, you need a local Tor proxy running on port 9050 at 127.0.0.1:
> tor &
> festin --tor mydomain.com 

DNS Options

Some tests made by FestIn involve DNS. It supports these options:
  • Disable DNS discovery (-dn): do not follow DNS CNAMEs.
  • Custom DNS server (-ds): sets custom DNS servers. If you plan to perform a lot of tests, you should use a DNS server different from the one your browser uses.


> festin -ds 8.8.8.8 mydomain.com 

Full Text Support

FestIn can not only discover open S3 buckets; it can also download all their content and store it in a Full-Text Search Engine. This means you can run full-text queries against the content of the buckets!

FestIn uses the open source project Redis Search as its Full-Text Engine.

This feature has two options:

  • Enable indexing (--index): set this flag to enable indexing into the search engine.
  • Redis Search config (--index-server): you only need to set this option if your server is running on an IP/port other than localhost:6379.


> docker run --rm -d -p 6700:6379 redislabs/redisearch:latest
> festin --index --index-server redis://127.0.0.1:6700
Pay attention: the `--index-server` option must include the **redis://** prefix.

Running as a service (or watching mode)

Sometimes we don't want to stop FestIn and relaunch it each time we have a new domain to inspect, or each time an external tool discovers new domains we want to check.

For these cases, FestIn supports a watching mode. This means that FestIn starts and listens for new domains. The way to "send" new domains to FestIn is through the domains file: FestIn monitors this file for changes.

This feature is useful for combining FestIn with other tools, like dnsrecon:


> festin --watch -f domains.txt 

In a different terminal we can write:

> echo "" >> domains.txt 
> echo "" >> domains.txt 

Each new domain added to domains.txt will wake up FestIn.


Example: Mixing FestIn + DnsRecon

Using DnsRecon

The domain chosen for this example is mydomain.com (a placeholder; substitute your real target).

Step 1 - Run dnsrecon with desired options against target domain and save the output

>  dnsrecon -d mydomain.com -t crt -c mydomain.com.csv

With this command we find other domains related to mydomain.com. This helps maximize our chances of success.

Step 2 - Prepare the previously generated file to feed FestIn

> tail -n +2 mydomain.com.csv | sort -u | cut -d "," -f 2 >> domains.txt

With this command we generate a file with one domain per line. This is the input format that FestIn expects.
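To see what this pipeline does, here is a demo on a fake CSV (the column layout is an assumption about dnsrecon's output; check your own file's header):

```shell
# Fake dnsrecon-style CSV: header plus two records, domain in the second column
printf 'Type,Name,Address\nA,www.mydomain.com,1.2.3.4\nCNAME,cdn.mydomain.com,d1.example.net\n' > demo.csv

# Skip the header, deduplicate, and keep only the domain column
tail -n +2 demo.csv | sort -u | cut -d "," -f 2
```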


Step 3 - Run FestIn with desired options and save output

>  festin -f domains.txt -c 5 -rr festin.results --tor -ds 8.8.8.8 > festin.stdout 2> festin.stderr

In this example the resulting files are:

  • festin.results - Main result file with one line per bucket found. Each line is a JSON object.
  • festin.stdout - The standard output of the festin command execution.
  • festin.stderr - The standard error of the festin command execution.

To ease the processing of multiple domains, we provide a simple script in the examples/ directory that automates this.

Using FestIn with DnsRecon results

Run FestIn against mydomain.com with default options, saving the result to a file:

> festin -rr festin.results mydomain.com
Run FestIn against mydomain.com using the Tor proxy, with a concurrency of 5, using a custom DNS server for resolving CNAMEs, and saving the result to a file:
> festin -c 5 -rr festin.results --tor -ds 8.8.8.8 mydomain.com


Q: AWS bans my IP

A: When you perform a lot of tests against AWS S3, AWS may put your IP on a black list. After that, every attempt to access any S3 bucket, whether with FestIn or with your browser, will be blocked.

We recommend setting up a proxy (e.g. the --tor option) when you use FestIn.

Who uses FestIn


Mr looquer

They analyze and assess your company's risk exposure in real time.


This project is distributed under the BSD 3-Clause license.
