# NoXss

Faster XSS scanner, supporting reflected XSS and DOM-based XSS.


NoXss is a cross-site scripting vulnerability scanner that supports reflected XSS and DOM-based XSS. It is very fast and suitable for testing millions of URLs, and it has found XSS vulnerabilities in bug bounty programs.


# Features

  • Fast and suitable for testing millions of URLs
  • Supports DOM-based XSS (using Chrome or PhantomJS) and reflected XSS
  • Uses only 8 payloads, chosen by injection position (no fuzzing: more accurate and faster)
  • Async requests (using gevent) plus multi-processing
  • Supports a single URL, a file of URLs, and traffic from Burp Suite
  • Traffic filter based on interface
  • Supports special headers (e.g. referer, cookie, customized token)
  • Supports quick rescanning by task id

# Directory

    ├── logo
    ├── url.txt
    ├── cookie
    │   └── test.com_cookie
    ├── traffic
    │   ├── 49226b2cbc77b71b.traffic    # traffic file (pickled)
    │   └── 49226b2cbc77b71b.reflect    # reflect file (pickled)
    ├── url.txt.filtered    # filtered urls
    ├── requirements.txt
    ├── result
    │   └── 49226b2cbc77b71b-2019_10_29_11_24_44.json   # result

# Screenshot

# Environment

Linux
Browser: PhantomJS or Chrome

# Install

### Ubuntu

1. apt-get install flex bison phantomjs
2. pip install -r requirements.txt

### CentOS

1. yum install flex bison phantomjs
2. pip install -r requirements.txt

### MacOS

1. brew install grep findutils flex phantomjs
2. pip install -r requirements.txt

If you want to scan using "--browser=chrome", you must install Chrome manually. You can use "--check" to test the installation.

python --check


# Usage

python --url url --save
python --url url --cookie cookie --browser chrome --save  
python --url url --cookie cookie --browser chrome-headless --save  
python --file ./url.txt --save  
python --burp ./test.xml --save  
python --file file --filter


# Options

--url scan a single URL.
--id rescan from the .traffic file with the given task id.
--file scan URLs from a text file (like ./url.txt).
--burp scan a .xml file (base64-encoded, like ./test.xml) saved from the Burp Suite proxy.
--process number of processes.
 number of coroutines.
--cookie use a cookie.
--filter filter URLs.
--browser use a browser (chrome, chrome-headless or phantomjs) to scan; good at DOM-based XSS but slow.
--save save results to ./result/id.json.
--clear delete traffic files after scanning.
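The exact rule behind "--filter" is internal to NoXss, but one plausible reading of an interface-based filter is: treat two URLs as duplicates when they share host, path, and parameter names (ignoring parameter values), and keep only the first of each group. A minimal sketch under that assumption:

```python
from urllib.parse import urlsplit, parse_qsl

def interface_key(url):
    """Reduce a URL to its 'interface': host, path, and the sorted
    parameter names (parameter values are ignored)."""
    parts = urlsplit(url)
    names = sorted(name for name, _ in parse_qsl(parts.query))
    return (parts.netloc, parts.path, tuple(names))

def filter_urls(urls):
    """Keep only the first URL seen for each interface."""
    seen, kept = set(), []
    for url in urls:
        key = interface_key(url)
        if key not in seen:
            seen.add(key)
            kept.append(url)
    return kept
```

With this rule, two URLs hitting the same endpoint with the same parameter names but different values collapse into one scan target.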

# How to scan data from Burp Suite

In Proxy, use "Save items" ==> "test.xml".
Then you can scan test.xml:

python --burp=./test.xml
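Burp's "Save items" export is XML in which each item carries its request, base64-encoded when the request element's base64 attribute is "true". A minimal sketch of reading such a file, independent of NoXss's own parser:

```python
import base64
import xml.etree.ElementTree as ET

def parse_burp_items(xml_text):
    """Extract (url, raw_request) pairs from a Burp 'Save items' export.
    Requests are base64-decoded when flagged with base64="true"."""
    root = ET.fromstring(xml_text)
    results = []
    for item in root.iter("item"):
        url = item.findtext("url", default="")
        req = item.find("request")
        if req is None:
            continue
        raw = req.text or ""
        if req.get("base64") == "true":
            raw = base64.b64decode(raw).decode("utf-8", "replace")
        results.append((url, raw))
    return results
```

Each recovered raw request can then be replayed with injected payloads, which is what scanning Burp traffic amounts to.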

# How to rescan

After the first scan, there will be taskid.traffic and taskid.reflect files in ./traffic/:

+ taskid.traffic: web traffic of the requests (pickled).
+ taskid.reflect: reflection results (pickled), including reflected params, reflected position, type and others.

NoXss uses these intermediate files to rescan:

python --id taskid --save

# How does NoXss work?


NoXss uses only 8 payloads for scanning. These payloads are chosen by the parameter's reflected position; using fewer payloads makes it faster than fuzzing.
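The position-to-payload idea can be sketched as follows. The position names and payload strings below are illustrative placeholders, not NoXss's actual payload set:

```python
# Hypothetical mapping from reflection context to a single test payload;
# the point is one targeted payload per context instead of a fuzz list.
PAYLOADS = {
    "html":          "<svg onload=alert(1)>",
    "attr_double":   '"><svg onload=alert(1)>',
    "attr_single":   "'><svg onload=alert(1)>",
    "script_double": '";alert(1);//',
    "script_single": "';alert(1);//",
    "comment":       "--><svg onload=alert(1)>",
}

def choose_payload(position):
    """Pick the one payload matching the detected reflection position."""
    return PAYLOADS.get(position)
```

A parameter reflected inside a double-quoted JavaScript string, for example, only ever needs the quote-breaking `";alert(1);//` variant, so every other request is skipped.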


NoXss is highly concurrent because it uses coroutines.
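NoXss itself pairs gevent coroutines with multiple processes; the same fan-out pattern, sketched here with the stdlib's `concurrent.futures` for illustration (`check_url` is a placeholder for one scan request):

```python
from concurrent.futures import ThreadPoolExecutor

def check_url(url):
    """Placeholder for issuing one scan request; returns a verdict."""
    return url, "ok"

def scan_all(urls, workers=10):
    """Fan the per-URL checks out concurrently, preserving input order.
    NoXss achieves the same effect with gevent coroutines per process."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(check_url, urls))
```

Because each check is dominated by network wait, not CPU, this kind of concurrency is what makes scanning millions of URLs feasible.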

Support for DOM-based XSS

More and more pages use the DOM to render HTML. NoXss can handle them by rendering with PhantomJS (default) or Chrome.
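Markup built at runtime can hide XSS from a purely reflected scan, which is why a browser mode exists. As a rough illustration only (not NoXss's actual logic), a static pre-check for classic DOM source/sink patterns might look like:

```python
import re

# A few classic DOM-XSS sink/source patterns; a real browser-based scan
# executes the page instead of pattern-matching its source like this.
DOM_SINKS = re.compile(r"document\.write|innerHTML|eval\(|location\.hash")

def looks_dom_risky(html):
    """Cheap static hint that a page may warrant browser-based scanning."""
    return bool(DOM_SINKS.search(html))
```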

# Analysis files

Some XSS is difficult to scan. NoXss saves several files in traffic/ for analysis, including:

+ *.traffic (traffic file from the scan)
+ *.reflect (parameters' reflection results)
+ *.redirect (30x responses)
+ *.error (errors such as timeouts, connection resets, etc.)
+ *.multipart (multipart-form requests, which are not easy to scan)
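Since these analysis files are pickled, they can be inspected offline. A minimal loading sketch (the record layout inside is NoXss-internal, so treat the result as opaque Python objects):

```python
import pickle

def load_analysis(path):
    """Load a pickled ./traffic/ analysis file (.traffic, .reflect, etc.).
    Only unpickle files you produced yourself: pickle is not safe on
    untrusted input."""
    with open(path, "rb") as fh:
        return pickle.load(fh)
```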


As you can see in the screenshot, the PoC means the payload was reflected in the param "proxyAccount". You can then close the double quotes with the payload. The final exploit is:

";alert(1);//&shareName=duhxams
