
Check the HTTP status code of all links on a website


This repository provides a tool to check the HTTP status code of every link on a given website.

Support us

We invest a lot of resources into creating best-in-class open source packages. You can support us by buying one of our paid products.

We highly appreciate you sending us a postcard from your hometown, mentioning which of our package(s) you are using. You'll find our address on our contact page. We publish all received postcards on our virtual postcard wall.

Installation

This package can be installed via Composer:

composer global require spatie/http-status-check

Usage

This tool will scan all links on a given website:

http-status-check scan https://example.com
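Conceptually, a scan like this boils down to two steps: extract every link from each fetched page, then request each link and record its status code. The sketch below illustrates the first step using only Python's standard library; it is not the tool's PHP implementation, just an illustration of the idea.

```python
# Conceptual sketch (not the tool's PHP code): collecting the href
# targets from a fetched HTML page, as a crawler must do before it can
# check each link's status.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href targets from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = '<a href="/about">About</a> <a href="https://example.org">Ext</a>'
collector = LinkCollector()
collector.feed(html)
print(collector.links)  # ['/about', 'https://example.org']
```

Each collected link would then be requested in turn and its HTTP status code reported on its own output line.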

It outputs a line per link found:

(screenshot of example scan output)

When the crawling process is finished, a summary will be shown.

By default, the crawler uses 10 concurrent connections to speed up the crawling process. You can change that number by passing a different value to the --concurrency option:

http-status-check scan https://example.com --concurrency=20
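The effect of a concurrency limit can be sketched with a bounded worker pool: at most N checks are in flight at once, while the rest wait their turn. In the sketch below, check_url is a hypothetical stand-in for an HTTP request, so the example runs without network access.

```python
# Conceptual sketch of bounded concurrency, mirroring what --concurrency
# controls: at most `concurrency` checks run at the same time.
from concurrent.futures import ThreadPoolExecutor

def check_url(url):
    # Hypothetical stand-in for issuing an HTTP request and reading the
    # response; the real tool performs actual requests here.
    return (url, 200)

urls = [f"https://example.com/page/{i}" for i in range(50)]
concurrency = 20  # same role as --concurrency=20

with ThreadPoolExecutor(max_workers=concurrency) as pool:
    results = list(pool.map(check_url, urls))

print(len(results))  # 50
```

Raising the limit speeds up large crawls at the cost of putting more simultaneous load on the target server.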

You can also write all URLs that did not return a 2xx or 3xx response to a file:

http-status-check scan https://example.com --output=log.txt
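In other words, a URL is written to the file only when its status code falls outside both the success (2xx) and redirect (3xx) ranges. That filter can be expressed as a one-line predicate (a sketch of the rule, not the tool's code):

```python
# Sketch of the filter --output applies: log a URL only when its status
# code is neither 2xx nor 3xx.
def should_log(status: int) -> bool:
    return not (200 <= status < 400)

print([s for s in (200, 301, 404, 500) if should_log(s)])  # [404, 500]
```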

When the crawler finds a link to an external website, it will by default crawl that link as well. If you don't want the crawler to follow external URLs, use the --dont-crawl-external-links option:

http-status-check scan https://example.com --dont-crawl-external-links

By default, requests time out after 10 seconds. You can change this by passing the number of seconds to the --timeout option:

http-status-check scan https://example.com --timeout=30

By default, the crawler respects robots directives (such as robots.txt). You can ignore them with the --ignore-robots option:

http-status-check scan https://example.com --ignore-robots
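The kind of decision a robots-respecting crawler makes before fetching a URL can be illustrated with Python's standard-library robots parser. The rules below are an inline example rather than rules fetched from a live site, and this is an illustration of the general mechanism, not the tool's own code.

```python
# Sketch of a robots.txt check: before requesting a URL, a
# robots-respecting crawler asks whether the rules allow the fetch.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch("*", "https://example.com/private/page"))  # False
print(rp.can_fetch("*", "https://example.com/public"))        # True
```

Passing --ignore-robots skips this check entirely, so disallowed URLs are crawled and reported like any other.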

Testing

To run the tests, first make sure you have Node.js installed. Then start the included Node-based server in a separate terminal window:

cd tests/server
./start_server.sh

With the server running, you can start testing:

vendor/bin/phpunit

Changelog

Please see CHANGELOG for more information on what has changed recently.

Contributing

Please see CONTRIBUTING for details.

Security

If you discover any security related issues, please email [email protected] instead of using the issue tracker.

Credits

License

The MIT License (MIT). Please see License File for more information.
