
Simple Proxy Scraper

Created by Joe Shenouda (www.shenouda.nl)

This is a simple proxy scraper script that fetches proxies from various sources and saves them to a file named “proxies.txt”. After the script finishes running, it displays a message to the user indicating that the file has been created.

Features

  - Fetches proxies from several sources in a single run
  - Saves the combined list to a proxies.txt file next to the script
  - Prints a confirmation message when it finishes
  - Single Python script; the only external dependency is httpx

Usage

  1. Ensure you have Python 3.7 or higher installed on your system. You can check the installed version by running python --version or python3 --version in your command line or terminal.
  2. Install the httpx library, which is required to make HTTP requests, by running the following command: pip install httpx or pip3 install httpx.
  3. Save the provided script to a file named proxy_scraper.py.
  4. Run the script using the command python proxy_scraper.py or python3 proxy_scraper.py.
  5. When the script finishes, it prints the message “Proxies have been saved to proxies.txt”; the collected proxies are stored in proxies.txt in the same directory as the script.
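The actual script lives in the repository; for readers who want a feel for how it works, the steps above can be sketched as a small fetch-parse-save loop. The source URLs and helper names below are illustrative assumptions, not the real script's contents:

```python
import re
from typing import List, Set

# Hypothetical source URLs -- the script's real list may differ.
PROXY_SOURCES = [
    "https://raw.githubusercontent.com/TheSpeedX/PROXY-List/master/http.txt",
    "https://api.proxyscrape.com/v2/?request=displayproxies&protocol=http",
]

# A line is kept only if it looks like an ip:port pair.
PROXY_RE = re.compile(r"^\d{1,3}(?:\.\d{1,3}){3}:\d{1,5}$")


def parse_proxies(text: str) -> List[str]:
    """Return the lines of a source response that look like ip:port pairs."""
    return [ln.strip() for ln in text.splitlines() if PROXY_RE.match(ln.strip())]


def fetch_proxies(url: str) -> List[str]:
    """Download one source and return the proxies it lists."""
    import httpx  # imported lazily so the parsing helper works on its own

    resp = httpx.get(url, timeout=10.0, follow_redirects=True)
    resp.raise_for_status()
    return parse_proxies(resp.text)


def main() -> None:
    collected: Set[str] = set()  # a set deduplicates proxies shared by sources
    for url in PROXY_SOURCES:
        try:
            collected.update(fetch_proxies(url))
        except Exception:
            continue  # skip sources that are down or return malformed data
    with open("proxies.txt", "w") as fh:
        fh.write("\n".join(sorted(collected)))
    print("Proxies have been saved to proxies.txt")


if __name__ == "__main__":
    main()
```

Collecting into a set before writing means the same proxy listed by two sources appears only once in proxies.txt.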

License

This project is released under the MIT License. See the LICENSE file for more information.

Support

If you would like to support this project, you can make a donation through PayPal:

Donate with PayPal

Don’t forget to give this repo a ✨ STAR!