This tool uses Google to find sites that list proxies, scrapes them, and stores the results in a file named proxys.txt.
There is no need to configure anything. It works on Linux and Termux.
First, use git clone to download the project, then run install.sh:
$ git clone https://github.com/MKVO-pts/Proxy-Scraper.git
$ cd Proxy-Scraper
$ bash install.sh
This project does the following:
- Scrape sites from the internet
- Scrape the proxies automatically
- Remove duplicate proxies
- Grab info about them (ping, country, city, ...); see the sketch after this list
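
Roughly, the scraping and deduplication steps could look like the sketch below. This is a minimal illustration, not the project's actual code: the search query, result limit, and regex-based ip:port extraction are assumptions, and the exact keyword arguments of googlesearch's search() differ between package versions, so the results iterator is simply sliced.

```python
# Minimal sketch of the scrape-and-deduplicate step (not the project's exact code).
# Assumes the "google" package (imported as googlesearch) and "requests" are installed;
# the query string and result limit are illustrative only.
import re
from itertools import islice

import requests
from googlesearch import search

PROXY_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}:\d{2,5}\b")  # matches ip:port pairs

def scrape_proxies(query="free proxy list", max_sites=10):
    proxies = set()  # a set removes repeated proxies automatically
    for url in islice(search(query), max_sites):  # search() kwargs vary by package version
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue  # skip sites that fail or time out
        proxies.update(PROXY_RE.findall(html))
    return proxies

if __name__ == "__main__":
    found = scrape_proxies()
    with open("proxys.txt", "w") as f:  # same output file the project uses
        f.write("\n".join(sorted(found)))
    print(f"Saved {len(found)} unique proxies to proxys.txt")
```

Using a set makes the duplicate removal automatic; the project also lists bs4 (BeautifulSoup) for parsing, which this sketch skips in favour of a plain regex.
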
This project uses the following Python modules:
- google (googlesearch)
- bs4 (BeautifulSoup)
- ipwhois
- random
- requests
- re
- subprocess
- time
- pysimplegui
This project uses the following programs:
- nmap (see the info-lookup sketch below)
- proxychains (optional)
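
For the info-gathering step, the sketch below shows one way ipwhois and nmap (called through subprocess) could be combined. It assumes proxys.txt holds one ip:port entry per line; the nmap flags and the fields reported are illustrative and not necessarily what the project does.

```python
# Minimal sketch of the info-lookup step (not the project's exact code).
# Assumes proxys.txt holds one ip:port per line; the nmap invocation shown
# here is an assumption about how the project checks proxies.
import subprocess

from ipwhois import IPWhois

def proxy_info(line):
    ip, port = line.strip().split(":")

    # Country lookup via RDAP (ipwhois); fall back to "unknown" if the lookup fails.
    try:
        country = IPWhois(ip).lookup_rdap().get("asn_country_code", "unknown")
    except Exception:
        country = "unknown"

    # Port check with nmap through subprocess; "open" in the output means reachable.
    scan = subprocess.run(["nmap", "-Pn", "-p", port, ip],
                          capture_output=True, text=True)
    reachable = "open" in scan.stdout

    return {"proxy": line.strip(), "country": country, "reachable": reachable}

if __name__ == "__main__":
    with open("proxys.txt") as f:
        for line in f:
            if line.strip():
                print(proxy_info(line))
```
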