ShineNETConfigs

Automated V2Ray configuration scraper and tester that runs on GitHub Actions to collect, test, and maintain a list of working proxy configurations.

Features

  • Scrapes V2Ray configurations from v2nodes.com
  • Tests each configuration for connectivity using ping tests
  • Automatically updates the configuration list every hour
  • Only keeps configurations that pass connectivity tests
  • Validates configurations with the bundled core_engine tester, without requiring a full V2Ray client installation
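
The ping-style connectivity check can be approximated in plain Python. The sketch below is an assumption, not the repository's actual tester (which runs through the core_engine binary): it treats a config as live if a TCP connection to the host and port in its URI succeeds. Note that vmess URIs base64-encode their address and would need extra decoding first.

```python
import socket
from urllib.parse import urlparse

def is_reachable(config_uri: str, timeout: float = 3.0) -> bool:
    """Crude reachability probe for vless://, trojan://, or ss:// URIs.

    A stand-in for the repo's real core_engine ping test: succeed if a
    plain TCP connection to the URI's host:port can be opened.
    """
    parsed = urlparse(config_uri)
    if not parsed.hostname or not parsed.port:
        return False  # malformed URI, or no explicit port to probe
    try:
        with socket.create_connection((parsed.hostname, parsed.port),
                                      timeout=timeout):
            return True
    except OSError:
        return False
```

A TCP probe only proves the port is open, not that the proxy protocol works, which is why the repository relies on a dedicated tester binary instead.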

How It Works

  1. Scraping: The script scrapes v2nodes.com for V2Ray configurations (vmess, vless, trojan, ss)
  2. Downloading: Downloads necessary binaries (xray, core_engine, hysteria)
  3. Testing: Tests each configuration using ping tests through the core_engine tester
  4. Filtering: Only configurations that pass the connectivity test are saved
  5. Updating: The configs.txt file is automatically updated every hour via GitHub Actions
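
The scraping step (1) boils down to pulling config URIs out of fetched HTML. A minimal sketch, assuming the four schemes listed above (the real script parses v2nodes.com pages with BeautifulSoup; this extraction works on any HTML string):

```python
import re

# Config URI schemes named in step 1 above.
CONFIG_SCHEMES = ("vmess", "vless", "trojan", "ss")

# Match a config URI up to whitespace, quotes, or angle brackets.
CONFIG_RE = re.compile(r"\b(?:%s)://[^\s\"'<>]+" % "|".join(CONFIG_SCHEMES))

def extract_configs(html: str) -> list[str]:
    """Pull V2Ray-style config URIs out of a page's HTML."""
    # dict.fromkeys de-duplicates while preserving first-seen order
    return list(dict.fromkeys(CONFIG_RE.findall(html)))
```

For example, `extract_configs('<a href="vmess://abc">node</a>')` returns `["vmess://abc"]`, while plain `https://` links are ignored.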

GitHub Actions Workflow

The workflow runs on a schedule (every hour) and performs these steps:

  1. Checks out the repository
  2. Sets up Python environment
  3. Installs dependencies
  4. Downloads required binaries
  5. Ensures tester executable is available with the correct platform-specific name
  6. Runs the scraping and testing script
  7. Commits and pushes any updates to configs.txt
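
The steps above map onto a workflow file along these lines. The step names, action versions, and commit message below are assumptions; only the file path (.github/workflows/scrape.yml), the hourly schedule, the dependencies, and the touched files come from this README:

```yaml
# .github/workflows/scrape.yml -- hypothetical sketch of the steps above
name: scrape-configs
on:
  schedule:
    - cron: "0 * * * *"   # every hour
  workflow_dispatch:       # allow manual runs
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - run: pip install requests beautifulsoup4
      - run: python v2ray_mining.py
      - name: Commit and push updated configs
        run: |
          git config user.name "github-actions"
          git config user.email "actions@github.com"
          git add configs.txt
          git commit -m "Update configs" || echo "No changes"
          git push
```

The binary-download and tester-setup steps (4 and 5) are omitted from this sketch because their exact commands are not documented here.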

File Structure

  • v2ray_mining.py - Main scraping and testing script
  • configs.txt - List of working configurations (automatically updated)
  • .github/workflows/scrape.yml - GitHub Actions workflow

Usage

The repository is designed to run automatically via GitHub Actions. To run locally:

python v2ray_mining.py

Requirements

  • Python 3.10+
  • requests
  • beautifulsoup4

Install dependencies:

pip install requests beautifulsoup4

Configuration

You can modify the following settings in v2ray_mining.py:

  • BASE_URL - The website to scrape from
  • PAGES_TO_SCRAPE - Number of pages to scrape
  • REQUEST_TIMEOUT - Request timeout in seconds
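
How these settings might be wired together, as a hedged sketch (the `?page=` pagination scheme, the default values, and the `fetch_pages` helper are illustrative guesses, not the script's actual code):

```python
# Tunable settings named in this README. Values are illustrative defaults.
BASE_URL = "https://www.v2nodes.com"   # the website to scrape from
PAGES_TO_SCRAPE = 5                    # number of listing pages to fetch
REQUEST_TIMEOUT = 10                   # per-request timeout, in seconds

def fetch_pages(get=None) -> list[str]:
    """Fetch each listing page's HTML, honoring the settings above.

    `get` is injectable for testing; by default it uses requests.get
    (the dependency listed under Requirements).
    """
    if get is None:
        import requests
        get = requests.get
    pages = []
    for page in range(1, PAGES_TO_SCRAPE + 1):
        # hypothetical pagination scheme -- the real URL layout may differ
        resp = get(f"{BASE_URL}/?page={page}", timeout=REQUEST_TIMEOUT)
        resp.raise_for_status()
        pages.append(resp.text)
    return pages
```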

Troubleshooting

If you encounter "Tester executable not found" errors:

  1. Ensure all binaries are downloaded properly
  2. Check that the core_engine executable exists in either vendor/ or core_engine/ directories
  3. Make sure the tester executable has proper execute permissions
  4. Verify the platform-specific naming (core_engine_linux for Linux, core_engine.exe for Windows)
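
The lookup described in steps 2 and 4 can be sketched as follows. The directory names (vendor/, core_engine/) and platform-specific file names come from this README, but the search order and the `find_tester` helper are assumptions:

```python
import platform
from pathlib import Path

def find_tester(root: Path = Path(".")) -> Path:
    """Locate the core_engine tester binary.

    Checks vendor/ then core_engine/ for the platform-specific name,
    mirroring the troubleshooting steps above; the real script's search
    order may differ.
    """
    name = ("core_engine.exe" if platform.system() == "Windows"
            else "core_engine_linux")
    for directory in (root / "vendor", root / "core_engine"):
        candidate = directory / name
        if candidate.is_file():
            return candidate
    raise FileNotFoundError(f"Tester executable not found (looked for {name!r})")
```

A found binary may still need execute permissions set (step 3), which this sketch does not check.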

License

This project is for educational purposes only. Users are responsible for complying with all applicable laws and regulations in their jurisdiction.
