How It Works
MyOnionSearch is powered by two main components: a Go backend that serves the API and this website, and a separate Go crawler that maintains the index. This separation makes the system robust and efficient.
1. The Backend (Go API)
The main application you interact with is a web server written in Go. It has two jobs:
- Serves the API: It provides the /api/search, /api/add, and /api/random endpoints that this website uses to get data.
- Serves this Website: The server hosts this static HTML website, eliminating all cross-origin (CORS) issues. It listens on port 8080 for Tor traffic and provides HTTPS on port 443 for the clearnet. A minimal sketch of this setup follows below.
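To make those two jobs concrete, here is a minimal Go sketch of a server with the three API endpoints and a static file root. The handler bodies, the ./static directory, the q query parameter, and the certificate paths are assumptions made for the example, not the actual MyOnionSearch source.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// API endpoints used by the website (handler bodies are illustrative).
	mux.HandleFunc("/api/search", handleSearch)
	mux.HandleFunc("/api/add", handleAdd)
	mux.HandleFunc("/api/random", handleRandom)

	// Everything else falls through to the static HTML site, so the pages
	// and the API share one origin and CORS never comes into play.
	mux.Handle("/", http.FileServer(http.Dir("./static")))

	// Plain HTTP on 8080 for the Tor hidden service ...
	go func() {
		log.Fatal(http.ListenAndServe(":8080", mux))
	}()
	// ... and HTTPS on 443 for clearnet visitors (certificate paths are placeholders).
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", mux))
}

func handleSearch(w http.ResponseWriter, r *http.Request) {
	q := r.URL.Query().Get("q")
	// A real handler would query the sites table; here we just echo the query.
	json.NewEncoder(w).Encode(map[string]string{"query": q})
}

func handleAdd(w http.ResponseWriter, r *http.Request)    { w.WriteHeader(http.StatusAccepted) }
func handleRandom(w http.ResponseWriter, r *http.Request) { w.WriteHeader(http.StatusOK) }
```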
2. The Crawler (Go Worker)
A separate, long-running Go application acts as our index maintainer. It runs in a continuous loop to perform three critical jobs:
- Job 1: Health Checks: It periodically checks every site in the database to see if it's still online. It uses a
failure_countto track unreachable sites. - Job 2: Recursive Crawling: It visits a site, scans its HTML for new
.onionlinks, and adds any new sites it finds to the database. - Job 3: Pruning: If a site fails too many health checks (e.g., 10 times in a row), the crawler assumes it's permanently offline and removes it from the database, keeping search results fresh.
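The loop below sketches how Jobs 1 and 3 could look in Go against a MySQL index. The connection string, the onion_domain column name, the one-hour interval, and the reset of failure_count on a successful check are assumptions; the failure_count and last_crawled_at columns and the 10-failure threshold come from this page.

```go
package main

import (
	"database/sql"
	"log"
	"net/http"
	"time"

	_ "github.com/go-sql-driver/mysql" // MySQL driver; the DSN below is a placeholder
)

const maxFailures = 10 // prune threshold ("e.g., 10 times in a row")

type site struct {
	id     int64
	domain string
}

func main() {
	db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/onionsearch")
	if err != nil {
		log.Fatal(err)
	}
	// A real crawler would route this client through the Tor SOCKS proxy.
	client := &http.Client{Timeout: 30 * time.Second}

	for {
		// Load the current index.
		rows, err := db.Query("SELECT id, onion_domain FROM sites")
		if err != nil {
			log.Fatal(err)
		}
		var sites []site
		for rows.Next() {
			var s site
			if err := rows.Scan(&s.id, &s.domain); err == nil {
				sites = append(sites, s)
			}
		}
		rows.Close()

		// Job 1: health-check every site and track consecutive failures.
		for _, s := range sites {
			resp, err := client.Get("http://" + s.domain)
			if err != nil {
				db.Exec("UPDATE sites SET failure_count = failure_count + 1 WHERE id = ?", s.id)
				continue
			}
			resp.Body.Close()
			db.Exec("UPDATE sites SET failure_count = 0, last_crawled_at = NOW() WHERE id = ?", s.id)
		}

		// Job 3: prune sites that have failed too many checks in a row.
		db.Exec("DELETE FROM sites WHERE failure_count >= ?", maxFailures)

		time.Sleep(1 * time.Hour) // the real interval is not documented here
	}
}
```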
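Job 2 amounts to pulling .onion hostnames out of fetched HTML. The helper below shows one way to do that with a regular expression for v3 onion addresses; the function name, the 1 MiB body cap, and the pattern itself are illustrative assumptions rather than the crawler's actual code.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"regexp"
)

// onionRe matches v3 onion hostnames (56 base32 characters); the real
// crawler's pattern and normalization rules may differ.
var onionRe = regexp.MustCompile(`(?i)\b[a-z2-7]{56}\.onion\b`)

// discoverLinks fetches a page and returns the distinct .onion hosts it mentions.
func discoverLinks(client *http.Client, url string) ([]string, error) {
	resp, err := client.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(io.LimitReader(resp.Body, 1<<20)) // cap at 1 MiB
	if err != nil {
		return nil, err
	}

	seen := map[string]bool{}
	var hosts []string
	for _, h := range onionRe.FindAllString(string(body), -1) {
		if !seen[h] {
			seen[h] = true
			hosts = append(hosts, h)
		}
	}
	return hosts, nil
}

func main() {
	hosts, err := discoverLinks(http.DefaultClient, "http://example.onion/")
	if err != nil {
		fmt.Println("fetch failed:", err)
		return
	}
	for _, h := range hosts {
		fmt.Println("new candidate:", h) // a real crawler would INSERT unseen hosts into the database
	}
}
```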
3. The Database (MySQL)
We use a MySQL database to store the public information for our index. The sites table contains:
- Onion Domain, Clearnet Domain
- Title, Description, and Tags
- Crawler data (failure_count, last_crawled_at)
Crucially, this database does not contain any tables for user accounts, search histories, or IP logs.
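As a rough picture of that table, the Go constant below sketches a plausible MySQL schema for it. Only failure_count, last_crawled_at, and the fields listed above come from this page; the exact column names, types, and keys are assumptions.

```go
package main

// createSitesTable sketches the shape of the sites table described above.
// Column names and types beyond those listed on this page are assumptions,
// not the production schema.
const createSitesTable = `
CREATE TABLE IF NOT EXISTS sites (
    id              BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    onion_domain    VARCHAR(255) NOT NULL UNIQUE,
    clearnet_domain VARCHAR(255),
    title           VARCHAR(255),
    description     TEXT,
    tags            VARCHAR(255),
    failure_count   INT NOT NULL DEFAULT 0,
    last_crawled_at DATETIME
);`

func main() {
	// In the real application this statement would be run once against the
	// MySQL instance, e.g. via database/sql's db.Exec(createSitesTable).
	println(createSitesTable)
}
```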
4. The Frontend (HTML)
This website is built with plain HTML and a pre-compiled CSS file. There is no large framework, no WebAssembly, and no complex rendering. This makes it:
- Extremely Fast: Pages load almost instantly.
- Lightweight: The site has a tiny footprint, ideal for the high-latency Tor network.
- Secure & Private: With no client-side scripts, there's no tracking and a minimal attack surface.