AtlasP2P - Architecture Documentation

Project Overview

AtlasP2P is a professional, production-ready P2P network visualization and monitoring platform. It provides real-time insights into any cryptocurrency blockchain network, featuring node discovery, geolocation mapping, performance tracking, node verification, operator profiles, and tipping. The platform is chain-agnostic and can be forked for any Bitcoin-derived cryptocurrency.

Architecture

Technology Stack

Frontend:

Backend:

Crawler:

Infrastructure:

Project Structure

AtlasP2P/
├── apps/
│   ├── web/                    # Next.js frontend application
│   │   ├── src/
│   │   │   ├── app/           # App Router pages and API routes
│   │   │   ├── components/    # React components
│   │   │   ├── hooks/         # Custom React hooks
│   │   │   ├── lib/           # Utilities and configs
│   │   │   └── types/         # TypeScript type definitions
│   │   └── public/            # Static assets
│   └── crawler/               # Python P2P network crawler
│       ├── src/
│       │   ├── crawler.py     # Main crawler logic
│       │   ├── protocol.py    # Bitcoin protocol implementation
│       │   ├── geoip.py       # GeoIP lookup service
│       │   ├── database.py    # Database operations
│       │   └── config.py      # Reads chainConfig from project.config.yaml
├── packages/
│   └── types/                 # Shared TypeScript types
├── supabase/
│   └── migrations/            # Database migrations (4-layer architecture)
├── docker/
│   ├── docker-compose.yml     # Full stack orchestration
│   ├── Dockerfile.web         # Next.js container
│   ├── Dockerfile.crawler     # Python crawler container
│   └── kong.yml              # API gateway configuration
└── data/
    └── geoip/                # MaxMind GeoIP databases

Database Architecture

4-Layer Migration Strategy

The database follows Supabase’s official architecture with a professional 4-layer initialization:

Layer 1: Foundation (0001_foundation.sql)

Layer 2: Schema (0002_schema.sql)

Layer 3: Functions (0003_functions.sql)

Layer 4: Policies (0004_policies.sql)

Key Tables

nodes - Discovered blockchain nodes

verifications - Node ownership verification challenges

node_profiles - Customization for verified nodes

node_tip_configs - Tipping configuration

network_history - Historical network statistics snapshots

admin_users - Administrator privileges

banned_users - User ban management

moderation_queue - Content moderation workflow

audit_log - Admin action audit trail

rate_limits - Distributed rate limiting

admin_settings - Runtime configuration

alert_subscriptions - Node monitoring alerts

alert_history - Alert delivery log

api_keys - Programmatic API access

api_key_usage - API usage analytics

default_avatars - System-provided avatars

Row Level Security (RLS)

All tables have RLS enabled, with access governed by the policies defined in Layer 4 (0004_policies.sql).

Key Features

1. Node Verification System

Purpose: Prove node ownership to unlock profiles and tipping

Methods:

  1. Message Signing (Primary)
    • Node operator signs challenge with private key
    • Backend verifies signature using bitcoinjs-message
    • Most secure and cryptographically sound
  2. HTTP File Challenge (NAT/CGNAT-Friendly) ⭐ NEW
    • Two-step POST-based verification
    • User downloads chain-specific binary (auto-built in CI/CD)
    • Binary runs on node server, checks:
      • Daemon process running (e.g., bitcoind, bitcoin-qt, dogecoind)
      • P2P port listening (e.g., 8333, 22556)
      • Request IP matches node IP in database
    • No port forwarding required - works behind NAT/CGNAT
    • Multi-layer security: process check + port check + IP validation
    • Binary config injected at build time via ldflags
    • See detailed flow below
  3. User Agent (Automated)
    • Operator sets custom user agent in node config
    • Crawler detects and matches during scans
    • Convenient for technical users
  4. Port Challenge
    • Temporarily bind to specific port as proof
    • Crawler validates port response
    • For users who can’t access wallet keys
  5. DNS TXT Record (Domain-based)
    • Add TXT record to domain pointing to node IP
    • Validates domain ownership + node control
    • For nodes with associated domains

API Endpoints:

Frontend Flow:

Two-Step POST Verification Flow (HTTP File Challenge):

┌─────────────────────────────────────────────────────────────────────┐
│ STEP 1: User Initiates on Website                                  │
│─────────────────────────────────────────────────────────────────────│
│ 1. User clicks "Verify Ownership" on node detail page             │
│ 2. Selects "HTTP File Challenge" method                           │
│ 3. Website creates verification record with:                       │
│    - Random challenge token (32 chars)                             │
│    - Status: pending                                               │
│    - 24-hour expiry                                                │
│ 4. User downloads verification binary for their OS                │
└─────────────────────────────────────────────────────────────────────┘
                                ↓
┌─────────────────────────────────────────────────────────────────────┐
│ STEP 2: Binary Initialization (on node server)                     │
│─────────────────────────────────────────────────────────────────────│
│ 1. User SSHs to node server                                       │
│ 2. Runs: ./verify {challenge}                                     │
│ 3. Binary POSTs to /api/verify-node/init with challenge           │
│ 4. API validates:                                                  │
│    ✓ Challenge exists in database                                 │
│    ✓ Status is pending                                            │
│    ✓ Not expired                                                  │
│ 5. API stores request IP in verification.ip_address               │
│ 6. API returns node's IP and port from crawler DB                 │
└─────────────────────────────────────────────────────────────────────┘
                                ↓
┌─────────────────────────────────────────────────────────────────────┐
│ STEP 3: Binary Checks (on node server)                             │
│─────────────────────────────────────────────────────────────────────│
│ 1. Binary checks local daemon process:                            │
│    - Tries: ps aux | grep {daemon}                                │
│    - Fallback: pidof {daemon} (Linux)                             │
│    - Fallback: pgrep -x {daemon} (Unix-like)                      │
│    - Checks all daemon variants (d and qt)                        │
│                                                                     │
│ 2. Binary checks port listening:                                  │
│    - Tries: netstat -an | grep {port}                             │
│    - Fallback: ss -lntp | grep {port} (modern Linux)              │
│    - Fallback: lsof -i :{port} (macOS/BSD)                        │
└─────────────────────────────────────────────────────────────────────┘
                                ↓
┌─────────────────────────────────────────────────────────────────────┐
│ STEP 4: Binary Confirmation (on node server)                       │
│─────────────────────────────────────────────────────────────────────│
│ 1. Binary POSTs to /api/verify-node/confirm with:                 │
│    - Challenge token                                               │
│    - Process check result (found/not found, method, daemon name)  │
│    - Port check result (listening/not listening, port, method)    │
│    - System info (hostname, platform, arch)                       │
│                                                                     │
│ 2. API Security Validations:                                       │
│    ✓ Request IP matches init IP (prevents IP spoofing)            │
│    ✓ Request IP matches node IP in crawler DB (proves ownership)  │
│    ✓ Process check passed (daemon running)                        │
│    ✓ Port check passed (port listening)                           │
│                                                                     │
│ 3. If all checks pass:                                             │
│    - Update status to pending_approval                            │
│    - Store metadata (process/port check results)                  │
│    - Add to moderation queue                                      │
│    - Binary shows success message                                 │
│                                                                     │
│ 4. If any check fails:                                             │
│    - Update status to failed                                       │
│    - Binary shows detailed error message                          │
└─────────────────────────────────────────────────────────────────────┘
                                ↓
┌─────────────────────────────────────────────────────────────────────┐
│ STEP 5: Admin Review                                               │
│─────────────────────────────────────────────────────────────────────│
│ 1. Admin reviews in moderation queue                              │
│ 2. Sees all check results and system info                         │
│ 3. Approves or rejects                                             │
│ 4. Node gets verified badge on map                                │
└─────────────────────────────────────────────────────────────────────┘
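The client side of the flow above (the verification binary's logic) can be sketched in Python. The endpoint paths come from the steps above; the base URL, helper names, and the socket-probe port check are illustrative — the real binary is chain-specific, built in CI/CD, and shells out to netstat/ss/lsof.

```python
import json
import platform
import socket
from urllib import request as urlrequest

API_BASE = "https://example.com/api/verify-node"  # illustrative base URL


def post_json(url: str, payload: dict) -> dict:
    """POST a JSON body and decode the JSON response."""
    req = urlrequest.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlrequest.urlopen(req) as resp:
        return json.loads(resp.read())


def check_port(port: int, host: str = "127.0.0.1", timeout: float = 1.0) -> bool:
    """True if something is listening on the port. A plain socket probe is
    used here for portability; the real binary falls back through
    netstat -> ss -> lsof as described in STEP 3."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def build_confirm_payload(challenge: str, process_found: bool,
                          port: int, port_listening: bool) -> dict:
    """Assemble the STEP 4 confirmation body from local check results."""
    return {
        "challenge": challenge,
        "process_check": {"found": process_found},
        "port_check": {"port": port, "listening": port_listening},
        "system": {
            "hostname": socket.gethostname(),
            "platform": platform.system(),
            "arch": platform.machine(),
        },
    }


def verify(challenge: str) -> dict:
    """Run the two-step flow: init, local checks, confirm."""
    init = post_json(f"{API_BASE}/init", {"challenge": challenge})
    listening = check_port(init["port"])
    payload = build_confirm_payload(challenge, process_found=True,
                                    port=init["port"],
                                    port_listening=listening)
    return post_json(f"{API_BASE}/confirm", payload)
```

Because the server validates the request IP on both POSTs, running this anywhere other than the node itself fails Step 2's IP checks.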

Security Layers:

  1. Rate Limiting: 10 requests/hour per endpoint per IP
  2. Challenge Validation: Must exist, not expired, correct format
  3. IP Validation Layer 1: Confirm IP matches init IP (prevents IP spoofing between steps)
  4. IP Validation Layer 2: Confirm IP matches node IP in crawler DB (proves node ownership)
  5. Process Validation: Daemon must be running locally
  6. Port Validation: Port must be listening locally
  7. Admin Review: Manual approval for all verifications
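Layer 1 (10 requests/hour per endpoint per IP) amounts to a sliding-window counter keyed on (endpoint, IP). A minimal in-memory sketch, assuming the production version backs the same logic with the rate_limits table for distributed state:

```python
import time
from collections import defaultdict, deque


class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per key.

    An in-process deque of timestamps per (endpoint, ip) pair; the
    distributed version would persist these windows in rate_limits.
    """

    def __init__(self, limit: int = 10, window: float = 3600.0,
                 clock=time.monotonic):
        self.limit = limit
        self.window = window
        self.clock = clock  # injectable for deterministic testing
        self.hits: dict[tuple[str, str], deque] = defaultdict(deque)

    def allow(self, endpoint: str, ip: str) -> bool:
        now = self.clock()
        q = self.hits[(endpoint, ip)]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```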

2. Node Profiles & Customization

Purpose: Brand your node, showcase operator info

Features:

API Endpoints:

Frontend:

3. Node Tipping

Purpose: Reward node operators with cryptocurrency tips

Features:

API Endpoints:

Frontend:

4. Node Tiers & PIX Score

Tiers: Diamond → Gold → Silver → Bronze → Standard

PIX Score Calculation:

PIX Score = (uptime% × 0.5) + ((100 - latency_avg_ms) × 0.3) + (reliability% × 0.2)
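As a direct translation of the formula (the tier mapping is omitted here, since the thresholds are listed separately):

```python
def pix_score(uptime_pct: float, latency_avg_ms: float,
              reliability_pct: float) -> float:
    """PIX Score exactly as stated above.

    Note: the (100 - latency) term goes negative for average latencies
    above 100 ms, pulling the score down — that is how the formula is
    written, not an additional clamping rule.
    """
    return (uptime_pct * 0.5
            + (100 - latency_avg_ms) * 0.3
            + reliability_pct * 0.2)
```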

Tier Thresholds:

Visual Design:

5. Statistics Dashboard

Location: /stats page

Components:

Data Sources:

6. Leaderboard

Location: /leaderboard page

Features:

API:

7. Real-time Updates

Implementation:

Realtime Publication:

Notification Types:

P2P Network Crawler

Protocol Implementation

The crawler implements the Bitcoin P2P protocol for node discovery:

Handshake Flow:

  1. Send version message with protocol version
  2. Receive version response
  3. Exchange verack acknowledgments
  4. Send getaddr to request peer list
  5. Receive addr with peer addresses
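The messages exchanged above all share the standard Bitcoin wire framing (4-byte network magic, 12-byte command, payload length, double-SHA256 checksum). A sketch of the framing and a simplified `version` payload — field layout follows the protocol, but the protocol version and user agent string here are illustrative:

```python
import hashlib
import struct
import time

BITCOIN_MAGIC = bytes.fromhex("f9beb4d9")  # Bitcoin mainnet network magic


def wrap_message(magic: bytes, command: bytes, payload: bytes) -> bytes:
    """Frame a payload per the Bitcoin wire format:
    magic (4) | command (12, NUL-padded) | length (4) | checksum (4) | payload.
    The checksum is the first 4 bytes of double-SHA256(payload)."""
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    header = (magic + command.ljust(12, b"\x00")
              + struct.pack("<I", len(payload)) + checksum)
    return header + payload


def build_version_payload(protocol_version: int = 70015,
                          user_agent: bytes = b"/AtlasP2P:0.1/") -> bytes:
    """A minimal `version` payload: version, services, timestamp, two
    zeroed net_addr fields (26 bytes each, no time field in `version`),
    nonce, var-length user agent, start height, relay flag."""
    payload = struct.pack("<iQq", protocol_version, 0, int(time.time()))
    payload += b"\x00" * 26 * 2                       # addr_recv + addr_from
    payload += struct.pack("<Q", 0)                   # nonce
    payload += bytes([len(user_agent)]) + user_agent  # varstr (len < 0xfd)
    payload += struct.pack("<i", 0)                   # start_height
    payload += b"\x00"                                # relay flag
    return payload
```

Sending `wrap_message(BITCOIN_MAGIC, b"version", build_version_payload())` over a TCP socket to port 8333 starts step 1 of the handshake; `verack` and `getaddr` carry empty payloads and only need the framing.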

Data Collection:

Chain Adapters

Architecture: Multi-chain support via adapter pattern

from typing import Dict, List

class ChainAdapter:
    def get_network_magic(self) -> bytes: ...
    def get_default_port(self) -> int: ...
    def get_dns_seeds(self) -> List[str]: ...
    def parse_version_string(self, version: str) -> Dict: ...
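A concrete adapter for one chain might look like this. The port (22556) appears in the verification section above; the network magic is Dogecoin's mainnet value, while the seed host and version-string format are illustrative:

```python
from typing import Dict, List


class DogecoinAdapter:
    """Example implementation of the ChainAdapter interface for Dogecoin."""

    def get_network_magic(self) -> bytes:
        return bytes.fromhex("c0c0c0c0")  # Dogecoin mainnet magic

    def get_default_port(self) -> int:
        return 22556                       # Dogecoin P2P port

    def get_dns_seeds(self) -> List[str]:
        return ["seed.multidoge.org"]      # illustrative seed host

    def parse_version_string(self, version: str) -> Dict:
        # e.g. "/Shibetoshi:1.14.6/" -> {"client": ..., "version": ...}
        client, _, ver = version.strip("/").partition(":")
        return {"client": client, "version": ver}
```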

Supported Chains:

Crawler Configuration

Environment Variables:

Docker Deployment:

# Development mode (auto-starts all services including crawler)
make docker-dev

# Production mode (auto-starts all services including crawler)
make prod-docker-no-caddy

API Endpoints

Public Endpoints (No Auth Required)

Nodes:

Statistics:

Leaderboard:

Profiles:

Authenticated Endpoints (JWT Required)

Verification:

Profile Management:

Tipping:

Authentication & Authorization

Supabase Auth

Authentication Methods:

Session Management:

Admin System (Dual-Tier Architecture)

Purpose: Secure admin access with permanent super admins and database-managed regular admins

Tier 1: Super Admins (Environment Variable)

Tier 2: Regular Admins (Database Table)

Implementation:

  1. Environment-Based Checks (isUserAdmin() function):
    • Only checks ADMIN_EMAILS environment variable
    • Does NOT check admin_users table
    • Used by API routes for server-side authorization
    • Located in apps/web/src/lib/security.ts
  2. Database Function (is_admin() in Supabase):
    CREATE OR REPLACE FUNCTION is_admin() RETURNS BOOLEAN AS $$
    BEGIN
      RETURN EXISTS (
        SELECT 1 FROM admin_users
        WHERE user_id = auth.uid() AND is_active = true
      );
    END;
    $$ LANGUAGE plpgsql SECURITY DEFINER;
    
    • Only checks admin_users table
    • Does NOT check environment variable
    • Used by RLS policies for database-level authorization
  3. Combined Authorization:
    • User is admin if EITHER condition is true:
      • Email in ADMIN_EMAILS (super admin)
      • OR admin_users.is_active = true (regular admin)
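The combined OR logic can be sketched language-agnostically (the real checks live in TypeScript in apps/web/src/lib/security.ts and in the SQL is_admin() function; the function name, parameters, and case normalization here are illustrative):

```python
def is_user_admin(email: str, user_id: str,
                  admin_emails: set[str],
                  active_admin_ids: set[str]) -> bool:
    """User is admin if EITHER tier matches: a super admin via the
    ADMIN_EMAILS environment variable, or an active admin_users row."""
    return email.lower() in admin_emails or user_id in active_admin_ids
```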

Admin Operations:

Security Measures:

Row Level Security (RLS)

Policy Examples:

-- Anyone can view nodes
CREATE POLICY "Nodes are viewable by everyone"
  ON nodes FOR SELECT USING (true);

-- Service role can manage nodes (crawler)
CREATE POLICY "Service role can manage nodes"
  ON nodes FOR ALL
  USING (auth.role() = 'service_role');

-- Users can update their own profiles
CREATE POLICY "Users can update own profiles"
  ON node_profiles FOR UPDATE
  USING (auth.uid() = user_id);

-- Admins can view admin_users table
CREATE POLICY "Admins can view admin users"
  ON admin_users FOR SELECT
  USING (is_admin());

Development Setup

Prerequisites

Quick Start (Use Makefile Commands!)

1. First Time Setup:

make setup-docker   # Creates .env, installs deps

2. Start Development:

make dev            # Starts full stack (DB + Web + Crawler)

3. Access the Application:

Key Makefile Commands

Run make help to see all available commands:

# Setup
make setup-docker   # First time setup (local Docker)
make setup-cloud    # First time setup (Supabase Cloud)
make setup-fork     # Setup for forking

# Development
make dev            # Start development stack
make down           # Stop all services
make restart        # Restart development
make logs           # View all logs
make logs-web       # View web app logs
make logs-crawler   # View crawler logs

# Database
make migrate        # Run migrations
make db-shell       # PostgreSQL shell

# Production
make prod-docker    # Production (self-hosted)
make prod-cloud     # Production (cloud DB)

# Code Quality
make lint           # Run ESLint
make typecheck      # TypeScript check
make build          # Production build
make test           # Run tests

Port Configuration

All services use ports in the 4000-4100 range:

Database Migrations

Running Migrations:

# Migrations auto-run on web container start via entrypoint.sh + migrate.js
# The migrate.js script tracks applied migrations in schema_migrations table
# and syncs all Supabase internal user passwords after migration

# Manual migration (if needed):
docker exec -i atlasp2p-db psql -U postgres -d postgres < supabase/migrations/0001_foundation.sql

Migration Order (critical):

  1. 0001_foundation.sql - Roles, schemas, extensions
  2. 0002_schema.sql - Tables, constraints, indexes
  3. 0003_functions.sql - Database functions and triggers
  4. 0004_policies.sql - Row Level Security policies
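Because the 4-layer files sort lexicographically, the ordering logic migrate.js applies when it consults schema_migrations reduces to a pure function (sketched here in Python; the real implementation is JavaScript):

```python
def pending_migrations(available: list[str], applied: set[str]) -> list[str]:
    """Return unapplied migration files in lexicographic order, which for
    the NNNN_ prefix convention is exactly the critical layer order."""
    return [f for f in sorted(available) if f not in applied]
```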

Accessing Services

CRITICAL: Always use make dev to start on port 4000. ALL services use ports 4000-4100.

Production Deployment

Environment Variables

Required:

Optional:

Deployment Checklist

Key Design Decisions

Why Next.js App Router?

Why Supabase?

Why Python for Crawler?

Why Turborepo?

Performance Optimizations

  1. Database Indexes: All frequently queried columns indexed
  2. Materialized Views: Leaderboard pre-computed, refreshed every 5 min
  3. API Pagination: All list endpoints support pagination
  4. Map Clustering: Supercluster reduces marker count at low zoom
  5. Image Optimization: Next.js Image component for avatars
  6. Real-time Throttling: Subscription updates throttled to prevent UI thrashing
  7. CDN Delivery: Static assets served via CDN
  8. Connection Pooling: PostgreSQL connection pooler (pgbouncer)

Security Considerations

  1. Row Level Security: All tables protected with RLS policies
  2. JWT Validation: All authenticated requests validated
  3. Input Validation: Zod schemas for API inputs and runtime config validation (see packages/config/README.md)
  4. SQL Injection: PostgREST parameterizes all queries
  5. XSS Protection: React escapes all user content
  6. CORS Configuration: Restricted to allowed origins
  7. Rate Limiting: Kong API Gateway rate limits
  8. Secrets Management: Environment variables, never committed

Future Enhancements

8. Node Alerts System

Purpose: Notify node operators when their node status changes

Alert Types:

Notification Channels:

Features:

API Endpoints:

9. API Keys

Purpose: Programmatic access to public data endpoints

Available Scopes:

Features:

API Endpoints:

Optional Integrations

Analytics (PostHog)

Purpose: Product analytics, user behavior tracking, feature usage

Setup:

  1. Create account at posthog.com (free tier available)
  2. Add environment variables:
    NEXT_PUBLIC_POSTHOG_KEY=phc_your_project_key
    NEXT_PUBLIC_POSTHOG_HOST=https://eu.i.posthog.com  # or https://us.i.posthog.com
    
  3. Analytics automatically enabled when keys are present
  4. Respects Do Not Track browser setting

Features:

SEO Customization

Configuration: All SEO settings in config/project.config.yaml:

content:
  siteName: "YourCoin Nodes Map"
  siteDescription: "Real-time network visualization"
  seo:
    title: "Custom SEO Title"  # Override siteName
    titleTemplate: "%s | YourCoin"  # Page title format
    description: "Custom meta description"
    keywords:
      - yourcoin
      - nodes
      - blockchain
    twitterHandle: "@yourcoin"
    robots: "index, follow"

Auto-generated if not specified: Title, description, keywords from chain config.
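The override-or-auto-generate behavior can be sketched as a merge over the YAML above (Python sketch; the keys mirror the config, but the exact fallback values the platform generates from the chain config are assumptions):

```python
def resolve_seo(content: dict) -> dict:
    """Fill SEO fields from content.seo overrides, falling back to
    values derived from siteName / siteDescription."""
    seo = content.get("seo") or {}
    return {
        "title": seo.get("title") or content["siteName"],
        "titleTemplate": seo.get("titleTemplate") or f"%s | {content['siteName']}",
        "description": seo.get("description") or content["siteDescription"],
        "keywords": seo.get("keywords") or [],  # real defaults come from chain config
        "robots": seo.get("robots") or "index, follow",
    }
```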

Contributing

See CONTRIBUTING.md for development guidelines.

License

MIT License - see LICENSE file for details.