
AtlasP2P - Architecture Documentation

Project Overview

AtlasP2P is a professional, production-ready P2P network visualization and monitoring platform. It provides real-time insights into any cryptocurrency blockchain network, featuring node discovery, geolocation mapping, performance tracking, node verification, operator profiles, and tipping. The platform is chain-agnostic and can be forked for any Bitcoin-derived cryptocurrency.

Architecture

Technology Stack

Frontend:

Backend:

Crawler:

Infrastructure:

Project Structure

AtlasP2P/
├── apps/
│   ├── web/                    # Next.js frontend application
│   │   ├── src/
│   │   │   ├── app/           # App Router pages and API routes
│   │   │   ├── components/    # React components
│   │   │   ├── hooks/         # Custom React hooks
│   │   │   ├── lib/           # Utilities and configs
│   │   │   └── types/         # TypeScript type definitions
│   │   └── public/            # Static assets
│   └── crawler/               # Python P2P network crawler
│       ├── src/
│       │   ├── crawler.py     # Main crawler logic
│       │   ├── protocol.py    # Bitcoin protocol implementation
│       │   ├── geoip.py       # GeoIP lookup service
│       │   ├── database.py    # Database operations
│       │   └── config.py      # Reads chainConfig from project.config.yaml
├── packages/
│   └── types/                 # Shared TypeScript types
├── supabase/
│   └── migrations/            # Database migrations (4-layer architecture)
├── docker/
│   ├── docker-compose.yml     # Full stack orchestration
│   ├── Dockerfile.web         # Next.js container
│   ├── Dockerfile.crawler     # Python crawler container
│   └── kong.yml               # API gateway configuration
└── data/
    └── geoip/                # MaxMind GeoIP databases

Database Architecture

4-Layer Migration Strategy

The database follows Supabase’s official architecture with a professional 4-layer initialization:

Layer 1: Foundation (0001_foundation.sql)

Layer 2: Schema (0002_schema.sql)

Layer 3: Functions (0003_functions.sql)

Layer 4: Policies (0004_policies.sql)

Key Tables

nodes - Discovered blockchain nodes

verifications - Node ownership verification challenges

node_profiles - Customization for verified nodes

node_tip_configs - Tipping configuration

Row Level Security (RLS)

All tables have RLS enabled with policies:

Key Features

1. Node Verification System

Purpose: Prove node ownership to unlock profiles and tipping

Methods:

  1. Message Signing (Primary)
    • Node operator signs challenge with private key
    • Backend verifies signature using bitcoinjs-message
    • Most secure and cryptographically sound
  2. User Agent (Automated)
    • Operator sets custom user agent in node config
    • Crawler detects and matches during scans
    • Convenient for technical users
  3. Port Challenge
    • Temporarily bind to specific port as proof
    • Crawler validates port response
    • For users who can’t access wallet keys
  4. DNS TXT Record (Domain-based)
    • Add TXT record to domain pointing to node IP
    • Validates domain ownership + node control
    • For nodes with associated domains

API Endpoints:

Frontend Flow:

2. Node Profiles & Customization

Purpose: Brand your node, showcase operator info

Features:

API Endpoints:

Frontend:

3. Node Tipping

Purpose: Reward node operators with cryptocurrency tips

Features:

API Endpoints:

Frontend:

4. Node Tiers & PIX Score

Tiers: Diamond → Gold → Silver → Bronze → Standard

PIX Score Calculation:

PIX Score = (uptime% × 0.5) + ((100 - latency_avg_ms) × 0.3) + (reliability% × 0.2)
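A direct transcription of the formula above. Note that the latency term goes negative once average latency exceeds 100 ms, so a real deployment would presumably clamp or normalize inputs; the clamping here is an assumption.

```python
def pix_score(uptime_pct: float, latency_avg_ms: float, reliability_pct: float) -> float:
    """PIX Score = (uptime% * 0.5) + ((100 - latency_avg_ms) * 0.3) + (reliability% * 0.2).

    The final max(0, ...) floor is an assumption: without it, nodes with
    average latency above 100 ms would pull the score below zero.
    """
    raw = (uptime_pct * 0.5
           + (100 - latency_avg_ms) * 0.3
           + reliability_pct * 0.2)
    return max(0.0, raw)
```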

Tier Thresholds:

Visual Design:

5. Statistics Dashboard

Location: /stats page

Components:

Data Sources:

6. Leaderboard

Location: /leaderboard page

Features:

API:

7. Real-time Updates

Implementation:

Realtime Publication:

Notification Types:

P2P Network Crawler

Protocol Implementation

The crawler implements Bitcoin P2P protocol for node discovery:

Handshake Flow:

  1. Send version message with protocol version
  2. Receive version response
  3. Exchange verack acknowledgments
  4. Send getaddr to request peer list
  5. Receive addr with peer addresses
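Every message in the handshake above is wrapped in the standard 24-byte Bitcoin wire header: magic (4 bytes), command (12 bytes, NUL-padded), payload length (4 bytes, little-endian), and a checksum (first 4 bytes of double-SHA256 of the payload). A minimal framing sketch, separate from the project's `protocol.py`:

```python
import hashlib
import struct

MAINNET_MAGIC = bytes.fromhex("f9beb4d9")  # Bitcoin mainnet; per-chain value


def checksum(payload: bytes) -> bytes:
    """First 4 bytes of double-SHA256, as used by the Bitcoin wire protocol."""
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]


def frame_message(command: str, payload: bytes, magic: bytes = MAINNET_MAGIC) -> bytes:
    """Wrap a payload in the 24-byte header: magic | command | length | checksum."""
    header = (magic
              + command.encode("ascii").ljust(12, b"\x00")
              + struct.pack("<I", len(payload))
              + checksum(payload))
    return header + payload
```

For example, `frame_message("verack", b"")` produces the fixed 24-byte verack frame, since verack carries no payload.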

Data Collection:

Chain Adapters

Architecture: Multi-chain support via adapter pattern

from typing import Dict, List

class ChainAdapter:
    def get_network_magic(self) -> bytes: ...
    def get_default_port(self) -> int: ...
    def get_dns_seeds(self) -> List[str]: ...
    def parse_version_string(self, version: str) -> Dict: ...
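For illustration, a hypothetical adapter with Bitcoin mainnet parameters matching this interface. The magic bytes, port, and seed hostnames are real Bitcoin mainnet values; the class itself is a sketch, not the project's code.

```python
from typing import Dict, List


class BitcoinAdapter:
    """Illustrative mainnet Bitcoin adapter implementing the ChainAdapter interface."""

    def get_network_magic(self) -> bytes:
        return bytes.fromhex("f9beb4d9")  # Bitcoin mainnet magic

    def get_default_port(self) -> int:
        return 8333

    def get_dns_seeds(self) -> List[str]:
        return ["seed.bitcoin.sipa.be", "dnsseed.bluematt.me"]

    def parse_version_string(self, version: str) -> Dict:
        # "/Satoshi:27.0.0/" -> {"client": "Satoshi", "version": "27.0.0"}
        client, _, ver = version.strip("/").partition(":")
        return {"client": client, "version": ver}
```

Forking for another Bitcoin-derived chain then reduces to supplying that chain's magic bytes, default port, and DNS seeds in a new adapter.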

Supported Chains:

Crawler Configuration

Environment Variables:

Docker Deployment:

docker compose --profile crawler up -d

API Endpoints

Public Endpoints (No Auth Required)

Nodes:

Statistics:

Leaderboard:

Profiles:

Authenticated Endpoints (JWT Required)

Verification:

Profile Management:

Tipping:

Authentication & Authorization

Supabase Auth

Authentication Methods:

Session Management:

Admin System (Dual-Tier Architecture)

Purpose: Secure admin access with permanent super admins and database-managed regular admins

Tier 1: Super Admins (Environment Variable)

Tier 2: Regular Admins (Database Table)

Implementation:

  1. Environment-Based Checks (isUserAdmin() function):
    • Only checks ADMIN_EMAILS environment variable
    • Does NOT check admin_users table
    • Used by API routes for server-side authorization
    • Located in apps/web/src/lib/security.ts
  2. Database Function (is_admin() in Supabase):
    CREATE OR REPLACE FUNCTION is_admin() RETURNS BOOLEAN AS $$
    BEGIN
      RETURN EXISTS (
        SELECT 1 FROM admin_users
        WHERE user_id = auth.uid() AND is_active = true
      );
    END;
    $$ LANGUAGE plpgsql SECURITY DEFINER;
    
    • Only checks admin_users table
    • Does NOT check environment variable
    • Used by RLS policies for database-level authorization
  3. Combined Authorization:
    • User is admin if EITHER condition is true:
      • Email in ADMIN_EMAILS (super admin)
      • OR admin_users.is_active = true (regular admin)
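The combined rule can be sketched as below (in Python for brevity; the real checks live in apps/web/src/lib/security.ts and the is_admin() SQL function). The comma-separated ADMIN_EMAILS format and matching regular admins by email rather than user_id are simplifying assumptions here.

```python
import os


def is_user_admin(email: str, active_admin_emails: set) -> bool:
    """True if the user is a super admin (ADMIN_EMAILS env var) OR a
    regular admin (active row in admin_users, passed in here as a set
    of emails for illustration)."""
    super_admins = {
        e.strip().lower()
        for e in os.environ.get("ADMIN_EMAILS", "").split(",")
        if e.strip()
    }
    email = email.lower()
    return email in super_admins or email in active_admin_emails
```

The key property the dual-tier design preserves: super admins defined in the environment cannot be locked out by a database change, while regular admins can be granted and revoked at runtime without a redeploy.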

Admin Operations:

Security Measures:

Row Level Security (RLS)

Policy Examples:

-- Anyone can view nodes
CREATE POLICY "Nodes are viewable by everyone"
  ON nodes FOR SELECT USING (true);

-- Service role can manage nodes (crawler)
CREATE POLICY "Service role can manage nodes"
  ON nodes FOR ALL
  USING (auth.role() = 'service_role');

-- Users can update their own profiles
CREATE POLICY "Users can update own profiles"
  ON node_profiles FOR UPDATE
  USING (auth.uid() = user_id);

-- Admins can view admin_users table
CREATE POLICY "Admins can view admin users"
  ON admin_users FOR SELECT
  USING (is_admin());

Development Setup

Prerequisites

Quick Start (Use Makefile Commands!)

1. First Time Setup:

make setup-docker   # Creates .env, installs deps

2. Start Development:

make dev            # Starts full stack (DB + Web + Crawler)

3. Access the Application:

Key Makefile Commands

Run make help to see all available commands:

# Setup
make setup-docker   # First time setup (local Docker)
make setup-cloud    # First time setup (Supabase Cloud)
make setup-fork     # Setup for forking

# Development
make dev            # Start development stack
make down           # Stop all services
make restart        # Restart development
make logs           # View all logs
make logs-web       # View web app logs
make logs-crawler   # View crawler logs

# Database
make migrate        # Run migrations
make db-shell       # PostgreSQL shell

# Production
make prod-docker    # Production (self-hosted)
make prod-cloud     # Production (cloud DB)

# Code Quality
make lint           # Run ESLint
make typecheck      # TypeScript check
make build          # Production build
make test           # Run tests

Port Configuration

All services use ports in the 4000-4100 range:

Database Migrations

Running Migrations:

# Migrations auto-run on container start via /docker-entrypoint-initdb.d/
# Or manually:
docker exec -i atlasp2p-db psql -U supabase_admin -d postgres < supabase/migrations/0001_foundation.sql

Migration Order (critical):

  1. 0001_foundation.sql - Roles, schemas, extensions
  2. 0002_schema.sql - Tables, constraints, indexes
  3. 0003_functions.sql - Database functions and triggers
  4. 0004_policies.sql - Row Level Security policies

Accessing Services

CRITICAL: Always use make dev to start on port 4000. ALL services use ports 4000-4100.

Production Deployment

Environment Variables

Required:

Optional:

Deployment Checklist

Key Design Decisions

Why Next.js App Router?

Why Supabase?

Why Python for Crawler?

Why Turborepo?

Performance Optimizations

  1. Database Indexes: All frequently queried columns indexed
  2. Materialized Views: Leaderboard pre-computed, refreshed every 5 min
  3. API Pagination: All list endpoints support pagination
  4. Map Clustering: Supercluster reduces marker count at low zoom
  5. Image Optimization: Next.js Image component for avatars
  6. Real-time Throttling: Subscription updates throttled to prevent UI thrashing
  7. CDN Delivery: Static assets served via CDN
  8. Connection Pooling: PostgreSQL connection pooler (pgbouncer)

Security Considerations

  1. Row Level Security: All tables protected with RLS policies
  2. JWT Validation: All authenticated requests validated
  3. Input Validation: Zod schemas for API inputs
  4. SQL Injection: PostgREST parameterizes all queries
  5. XSS Protection: React escapes all user content
  6. CORS Configuration: Restricted to allowed origins
  7. Rate Limiting: Kong API Gateway rate limits
  8. Secrets Management: Environment variables, never committed

Future Enhancements

Contributing

See CONTRIBUTING.md for development guidelines.

License

MIT License - see LICENSE file for details.