AtlasP2P - Architecture Documentation
Project Overview
AtlasP2P is a professional, production-ready P2P network visualization and monitoring platform. It provides real-time insights into any cryptocurrency blockchain network, featuring node discovery, geolocation mapping, performance tracking, node verification, operator profiles, and tipping. The platform is chain-agnostic and can be forked for any Bitcoin-derived cryptocurrency.
Architecture
Technology Stack
Frontend:
- Next.js 16 (App Router) with React 19
- TypeScript for type safety
- TailwindCSS + shadcn/ui for styling
- Leaflet for interactive mapping with Supercluster for clustering
- Recharts for statistics visualization
- Zustand for state management
- Supabase Client (SSR-aware)
Backend:
- Next.js API Routes (serverless functions)
- Supabase (PostgreSQL 15 + PostgREST + GoTrue + Realtime)
- Row Level Security (RLS) for data access control
- JWT authentication with role-based access
Crawler:
- Python 3.12 with asyncio
- Bitcoin P2P protocol implementation
- MaxMind GeoLite2 for geolocation
- Supabase Python client for data storage
Infrastructure:
- Docker Compose for local development
- Turborepo for monorepo management
- pnpm for package management
- Kong API Gateway for request routing
Project Structure
AtlasP2P/
├── apps/
│ ├── web/ # Next.js frontend application
│ │ ├── src/
│ │ │ ├── app/ # App Router pages and API routes
│ │ │ ├── components/ # React components
│ │ │ ├── hooks/ # Custom React hooks
│ │ │ ├── lib/ # Utilities and configs
│ │ │ └── types/ # TypeScript type definitions
│ │ └── public/ # Static assets
│ └── crawler/ # Python P2P network crawler
│ ├── src/
│ │ ├── crawler.py # Main crawler logic
│ │ ├── protocol.py # Bitcoin protocol implementation
│ │ ├── geoip.py # GeoIP lookup service
│ │ ├── database.py # Database operations
│ │ └── config.py # Reads chainConfig from project.config.yaml
├── packages/
│ └── types/ # Shared TypeScript types
├── supabase/
│ └── migrations/ # Database migrations (4-layer architecture)
├── docker/
│ ├── docker-compose.yml # Full stack orchestration
│ ├── Dockerfile.web # Next.js container
│ ├── Dockerfile.crawler # Python crawler container
│ └── kong.yml # API gateway configuration
└── data/
└── geoip/ # MaxMind GeoIP databases
Database Architecture
4-Layer Migration Strategy
The database follows Supabase’s official architecture with a professional 4-layer initialization:
Layer 1: Foundation (0001_foundation.sql)
- Creates all Supabase system users and roles:
  - supabase_admin - superuser for migrations
  - authenticator - PostgREST connection user (NOINHERIT for security)
  - anon, authenticated, service_role - JWT-switchable API roles
  - supabase_auth_admin, supabase_storage_admin - service admins
- Creates extensions, auth, storage schemas
- Installs PostgreSQL extensions (uuid-ossp, pgcrypto, pg_trgm)
- Sets up supabase_realtime publication
- Configures default privileges and search paths
Layer 2: Schema (0002_schema.sql)
- Core tables: nodes, snapshots, node_snapshots, network_snapshots
- User features: verifications, verified_nodes, node_profiles
- Monetization: node_tip_configs, tips
- Views: nodes_public, network_stats, leaderboard
- All constraints, indexes, and relationships
Layer 3: Functions (0003_functions.sql)
- auth.uid() - Extract user UUID from JWT
- auth.role() - Extract role from JWT
- auth.email() - Extract email from JWT
- is_admin() - Check admin status
- Automated triggers for timestamps, profile changes
- Used extensively in RLS policies
Layer 4: Policies (0004_policies.sql)
- RLS policies covering:
  - Core tables: nodes, snapshots, node_snapshots, network_snapshots
  - User features: verifications, verified_nodes, node_profiles
  - Monetization: node_tip_configs, tips
  - Views: nodes_public, network_stats, leaderboard
- Triggers for auto-updates
Key Tables
nodes - Discovered blockchain nodes
- Network identity (IP, port, address)
- P2P handshake data (version, protocol, services)
- GeoIP data (country, city, lat/long, ISP, ASN)
- Performance metrics (latency, uptime, reliability)
- Computed fields (tier, PIX score, rank)
- Verification status and customization flags
verifications - Node ownership verification challenges
- Multiple methods: message_sign, user_agent, port_challenge, dns_txt
- Challenge/response pattern with expiration
- Status tracking: pending → verified/failed/expired
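The challenge/response lifecycle above can be sketched as a small state machine. This is an illustrative sketch, not the actual AtlasP2P implementation; the function and field names are assumptions.

```python
import secrets
from datetime import datetime, timedelta, timezone

CHALLENGE_TTL = timedelta(hours=24)

def new_challenge() -> dict:
    """Create a pending verification challenge with a 24-hour expiry."""
    return {
        "token": secrets.token_hex(16),  # 32-char random challenge token
        "status": "pending",
        "expires_at": datetime.now(timezone.utc) + CHALLENGE_TTL,
    }

def resolve(challenge: dict, response_ok: bool) -> str:
    """Advance a pending challenge to verified, failed, or expired."""
    if datetime.now(timezone.utc) >= challenge["expires_at"]:
        challenge["status"] = "expired"
    elif response_ok:
        challenge["status"] = "verified"
    else:
        challenge["status"] = "failed"
    return challenge["status"]
```

In practice the record would live in the verifications table and the expiry check would run server-side on each status poll.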
node_profiles - Customization for verified nodes
- Display name, description, avatar
- Social links (Twitter, Discord, Telegram, GitHub, website)
- Tags and public/private toggle
node_tip_configs - Tipping configuration
- Wallet addresses for tips
- Accepted coins, minimum amounts
- Thank you messages
network_history - Historical network statistics snapshots
- Periodic snapshots of network state (total nodes, online nodes, countries)
- Tier distribution (diamond, gold, silver, bronze counts)
- Average metrics (uptime, latency, PIX score)
- Most common version tracking
admin_users - Administrator privileges
- Links user_id to admin status
- Active/inactive flag
- Admin assignment tracking
banned_users - User ban management
- User bans with optional expiration
- Ban reason and admin notes
- Permanent or temporary ban flag
moderation_queue - Content moderation workflow
- Item types: avatar, profile, verification
- Status: pending → approved/rejected/flagged
- Reviewer tracking and notes
- Flag workflow with flagged_by/flagged_at (multi-admin review)
audit_log - Admin action audit trail
- All admin actions logged with timestamp
- Resource type and ID
- IP address and user agent
- JSONB details for action-specific data
rate_limits - Distributed rate limiting
- Per-user or per-IP rate tracking
- Endpoint-specific limits
- Sliding window implementation
admin_settings - Runtime configuration
- Key-value settings with JSONB values
- Category grouping (general, alerts, api, etc.)
- Public/private visibility flag
- Override defaults from project.config.yaml
alert_subscriptions - Node monitoring alerts
- Per-node alert configuration
- Alert types: offline, online, version outdated, tier change
- Delivery methods: email, Discord webhook
- Cooldown to prevent spam
alert_history - Alert delivery log
- Tracks sent alerts per subscription
- Email/webhook delivery status and errors
- Message content and metadata
api_keys - Programmatic API access
- User-owned API keys with hashed storage
- Scoped permissions (read:nodes, read:stats, etc.)
- Rate limits and usage tracking
- Expiration and revocation support
api_key_usage - API usage analytics
- Per-request logging
- Endpoint, method, status code, response time
- IP address and user agent tracking
default_avatars - System-provided avatars
- Pre-configured avatar options
- Display order and active status
Row Level Security (RLS)
All tables have RLS enabled with policies:
- Public read: nodes, snapshots, verified_nodes, tips
- Authenticated write: profiles, tip configs (own data only)
- Service role bypass: crawler can insert/update nodes
Key Features
1. Node Verification System
Purpose: Prove node ownership to unlock profiles and tipping
Methods:
- Message Signing (Primary)
- Node operator signs challenge with private key
- Backend verifies signature using bitcoinjs-message
- Most secure and cryptographically sound
- HTTP File Challenge (NAT/CGNAT-Friendly) ⭐ NEW
- Two-step POST-based verification
- User downloads chain-specific binary (auto-built in CI/CD)
- Binary runs on node server, checks:
- Daemon process running (e.g., bitcoind, bitcoin-qt, dogecoind)
- P2P port listening (e.g., 8333, 22556)
- Request IP matches node IP in database
- No port forwarding required - works behind NAT/CGNAT
- Multi-layer security: process check + port check + IP validation
- Binary config injected at build time via ldflags
- See detailed flow below
- User Agent (Automated)
- Operator sets custom user agent in node config
- Crawler detects and matches during scans
- Convenient for technical users
- Port Challenge
- Temporarily bind to specific port as proof
- Crawler validates port response
- For users who can’t access wallet keys
- DNS TXT Record (Domain-based)
- Add TXT record to domain pointing to node IP
- Validates domain ownership + node control
- For nodes with associated domains
API Endpoints:
- POST /api/verify - Initiate verification, returns challenge
- POST /api/verify-node/init - Initialize two-step verification (step 1)
- POST /api/verify-node/confirm - Confirm verification with checks (step 2)
- PUT /api/verify - Complete single-step verification methods
- GET /api/verify - Check pending verification status
Frontend Flow:
- VerificationModal component with method selector
- Real-time status updates via Supabase subscription
- Success notification + verified badge appears on map
Two-Step POST Verification Flow (HTTP File Challenge):
┌─────────────────────────────────────────────────────────────────────┐
│ STEP 1: User Initiates on Website │
│─────────────────────────────────────────────────────────────────────│
│ 1. User clicks "Verify Ownership" on node detail page │
│ 2. Selects "HTTP File Challenge" method │
│ 3. Website creates verification record with: │
│ - Random challenge token (32 chars) │
│ - Status: pending │
│ - 24-hour expiry │
│ 4. User downloads verification binary for their OS │
└─────────────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────────┐
│ STEP 2: Binary Initialization (on node server) │
│─────────────────────────────────────────────────────────────────────│
│ 1. User SSHs to node server │
│ 2. Runs: ./verify {challenge} │
│ 3. Binary POSTs to /api/verify-node/init with challenge │
│ 4. API validates: │
│ ✓ Challenge exists in database │
│ ✓ Status is pending │
│ ✓ Not expired │
│ 5. API stores request IP in verification.ip_address │
│ 6. API returns node's IP and port from crawler DB │
└─────────────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────────┐
│ STEP 3: Binary Checks (on node server) │
│─────────────────────────────────────────────────────────────────────│
│ 1. Binary checks local daemon process: │
│ - Tries: ps aux | grep {daemon} │
│ - Fallback: pidof {daemon} (Linux) │
│ - Fallback: pgrep -x {daemon} (Unix-like) │
│ - Checks all daemon variants (d and qt) │
│ │
│ 2. Binary checks port listening: │
│ - Tries: netstat -an | grep {port} │
│ - Fallback: ss -lntp | grep {port} (modern Linux) │
│ - Fallback: lsof -i :{port} (macOS/BSD) │
└─────────────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────────┐
│ STEP 4: Binary Confirmation (on node server) │
│─────────────────────────────────────────────────────────────────────│
│ 1. Binary POSTs to /api/verify-node/confirm with: │
│ - Challenge token │
│ - Process check result (found/not found, method, daemon name) │
│ - Port check result (listening/not listening, port, method) │
│ - System info (hostname, platform, arch) │
│ │
│ 2. API Security Validations: │
│ ✓ Request IP matches init IP (prevents IP spoofing) │
│ ✓ Request IP matches node IP in crawler DB (proves ownership) │
│ ✓ Process check passed (daemon running) │
│ ✓ Port check passed (port listening) │
│ │
│ 3. If all checks pass: │
│ - Update status to pending_approval │
│ - Store metadata (process/port check results) │
│ - Add to moderation queue │
│ - Binary shows success message │
│ │
│ 4. If any check fails: │
│ - Update status to failed │
│ - Binary shows detailed error message │
└─────────────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────────┐
│ STEP 5: Admin Review │
│─────────────────────────────────────────────────────────────────────│
│ 1. Admin reviews in moderation queue │
│ 2. Sees all check results and system info │
│ 3. Approves or rejects │
│ 4. Node gets verified badge on map │
└─────────────────────────────────────────────────────────────────────┘
Security Layers:
- Rate Limiting: 10 requests/hour per endpoint per IP
- Challenge Validation: Must exist, not expired, correct format
- IP Validation Layer 1: Confirm IP matches init IP (prevents IP spoofing between steps)
- IP Validation Layer 2: Confirm IP matches node IP in crawler DB (proves node ownership)
- Process Validation: Daemon must be running locally
- Port Validation: Port must be listening locally
- Admin Review: Manual approval for all verifications
2. Node Profiles & Customization
Purpose: Brand your node, showcase operator info
Features:
- Display name and description
- Avatar upload (Supabase Storage, 256x256 resized)
- Social links with validation
- Custom tags (e.g., “Community Node”, “Exchange”)
- Public/private toggle
API Endpoints:
- GET /api/profiles/:nodeId - Fetch profile
- PUT /api/profiles/:nodeId - Update (RLS protected)
- POST /api/profiles/:nodeId/avatar - Upload avatar
Frontend:
- Node detail page (/node/[id]) with full profile display
- ProfileEditor component in dashboard
- AvatarUpload with drag-and-drop
3. Node Tipping
Purpose: Reward node operators with cryptocurrency tips
Features:
- Configure multiple wallet addresses (DINGO, DOGE, BTC)
- QR code generation for easy mobile tipping
- Optional tip tracking (on-chain verification)
- Thank you messages
- Tip statistics (if enabled)
API Endpoints:
- GET /api/nodes/:nodeId/tip-config - Get tip config
- PUT /api/nodes/:nodeId/tip-config - Update (authenticated)
Frontend:
- TipModal with QR code and copyable address
- “Copy Address” toast confirmation
- Tip history display (if tracking enabled)
4. Node Tiers & PIX Score
Tiers: Diamond → Gold → Silver → Bronze → Standard
PIX Score Calculation:
PIX Score = (uptime% × 0.5) + ((100 - latency_avg_ms) × 0.3) + (reliability% × 0.2)
Note: this raw formula yields 0-100, while the tier thresholds below use a 0-1000 scale (raw score × 10).
Tier Thresholds:
- Diamond: PIX > 950, uptime > 99%, latency < 50ms
- Gold: PIX > 900, uptime > 98%, latency < 100ms
- Silver: PIX > 850, uptime > 95%, latency < 200ms
- Bronze: PIX > 800, uptime > 90%, latency < 500ms
- Standard: Everything else
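The scoring and threshold walk can be sketched as below. Note the thresholds run to 1000 while the raw weighted sum tops out at 100, so this sketch assumes a ×10 scaling; the function names are illustrative, not the project's actual code.

```python
def pix_score(uptime_pct: float, latency_avg_ms: float, reliability_pct: float) -> float:
    """Weighted score per the formula above, scaled x10 to span 0-1000
    (scaling is an assumption to match the published tier thresholds)."""
    raw = (uptime_pct * 0.5
           + (100 - latency_avg_ms) * 0.3
           + reliability_pct * 0.2)
    return max(0.0, raw * 10)

def tier(pix: float, uptime_pct: float, latency_avg_ms: float) -> str:
    """Walk the tier thresholds from strictest to loosest."""
    if pix > 950 and uptime_pct > 99 and latency_avg_ms < 50:
        return "diamond"
    if pix > 900 and uptime_pct > 98 and latency_avg_ms < 100:
        return "gold"
    if pix > 850 and uptime_pct > 95 and latency_avg_ms < 200:
        return "silver"
    if pix > 800 and uptime_pct > 90 and latency_avg_ms < 500:
        return "bronze"
    return "standard"
```

A perfect node (100% uptime, 0 ms latency, 100% reliability) scores 1000 and lands in Diamond; each tier also gates on raw uptime and latency, so a high PIX alone is not sufficient.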
Visual Design:
- Color-coded markers on map
- Tier badges on profiles
- Leaderboard sorting by tier + PIX score
5. Statistics Dashboard
Location: /stats page
Components:
- Network health score (aggregate metric)
- Total/online nodes, country count, version stats
- Version distribution (pie chart)
- Country distribution (bar chart)
- Tier distribution (stacked bar)
- Historical trends (line chart, 7/30/90 days)
- Geographic heatmap (Leaflet)
Data Sources:
- network_stats view (real-time aggregation)
- network_snapshots table (historical data)
- Recharts for visualizations
6. Leaderboard
Location: /leaderboard page
Features:
- Sortable by: rank, PIX score, uptime, latency
- Filterable by: tier, country, verified status
- Top 3 podium styling
- Rank change indicators (↑↓)
- Pagination or infinite scroll
- “Your Node” highlighting (if authenticated)
API:
- Uses leaderboard view (materialized for performance)
- GET /api/leaderboard?tier=gold&country=US&sort=pix_score
7. Real-time Updates
Implementation:
- Supabase Realtime WebSocket subscriptions
- useNodes hook subscribes to nodes table changes
- Automatic refetch on INSERT/UPDATE/DELETE
- Optimistic updates for better UX
- Toast notifications for important events
Realtime Publication:
- nodes table added to supabase_realtime publication
- Enable per-table:
ALTER PUBLICATION supabase_realtime ADD TABLE nodes;
Notification Types:
- New node discovered
- Node status changed (up/down)
- Tier upgrade
- Verification completed
P2P Network Crawler
Protocol Implementation
The crawler implements Bitcoin P2P protocol for node discovery:
Handshake Flow:
- Send version message with protocol version
- Receive version response
- Exchange verack acknowledgments
- Send getaddr to request peer list
- Receive addr with peer addresses
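Every message in the handshake above (version, verack, getaddr, addr) is wrapped in the standard 24-byte Bitcoin P2P header, which can be sketched as follows. The mainnet magic 0xF9BEB4D9 is shown; forks substitute their own magic bytes.

```python
import hashlib
import struct

MAINNET_MAGIC = bytes.fromhex("f9beb4d9")

def checksum(payload: bytes) -> bytes:
    """First 4 bytes of double-SHA256, as the protocol specifies."""
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]

def frame_message(command: str, payload: bytes, magic: bytes = MAINNET_MAGIC) -> bytes:
    """magic(4) + command(12, NUL-padded) + length(4, little-endian) + checksum(4) + payload."""
    header = (magic
              + command.encode("ascii").ljust(12, b"\x00")
              + struct.pack("<I", len(payload))
              + checksum(payload))
    return header + payload
```

A verack carries an empty payload, so the whole message is just the 24-byte header, with the well-known empty-payload checksum 0x5df6e0e2.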
Data Collection:
- IP address and port
- Version string (e.g., /Bitcoin:0.21.0/)
- Protocol version number
- Services bitmask
- Start height (blockchain tip)
- User agent (parsed for verification)
- Connection latency
Chain Adapters
Architecture: Multi-chain support via adapter pattern
class ChainAdapter:
    def get_network_magic(self) -> bytes
    def get_default_port(self) -> int
    def get_dns_seeds(self) -> List[str]
    def parse_version_string(self, version: str) -> Dict
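A concrete adapter following that interface might look like the sketch below. The Bitcoin mainnet values are shown hard-coded for illustration; in the real project they come from project.config.yaml, and this class name is hypothetical.

```python
from typing import Dict, List

class BitcoinAdapter:
    """Illustrative adapter with Bitcoin mainnet parameters."""

    def get_network_magic(self) -> bytes:
        return bytes.fromhex("f9beb4d9")

    def get_default_port(self) -> int:
        return 8333

    def get_dns_seeds(self) -> List[str]:
        return ["seed.bitcoin.sipa.be", "dnsseed.bluematt.me"]

    def parse_version_string(self, version: str) -> Dict:
        # "/Satoshi:27.0.0/" -> {"client": "Satoshi", "version": "27.0.0"}
        client, _, ver = version.strip("/").partition(":")
        return {"client": client, "version": ver}
```

The crawler only depends on the four methods, so adding a chain means supplying its magic bytes, port, and seeds in config rather than touching crawler code.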
Supported Chains:
- Bitcoin (via config)
- Litecoin (via config)
- Dogecoin (via config)
- Dingocoin (example in config/project.config.yaml.example)
- Any Bitcoin-derived chain (configure in project.config.yaml)
Crawler Configuration
Environment Variables:
- CHAIN - Which blockchain to crawl (default: bitcoin)
- CRAWLER_INTERVAL_MINUTES - Crawl frequency (default: 5)
- MAX_CONCURRENT_CONNECTIONS - Connection pool size (default: 100)
- CONNECTION_TIMEOUT_SECONDS - Socket timeout (default: 10)
- SUPABASE_SERVICE_ROLE_KEY - Database access key
- GEOIP_DB_PATH - MaxMind database location
Docker Deployment:
# Development mode (auto-starts all services including crawler)
make docker-dev
# Production mode (auto-starts all services including crawler)
make prod-docker-no-caddy
API Endpoints
Public Endpoints (No Auth Required)
Nodes:
- GET /api/nodes - List nodes with filtering
  - Query params: page, limit, tier, country, version, verified, online, sort, order
  - Returns: { nodes: [], pagination: {} }
Statistics:
- GET /api/stats - Network statistics
  - Returns: Total/online nodes, countries, version/country/tier distributions, historical data
Leaderboard:
- GET /api/leaderboard - Top nodes by PIX score
  - Query params: page, limit, tier, country, verified
  - Returns: Ranked nodes with performance metrics
Profiles:
- GET /api/profiles/:nodeId - Public node profile (if is_public=true)
Authenticated Endpoints (JWT Required)
Verification:
- POST /api/verify - Start node verification (single-step methods)
  - Body: { nodeId, method, signature?, userAgent? }
  - Returns: Verification result or challenge
- POST /api/verify-node/init - Initialize two-step verification
  - Body: { nodeId, method }
  - Returns: Challenge object with verificationId
- POST /api/verify-node/confirm - Confirm two-step verification
  - Body: { verificationId, signature?, dnsValue? }
- GET /api/verify/dns-check - Check DNS TXT record status
Profile Management:
- PUT /api/profiles/:nodeId - Update node profile (owner only, RLS enforced)
  - Body: { displayName, description, avatarUrl, socialLinks, tags, isPublic }
- POST /api/profiles/:nodeId/avatar - Upload avatar
  - Body: FormData with image file
  - Returns: { url } - CDN URL
Tipping:
- GET /api/nodes/:nodeId/tip-config - Get tip configuration
- PUT /api/nodes/:nodeId/tip-config - Configure tipping (owner only)
  - Body: { walletAddress, acceptedCoins, minimumTip, thankYouMessage, isActive }
Authentication & Authorization
Supabase Auth
Authentication Methods:
- Email/password
- Magic link (passwordless)
- Social OAuth (optional: GitHub, Google)
Session Management:
- Next.js middleware for protected routes
- Server-side session with cookies
- JWT tokens in localStorage for client-side
Admin System (Dual-Tier Architecture)
Purpose: Secure admin access with permanent super admins and database-managed regular admins
Tier 1: Super Admins (Environment Variable)
- Defined in ADMIN_EMAILS environment variable
- Permanent admin access, cannot be removed via UI
- Comma-separated email list: ADMIN_EMAILS=admin@example.com,super@example.com
- Checked by isUserAdmin() function in API routes
- Never committed to git (stored in .env, not .env.example)
Tier 2: Regular Admins (Database Table)
- Stored in admin_users table:
  - user_id (PRIMARY KEY) - References auth.users
  - role - Admin role type ('moderator', 'support', etc.)
  - granted_by - Super admin who granted access
  - is_active - Active status (can be revoked)
  - revoked_at - Timestamp when revoked
- Added/removed by super admins via /api/admin/users POST endpoint
- Checked by is_admin() database function for RLS policies
- Can be revoked if needed
Implementation:
- Environment-Based Checks (isUserAdmin() function):
  - Only checks ADMIN_EMAILS environment variable
  - Does NOT check admin_users table
  - Used by API routes for server-side authorization
  - Located in apps/web/src/lib/security.ts
- Database Function (is_admin() in Supabase):
  CREATE OR REPLACE FUNCTION is_admin() RETURNS BOOLEAN AS $$
  BEGIN
    RETURN EXISTS (
      SELECT 1 FROM admin_users
      WHERE user_id = auth.uid() AND is_active = true
    );
  END;
  $$ LANGUAGE plpgsql SECURITY DEFINER;
  - Only checks admin_users table
  - Does NOT check environment variable
  - Used by RLS policies for database-level authorization
- Combined Authorization: a user is an admin if EITHER condition is true:
  - Email in ADMIN_EMAILS (super admin)
  - OR admin_users.is_active = true (regular admin)
Admin Operations:
- User management: List, promote, demote, ban, unban, delete
- Uses service role client (createAdminClient()) to bypass RLS
- Upsert operation with onConflict: 'user_id' for reactivating revoked admins
- Protects super admins from deletion/banning
- Prevents self-deletion
Security Measures:
- Row Level Security (RLS) on admin_users table
- Service role client used for admin API operations
- Middleware clears invalid session cookies
- Cannot delete/ban super admins or own account
Row Level Security (RLS)
Policy Examples:
-- Anyone can view nodes
CREATE POLICY "Nodes are viewable by everyone"
ON nodes FOR SELECT USING (true);
-- Service role can manage nodes (crawler)
CREATE POLICY "Service role can manage nodes"
ON nodes FOR ALL
USING (auth.role() = 'service_role');
-- Users can update their own profiles
CREATE POLICY "Users can update own profiles"
ON node_profiles FOR UPDATE
USING (auth.uid() = user_id);
-- Admins can view admin_users table
CREATE POLICY "Admins can view admin users"
ON admin_users FOR SELECT
USING (is_admin());
Development Setup
Prerequisites
- Node.js 20+ and pnpm 9+
- Docker and Docker Compose
- Python 3.12+ (for local crawler development)
- MaxMind account for GeoIP database
Quick Start (Use Makefile Commands!)
1. First Time Setup:
make setup-docker # Creates .env, installs deps
2. Start Development:
make dev # Starts full stack (DB + Web + Crawler)
3. Access the Application:
- Web App: http://localhost:4000
- Supabase Studio: http://localhost:4022
- API: http://localhost:4020
Key Makefile Commands
Run make help to see all available commands:
# Setup
make setup-docker # First time setup (local Docker)
make setup-cloud # First time setup (Supabase Cloud)
make setup-fork # Setup for forking
# Development
make dev # Start development stack
make down # Stop all services
make restart # Restart development
make logs # View all logs
make logs-web # View web app logs
make logs-crawler # View crawler logs
# Database
make migrate # Run migrations
make db-shell # PostgreSQL shell
# Production
make prod-docker # Production (self-hosted)
make prod-cloud # Production (cloud DB)
# Code Quality
make lint # Run ESLint
make typecheck # TypeScript check
make build # Production build
make test # Run tests
Port Configuration
All Ports: 4000-4100 Range:
- Development Server: 4000 (via make dev)
- Testing Server: 4001 (automatic during tests)
- Kong API: 4020
- PostgreSQL: 4021
- Supabase Studio: 4022
- Inbucket Web: 4023
- Inbucket SMTP: 4024
- Kong SSL: 4025
Database Migrations
Running Migrations:
# Migrations auto-run on web container start via entrypoint.sh + migrate.js
# The migrate.js script tracks applied migrations in schema_migrations table
# and syncs all Supabase internal user passwords after migration
# Manual migration (if needed):
docker exec -i atlasp2p-db psql -U postgres -d postgres < supabase/migrations/0001_foundation.sql
Migration Order (critical):
- 0001_foundation.sql - Roles, schemas, extensions
- 0002_schema.sql - Tables, constraints, indexes
- 0003_functions.sql - Database functions and triggers
- 0004_policies.sql - Row Level Security policies
Accessing Services
- Web App: http://localhost:4000 (development)
- Supabase Studio: http://localhost:4022
- Kong API: http://localhost:4020
- PostgreSQL: localhost:4021 (user: postgres, password: postgres)
- PostgREST: http://localhost:4020/rest/v1
- Inbucket (Email): http://localhost:4023
CRITICAL: Always use make dev to start on port 4000. ALL services use ports 4000-4100.
Production Deployment
Environment Variables
Required:
- NEXT_PUBLIC_SUPABASE_URL - Supabase project URL
- NEXT_PUBLIC_SUPABASE_ANON_KEY - Public anon key
- SUPABASE_SERVICE_ROLE_KEY - Service role key (server-only)
- MAXMIND_ACCOUNT_ID - MaxMind account ID
- MAXMIND_LICENSE_KEY - MaxMind license key
Optional:
- CRAWLER_INTERVAL_MINUTES - Crawl frequency
- MAX_CONCURRENT_CONNECTIONS - Crawler concurrency
- GEOIP_DB_PATH - GeoIP database path
Deployment Checklist
- Set up Supabase cloud project
- Run migrations on production database
- Configure RLS policies
- Set up Supabase Storage bucket for avatars
- Download MaxMind GeoIP database
- Deploy Next.js app to Vercel/Netlify
- Deploy crawler to VPS/cloud (continuous)
- Set up monitoring and alerts
- Configure CDN for static assets
- Enable SSL/TLS
Key Design Decisions
Why Next.js App Router?
- Server Components for better performance
- Built-in API routes (serverless)
- Excellent TypeScript support
- RSC enables server-side data fetching without client-side overhead
Why Supabase?
- PostgreSQL with PostgREST = instant REST API
- Built-in auth with JWT
- Real-time subscriptions via WebSocket
- Row Level Security for fine-grained access control
- Generous free tier for open source projects
Why Python for Crawler?
- Excellent async support with asyncio
- Simple socket programming
- Rich ecosystem for networking
- MaxMind GeoIP2 has official Python library
Why Turborepo?
- Fast builds with caching
- Simple monorepo management
- Shared TypeScript types between packages
- Clear dependency graph
Performance Optimizations
- Database Indexes: All frequently queried columns indexed
- Materialized Views: Leaderboard pre-computed, refreshed every 5 min
- API Pagination: All list endpoints support pagination
- Map Clustering: Supercluster reduces marker count at low zoom
- Image Optimization: Next.js Image component for avatars
- Real-time Throttling: Subscription updates throttled to prevent UI thrashing
- CDN Delivery: Static assets served via CDN
- Connection Pooling: PostgreSQL connection pooler (pgbouncer)
Security Considerations
- Row Level Security: All tables protected with RLS policies
- JWT Validation: All authenticated requests validated
- Input Validation: Zod schemas for API inputs and runtime config validation (see packages/config/README.md)
- SQL Injection: PostgREST parameterizes all queries
- XSS Protection: React escapes all user content
- CORS Configuration: Restricted to allowed origins
- Rate Limiting: Kong API Gateway rate limits
- Secrets Management: Environment variables, never committed
Future Enhancements
- Multi-chain support (Dogecoin, Litecoin, Bitcoin)
- Mobile app (React Native)
- Network health alerts (email/webhook) - Implemented! See Node Alerts System below.
- Embeddable widgets (iframe badges)
- Public API with rate limiting - Implemented! See API Keys below.
- Node comparison tool
- Historical data export (CSV/JSON)
- Analytics dashboard with ML-based anomaly detection
8. Node Alerts System
Purpose: Notify node operators when their node status changes
Alert Types:
- Node goes offline
- Node comes back online
- Version becomes outdated
- Tier changes (upgrade/downgrade)
Notification Channels:
- Email: Via Resend API, includes unsubscribe link
- Discord Webhook: Rich embeds with node details
Features:
- Per-subscription cooldown (prevents alert spam)
- Unsubscribe via email link without login (token-based)
- Only for verified nodes (user must own the node)
API Endpoints:
- GET /api/alerts - List subscriptions
- POST /api/alerts - Create subscription
- PUT /api/alerts/:id - Update subscription
- DELETE /api/alerts/:id - Delete subscription
- GET /api/alerts/unsubscribe?token=xxx - Unsubscribe via email link
9. API Keys
Purpose: Programmatic access to public data endpoints
Available Scopes:
- read:nodes - Read node information
- read:stats - Read network statistics
- read:leaderboard - Read leaderboard data
- read:profiles - Read node profiles
Features:
- Key format: {ticker}_sk_{32chars} (e.g., dingo_sk_abc123...)
- SHA-256 hashed storage (raw key only shown once)
- Per-key rate limiting (10-10000 requests/hour)
- Key rotation (revoke old, create new with same settings)
- Maximum 10 active keys per user
- Optional expiration date
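The key format and hashed-storage scheme above can be sketched as follows. Function names are illustrative; the server would persist only key_hash and show raw_key to the user a single time.

```python
import hashlib
import secrets

def create_api_key(ticker: str) -> tuple[str, str]:
    """Return (raw_key, key_hash) for a key like '{ticker}_sk_{32 hex chars}'."""
    raw_key = f"{ticker}_sk_{secrets.token_hex(16)}"  # 32 random hex chars
    key_hash = hashlib.sha256(raw_key.encode()).hexdigest()
    return raw_key, key_hash

def verify_api_key(presented: str, stored_hash: str) -> bool:
    """Hash the presented key and compare against the stored digest."""
    digest = hashlib.sha256(presented.encode()).hexdigest()
    return secrets.compare_digest(digest, stored_hash)
```

Hashing means a database leak does not expose usable keys, and compare_digest avoids timing side channels on the comparison.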
API Endpoints:
- GET /api/keys - List keys (masked)
- POST /api/keys - Create key (returns raw key once)
- PUT /api/keys/:id - Update key settings
- DELETE /api/keys/:id - Delete key
- POST /api/keys/:id/rotate - Rotate key
Optional Integrations
Analytics (PostHog)
Purpose: Product analytics, user behavior tracking, feature usage
Setup:
- Create account at posthog.com (free tier available)
- Add environment variables:
  NEXT_PUBLIC_POSTHOG_KEY=phc_your_project_key
  NEXT_PUBLIC_POSTHOG_HOST=https://eu.i.posthog.com # or https://us.i.posthog.com
- Analytics automatically enabled when keys are present
- Respects Do Not Track browser setting
Features:
- Automatic page view tracking
- Session recording (optional)
- Feature flags (optional)
- Completely optional - app works without it
SEO Customization
Configuration: All SEO settings in config/project.config.yaml:
content:
  siteName: "YourCoin Nodes Map"
  siteDescription: "Real-time network visualization"
  seo:
    title: "Custom SEO Title" # Override siteName
    titleTemplate: "%s | YourCoin" # Page title format
    description: "Custom meta description"
    keywords:
      - yourcoin
      - nodes
      - blockchain
    twitterHandle: "@yourcoin"
    robots: "index, follow"
Auto-generated if not specified: Title, description, keywords from chain config.
Contributing
See CONTRIBUTING.md for development guidelines.
License
MIT License - see LICENSE file for details.