
๐ŸŒ Unit 6 โ€” Interaction Layer

Unit Goals

By the end of this unit you will be able to:

  • Understand how validators query miners (on-chain metadata + HTTP endpoint)
  • Implement axon serving with the FastAPI pattern to respond to data requests
  • Set up timeout handling and graceful degradation for when the scraper is slow
  • Deploy a monitoring stack (Prometheus + Grafana, or a simpler alternative)
  • Configure PM2 auto-restart + log rotation
  • Be ready to submit your SN13 graduation proof to the HackQuest Learning Track

Prerequisites

🧠 How Do Validators Interact with Miners?

There are two miner ↔ validator interaction paths in SN13:

Passive Path (Primary)

Covered in Unit 5. The miner pushes → chain → validators pull. This is the dominant flow.

Active Path (OnDemand)

Validators occasionally ask a miner for a live sample: "give me 100 tweets with the label #bitcoin from the last hour." This is a real-time freshness spot-check. The miner must expose an HTTP endpoint (axon) that is always ready to respond.


🔌 Axon: The Miner Endpoint

The Bittensor framework bundles bt.axon, a FastAPI wrapper that handles gRPC-style RPC over HTTP.

Synapse Definition

A synapse is a request/response schema. Data Universe defines synapses such as GetDataEntities and OnDemandRequest.

# protocol.py (example definition -- see the repo for the exact schema)
import bittensor as bt
from typing import List, Optional
from pydantic import BaseModel

class DataEntity(BaseModel):
    uri: str
    datetime: str
    source: str
    label: str
    content: str

class OnDemandRequest(bt.Synapse):
    """The validator asks the miner to return data matching a filter."""
    source: str                               # "reddit", "x", "youtube"
    label: str                                # e.g. "r/cryptocurrency"
    keywords: Optional[List[str]] = None
    start_time: str                           # ISO 8601
    end_time: str
    limit: int = 100

    # Response field
    data_entities: Optional[List[DataEntity]] = None

Handler Function

# neurons/miner.py (skeleton)
import bittensor as bt
from protocol import OnDemandRequest, DataEntity
from storage.query import DataStore

class Miner:
    def __init__(self, config: bt.config):
        self.wallet = bt.wallet(config=config)
        self.subtensor = bt.subtensor(config=config)
        self.axon = bt.axon(wallet=self.wallet, config=config)
        self.datastore = DataStore()  # interface to the local buffer / recent S3 data

        # Attach handlers
        self.axon.attach(
            forward_fn=self.handle_on_demand,
            blacklist_fn=self.blacklist_check,
            priority_fn=self.priority_check,
        )

    async def handle_on_demand(self, synapse: OnDemandRequest) -> OnDemandRequest:
        bt.logging.info(f"OnDemand query: {synapse.source}/{synapse.label} limit={synapse.limit}")
        try:
            entities = await self.datastore.query(
                source=synapse.source,
                label=synapse.label,
                keywords=synapse.keywords,
                start=synapse.start_time,
                end=synapse.end_time,
                limit=synapse.limit,
            )
            synapse.data_entities = entities
        except Exception as e:
            bt.logging.exception(f"Query failed: {e}")
            synapse.data_entities = []  # graceful degradation
        return synapse

    def blacklist_check(self, synapse: OnDemandRequest) -> tuple[bool, str]:
        """Reject requests from non-validator hotkeys."""
        hotkey = synapse.dendrite.hotkey
        if not self.is_validator(hotkey):
            return True, "Not a validator"
        return False, ""

    def priority_check(self, synapse: OnDemandRequest) -> float:
        """Validators with more stake get higher priority."""
        hotkey = synapse.dendrite.hotkey
        return self.get_stake(hotkey)

    def run(self):
        self.axon.serve(netuid=13, subtensor=self.subtensor)
        self.axon.start()
        bt.logging.info(f"Axon listening on :{self.axon.config.axon.port}")
        while True:
            # main loop: heartbeat, refresh the metagraph, etc.
            ...
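The skeleton above calls self.is_validator and self.get_stake without defining them. Against a real metagraph these are stake lookups; here is a stdlib-only stand-in where the stake table and the 10,000 threshold are made up for illustration (real values come from the metagraph):

```python
# Fake stake table standing in for metagraph state -- real numbers come from the chain.
STAKE = {"5ValidatorHotkey": 25_000.0, "5RandomHotkey": 3.0}
MIN_VALIDATOR_STAKE = 10_000.0  # illustrative threshold, not an official SN13 constant

def get_stake(hotkey: str) -> float:
    # Unknown hotkeys have zero stake.
    return STAKE.get(hotkey, 0.0)

def is_validator(hotkey: str) -> bool:
    # Treat any hotkey at or above the stake threshold as a validator.
    return get_stake(hotkey) >= MIN_VALIDATOR_STAKE

print(is_validator("5ValidatorHotkey"))  # -> True
print(is_validator("5RandomHotkey"))     # -> False
```

The blacklist and priority hooks then reduce to these two lookups, which is why they stay cheap even under query bursts.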

โฑ๏ธ Timeout & Graceful Degradationโ€‹

Validator kirim request dengan timeout (biasanya 10-30 detik). Miner harus respon sebelum timeout, sekalipun data belum siap.

Pattern: Fast Fail Over Slow Successโ€‹

import asyncio

async def handle_on_demand(self, synapse: OnDemandRequest) -> OnDemandRequest:
    try:
        entities = await asyncio.wait_for(
            self.datastore.query(...),
            timeout=8.0,  # internal budget < the 10 s external timeout
        )
        synapse.data_entities = entities
    except asyncio.TimeoutError:
        bt.logging.warning("Query timed out, returning partial/empty")
        synapse.data_entities = []  # empty beats no response
    except Exception as e:
        bt.logging.exception(f"Query error: {e}")
        synapse.data_entities = []
    return synapse

Don't Be Slow

Miners that frequently time out (no response) end up with a validator weight of 0. An empty response is better than a late one.
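The fast-fail budget can be exercised outside the miner with a stdlib-only sketch; slow_query below is a stand-in for a datastore call that blows its budget:

```python
import asyncio

async def slow_query():
    # Stand-in for a datastore query that takes far too long.
    await asyncio.sleep(5)
    return ["entity-1", "entity-2"]

async def handle(budget: float):
    # The internal budget must stay below the validator's external timeout.
    try:
        return await asyncio.wait_for(slow_query(), timeout=budget)
    except asyncio.TimeoutError:
        return []  # empty beats late

print(asyncio.run(handle(0.1)))  # -> []
```

asyncio.wait_for cancels the inner coroutine when the budget expires, so the handler returns in roughly 0.1 s instead of 5 s.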

Cache Layer

The same query can arrive from multiple validators within a minute. Use a TTL cache:

from cachetools import TTLCache

class DataStore:
    def __init__(self):
        self.cache = TTLCache(maxsize=1000, ttl=60)  # 60-second cache

    async def query(self, source, label, keywords, start, end, limit):
        key = (source, label, tuple(keywords or []), start, end, limit)
        if key in self.cache:
            return self.cache[key]
        result = await self._real_query(source, label, keywords, start, end, limit)
        self.cache[key] = result
        return result
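If you want to see what TTLCache is doing (or avoid the cachetools dependency), the same expiry behaviour fits in a few stdlib lines. This is a sketch of the idea, not the library's actual implementation:

```python
import time

class TTLDict:
    """Minimal TTL cache: an entry expires `ttl` seconds after insertion."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        hit = self._store.get(key)
        if hit is None:
            return None
        expires_at, value = hit
        if time.monotonic() > expires_at:
            del self._store[key]  # lazy eviction on read
            return None
        return value

    def put(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

cache = TTLDict(ttl=60)
cache.put(("reddit", "r/bitcoin"), ["entity-1"])
print(cache.get(("reddit", "r/bitcoin")))  # -> ['entity-1']
```

Note that cachetools' TTLCache also bounds the number of entries (maxsize), which this sketch skips.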

📊 Monitoring Stack

Option 1 (Simple): Script + Discord Webhook

For a CLC miner (not enterprise production), a Discord webhook is enough:

# monitoring/health_check.py
import os
import subprocess

import requests

WEBHOOK = os.getenv("DISCORD_WEBHOOK")

def check_miner_pm2():
    result = subprocess.run(["pm2", "jlist"], capture_output=True, text=True)
    return "online" in result.stdout

def check_disk():
    result = subprocess.run(["df", "-h", "/"], capture_output=True, text=True)
    # parse the Use% column of the root filesystem row
    usage = int(result.stdout.split("\n")[1].split()[4].rstrip("%"))
    return usage

def notify(msg):
    if WEBHOOK:
        requests.post(WEBHOOK, json={"content": msg})

if __name__ == "__main__":
    if not check_miner_pm2():
        notify("🚨 Miner PM2 process DOWN!")
    disk_usage = check_disk()
    if disk_usage > 85:
        notify(f"⚠️ Disk usage {disk_usage}% -- cleanup needed")

Schedule it via cron:

crontab -e
# add:
*/10 * * * * /home/miner/data-universe/venv/bin/python /home/miner/data-universe/monitoring/health_check.py
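The Use% parsing inside check_disk is the fragile part, so it is worth verifying against canned df -h / output. The sample below is illustrative (sizes invented):

```python
# Illustrative `df -h /` output -- the numbers are invented.
sample_df = (
    "Filesystem      Size  Used Avail Use% Mounted on\n"
    "/dev/vda1       492G  213G  259G  46% /\n"
)

def parse_disk_usage(df_output: str) -> int:
    # Use% is the 5th whitespace-separated column of the first data row.
    return int(df_output.split("\n")[1].split()[4].rstrip("%"))

print(parse_disk_usage(sample_df))  # -> 46
```

If your provider mounts extra filesystems before /, the row index may shift; matching on the " /" mountpoint is the more robust variant.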

Option 2 (Full Stack): Prometheus + Grafana

For serious miners:

# Install Prometheus (abridged)
wget https://github.com/prometheus/prometheus/releases/download/v2.51.0/prometheus-2.51.0.linux-amd64.tar.gz
tar xvf prometheus-*.tar.gz && cd prometheus-*

# Expose metrics from the miner (via the prometheus_client library)
pip install prometheus_client

In the miner:

from prometheus_client import start_http_server, Counter, Gauge

scraped_total = Counter('sn13_scraped_total', 'Total entities scraped', ['source'])
uploaded_bytes = Counter('sn13_uploaded_bytes', 'Bytes uploaded to S3')
validator_queries = Counter('sn13_validator_queries', 'OnDemand queries received')
current_incentive = Gauge('sn13_incentive', 'Current incentive score from metagraph')

start_http_server(9100)  # metrics endpoint on :9100
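For Prometheus to collect these counters it needs a scrape job aimed at the port passed to start_http_server. A minimal prometheus.yml fragment might look like this (the job name and 15 s interval are arbitrary choices, not SN13 requirements):

```yaml
# prometheus.yml -- minimal scrape config for the miner's metrics endpoint
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "sn13_miner"          # arbitrary label for this target
    static_configs:
      - targets: ["localhost:9100"]  # the port passed to start_http_server
```

Point Grafana at Prometheus as a data source and graph sn13_scraped_total, sn13_uploaded_bytes, and sn13_incentive from there.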

Grafana dashboard: monitor the scrape rate, upload rate, and incentive trend.

Shortcut

For CLC9 graduation, Option 1 (the Discord webhook) is enough. The Grafana setup takes an extra 2-3 hours.


🔄 PM2 Configuration

Ecosystem File

// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: "sn13-miner",
      script: "venv/bin/python",
      args: "neurons/miner.py --netuid 13 --subtensor.network finney --wallet.name my_cold --wallet.hotkey sn13_miner --axon.port 8091 --logging.info",
      cwd: "/home/miner/data-universe",
      autorestart: true,
      watch: false,
      max_memory_restart: "4G",
      restart_delay: 10000,
      env: {
        PYTHONUNBUFFERED: "1"
      },
      error_file: "/home/miner/logs/miner-err.log",
      out_file: "/home/miner/logs/miner-out.log",
      log_date_format: "YYYY-MM-DD HH:mm:ss"
    }
  ]
};

Start & Persist

cd ~/data-universe
pm2 start ecosystem.config.js
pm2 save
pm2 startup  # follow the printed instructions so the miner auto-starts when the VPS reboots

Commands Cheatsheet

pm2 list                # show status
pm2 logs sn13-miner     # tail logs in real time
pm2 restart sn13-miner  # restart
pm2 stop sn13-miner     # stop
pm2 delete sn13-miner   # remove from PM2
pm2 monit               # live dashboard

Log Rotation

pm2 install pm2-logrotate
pm2 set pm2-logrotate:max_size 100M
pm2 set pm2-logrotate:retain 7
pm2 set pm2-logrotate:compress true

Without this, logs can fill the disk within two weeks.


🧪 End-to-End Smoke Test

The final test before claiming graduation:

# 1. Miner running
pm2 list
# Status: online, uptime > 1 hour

# 2. Chain registration OK
btcli wallet overview --wallet.name my_cold --netuid 13
# UID registered, stake > 0

# 3. Incentive rising
btcli subnet metagraph --netuid 13 | grep <your_uid>
# Incentive > 0 (even if small)

# 4. S3 bucket filling up
rclone size r2:sn13-miner-<uid>
# Total > 0 bytes, Count > 0 files

# 5. Axon reachable from outside
curl -v http://<VPS_IP>:8091/
# Must return something (not a timeout)

# 6. Clean logs (no recurring ERROR)
pm2 logs sn13-miner --lines 100 --nostream | grep -i error | wc -l
# < 5 errors in 100 lines = acceptable
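Step 1 can be automated instead of eyeballed. pm2 jlist emits JSON with a per-process pm2_env.status field; the sample string below is heavily truncated, so treat the exact field layout as an assumption and check it against your pm2 version:

```python
import json

# Heavily truncated sample of `pm2 jlist` output (real entries carry many more fields).
sample = '[{"name": "sn13-miner", "pm2_env": {"status": "online"}}]'

def miner_online(jlist_output: str, name: str = "sn13-miner") -> bool:
    # True if a process with the given name reports status "online".
    procs = json.loads(jlist_output)
    return any(p["name"] == name and p["pm2_env"]["status"] == "online" for p in procs)

print(miner_online(sample))  # -> True
```

Feed it subprocess.run(["pm2", "jlist"], ...).stdout and you have a stricter check than the substring match used in health_check.py.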

🎓 Graduation Submission Checklist

To graduate from CLC9 SN13 (and receive the NFT + Quack Believers invite):

Proof to Submit on the HackQuest Learning Track

  1. ✅ Hotkey SS58 Address

    btcli wallet overview --wallet.name my_cold
    # Copy the SS58 from the sn13_miner hotkey row
  2. ✅ NetUID: 13

  3. ✅ Miner UID

    btcli wallet overview --wallet.name my_cold --netuid 13
    # The number in the UID column
  4. ✅ Screenshot of the miner running

    • Open 2 terminals:
      • Terminal 1: pm2 list (showing sn13-miner online)
      • Terminal 2: pm2 logs sn13-miner --lines 20 (showing live logs)
    • Screenshot both, uploaded as a single image
  5. ✅ Screenshot of taostats.io/subnets/13: browser open on the metagraph page with your UID highlighted

  6. ✅ Screenshot of the R2 bucket: Cloudflare dashboard with the bucket showing the uploaded files

  7. ✅ X (Twitter) reflection post: write a reflection on what you learned, tag @HackQuest_ and @bittensor, and paste the link

Screenshot Pro Tips
  • Crop & annotate with a tool like Snipaste or Flameshot
  • Add a red arrow pointing at your UID/hotkey so reviewers can verify it easily
  • Minimum resolution 1280x720

๐Ÿ Production Checklist Lengkapโ€‹

Sebelum bilang "miner saya production-ready":

  • VPS Singapore region, 4+ vCPU, 8+ GB RAM, 500+ GB SSD
  • Ubuntu 22.04, firewall ufw enabled, port 8091 open
  • Non-root user miner, SSH key-based auth only
  • Python venv dengan semua deps terinstall
  • Hotkey (bukan coldkey) di VPS
  • Registered di NetUID 13, incentive > 0
  • Config scraper 3 source (Reddit + X + YT) dengan label diversity
  • Dedup SQLite persist across restart
  • S3 bucket (R2), access key di .env (gitignored!)
  • Upload cadence 15-30 menit, lifecycle 14 hari
  • Axon handler dengan timeout + graceful degrade
  • PM2 ecosystem config, autorestart, log rotate
  • Monitoring script Discord webhook cron /10 min
  • NTP sync (untuk signature S3 correct)
  • Smoke test end-to-end passed
  • Bukti graduation terkumpul

🎯 Summary

  • Validators interact over 2 paths: passive (S3 + chain metadata) and active (axon HTTP queries)
  • Axon = Bittensor's FastAPI wrapper for handling synapse RPC
  • Timeout handling: fast fail beats slow success; always respond, an empty list is OK
  • PM2 ecosystem = auto-restart + log rotation + persistence across reboots
  • Monitoring: a Discord webhook is enough for CLC; Prometheus for serious miners
  • The graduation submission needs 6-7 pieces of proof: hotkey, UID, screenshots, X post

✅ Quick Check

  1. What is the difference between the passive and active validator ↔ miner paths?
  2. What should a miner do when a validator query is approaching its timeout?
  3. Why do we use PM2 instead of systemd directly?
  4. What happens if a miner frequently times out on validators?
  5. Which files MUST be gitignored in your miner repo?
💡 Answers
  1. Passive: the miner pushes data → S3 + chain commit, and validators pull. Active: a validator sends a synapse request (OnDemand) → the miner must respond via its axon. Passive is the dominant path.
  2. Respond with partial data / an empty list. A late response is worse than an empty one: validators set weight 0 on a timeout.
  3. PM2 is a native Node tool with a portable ecosystem config, a log-rotation plugin, a live monit dashboard, and zero-downtime restarts. systemd works too, but its config is more verbose.
  4. The validator weight for your UID drops to 0 → your incentive drops → TAO emission becomes minimal / 0.
  5. .env (S3, Reddit, Twitter credentials), and the wallets/ folder if it ever ended up in the repo. Never commit files containing secrets!

๐Ÿ› Troubleshootingโ€‹

GejalaPenyebabSolusi
Axon listen tapi validator gak reachIP di belakang NAT / firewall cloudVerify curl http://<public_ip>:8091 dari luar VPS. Cek security group provider.
Address already in use port 8091Miner lama masih runningpm2 delete sn13-miner lalu restart. Atau lsof -i :8091 untuk cari PID.
Handler crash, PM2 restart loopUnhandled exception di queryWrap semua di try/except, return empty list. Check pm2 logs untuk stack trace.
Disk penuh setelah 1 mingguLog tidak rotateInstall pm2-logrotate, purge old log di /home/miner/logs/
Metrics Prometheus 404start_http_server dipanggil tapi port closed di firewallufw allow 9100 (atau jangan expose public, akses via localhost/tunnel)
Submission ditolak reviewerScreenshot blur / UID tidak terlihatUlangi screenshot dengan annotation jelas

🎉 Congratulations!

You have reached the end of Guided Project II: Data Universe (SN13). If your miner has been running stably for more than 24 hours, your submission proof is complete, and your logs are clean, you are ready to graduate!

Final step: submit all your proof on the HackQuest Learning Track before TH4 (Graduation Day).

Next: Phase 3 - More Bittensor Resources →

In miners we trust. In TAO we thrive. 🦆⚡