#44 — Anomaly Detection (Suricata Detection Rules)
PLANNED
Priority: 🟠 HIGH · Type: TYPE E · Container: rgz-ids · Code: config/suricata/rules/rgz_anomaly.rules
Dependencies: #8 rgz-ids, #38 prometheus-alert
Description
Network anomaly detection module based on custom Suricata rules. It analyzes traffic in real time to identify abnormal patterns: bandwidth spikes (>100 MB/h), large port scans (>50 ports in 10 s), DNS tunneling (queries >200 characters), HTTP floods (>100 req/min), and MAC/IP spoofing detected against the RADIUS database.
Alerts are classified by priority (HIGH, MEDIUM, LOW) and converted into Prometheus events. Thresholds are tuned per NAS-ID and per reseller (V1/V2/V3) via the alerts_rules table. Each detection triggers an SMS notification (#61) if HIGH, an email (#63) if MEDIUM, and ELK logging (#40) for post-mortem analysis.
Suricata writes EVE JSON logs consumed by Logstash, which enriches them with GeoIP, computes statistics, and feeds Elasticsearch for the Kibana dashboards.
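The severity-to-channel routing described above (HIGH → SMS, MEDIUM → email, LOW → log only) can be sketched as follows. This is an illustrative helper, not existing rgz-api code; the function and channel names are assumptions.

```python
# Hedged sketch of the severity routing described above:
# HIGH -> SMS (#61), MEDIUM -> email (#63), LOW -> ELK log only.
SEVERITY_CHANNELS = {
    "HIGH": ["sms", "elk"],      # notification #61 + post-mortem log
    "MEDIUM": ["email", "elk"],  # notification #63 + post-mortem log
    "LOW": ["elk"],              # log only, no notification
}

def route_alert(severity: str) -> list[str]:
    """Return the notification channels for a given alert severity.

    Unknown severities fall back to log-only, so a misclassified
    alert is never silently dropped.
    """
    return SEVERITY_CHANNELS.get(severity.upper(), ["elk"])
```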
Internal Architecture
Anomaly Detection Dataflow:
1. Suricata rgz-ids:
└─ IDS mode (detection only, no blocking)
└─ Input: mirrored traffic via VLAN or af-packet
└─ Rules engine: evaluates 1000+ conditions against each packet
└─ Output: EVE JSON (file + syslog)
2. Custom Rule Set (config/suricata/rules/rgz_anomaly.rules):
├─ SPIKE_BANDWIDTH:
│ alert http any any -> any any (msg:"Bandwidth spike >100MB/h";
│ flow: established; threshold: type threshold, track by_src, count 1000, seconds 3600;
│ classtype: anomaly; sid: 400001; rev: 1;)
│ (note: "anomaly" is a custom classtype, to be declared in classification.config)
├─ PORT_SCAN:
│ alert tcp any any -> any ![80,443,22,8000] (msg:"Port scan detected";
│ flags:S; threshold: type threshold, track by_src, count 50, seconds 10;
│ classtype: attempted-recon; sid: 400002; rev: 1;)
├─ DNS_TUNNEL:
│ alert dns any any -> any 53 (msg:"DNS tunnel attempt";
│ dns.query; bsize:>200;
│ classtype: policy-violation; sid: 400003; rev: 1;)
├─ HTTP_FLOOD:
│ alert http any any -> any any (msg:"HTTP GET flood";
│ flow: established; http.method; content:"GET"; threshold: type threshold, track by_src, count 100, seconds 60;
│ classtype: denial-of-service; sid: 400004; rev: 1;)
├─ MAC_SPOOF (pseudo-rule, sid: 400005):
│ "MAC spoofing detected vs RADIUS": Suricata has no rule keyword to match
│ a source MAC against the RADIUS base, so this check is a downstream
│ correlation (eve.json ether fields vs known RADIUS MACs, in Logstash/rgz-api).
└─ CROSS_SITE_GEOGRAPHY (pseudo-rule, sid: 400006):
  "Same MAC on 2 sites simultaneously": likewise a cross-NAS correlation
  (impossible-distance check over aggregated alerts), not a packet signature.
3. Suricata EVE output:
└─ JSON syslog → /var/log/suricata/eve.json
└─ Fields: timestamp, event_type, src_ip, dest_ip, src_port, dest_port,
proto, alert.action, alert.gid, alert.signature_id, alert.category, alert.severity
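A consumer of those EVE records only needs a few of the fields listed above. A minimal parsing sketch (the sample line is illustrative, with the same layout as real eve.json alert events):

```python
import json

# Illustrative EVE alert record; field layout mirrors the list above
# (timestamp, event_type, src_ip, alert.signature_id, alert.severity, ...).
SAMPLE_EVE_LINE = (
    '{"timestamp":"2026-02-21T10:00:00.000000+0000","event_type":"alert",'
    '"src_ip":"10.0.0.5","dest_ip":"10.0.0.1","proto":"TCP",'
    '"alert":{"action":"allowed","gid":1,"signature_id":400002,'
    '"category":"attempted-recon","severity":2}}'
)

def parse_alert(line: str):
    """Extract the interesting fields from one eve.json line.

    Returns None for non-alert events (flow, stats, dns, ...), which
    EVE interleaves in the same file.
    """
    event = json.loads(line)
    if event.get("event_type") != "alert":
        return None
    alert = event["alert"]
    return {
        "timestamp": event["timestamp"],
        "src_ip": event.get("src_ip"),
        "sid": alert["signature_id"],
        "severity": alert["severity"],
        "category": alert["category"],
    }
```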
4. Logstash pipeline (config/logstash/pipelines/):
├─ Input: file { path => "/var/log/suricata/eve.json" }
├─ Filter:
│ • json { source => "message" }
│ • geoip { source => "src_ip" target => "geoip_src" }
│ • geoip { source => "dest_ip" target => "geoip_dest" }
│ • mutate { add_field => {"nas_id" => "${NAS_ID}"} } # injected via envsubst
│ • aggregate (window_size: 300s) // 5-min rolling window
└─ Output:
• elasticsearch { hosts => ["rgz-elasticsearch:9200"] }
• kafka (optional: streaming analytics)
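The semantics of the 5-minute rolling aggregation step can be sketched as below. The Logstash aggregate filter does this internally; this class is only an illustration of the windowing logic, not production code.

```python
from collections import deque

class RollingWindow:
    """Count per-source events inside a sliding time window (default 300 s),
    mirroring the 5-min aggregation window of the Logstash pipeline."""

    def __init__(self, window_seconds: int = 300):
        self.window = window_seconds
        self.events: deque = deque()  # (timestamp, src_ip) pairs, time-ordered

    def add(self, ts: float, src_ip: str) -> None:
        self.events.append((ts, src_ip))
        # Evict events older than the window.
        while self.events and self.events[0][0] < ts - self.window:
            self.events.popleft()

    def count(self, src_ip: str) -> int:
        """Number of events for src_ip still inside the window."""
        return sum(1 for _, ip in self.events if ip == src_ip)
```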
5. Elasticsearch + Kibana:
└─ Index: suricata-anomaly-*
└─ Dashboards:
• Alert timeline (HIGH/MEDIUM/LOW)
• Top sources (src_ip by count)
• Top destinations (by severity)
• Protocol breakdown
• Geographic distribution (GeoIP map)
• Time-of-day heatmap
6. Prometheus metrics (exporter custom):
└─ suricata_alert_total{severity, category, nas_id}
└─ suricata_http_flood_per_min{src_ip}
└─ suricata_port_scan_count{src_ip, time_window}
└─ suricata_dns_tunnel_count{src_ip}
└─ suricata_mac_spoof_count{mac_address}
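A minimal sketch of how the custom exporter could render the first metric above in Prometheus text exposition format. A real exporter would use the prometheus_client library; this stdlib-only version just shows the label layout (severity, category, nas_id) from the list above.

```python
from collections import Counter

def render_alert_totals(counts: Counter) -> str:
    """Render suricata_alert_total in Prometheus text exposition format.

    `counts` maps (severity, category, nas_id) tuples to alert counts.
    """
    lines = [
        "# HELP suricata_alert_total Total Suricata alerts by severity/category/NAS",
        "# TYPE suricata_alert_total counter",
    ]
    for (severity, category, nas_id), n in sorted(counts.items()):
        lines.append(
            f'suricata_alert_total{{severity="{severity}",'
            f'category="{category}",nas_id="{nas_id}"}} {n}'
        )
    return "\n".join(lines)
```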
7. AlertManager routing:
└─ severity=HIGH → SMS #61 + PagerDuty escalation
└─ severity=MEDIUM → email #63 + log ELK
└─ severity=LOW → log only (no notification)
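The routing above could be expressed in an Alertmanager configuration roughly as follows. The receiver names are placeholders for the actual SMS (#61) and email (#63) integrations; this is a sketch, not the deployed config.

```yaml
route:
  receiver: elk-log            # default: LOW -> log only
  routes:
    - matchers: ['severity="HIGH"']
      receiver: sms-oncall     # #61 + PagerDuty escalation
    - matchers: ['severity="MEDIUM"']
      receiver: email-noc      # #63

receivers:
  - name: sms-oncall
  - name: email-noc
  - name: elk-log
```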
8. Feedback loop (optional):
└─ Suricata→Prometheus→AlertManager→rgz-api (POST /api/v1/anomaly/alert)
└─ Stores the alert in DB → NOC dashboard #52 → historical analysis
Configuration
# Suricata Eve JSON output
EVE_LOG_ENABLED=true
EVE_LOG_FILE=/var/log/suricata/eve.json
EVE_LOG_JSON=yes
EVE_LOG_ALERT=yes
EVE_LOG_HTTP=yes
EVE_LOG_DNS=yes
EVE_LOG_TLS=yes
EVE_LOG_FILES=yes
# Suricata rule activation
SURICATA_RULE_BANDWIDTH_SPIKE_ENABLED=true
SURICATA_RULE_BANDWIDTH_SPIKE_THRESHOLD_MB=100 # per hour
SURICATA_RULE_PORT_SCAN_ENABLED=true
SURICATA_RULE_PORT_SCAN_THRESHOLD=50 # ports in 10s
SURICATA_RULE_DNS_TUNNEL_ENABLED=true
SURICATA_RULE_DNS_TUNNEL_QUERY_LENGTH=200 # max chars
SURICATA_RULE_HTTP_FLOOD_ENABLED=true
SURICATA_RULE_HTTP_FLOOD_THRESHOLD=100 # req/min
SURICATA_RULE_MAC_SPOOF_ENABLED=true
SURICATA_RULE_CROSS_GEOGRAPHY_ENABLED=true
# Alerting thresholds
ANOMALY_ALERT_SEVERITY_SMS=HIGH # Send SMS for HIGH+ alerts
ANOMALY_ALERT_SEVERITY_EMAIL=MEDIUM # Send email for MEDIUM+ alerts
ANOMALY_ALERT_RETENTION_DAYS=90 # ELK retention
# Logstash enrichment
LOGSTASH_GEOIP_ENABLED=true
LOGSTASH_GEOIP_DB=/usr/share/GeoIP/GeoLite2-City.mmdb
LOGSTASH_AGGREGATE_WINDOW_SECONDS=300 # 5-min windows
# NAS-ID injection (per container)
NAS_ID=access_kossou # dynamically set per instance
API Endpoints
| Method | Route | Response |
|---|---|---|
| GET | /api/v1/anomalies/current?severity=high&nas_id= | {items: [{id, type, src_ip, dest_ip, severity, timestamp}], total} |
| GET | /api/v1/anomalies/summary?period=24h | {total_alerts, by_severity: {high, medium, low}, by_type: {port_scan, dns_tunnel...}} |
| GET | /api/v1/anomalies/{alert_id} | {id, timestamp, src_ip, dest_ip, protocol, rule_sid, eve_json, geoip, severity, acknowledged} |
| PUT | /api/v1/anomalies/{alert_id}/acknowledge | {status: acknowledged, acknowledged_by, timestamp} |
| GET | /api/v1/anomalies/rules | {items: [{sid, msg, category, enabled, threshold}]} |
| POST | /api/v1/anomalies/webhook/suricata | Suricata EVE webhook entry: {event_type, alert, flow, timestamp} |
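A client of the first endpoint only needs to build the query string correctly. A hypothetical helper (the base host/port `rgz-api:8000` is an assumption, not confirmed by this spec):

```python
from urllib.parse import urlencode

API_BASE = "http://rgz-api:8000"  # assumed internal host/port (illustrative)

def current_anomalies_url(severity: str = "", nas_id: str = "") -> str:
    """Build the URL for GET /api/v1/anomalies/current.

    Empty filters are omitted, matching the optional query parameters
    shown in the endpoint table above.
    """
    params = {k: v for k, v in (("severity", severity), ("nas_id", nas_id)) if v}
    query = f"?{urlencode(params)}" if params else ""
    return f"{API_BASE}/api/v1/anomalies/current{query}"
```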
Useful Commands
# Check Suricata status and loaded rules (suricatasc = unix-socket client)
docker exec rgz-ids suricatasc -c "dump-counters" /var/run/suricata/suricata-command.socket
docker exec rgz-ids suricatasc -c "ruleset-stats" /var/run/suricata/suricata-command.socket
# Reload rules without restart (hot reload; re-reads the rule files declared in suricata.yaml)
docker exec rgz-ids suricatasc -c "reload-rules" /var/run/suricata/suricata-command.socket
# Monitor EVE alerts in real time
docker exec rgz-ids tail -f /var/log/suricata/eve.json | \
jq 'select(.alert) | {timestamp: .timestamp, action: .alert.action, msg: .alert.signature}'
# Validate Suricata rule syntax (-T = test mode, -S = exclusive rule file)
docker exec rgz-ids suricata -T -c /etc/suricata/suricata.yaml -S /etc/suricata/rules/rgz_anomaly.rules
# Query Elasticsearch for alerts from the last hour
curl -s 'http://rgz-elasticsearch:9200/suricata-anomaly-*/_search' \
-H 'Content-Type: application/json' \
-d '{
"query": {
"range": {
"@timestamp": {"gte": "now-1h"}
}
},
"aggs": {
"by_severity": {"terms": {"field": "alert.severity"}}
}
}' | jq .
# Kibana anomalies dashboard
curl -H "Authorization: Bearer ${KIBANA_API_TOKEN}" \
http://kibana-rgz:5601/api/saved_objects/dashboard/anomaly_overview
# Simulate a port-scan alert (test)
docker exec rgz-ids bash -c '
for port in {1000..1050}; do
timeout 1 bash -c "echo > /dev/tcp/192.168.1.1/$port" 2>/dev/null &
done
wait
'
echo "Scan simulation complete"
# Query anomaly Prometheus metrics
curl 'http://localhost:9090/api/v1/query?query=suricata_alert_total'
# Export alerts to CSV (from the PostgreSQL suricata_alerts history table)
docker exec rgz-db psql -U rgz -d rgz \
-c "COPY (SELECT timestamp, severity, src_ip, dest_ip, rule_sid, message \
FROM suricata_alerts WHERE timestamp > NOW() - INTERVAL '7 days') \
TO STDOUT CSV HEADER;" > anomalies_7days.csv
Implementation TODO
- [ ] Create Suricata rules file: config/suricata/rules/rgz_anomaly.rules (6 core rules)
- [ ] Configure Suricata EVE JSON output to /var/log/suricata/eve.json
- [ ] Create Logstash pipeline: parse EVE JSON (json filter) + GeoIP enrichment + aggregation
- [ ] PostgreSQL table suricata_alerts: alert storage for history
- [ ] Custom Prometheus exporter (Python): 6 main metrics
- [ ] Create AlertManager rules: classify by severity → SMS/email routing
- [ ] Implement API endpoints: GET /anomalies/current, GET /anomalies/{id}, PUT /anomalies/{id}/acknowledge
- [ ] Build Kibana dashboard: timeline, top sources, protocol breakdown, GeoIP map
- [ ] Integrate webhook Suricata→AlertManager→rgz-api (POST /api/v1/anomalies/webhook)
- [ ] Tests: generate anomalous traffic patterns (port scan, DNS tunnel, HTTP flood), verify detection
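For the test item on anomalous traffic, a flood generator can be made unit-testable by injecting the sender callable instead of hitting the network directly. A hypothetical sketch (function name and defaults are illustrative; the 120-request default is chosen to exceed the HTTP_FLOOD threshold of 100 req/min):

```python
def http_flood(send, target: str, count: int = 120) -> int:
    """Fire `count` GET-style requests at `target` via the injected
    `send` callable; returns the number of requests attempted.

    In an integration test, `send` would be a real HTTP client call;
    in a unit test it can be a simple recording stub.
    """
    sent = 0
    for _ in range(count):
        send(target)
        sent += 1
    return sent
```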
Last updated: 2026-02-21