CZAI
27,898 imgs · 450 TN

Edge AI · Triton inference insights

SPARROW runs MegaDetector + a species classifier on-device via NVIDIA Triton on a Pi-class compute board. This page surfaces where the edge models are weakest, where they're slowest, and which cameras' labels human curators should review first. Detector: md_v5b.0.0.pt · classifier: speciesnet-east-africa-v1 · format v1.5.
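A minimal sketch of what one detection request to the on-device Triton server might look like. The model name (`md_v5b`), tensor names (`images`, `output0`), and 640 px input size are assumptions for illustration; SPARROW's actual model configuration is not shown on this page.

```python
import numpy as np

def preprocess(img: np.ndarray, size: int = 640) -> np.ndarray:
    """Resize an HWC uint8 frame to the square CHW float batch a
    MegaDetector-style YOLO model expects. Nearest-neighbour resize
    keeps this numpy-only; real deployments would use letterboxing."""
    h, w, _ = img.shape
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    resized = img[ys][:, xs]                       # (size, size, 3) uint8
    chw = resized.transpose(2, 0, 1).astype(np.float32) / 255.0
    return chw[None, ...]                          # (1, 3, size, size)

def detect(batch: np.ndarray, url: str = "localhost:8000"):
    """Send one preprocessed batch to a running Triton server.
    Model and tensor names below are assumptions, not SPARROW's config."""
    import tritonclient.http as httpclient          # needs a live server
    client = httpclient.InferenceServerClient(url=url)
    inp = httpclient.InferInput("images", list(batch.shape), "FP32")
    inp.set_data_from_numpy(batch)
    out = httpclient.InferRequestedOutput("output0")
    result = client.infer(model_name="md_v5b", inputs=[inp], outputs=[out])
    return result.as_numpy("output0")
```

The detector's raw output would then be passed to the species classifier head; only detections crossing the animal-detection threshold count toward the stats below.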

Animal detections: 21,329 across 16 cameras
Low-confidence: 11.9% (< 0.6 threshold)
Latency p50: 78 ms (median across cameras)
Latency p95: 107 ms (median p95)
Slowest p95: 108 ms (UG-BFE-CAM-02)

Overall confidence distribution

Histogram of top-1 species-classifier confidence across every animal detection. Bars left of the dashed line are flagged low-confidence and feed the curator queue.
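The flagging rule described above is just a threshold on top-1 confidence; a minimal sketch, using the page's 0.6 floor (the 0.05 bin width is an assumption):

```python
import numpy as np

LOW_CONF = 0.6  # the dashed line on the histogram

def flag_and_bin(confidences, bin_width=0.05):
    """Return (low_conf_mask, histogram) for top-1 classifier confidences."""
    c = np.asarray(confidences, dtype=float)
    low = c < LOW_CONF                              # feeds the curator queue
    edges = np.arange(0.0, 1.0 + bin_width, bin_width)
    hist, _ = np.histogram(c, bins=edges)
    return low, hist

low, hist = flag_and_bin([0.31, 0.58, 0.72, 0.91])
print(low.sum())  # → 2 detections below the 0.6 floor
```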

Confidence by species

Sorted most-confident first. These are the species in the classifier's label set; the bottom of the list is where label verification matters most.
Hippopotamus
μ 0.77 · 758 dets
low-conf: 12.1%
Bushpig
μ 0.77 · 1269 dets
low-conf: 10.6%
Giant pangolin
μ 0.77 · 636 dets
low-conf: 11.9%
Black-and-white colobus
μ 0.77 · 1122 dets
low-conf: 11.9%
African civet
μ 0.76 · 1893 dets
low-conf: 10.9%
Waterbuck
μ 0.76 · 918 dets
low-conf: 10.9%
Bushbuck
μ 0.76 · 1481 dets
low-conf: 12.0%
Aardvark
μ 0.76 · 1674 dets
low-conf: 10.9%
African buffalo
μ 0.76 · 1381 dets
low-conf: 12.3%
Olive baboon
μ 0.76 · 2201 dets
low-conf: 11.5%
Common warthog
μ 0.76 · 2002 dets
low-conf: 12.2%
Crested porcupine
μ 0.76 · 935 dets
low-conf: 11.9%
Common duiker
μ 0.76 · 992 dets
low-conf: 12.7%
African elephant
μ 0.76 · 778 dets
low-conf: 13.4%
Spotted hyena
μ 0.76 · 1516 dets
low-conf: 12.4%
Leopard
μ 0.75 · 698 dets
low-conf: 13.8%
Common genet
μ 0.75 · 422 dets
low-conf: 13.5%
Honey badger
μ 0.75 · 653 dets
low-conf: 12.1%
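The per-species rows above can be derived with a simple group-by over (species, confidence) pairs; a sketch, assuming a flat list of detections:

```python
from collections import defaultdict

LOW_CONF = 0.6

def species_stats(detections):
    """detections: iterable of (species, top1_confidence) pairs.
    Returns rows of (species, mean_conf, n_dets, low_conf_pct),
    sorted most-confident first like the list above."""
    groups = defaultdict(list)
    for species, conf in detections:
        groups[species].append(conf)
    rows = []
    for species, confs in groups.items():
        n = len(confs)
        mean = sum(confs) / n
        low_pct = 100.0 * sum(c < LOW_CONF for c in confs) / n
        rows.append((species, round(mean, 2), n, round(low_pct, 1)))
    rows.sort(key=lambda r: r[1], reverse=True)
    return rows
```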

Per-camera inference latency

Hourly Triton p50 samples from system_metrics, aggregated to per-camera percentiles. The top of the table (highest p95) often correlates with thermal throttling or a failing SD card.

| Camera | mean (ms) | p50 (ms) | p90 (ms) | p95 (ms) | max (ms) |
|---|---:|---:|---:|---:|---:|
| UG-BFE-CAM-02 | 77.7 | 77.7 | 104.6 | 107.8 | 109.9 |
| UG-BFE-CAM-05 | 78.1 | 78.9 | 104.1 | 107.5 | 110.0 |
| UG-ZWR-CAM-05 | 76.9 | 75.2 | 104.6 | 107.3 | 110.0 |
| UG-MFR-CAM-03 | 78.9 | 79.8 | 104.7 | 107.3 | 110.0 |
| UG-ZWR-CAM-01 | 77.9 | 78.4 | 103.0 | 106.9 | 110.0 |
| UG-MFR-CAM-01 | 78.6 | 79.0 | 104.0 | 106.9 | 110.0 |
| UG-MFR-CAM-02 | 78.0 | 78.5 | 103.8 | 106.9 | 109.8 |
| UG-MFR-CAM-04 | 77.1 | 76.8 | 103.5 | 106.9 | 109.9 |
| UG-ZWR-CAM-03 | 77.2 | 76.5 | 103.2 | 106.8 | 109.9 |
| UG-BFE-CAM-03 | 77.5 | 77.3 | 103.3 | 106.7 | 109.5 |
| UG-MFR-CAM-05 | 77.0 | 78.2 | 104.1 | 106.7 | 109.8 |
| UG-ZWR-CAM-06 | 77.4 | 78.2 | 102.4 | 106.3 | 109.9 |
| UG-ZWR-CAM-04 | 77.1 | 76.9 | 103.5 | 106.2 | 109.9 |
| UG-ZWR-CAM-02 | 77.3 | 77.9 | 103.1 | 105.8 | 109.9 |
| UG-BFE-CAM-04 | 77.0 | 77.9 | 101.9 | 105.7 | 109.9 |
| UG-BFE-CAM-01 | 77.7 | 80.2 | 102.5 | 105.5 | 109.9 |
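The per-camera rows can be reproduced by reducing the hourly p50 samples to percentiles; a sketch, assuming a system_metrics export keyed by camera (the camera names in the usage below are placeholders):

```python
import numpy as np

def latency_table(samples):
    """samples: dict mapping camera -> list of hourly latency samples (ms).
    Returns rows of (camera, mean, p50, p90, p95, max), sorted by p95
    descending so the slowest cameras surface first."""
    rows = []
    for cam, xs in samples.items():
        xs = np.asarray(xs, dtype=float)
        rows.append((
            cam,
            round(xs.mean(), 1),
            round(np.percentile(xs, 50), 1),
            round(np.percentile(xs, 90), 1),
            round(np.percentile(xs, 95), 1),
            round(xs.max(), 1),
        ))
    rows.sort(key=lambda r: r[4], reverse=True)
    return rows
```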

Model uncertainty heatmap — verify these cameras first

% of detections at each camera that fell below the 0.6 confidence floor. Sorted desc — the top entries are where human-in-the-loop verification has the highest yield.

| Camera | Site | Total | Low-conf | % |
|---|---|---:|---:|---:|
| UG-ZWR-CAM-05 | UG-ZWR | 1,278 | 169 | 13.2% |
| UG-MFR-CAM-02 | UG-MFR | 1,264 | 164 | 13.0% |
| UG-BFE-CAM-04 | UG-BFE | 1,329 | 165 | 12.4% |
| UG-MFR-CAM-03 | UG-MFR | 1,448 | 179 | 12.4% |
| UG-MFR-CAM-05 | UG-MFR | 1,490 | 183 | 12.3% |
| UG-BFE-CAM-02 | UG-BFE | 1,209 | 147 | 12.2% |
| UG-ZWR-CAM-03 | UG-ZWR | 1,457 | 177 | 12.1% |
| UG-ZWR-CAM-06 | UG-ZWR | 1,084 | 131 | 12.1% |
| UG-BFE-CAM-03 | UG-BFE | 1,403 | 168 | 12.0% |
| UG-BFE-CAM-01 | UG-BFE | 1,098 | 128 | 11.7% |
| UG-ZWR-CAM-04 | UG-ZWR | 1,485 | 173 | 11.6% |
| UG-ZWR-CAM-01 | UG-ZWR | 1,391 | 160 | 11.5% |
| UG-MFR-CAM-04 | UG-MFR | 1,343 | 155 | 11.5% |
| UG-ZWR-CAM-02 | UG-ZWR | 1,442 | 157 | 10.9% |
| UG-MFR-CAM-01 | UG-MFR | 1,327 | 142 | 10.7% |
| UG-BFE-CAM-05 | UG-BFE | 1,281 | 134 | 10.5% |
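The ranking above is a straightforward sort on the low-confidence fraction; a sketch, assuming per-camera (total, low-conf) counts like those in the table:

```python
def camera_uncertainty(rows):
    """rows: iterable of (camera, site, total_dets, low_conf_count).
    Returns (camera, site, total, low, pct) sorted descending by pct,
    so the highest-yield cameras for human review come first."""
    out = [(cam, site, n, low, round(100.0 * low / n, 1))
           for cam, site, n, low in rows]
    out.sort(key=lambda r: r[4], reverse=True)
    return out
```

Running it on the first and last rows of the table reproduces their 13.2% and 10.5% figures.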

Next steps

  • Wire the top of the uncertainty table into a verification queue — a keyboard-driven review UI that shows the bbox-cropped image and lets the curator accept / reject / relabel. The verified subset becomes the next round's training data.
  • If a single camera consistently produces high p95 latency (≥ 110 ms), schedule a maintenance visit; the SoC is likely throttling or the SD card is failing.
  • If a single species sits at the bottom of the confidence-by-species list, it's a candidate for a targeted fine-tune on programme-collected images.
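The latency check in the second bullet is easy to automate; a sketch, assuming per-camera p95 values like those in the latency table (the 110 ms cutoff comes from the bullet; the second camera name below is a hypothetical example):

```python
P95_ALERT_MS = 110.0  # maintenance threshold from the note above

def needs_maintenance(p95_by_camera):
    """Return cameras whose p95 latency meets the alert threshold,
    worst first: candidates for thermal-throttling or SD-card checks."""
    hot = [(cam, p95) for cam, p95 in p95_by_camera.items()
           if p95 >= P95_ALERT_MS]
    return sorted(hot, key=lambda kv: kv[1], reverse=True)

# Hypothetical readings: only the second camera trips the alert.
print(needs_maintenance({"UG-BFE-CAM-02": 107.8, "UG-HYP-CAM-99": 112.3}))
```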