the[yantr].in

Platform

We label your defects.
Train your model.
Run it on your line.

Or bring your own ONNX model — same platform. Single Jetson Orin at the edge. No cloud dependency, no vendor lock-in.

What it is

A turnkey vision-inference platform on a single Jetson Orin.

We label your defects, train a model on your data, and fine-tune it on your line — or you upload your own ONNX model. Either way, the platform handles ingest, GPU inference, on-screen overlay, structured event emission, and local archiving — in real time, on the factory floor, without internet.

Core features

Seven things every device gives you.

Models, labeled or BYO.

We label your defects, train, and fine-tune on your line — or you bring your own ONNX model. Detection, classification, semantic segmentation — all run as optimized TensorRT engines. One device can run one model across many cameras, or different tasks per camera.

Real-time annotated video.

Every camera gets a live RTSP feed with detection/classification overlays burned in. Plug it into existing IP-camera infrastructure or operator dashboards.

Open event stream.

Inference results flow to MQTT in a uniform, versioned JSON envelope. Fan out to your MES, SCADA, dashboards, or PLCs — pick your stack. We don't dictate.
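As illustration only, here is a minimal sketch of what such a versioned envelope could look like. The field names (`schema_version`, `camera_id`, `detections`) and values are assumptions for the example, not the platform's actual schema:

```python
import json
from datetime import datetime, timezone

def make_event(camera_id: str, model: str, detections: list) -> dict:
    """Assemble a hypothetical versioned event envelope (all field names illustrative)."""
    return {
        "schema_version": "1.0",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "camera_id": camera_id,
        "model": model,
        "detections": detections,
    }

# One detection event, serialized as it might be published to an MQTT topic.
event = make_event(
    "cam-03",
    "weld-defects-v2",
    [{"label": "porosity", "confidence": 0.93, "bbox": [412, 230, 58, 41]}],
)
payload = json.dumps(event)
```

A consumer on the MES/SCADA side subscribes to the topic, parses the JSON, and can use the version field to reject envelopes it doesn't understand.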

On-device archive.

Rolling encoded-video buffer plus structured event log on local disk. Configurable retention, optional async sync to S3/MinIO/NAS.

Web admin console.

Per-device login. Upload models, configure sources and sinks, watch live preview, hit "apply." All over a private mesh — no factory-network exposure.

Production observability.

Prometheus metrics, structured JSON logs, central Grafana dashboards. We see fleet health without exposing customer data.
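To show what a Prometheus scrape returns in general, here is the standard text exposition format rendered by hand; the metric and label names (`yantr_frames_processed_total`, `yantr_inference_latency_seconds`) are hypothetical, not the device's real metric set:

```python
def expose(name: str, labels: dict, value) -> str:
    """Render one sample in Prometheus text exposition format: name{labels} value."""
    pairs = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{pairs}}} {value}"

# Hypothetical per-camera samples a scraper might see on a /metrics endpoint.
lines = [
    expose("yantr_frames_processed_total", {"camera": "cam-03"}, 184223),
    expose("yantr_inference_latency_seconds", {"camera": "cam-03", "quantile": "0.99"}, 0.032),
]
```

Because the format is plain text over HTTP, any Prometheus server (and by extension Grafana) can scrape it without a proprietary agent.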

Industrial-grade reliability.

Bounded latency, explicit frame-drop policies, graceful degradation on camera failure, atomic config rollouts, no silent data loss.
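An "explicit frame-drop policy" can be as simple as a bounded buffer that discards the oldest frame instead of stalling the camera. A minimal sketch of the idea, not the platform's implementation:

```python
from collections import deque

# Bounded frame queue: when full, the oldest frame is evicted so
# end-to-end latency stays bounded instead of growing without limit.
frames = deque(maxlen=4)
for frame_id in range(10):   # camera delivers frames faster than inference drains them
    frames.append(frame_id)  # at capacity, deque drops the oldest entry automatically

# Only the newest four frames remain; frames 0-5 were deliberately dropped.
```

The key property is that the drop is deliberate and countable, rather than an unbounded queue silently inflating latency until the pipeline falls over.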

How we're different

Four ways we're not the other thing.

vs. cloud vision platforms

AWS Panorama, Azure Percept

Your data never leaves the factory. No internet dependency at inference time. No per-frame inference fees. No vendor-sunset risk — those services have a habit of being deprecated.

vs. raw DeepStream

DIY GStreamer pipelines

DeepStream is powerful but unforgiving — GStreamer expertise, hand-rolled config, custom C++ probes for anything beyond reference apps. We've hidden that surface behind an ONNX-upload UX. Your ML team trains; we own the plumbing.

vs. proprietary ML platforms

Cogniac, Landing AI

No model lock-in. We train your model — or you bring your own — and it ships as portable ONNX that you own. Your training data stays with you. No platform-locked weights, no per-prediction fees, no lifecycle dependency.

vs. SI custom builds

Bespoke integrator stacks

SI builds are bespoke per customer, expensive to maintain, and turn into legacy code the moment the original engineer leaves. Our platform is the same codebase on every site — when we ship a fix, every customer gets it.

Single-tenant by design.

No central control-plane fee. Each Orin is independent and customer-owned. We administer remotely via secure mesh networking when you want us to — and you can lock us out anytime.

Where it shines

Three workloads we see most often.

Quality control

Defect detection, pass/reject classification, defect-area segmentation for coverage metrics.

Safety & compliance

PPE detection, restricted-zone monitoring, intrusion alerting.

Inventory & process

Bin-level monitoring, throughput counting, assembly verification, tool-position tracking.

Where we're explicit about scope

What we don't do.

  • We don't keep your training data. After fine-tuning, weights, labels, and source data stay yours.

  • We don't replace your factory IT. We feed it.

  • We don't sell hardware. You buy the Jetson; we make it useful.

  • We don't run inference in the cloud. The work happens at the edge, where your cameras are.

Ready to evaluate?

A pilot runs on a single Jetson Orin Nano dev kit and your existing RTSP cameras. With your own ONNX model, you're live the same day. With ours — labeled, trained, and tuned to your line — typical pilots land in 1–2 weeks.