Any AI agent can analyze, fingerprint, and monitor video streams. Rust-powered. WASM-compiled. MCP-native.
MCP tools, CLI pipelines, WASM modules, or Python bindings. Give any AI agent video superpowers.
8 tools for Claude, GPT, or any MCP-compatible agent. Analyze, fingerprint, QC, and monitor streams.
14 subcommands. Pipe into any agentic workflow. JSON output for structured reasoning.
Run video analysis in-browser. No server round-trip. Perfect for multi-modal AI applications.
PyO3-powered. Drop into LangChain, CrewAI, or any Python agent framework.
7 independent crates. Import the full stack or cherry-pick modules. Zero-copy, zero-alloc hot paths.
Deploy as serverless video processing. WASM for Cloudflare Workers, native for Lambda/Cloud Run.
Video intelligence primitives that agents use to see, hear, and reason about media at scale.
Agents analyze live streams in real time — fingerprint audio, detect quality anomalies, and flag policy violations before they reach viewers.
Agents fingerprint audio libraries, auto-tag genre and mood, and build searchable indexes across millions of media files.
Continuous quality assurance — agents monitor bitrate ladders, segment accessibility, DRM status, and caption completeness 24/7.
Agents select optimal encoding presets per-device, generate adaptive HLS/DASH ladders, and validate output automatically.
Give vision-language models structured video metadata. Stream analysis, codec details, and quality metrics feed into reasoning chains.
Build Mux-like video APIs backed by Kino. WASM for edge processing, Rust for throughput, MCP for agent orchestration.
Video is the last unstructured frontier. Kino gives agents the primitives to reason about it natively.
8 structured tools that any MCP-compatible agent can call. JSON in, JSON out. No wrapper scripts, no prompt hacking.
4ms startup. Zero-copy parsing. Process thousands of streams concurrently without the memory leaks that crash Python pipelines.
TLA+ specifications with 25+ invariants. When agents orchestrate video infrastructure, correctness is not optional.
WASM for Cloudflare Workers, native binary for Lambda, Python bindings for notebooks, MCP for agents. Same engine everywhere.
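On the wire, an MCP tool call is ordinary JSON-RPC, so any MCP-compatible client can drive Kino without wrapper code. The tool name and arguments below are illustrative (Kino's actual tool names may differ — consult the MCP server's tool listing):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "analyze_stream",
    "arguments": { "url": "https://stream.example.com/live.m3u8" }
  }
}
```

The response comes back as structured JSON the agent can feed directly into its reasoning chain.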
| Capability | Kino | ffprobe / ffmpeg | Cloud APIs |
|---|---|---|---|
| Agent Interface (MCP) | Native | None | REST only |
| Audio Fingerprinting | Built-in | External | Separate service |
| Stream QC / Validation | 14 checks | Manual | Limited |
| Edge Deployment (WASM) | Native target | N/A | Server only |
| Structured JSON Output | All tools | Partial | Yes |
| Memory Safety | Rust-native | C (unsafe) | Managed |
| Formal Verification | TLA+ specs | None | None |
| Self-Hosted / Air-Gap | Single binary | Yes | Cloud-only |
Independent Rust crates. Import the full stack or cherry-pick modules.
Production-grade features. No plugins, no third-party runtime.
VA-API, VideoToolbox, NVDEC, D3D11VA. Auto-detects the best backend on every platform.
HLS and DASH with hybrid ABR. Buffer-aware quality switching. Zero-stall playback.
Shazam-style spectral peak matching. Content ID, duplicate detection, similarity scoring.
Full player compiled to WASM. One script tag, adaptive streaming. No install needed.
Widevine, FairPlay, PlayReady integration. Content protection for commercial deployments.
TLA+ specs for critical paths. 25+ invariants including NoDoubleCharge and ExactlyOnceDelivery.
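The buffer-aware quality switching mentioned above can be sketched as a pure function: pick the highest bitrate rung the measured throughput can sustain, and get more conservative as the buffer drains. This is an illustrative sketch in Python, not Kino's actual hybrid ABR algorithm; the function name, parameters, and thresholds are assumptions.

```python
def select_level(levels_kbps, buffer_s, throughput_kbps,
                 safety=0.8, low_buffer_s=10.0):
    """Pick the highest sustainable bitrate (illustrative, not Kino's algorithm).

    `safety` leaves headroom below measured throughput; when the buffer
    falls under `low_buffer_s` seconds, the budget is scaled down further
    so playback never stalls waiting on an over-ambitious rung.
    """
    budget = throughput_kbps * safety
    if buffer_s < low_buffer_s:
        budget *= buffer_s / low_buffer_s  # shrink budget as buffer drains
    eligible = [b for b in levels_kbps if b <= budget]
    # Fall back to the lowest rung if nothing fits the budget
    return max(eligible) if eligible else min(levels_kbps)

ladder = [800, 1500, 3000, 6000]
print(select_level(ladder, buffer_s=30, throughput_kbps=5000))  # healthy buffer: 3000
print(select_level(ladder, buffer_s=2, throughput_kbps=5000))   # nearly empty: 800
```

The design point is that the two signals are combined: throughput alone overshoots on bursty networks, buffer level alone reacts too late.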
Rust, Python, JavaScript, or CLI. Pick the interface that fits.
```rust
use kino_core::{PlayerConfig, PlayerSession};
use kino_desktop::{DesktopPlayer, DesktopPlayerConfig};

fn main() -> anyhow::Result<()> {
    // Hardware acceleration is automatic
    let mut player = DesktopPlayer::new(DesktopPlayerConfig::default())?;

    // Load HLS stream
    player.load("https://stream.example.com/live.m3u8")?;
    player.play();

    // Quality metrics are always available
    let metrics = player.quality_metrics();
    println!("Resolution: {:?}", metrics.resolution);
    Ok(())
}
```
```python
from kino_frequency import FrequencyAnalyzer, Fingerprinter

# Analyze audio content
analyzer = FrequencyAnalyzer(fft_size=2048)
result = analyzer.analyze_file("content.wav")
print(f"Dominant: {result.dominant_hz} Hz")
print(f"Genre: {result.tags.genre}")

# Audio fingerprinting
fp = Fingerprinter()
fp_hash = fp.fingerprint("content.wav")  # renamed to avoid shadowing builtin `hash`
score = fp.compare("a.wav", "b.wav")
print(f"Similarity: {score:.1%}")
```
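For intuition on what `Fingerprinter` is doing internally: Shazam-style fingerprinting extracts the strongest spectral peaks per STFT frame, then hashes pairs of peaks into a compact, noise-robust signature. A minimal sketch of the peak-extraction stage (illustrative only — not Kino's implementation; function name and parameters are assumptions):

```python
import numpy as np

def spectral_peaks(signal, fft_size=2048, hop=1024, top_k=5):
    """Return (frame_index, frequency_bin) pairs for the strongest
    spectral peaks in each frame -- the raw material for a
    Shazam-style fingerprint."""
    window = np.hanning(fft_size)
    peaks = []
    for i, start in enumerate(range(0, len(signal) - fft_size, hop)):
        spectrum = np.abs(np.fft.rfft(signal[start:start + fft_size] * window))
        # Keep the top_k strongest bins in this frame
        for b in np.argsort(spectrum)[-top_k:]:
            peaks.append((i, int(b)))
    return peaks

# A 440 Hz tone at 44.1 kHz should peak near bin 440 * 2048 / 44100 (about 20)
t = np.arange(44100) / 44100
tone = np.sin(2 * np.pi * 440 * t)
bins = {b for _, b in spectral_peaks(tone)}
```

A production fingerprinter then pairs nearby peaks into (f1, f2, Δt) hashes, which is what makes matching robust to noise and time offsets.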
```javascript
import init, { KinoAbrController } from '@kino/wasm';

await init();
const abr = new KinoAbrController();

// Drop-in ABR for hls.js
hls.on(Hls.Events.FRAG_LOADED, (e, data) => {
  abr.record_download(data.frag.loaded, data.frag.duration);
  const level = abr.select_level(hls.levels, videoEl.buffered, bandwidth);
  hls.nextLoadLevel = level;
});
```
```shell
# Fingerprint a video file
$ kino fingerprint video.mp4
Hash: a7f3b2c9e1d045...
Peaks: 1,247

# Analyze audio frequencies
$ kino analyze audio.wav --format json
{ "dominant_hz": 440.0, "genre": "electronic", "bpm": 128 }

# Transcode with Kino presets
$ kino encode input.mov --preset hls-adaptive
Encoding: 1080p, 720p, 480p
Output: ./output/stream.m3u8
Done in 4.2s (hw accelerated)
```
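Because every subcommand can emit JSON, an agent consumes the output with ordinary structured parsing instead of scraping text. A minimal sketch — the JSON shape is taken from the `kino analyze` example above; in practice an agent would capture it from the subprocess's stdout:

```python
import json

# JSON as printed by `kino analyze audio.wav --format json` above;
# an agent would normally read this from the subprocess's stdout.
raw = '{"dominant_hz": 440.0, "genre": "electronic", "bpm": 128}'

report = json.loads(raw)
# Branch on structured fields instead of pattern-matching log text
if report["bpm"] >= 120 and report["genre"] == "electronic":
    print(f"High-tempo electronic track, dominant {report['dominant_hz']:.0f} Hz")
```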
Core SDK is MIT/Apache-2.0. Commercial modules for teams that need DRM, SLA, and dedicated support.
Install the MCP server and your AI agents can analyze any video stream in seconds.
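With a desktop MCP client, the server is typically registered in the client's JSON configuration under `mcpServers`. The command and arguments below are assumptions about how the Kino MCP server is launched — check the installation docs for the exact invocation:

```json
{
  "mcpServers": {
    "kino": {
      "command": "kino",
      "args": ["mcp", "serve"]
    }
  }
}
```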