MCP-Native · Rust SDK · WASM-Compiled

Video Intelligence
for AI Agents

Any AI agent can analyze, fingerprint, and monitor video streams. Rust-powered. WASM-compiled. MCP-native.

Integration Surfaces
One SDK, every agent

MCP tools, CLI pipelines, WASM modules, or Python bindings. Give any AI agent video superpowers.

🤖
MCP Server

8 tools for Claude, GPT, or any MCP-compatible agent. Analyze, fingerprint, QC, and monitor streams.

CLI Pipelines

14 subcommands. Pipe into any agentic workflow. JSON output for structured reasoning.
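The JSON output makes CLI results directly consumable by an agent. A minimal sketch, using the sample payload from the `kino analyze --format json` example later on this page (field names are illustrative):

```python
import json

# Sample output shape from `kino analyze audio.wav --format json`
# (taken from the CLI example elsewhere on this page).
raw = '{ "dominant_hz": 440.0, "genre": "electronic", "bpm": 128 }'

analysis = json.loads(raw)

# An agent branches on structured fields instead of scraping terminal text.
if analysis["bpm"] >= 120:
    verdict = f'high-energy {analysis["genre"]} track'
else:
    verdict = f'low-tempo {analysis["genre"]} track'

print(verdict)  # high-energy electronic track
```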

🧩
WASM Module

Run video analysis in-browser. No server round-trip. Perfect for multi-modal AI applications.

🐍
Python Bindings

PyO3-powered. Drop into LangChain, CrewAI, or any Python agent framework.

🦀
Rust SDK

7 independent crates. Import the full stack or cherry-pick modules. Zero-copy, zero-alloc hot paths.

Cloud Functions

Deploy as serverless video processing. WASM for Cloudflare Workers, native for Lambda/Cloud Run.

Agent Use Cases
What AI agents build with Kino

Video intelligence primitives that agents use to see, hear, and reason about media at scale.

Content Safety
Agent-Powered Content Moderation

Agents analyze live streams in real time — fingerprint audio, detect quality anomalies, and flag policy violations before they reach viewers.

Real-time analysis · MCP-native
Media Intelligence
Autonomous Media Indexing

Agents fingerprint audio libraries, auto-tag genre and mood, and build searchable indexes across millions of media files.

Spectral fingerprinting · Sub-second
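The indexing pattern is simple once fingerprints exist: map each hash to the files that produced it. A minimal sketch, assuming hex-digest fingerprints like those printed by `kino fingerprint` (the hashes below are made up):

```python
from collections import defaultdict

# Hypothetical library of files and their audio fingerprints.
library = {
    "intro.wav": "a7f3b2c9",
    "outro.wav": "a7f3b2c9",  # duplicate content, same fingerprint
    "theme.wav": "5d0e114f",
}

# Invert into a searchable index: fingerprint -> files.
index = defaultdict(list)
for path, fp in library.items():
    index[fp].append(path)

# Duplicate detection: any fingerprint mapped to more than one file.
duplicates = {fp: paths for fp, paths in index.items() if len(paths) > 1}
print(duplicates)
```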
Reliability
QA Agents for Live Streams

Continuous quality assurance — agents monitor bitrate ladders, segment accessibility, DRM status, and caption completeness 24/7.

25+ QC checks · Zero downtime
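In shape, a QC pass is a list of predicates over a parsed manifest. A hedged sketch (the check names echo the categories above, but the manifest fields are illustrative, not Kino's schema):

```python
# Hypothetical parsed stream manifest.
manifest = {
    "bitrate_ladder_kbps": [4500, 2500, 1200, 600],
    "segments_reachable": True,
    "drm": "widevine",
    "captions": [],
}

def run_qc(m):
    """Return the list of failed checks for a manifest."""
    issues = []
    ladder = m["bitrate_ladder_kbps"]
    # Ladder should be listed highest-first.
    if ladder != sorted(ladder, reverse=True):
        issues.append("bitrate ladder out of order")
    if not m["segments_reachable"]:
        issues.append("segments unreachable")
    if not m["captions"]:
        issues.append("captions missing")
    return issues

print(run_qc(manifest))  # ['captions missing']
```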
Optimization
Transcoding Pipeline Agents

Agents select optimal encoding presets per device, generate adaptive HLS/DASH ladders, and validate output automatically.

Mobile / Desktop / 4K presets
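Ladder generation can be sketched as filtering a preset's renditions against the source resolution. The preset names and bitrate values below are illustrative, not Kino's actual presets:

```python
# Hypothetical presets: (width, height, bitrate_kbps) per rendition.
PRESETS = {
    "mobile":  [(640, 360, 600)],
    "desktop": [(1280, 720, 2500), (854, 480, 1200)],
    "4k":      [(3840, 2160, 16000), (1920, 1080, 4500)],
}

def build_ladder(source_height, preset):
    # Only emit renditions at or below the source resolution (no upscaling).
    return [(w, h, kbps) for (w, h, kbps) in PRESETS[preset] if h <= source_height]

# A 1080p source fed to the "4k" preset drops the 2160p rung.
print(build_ladder(1080, "4k"))  # [(1920, 1080, 4500)]
```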
Multi-modal AI
Multi-Modal AI Applications

Give vision-language models structured video metadata. Stream analysis, codec details, and quality metrics feed into reasoning chains.

Structured JSON · Agent-ready
Platform Play
Video Infrastructure APIs

Build Mux-like video APIs backed by Kino. WASM for edge processing, Rust for throughput, MCP for agent orchestration.

WebAssembly · Edge-native
Why Kino
Why agents choose Kino

Video is the last unstructured frontier. Kino gives agents the primitives to reason about it natively.

🤖

MCP-Native Tool Interface

8 structured tools that any MCP-compatible agent can call. JSON in, JSON out. No wrapper scripts, no prompt hacking.
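"JSON in, JSON out" means an agent's tool call is just a structured payload. A sketch of the MCP `tools/call` request shape; the tool name and arguments here are hypothetical, so consult the kino-mcp tool list for the real schema:

```python
import json

# Illustrative MCP tool-call request (tool name/arguments are hypothetical).
request = {
    "method": "tools/call",
    "params": {
        "name": "analyze_stream",
        "arguments": {"url": "https://stream.example.com/live.m3u8"},
    },
}

# The agent serializes this, sends it to the server, and receives
# structured JSON back — no wrapper scripts, no prompt hacking.
payload = json.dumps(request)
decoded = json.loads(payload)
print(decoded["params"]["name"])
```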

Rust Performance, Agent Scale

4ms startup. Zero-copy parsing. Process thousands of streams concurrently without the memory leaks that crash Python pipelines.

Formally Verified State Machines

TLA+ specifications with 25+ invariants. When agents orchestrate video infrastructure, correctness is not optional.

🧩

Run Anywhere: Edge to Cloud

WASM for Cloudflare Workers, native binary for Lambda, Python bindings for notebooks, MCP for agents. Same engine everywhere.

| Capability              | Kino          | ffprobe / ffmpeg | Cloud APIs       |
|-------------------------|---------------|------------------|------------------|
| Agent Interface (MCP)   | Native        | None             | REST only        |
| Audio Fingerprinting    | Built-in      | External         | Separate service |
| Stream QC / Validation  | 14 checks     | Manual           | Limited          |
| Edge Deployment (WASM)  | Native target | N/A              | Server only      |
| Structured JSON Output  | All tools     | Partial          | Yes              |
| Memory Safety           | Rust-native   | C (unsafe)       | Managed          |
| Formal Verification     | TLA+ specs    | None             | None             |
| Self-Hosted / Air-Gap   | Single binary | Yes              | Cloud-only       |
Architecture
Seven crates. Use what you need.

Independent Rust crates. Import the full stack or cherry-pick modules.

Agent Layer
  kino-mcp · 8 MCP tools for AI agents
Interfaces
  kino-cli · 14 subcommands
  kino-wasm · Browser / Edge
  kino-python · PyO3 bindings
  kino-tauri · Desktop GUI
Core Engine
  kino-core · HLS / DASH / ABR / Buffers / DRM
  kino-frequency · Fingerprint / Autotag / FFT
Capabilities
Everything a video engine needs

Production-grade features. No plugins, no third-party runtime.

Hardware Acceleration

VA-API, VideoToolbox, NVDEC, D3D11VA. Auto-detects the best backend on every platform.

📡
Adaptive Streaming

HLS and DASH with hybrid ABR. Buffer-aware quality switching. Zero-stall playback.
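Buffer-aware switching means the bitrate budget tightens as the buffer shrinks. A minimal sketch of the idea, not Kino's actual hybrid algorithm:

```python
def select_level(levels_kbps, bandwidth_kbps, buffer_s):
    """Pick the highest rendition that fits a buffer-adjusted bandwidth budget."""
    # Spend less of the measured bandwidth when the buffer is near a stall.
    headroom = 0.8 if buffer_s >= 10 else 0.5
    budget = bandwidth_kbps * headroom
    candidates = [b for b in levels_kbps if b <= budget]
    # Fall back to the lowest rendition if nothing fits.
    return max(candidates) if candidates else min(levels_kbps)

levels = [600, 1200, 2500, 4500]
print(select_level(levels, 4000, buffer_s=15))  # 2500
print(select_level(levels, 4000, buffer_s=3))   # 1200
```

Same bandwidth, different buffer depth, different rendition: that is the buffer-aware part.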

🎵
Audio Fingerprinting

Shazam-style spectral peak matching. Content ID, duplicate detection, similarity scoring.
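The Shazam-style idea in miniature: transform a window of audio to the frequency domain, keep the strongest bins, and hash the peak list. A toy sketch using a naive DFT, not Kino's implementation:

```python
import math

def dft_magnitudes(samples):
    """Naive DFT magnitude spectrum (first half of the bins)."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

def fingerprint(samples, n_peaks=2):
    """Hash the indices of the strongest frequency bins."""
    mags = dft_magnitudes(samples)
    peaks = sorted(sorted(range(len(mags)), key=lambda k: -mags[k])[:n_peaks])
    return hash(tuple(peaks)), peaks

# 64-sample window containing tones at bins 4 and 9.
window = [math.sin(2 * math.pi * 4 * i / 64) + 0.5 * math.sin(2 * math.pi * 9 * i / 64)
          for i in range(64)]
h, peaks = fingerprint(window)
print(peaks)  # [4, 9]
```

Two files with the same spectral peaks produce the same hash, which is the basis for content ID and duplicate detection.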

🧩
WebAssembly

Full player compiled to WASM. One script tag, adaptive streaming. No install needed.

🔐
DRM Support

Widevine, FairPlay, PlayReady integration. Content protection for commercial deployments.

Formally Verified

TLA+ specs for critical paths. 25+ invariants including NoDoubleCharge and ExactlyOnceDelivery.

Integration
Works with your stack

Rust, Python, JavaScript, or CLI. Pick the interface that fits.

use kino_desktop::{DesktopPlayer, DesktopPlayerConfig};

fn main() -> anyhow::Result<()> {
    // Hardware acceleration is automatic
    let mut player = DesktopPlayer::new(
        DesktopPlayerConfig::default()
    )?;

    // Load HLS stream
    player.load("https://stream.example.com/live.m3u8")?;
    player.play();

    // Quality metrics are always available
    let metrics = player.quality_metrics();
    println!("Resolution: {:?}", metrics.resolution);
    Ok(())
}
from kino_frequency import FrequencyAnalyzer, Fingerprinter

# Analyze audio content
analyzer = FrequencyAnalyzer(fft_size=2048)
result = analyzer.analyze_file("content.wav")

print(f"Dominant: {result.dominant_hz} Hz")
print(f"Genre:    {result.tags.genre}")

# Audio fingerprinting
fp = Fingerprinter()
fp_hash = fp.fingerprint("content.wav")
score = fp.compare("a.wav", "b.wav")
print(f"Similarity: {score:.1%}")
import init, { KinoAbrController } from '@kino/wasm';

await init();
const abr = new KinoAbrController();

// Drop-in ABR for hls.js
hls.on(Hls.Events.FRAG_LOADED, (e, data) => {
  abr.record_download(data.frag.loaded, data.frag.duration);
  const level = abr.select_level(
    hls.levels, videoEl.buffered, hls.bandwidthEstimate
  );
  hls.nextLoadLevel = level;
});
# Fingerprint a video file
$ kino fingerprint video.mp4
Hash:  a7f3b2c9e1d045...
Peaks: 1,247

# Analyze audio frequencies
$ kino analyze audio.wav --format json
{ "dominant_hz": 440.0, "genre": "electronic", "bpm": 128 }

# Transcode with Kino presets
$ kino encode input.mov --preset hls-adaptive
Encoding: 1080p, 720p, 480p
Output:   ./output/stream.m3u8
Done in 4.2s (hw accelerated)
Performance
Numbers that matter
8
MCP tools
14
CLI subcommands
7
Rust crates
4ms
Startup latency
Licensing
Open core, commercial extensions

Core SDK is MIT/Apache-2.0. Commercial modules for teams that need DRM, SLA, and dedicated support.

Open Source
Community
Free forever
  • Full SDK + MCP server
  • All 7 crates + CLI
  • Audio fingerprinting
  • HLS/DASH analysis
  • WASM build target
  • MIT / Apache-2.0 license
Clone the Repo
Platform
Enterprise
Custom
  • Everything in Professional
  • Dedicated support engineer
  • Custom SLA (99.99%+)
  • Unlimited agent calls
  • All DRM: Widevine + FairPlay + PlayReady
  • White-label MCP server
  • On-prem / air-gapped deployment
Contact Us

Give your agents video intelligence

Install the MCP server and your AI agents can analyze any video stream in seconds.

$ npm install -g kino-mcp