Drop files. See what they have in common.

latent arranges your images, text, code, and PDFs in 3D based on what's in them. Things that are about the same topic cluster together. Runs entirely on your Mac; nothing leaves your device.

macOS 15+ · Apple Silicon · Free tier available

What it does

You give it files. It figures out what each one is about, then places them in a 3D space accordingly.

On-device processing

MobileCLIP runs locally on your Mac's Neural Engine. Your files never touch a server. No accounts; works offline.

Mix any file type

Drop images, text files, source code, and PDFs into the same space. The model understands all of them and places them relative to each other.

3D visualization

Embeddings are reduced to 3D and rendered with Metal. Rotate, zoom, pan around. Handles hundreds of items without breaking a sweat.

Drag and drop

Drop files onto the window. They show up in the visualization as embeddings are computed. No import wizard, no project setup.

Native macOS

SwiftUI app. Small download, opens fast. Looks and feels like the rest of your Mac.

Semantic clustering

Files about similar things end up near each other. Useful for spotting patterns you wouldn't notice otherwise, or just getting a feel for what's in a folder.

How it works

1

Drop your files

Drag anything onto the window — images, text, code, PDFs. Mix types freely.

2

Embeddings are computed

MobileCLIP processes each file on-device and generates a 512-dimensional vector. This takes a few seconds depending on how many files you add.

3

Explore the result

Vectors are reduced to 3D and rendered in real time. Orbit around, zoom in on clusters, click items to inspect them.
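latent's exact reduction method isn't stated, but the idea of collapsing a high-dimensional vector to a 3D point can be sketched with one common technique, random projection. Everything below (the `Point3D` type, the function names) is illustrative, not latent's actual code:

```swift
import Foundation

// A minimal sketch of dimensionality reduction via random projection:
// pick three random direction vectors in the embedding space, then
// take the dot product of each embedding with each direction.
// Illustrative only — latent's real reduction method is unspecified.
struct Point3D {
    let x, y, z: Double
}

func randomAxes(dim: Int) -> [[Double]] {
    // Three random direction vectors, one per output coordinate.
    (0..<3).map { _ in (0..<dim).map { _ in Double.random(in: -1...1) } }
}

func project(_ embedding: [Double], using axes: [[Double]]) -> Point3D {
    // Dot product of the embedding with each axis gives one coordinate.
    let coords = axes.map { axis in
        zip(embedding, axis).map(*).reduce(0, +)
    }
    return Point3D(x: coords[0], y: coords[1], z: coords[2])
}

// A toy 512-dimensional embedding becomes a single 3D point.
let axes = randomAxes(dim: 512)
let embedding = (0..<512).map { _ in Double.random(in: -1...1) }
let point = project(embedding, using: axes)
```

Random projection roughly preserves distances, so vectors that were close in 512 dimensions tend to land close in 3D — which is all the visualization needs.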

Pricing

Free tier has no time limit. Pro is a one-time purchase. No subscription, because there's nothing running on our end.

Free

$0
No time limit
  • Up to 300 items
  • Images, text, code, PDFs
  • 3D visualization
  • On-device processing

Pro

$9
One-time purchase, includes updates
  • Unlimited items
  • Everything in Free

Questions

What are embeddings, in plain terms?

An embedding is a list of numbers that represents what a file "means." A photo of a dog and the text "golden retriever" would have similar numbers, so they'd appear near each other in the visualization. The model we use (MobileCLIP) understands both images and text.
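"Similar numbers" is usually measured with cosine similarity: vectors pointing the same way score near 1, unrelated ones near 0. A toy sketch of the idea, with made-up 4-dimensional vectors standing in for real 512-dimensional embeddings:

```swift
import Foundation

// Cosine similarity between two vectors: ~1 means similar meaning,
// ~0 means unrelated. Illustrative only — not latent's actual code.
func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).map(*).reduce(0, +)
    let normA = sqrt(a.map { $0 * $0 }.reduce(0, +))
    let normB = sqrt(b.map { $0 * $0 }.reduce(0, +))
    return dot / (normA * normB)
}

// Toy 4-dimensional "embeddings" (real ones are 512-dimensional).
let dogPhoto      = [0.9, 0.1, 0.3, 0.0]
let retrieverText = [0.8, 0.2, 0.4, 0.1]
let taxReturnPDF  = [0.0, 0.9, 0.0, 0.8]

// The photo and the dog text score high; the tax PDF doesn't.
print(cosineSimilarity(dogPhoto, retrieverText)) // close to 1
print(cosineSimilarity(dogPhoto, taxReturnPDF))  // much lower
```

High similarity is what puts two files near each other in the 3D space.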

Is my data actually private?

Yes. The ML model runs on your Mac. Nothing is sent anywhere. There's no server, no analytics, no telemetry. Works fully offline.

How many files can it handle?

The free tier supports 300 items. Pro is unlimited, though performance depends on your hardware. We've tested with 1,500+ items without issues.

What file types are supported?

Images (JPEG, PNG, WebP), plain text, source code, and PDFs. You can mix them all in the same space. MobileCLIP handles both visual and text content.

Why a one-time purchase instead of a subscription?

Everything runs on your Mac, so there's no ongoing cost on our side. If we add cloud features later, that'd be separate.

What macOS version do I need?

macOS 15 (Sequoia) or later, on Apple Silicon. Intel Macs aren't supported. The Neural Engine is what makes on-device inference fast enough to actually use.

Try it now

Free on the Mac App Store. No account required.

Download on the App Store