/// SYSTEM_INITIALIZATION Status: Standing By...
- OS: macOS / Linux / Windows (WSL2)
- PYTHON: v3.12
- AI ENGINE: Ollama (Local)
- MODEL: qwen2.5vl:3b
# Quick install (Homebrew)
brew install fixxer
ollama pull qwen2.5vl:3b

# Or install from source
cd fixxer
python3 -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
# Also registers the global 'fixxer' command
pip install -e .

# Launch
fixxer
/// THE_BLUEPRINTS
├── RUNTIME: Python 3.12
│ ├── logic: engine.py (Orchestration)
│ └── state: config.py (Persistent Config)
│
├── INTERFACE: Textual TUI
│ ├── render: Rich (ANSI Compositing)
│ └── theme: Modular CSS (Warez/Pro)
│
├── INTELLIGENCE: vision_providers/
│ ├── adapter: OpenAI-Compatible API
│ ├── backends: Ollama, llama.cpp, vLLM, LocalAI
│ ├── model: qwen2.5vl:3b (default)
│ ├── embed: CLIP (Semantic Clustering)
│ └── raw: RawPy (In-Memory Demosaic)
│
└── INTEGRITY: security.py
├── hash: SHA256 (Zero-Trust Move)
└── audit: Sidecar JSON (.fixxer.json)
/// DEPENDENCY_MATRIX
[02] RAWPY // LibRaw wrapper (120+ Formats)
[03] OPENCV // BRISQUE & Laplacian Variance
[04] SCIKIT-LEARN // DBSCAN clustering algorithms
/// BUILDING_FIXXER
status: Field Notes from the Dev Log
I don't use Lightroom. I don't trust proprietary catalogs that hold my work hostage. I believe in Folder First structures—universal, cross-platform, and future-proof.
The problem wasn't editing; it was the chaos before the edit. I needed a tool that respected the file system as the ultimate source of truth.
The goal was simple: Give me my bandwidth back. I wanted to click [AUTO] and trust the machine to handle the logic.
The "Auto" Pipeline:
01 INTELLIGENT INGEST
The system grabs EXIF session stats immediately. No waiting on previews.
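That ingest step can be sketched in a few lines with Pillow; `read_frame` and `session_stats` are illustrative names, not FIXXER's actual API, and the real field set is broader:

```python
from pathlib import Path
from PIL import Image

MODEL_TAG = 0x0110  # camera model, stored in IFD0
EXIF_IFD = 0x8769   # pointer to the Exif sub-IFD
ISO_TAG = 0x8827    # PhotographicSensitivity (ISO)

def read_frame(path: Path):
    """Pull (camera model, ISO) straight from EXIF -- no preview rendering."""
    exif = Image.open(path).getexif()
    model = exif.get(MODEL_TAG)
    iso = exif.get_ifd(EXIF_IFD).get(ISO_TAG)
    return (str(model).strip() if model else None, iso)

def session_stats(frames):
    """Aggregate per-frame (model, iso) pairs into shoot-level stats."""
    models = {m for m, _ in frames if m}
    isos = [i for _, i in frames if i is not None]
    return {
        "cameras": sorted(models),
        "iso_range": (min(isos), max(isos)) if isos else None,
        "frames": len(frames),
    }
```

Reading tags directly keeps ingest fast: no thumbnails are decoded, only metadata blocks.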
02 SEMANTIC STACKING
Bursts are detected, stacked, and given context-aware names by local AI. They land in AI-named folders, making them searchable by human logic, not just timestamps.
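Burst detection here can be as small as one DBSCAN pass over CLIP embeddings; a sketch with scikit-learn, where `eps` and `min_samples` are illustrative tuning values rather than FIXXER's shipped defaults:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def stack_bursts(embeddings: np.ndarray, filenames: list[str], eps: float = 0.15) -> dict:
    """Group near-duplicate frames into burst stacks by embedding similarity.

    eps is the maximum cosine distance allowed within one burst.
    Label -1 marks singles (DBSCAN noise); every other label is a stack.
    """
    labels = DBSCAN(eps=eps, min_samples=2, metric="cosine").fit_predict(embeddings)
    stacks: dict[int, list[str]] = {}
    for name, label in zip(filenames, labels):
        stacks.setdefault(int(label), []).append(name)
    return stacks
```

The vision model then captions each stack, and that caption becomes the human-readable folder name.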
03 TIERED CULLING
Photos are analyzed and sorted into three distinct buckets:
- TIER A: The keepers.
- TIER B: The maybes.
- TIER C: The technical failures.
04 HERO EXTRACTION
The system automatically promotes the best image from a burst and the top-ranked singles to your "Hero" destination.
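Given burst stacks and per-frame quality scores, hero promotion reduces to an argmax per stack; a sketch (the function name, score scale, and singles threshold are hypothetical):

```python
def extract_heroes(stacks: dict[int, list[str]], scores: dict[str, float]) -> list[str]:
    """Promote the best-scoring frame of each burst, plus top-ranked singles.

    Stacks use DBSCAN-style labels: -1 holds singles, other labels are bursts.
    """
    heroes = []
    for label, frames in stacks.items():
        if label == -1:
            # Singles: keep any frame above an illustrative quality bar
            heroes.extend(f for f in frames if scores[f] >= 0.8)
        else:
            # Bursts: only the single best frame gets promoted
            heroes.append(max(frames, key=lambda f: scores[f]))
    return sorted(heroes)
```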
FIXXER respects your process. It isn't an all-or-nothing black box; it's modular by design.
Modular Execution:
You don't always need the full nuclear launch sequence. Point FIXXER at a folder and just run [BURSTS] to stack raw files, or just [CULL] to separate the winners from the noise. It fits into your workflow; it doesn't replace it.
[EASY_ARCHIVE] Mode:
Not every shoot needs the surgical "Pro" treatment. For family pics or quick assets, this mode skips the heavy math. It simply AI-names your photos and sorts them into keyword-based folders. Point, click, sorted.
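The sorting half of that mode is just slugging the model's caption into a filesystem-safe folder name; a sketch with a hypothetical `keyword_folder` helper:

```python
import re
from pathlib import Path

def keyword_folder(caption: str, root: Path, max_words: int = 3) -> Path:
    """Turn an AI caption into a keyword-based folder under root."""
    words = re.findall(r"[a-z0-9]+", caption.lower())[:max_words]
    return root / ("-".join(words) if words else "unsorted")
```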
[CRITIQUE] Engine:
Stuck on an edit? The Critique module acts as a second pair of eyes. It uses the local vision model to analyze your photo creatively, offering offline advice on composition, lighting, and mood.
Provider Agnostic (v1.2):
FIXXER doesn't lock you into a single inference backend. The new vision_providers abstraction layer speaks fluent OpenAI-compatible API. Run Ollama out of the box, or point it at llama.cpp, vLLM, LocalAI, or Jan—whatever fits your rig. One config change. Zero code edits. Your hardware, your rules.
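In practice "one config change" means swapping the base URL of an OpenAI-compatible endpoint. A sketch using each server's common default port; this registry is illustrative, not FIXXER's actual config schema:

```python
# Default local endpoints for common OpenAI-compatible servers.
PROVIDERS = {
    "ollama":    "http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    "llama.cpp": "http://localhost:8080/v1",   # llama-server default port
    "vllm":      "http://localhost:8000/v1",
    "localai":   "http://localhost:8080/v1",
}

def endpoint(provider: str, path: str = "/chat/completions") -> str:
    """Resolve the full URL for a backend: swap providers, keep the code."""
    base = PROVIDERS.get(provider)
    if base is None:
        raise ValueError(f"unknown provider: {provider!r}")
    return base.rstrip("/") + path
```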
Speed is nothing without safety.
Real-Time Hash Verification:
Every single file operation is backed by a real-time SHA256 audit trail. A .fixxer.json sidecar travels with the individual photo forever: a certificate of authenticity proving the file hasn't suffered bit rot or corruption since the moment it left the card.
* No Internet Required: Air-gapped safe.
* No Subscription: You own the code.
* No Cloud: Your data never leaves your workstation.
FIXXER exists to solve the "Pre-Edit" fatigue. It respects your privacy, enforces your folder structure, and gives you back the one thing you can't buy: Time.