
The Offline AI Archival Engine
Automate the chaos. Leveraging Computer Vision, Machine Learning, and Local AI, FIXXER organizes bursts, culls rejects, and auto-names your photos. Powered by your hardware, not the cloud. Get your Fixx. Get Organized.
> INSTALL_FIXXER_
100% LOCAL // FREE // NO SUBSCRIPTION
> SELECT MODULE: [ INSTALL_GUIDE ] [ SOURCE_CODE ] [ DEV_LOGS ] [ LIVE_FEED ]

/// SYSTEM_INITIALIZATION
Status: Standing By...

================================================================================
> INITIATING_INSTALL_SEQUENCE
================================================================================
> SYSTEM REQUIREMENTS ________________________
  • OS: macOS / Linux / Windows (WSL2)
  • PYTHON: v3.12
  • AI ENGINE: Ollama (Local)
  • MODEL: qwen2.5vl:3b
> SELECT INSTALL METHOD ______________________
> METHOD A: HOMEBREW
> STEP 1: TAP THE FIXXER REPO
brew tap BandwagonVibes/fixxer
> STEP 2: INSTALL FIXXER
# Installs FIXXER + All Dependencies
brew install fixxer
> STEP 3: WAKE THE VISION MODEL
# Required for AI naming & critique. (Approx 2.2GB)
ollama pull qwen2.5vl:3b
> STEP 4: LAUNCH SEQUENCE
fixxer
> METHOD B: MANUAL (GIT CLONE)
> STEP 1: DEPLOY REPOSITORY
git clone https://github.com/BandwagonVibes/fixxer.git
cd fixxer
> STEP 2: INITIALIZE VIRTUAL ENVIRONMENT
python3.12 -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
> STEP 3: INSTALL FIXXER CORE
# Installs FIXXER + All Dependencies (CLIP, BRISQUE)
# Also registers the global 'fixxer' command
pip install -e .
> STEP 4: WAKE THE VISION MODEL
# Required for AI naming & critique. (Approx 2.2GB)
ollama pull qwen2.5vl:3b
> STEP 5: LAUNCH SEQUENCE
# Launch using the registered 'fixxer' command
fixxer
> HINT: Press [F12] to toggle between 'Warez Mode' and 'Phantom Redline' HUD.

/// THE_BLUEPRINTS

> SYSTEM_ARCHITECTURE_V1.2
FIXXER_CORE/
├── RUNTIME: Python 3.12
│   ├── logic: engine.py (Orchestration)
│   └── state: config.py (Persistent Config)
├── INTERFACE: Textual TUI
│   ├── render: Rich (ANSI Compositing)
│   └── theme: Modular CSS (Warez/Pro)
├── INTELLIGENCE: vision_providers/
│   ├── adapter: OpenAI-Compatible API
│   ├── backends: Ollama, llama.cpp, vLLM, LocalAI
│   ├── model: qwen2.5vl:3b (default)
│   ├── embed: CLIP (Semantic Clustering)
│   └── raw: RawPy (In-Memory Demosaic)
└── INTEGRITY: security.py
    ├── hash: SHA256 (Zero-Trust Move)
    └── audit: Sidecar JSON (.fixxer.json)

> DEPENDENCY_MATRIX
[01] TEXTUAL // TUI Framework & Event Loop
[02] RAWPY // LibRaw wrapper (120+ Formats)
[03] OPENCV // BRISQUE & Laplacian Variance
[04] SCIKIT-LEARN // DBSCAN clustering algorithms
REPOSITORY STATUS: [ v1.2.0 STABLE ]

/// BUILDING_FIXXER

status: Field Notes from the Dev Log

> THE PHILOSOPHY: FOLDER FIRST

I don't use Lightroom. I don't trust proprietary catalogs that hold my work hostage. I believe in Folder First structures—universal, cross-platform, and future-proof.

The problem wasn't editing; it was the chaos before the edit. I needed a tool that respected the file system as the ultimate source of truth.

// VISUALIZATION: Surgical Automation in Real-Time
> SURGICAL AUTOMATION

The goal was simple: Give me my bandwidth back. I wanted to click [AUTO] and trust the machine to handle the logic.

The "Auto" Pipeline:

01 INTELLIGENT INGEST
The system grabs EXIF session stats immediately. No waiting on previews.

02 SEMANTIC STACKING
Bursts are detected, stacked, and given context-aware names by local AI. They land in AI-named folders, making them searchable by human logic, not just timestamps.
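The shipped pipeline relies on DBSCAN (per the dependency matrix) for stacking; the idea can be sketched with a simpler gap-threshold pass over EXIF capture times. The function name and the 2-second gap below are illustrative, not FIXXER's actual code:

```python
from datetime import datetime, timedelta

def group_bursts(capture_times, gap_seconds=2.0):
    """Group capture timestamps into bursts: a frame joins the current
    stack when it lands within gap_seconds of the previous frame,
    otherwise it opens a new stack."""
    bursts = []
    for t in sorted(capture_times):
        if bursts and (t - bursts[-1][-1]).total_seconds() <= gap_seconds:
            bursts[-1].append(t)
        else:
            bursts.append([t])
    return bursts

base = datetime(2024, 5, 1, 12, 0, 0)
shots = [base, base + timedelta(seconds=0.5),
         base + timedelta(seconds=1.2), base + timedelta(minutes=5)]
stacks = group_bursts(shots)  # a 3-frame burst plus one single
```

Each resulting stack is then handed to the vision model for a context-aware folder name.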

03 TIERED CULLING
Photos are analyzed and sorted into three distinct buckets:
- TIER A: The keepers.
- TIER B: The maybes.
- TIER C: The technical failures.
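FIXXER's culling leans on OpenCV's BRISQUE score and Laplacian variance (see the dependency matrix). As a minimal sketch, here is the classic Laplacian-variance focus measure implemented directly in NumPy, plus a tier bucketer; the thresholds are hypothetical placeholders, not FIXXER's published cutoffs:

```python
import numpy as np

# 3x3 Laplacian kernel: responds to edges, flat regions score zero.
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def laplacian_variance(gray):
    """Variance of the Laplacian response over a grayscale image.
    More edges -> higher variance -> sharper frame."""
    h, w = gray.shape
    resp = np.empty((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            resp[i, j] = (gray[i:i + 3, j:j + 3] * LAPLACIAN).sum()
    return resp.var()

def assign_tier(score, keeper=100.0, maybe=20.0):
    # Hypothetical thresholds for illustration only.
    if score >= keeper:
        return "A"  # the keepers
    if score >= maybe:
        return "B"  # the maybes
    return "C"      # the technical failures
```

A perfectly flat frame scores zero variance and lands in Tier C; a detail-rich frame scores orders of magnitude higher.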

04 HERO EXTRACTION
The system automatically promotes the best image from a burst and the top-ranked singles to your "Hero" destination.

> ADAPTIVE WORKFLOWS

FIXXER respects your process. It isn't an all-or-nothing black box; it's modular by design.

// TERMINAL_OUTPUT: Local Vision Model Analysis

Modular Execution:
You don't always need the full nuclear launch sequence. Point FIXXER at a folder and run just [BURSTS] to stack raw files, or just [CULL] to separate the winners from the noise. It fits into your workflow; it doesn't replace it.

[EASY_ARCHIVE] Mode:
Not every shoot needs the surgical "Pro" treatment. For family pics or quick assets, this mode skips the heavy math. It simply AI-names your photos and sorts them into keyword-based folders. Point, click, sorted.
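The keyword-folder step can be pictured as a simple slug transform on the AI-generated name. Everything here (function name, "first word becomes the keyword" rule, `.jpg` suffix) is an illustrative assumption, not FIXXER's actual routing logic:

```python
import re
from pathlib import Path

def archive_path(ai_name: str, root: str = "Archive") -> Path:
    """Map an AI-generated name like 'Golden Retriever At Beach' to a
    keyword folder and slugged filename (hypothetical scheme)."""
    slug = re.sub(r"[^a-z0-9]+", "-", ai_name.lower()).strip("-")
    keyword = slug.split("-")[0] if slug else "misc"
    return Path(root) / keyword / f"{slug}.jpg"

archive_path("Golden Retriever At Beach")
# -> Archive/golden/golden-retriever-at-beach.jpg
```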

[CRITIQUE] Engine:
Stuck on an edit? The Critique module acts as a second pair of eyes. It uses the local vision model to analyze your photo creatively, offering offline advice on composition, lighting, and mood.

Provider Agnostic (v1.2):
FIXXER doesn't lock you into a single inference backend. The new vision_providers abstraction layer speaks fluent OpenAI-compatible API. Run Ollama out of the box, or point it at llama.cpp, vLLM, LocalAI, or Jan—whatever fits your rig. One config change. Zero code edits. Your hardware, your rules.

> CONTRIBUTORS_v1.2
[PR #8] @saksham-jain177 // vision provider architecture
[EARLY] u/noctrex // community signal boost
> THE SECURITY LAYER

Speed is nothing without safety.

Real-Time Hash Verification:
Every single file operation is backed by a real-time SHA256 audit trail. A .fixxer.json sidecar travels with each photo forever. It is a certificate of authenticity that proves the file hasn't rotted or been corrupted since the moment it left the card.
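The hash-then-verify round trip is plain stdlib work. The sidecar naming and one-field JSON schema below are illustrative assumptions, not FIXXER's actual format:

```python
import hashlib
import json
from pathlib import Path

def write_sidecar(photo: Path) -> Path:
    """Record the photo's SHA256 digest in a sidecar JSON file
    (hypothetical '<name>.jpg.fixxer.json' naming)."""
    digest = hashlib.sha256(photo.read_bytes()).hexdigest()
    sidecar = photo.with_suffix(photo.suffix + ".fixxer.json")
    sidecar.write_text(json.dumps({"sha256": digest}))
    return sidecar

def verify(photo: Path) -> bool:
    """Re-hash the photo and compare against the sidecar record.
    Any bit rot or tampering changes the digest."""
    sidecar = photo.with_suffix(photo.suffix + ".fixxer.json")
    recorded = json.loads(sidecar.read_text())["sha256"]
    return hashlib.sha256(photo.read_bytes()).hexdigest() == recorded
```

After a move, re-running `verify` against the sidecar proves the bytes arrived intact.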

// SECURITY_PROTOCOL: SHA256 Integrity Verification
> THE PROMISE

* No Internet Required: Air-gapped safe.
* No Subscription: You own the code.
* No Cloud: Your data never leaves your workstation.

> CONCLUSION

FIXXER exists to solve the "Pre-Edit" fatigue. It respects your privacy, enforces your folder structure, and gives you back the one thing you can't buy: Time.