Powered by local AI — no cloud, no API keys

Build websites with
AI that runs locally

Mizzy is a deeply immersive Windows app that connects to models running locally in LM Studio to generate, edit, and preview websites in real time.

v1.9.23 · Windows x64 · 74 MB · Free & open
Mizzy — my-portfolio
🌐 index.html
🎨 style.css
⚡ script.js
📁 assets/
<!-- index.html — generated by Mizzy AI -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <title>My Portfolio</title>
  <link rel="stylesheet" href="style.css"/>
</head>
<body>
  <header class="hero">
    <h1>Hello, I'm <span>Alex</span></h1>
    <p>Full-stack developer & designer</p>
    <a href="#work" class="btn">View work</a>
  </header>
</body>
</html>
AI Assistant
You
Build me a modern portfolio with a hero, projects grid, and contact form
Mizzy AI
Here's your complete portfolio site with a sleek dark hero, animated project cards, and a working contact form…
Ask Mizzy AI…
Everything you need to build the web
From idea to working website, powered entirely by AI running on your own machine.
🤖
Local AI, zero cloud
Connects directly to LM Studio on your machine. Your code and prompts never leave your computer. Works offline once your model is loaded.
📝
Monaco editor
The same engine that powers VS Code. Full syntax highlighting for HTML, CSS, JS, TypeScript, JSON, and more. Bracket matching, auto-indent, find & replace.
👁
Live side-by-side preview
See your site instantly as you build. Toggle between desktop and mobile viewports. Preview refreshes automatically when you save.
💬
Context-aware AI chat
Switch between modes: generate a full site, modify the current file, create components, debug issues, or get your code explained.
📁
Full project management
Open any folder as a project. Create, rename, and delete files and folders. AI can write directly to multiple files at once.
One-click AI file writing
When AI generates multiple files, save them all to your project with a single click. No copy-paste needed.
🎨
Immersive dark UI
A deep, purple-tinted dark theme designed for focus. Resizable panels, glow accents, smooth transitions.
⚙️
Fully customisable
Choose any model loaded in LM Studio. Adjust temperature, max tokens, and write your own system prompt to shape the AI's behaviour.
Start building in 3 steps
No API keys, no sign-ups, no monthly fees.
01
Install LM Studio & load a model
Download LM Studio, grab any instruction-tuned model (Llama 3, Mistral 7B, etc.) and start the local server. Takes about 5 minutes.
02
Open or create a project
Open any folder as your project workspace, or let Mizzy scaffold a starter site with index.html, style.css, and script.js.
03
Describe what you want
Type a prompt in the AI panel. Mizzy sends it to your local model and lets you insert the code, preview it live, or save it to your files — all with one click.
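Under the hood, LM Studio's local server speaks an OpenAI-compatible HTTP API, by default at 127.0.0.1:1234 (the address mentioned in the changelog below). As an illustrative sketch — buildChatRequest is a hypothetical helper, not Mizzy's actual code — a streaming chat request looks roughly like this:

```javascript
// Sketch of the kind of request a client sends to LM Studio's
// OpenAI-compatible endpoint (default: http://127.0.0.1:1234/v1/chat/completions).
// buildChatRequest is a hypothetical helper for illustration only.
function buildChatRequest(model, systemPrompt, userPrompt, opts = {}) {
  return {
    model,
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: userPrompt },
    ],
    temperature: opts.temperature ?? 0.7,
    max_tokens: opts.maxTokens ?? 2048,
    stream: true, // tokens arrive incrementally, enabling live typing into the editor
  };
}

const req = buildChatRequest(
  "llama-3-8b-instruct",
  "You are a website generator.",
  "Build me a modern portfolio with a hero and contact form"
);
// The payload would then be POSTed, e.g. via
// fetch("http://127.0.0.1:1234/v1/chat/completions", { method: "POST", body: JSON.stringify(req), ... })
```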
What you need
🪟
Windows 10 or 11 (x64)
The app is a native Windows desktop application.
🤖
LM Studio (free)
Download from lmstudio.ai. Start the local server before launching Mizzy.
🧠
8 GB RAM minimum
16 GB+ recommended for larger models. 4-bit quantised models run on less.
💾
~2 GB disk for the app
Plus model size (4–8 GB per model typically). GPU optional but speeds things up.
What's new
Every improvement shipped, release by release.
v1.9.23 Mar 2026
  • Fixed: Enhance Prompt now uses the correct LM Studio URL from settings — previously always fell back to 127.0.0.1:1234 regardless of configured server address
v1.9.22 Mar 2026
  • Fixed: IPC listener leak in preload — lm:chunk and updater event listeners are now correctly removed after each stream, preventing accumulation over a session
v1.9.21 Mar 2026
  • New: Live prompt-processing progress — shows "Processing context… X%" in the generation bar and status bar during the LM Studio prompt-processing phase before generation begins
v1.9.20 Mar 2026
  • Fixed: Live streaming decorations no longer show the entire original file in red — only genuinely new lines appear green during streaming; deletions are shown in the diff view after streaming completes
  • Fixed: Multi-file planning no longer loops back to index.html — the file planner now runs without chat history context so the model correctly returns the full planned file list
v1.9.19 Mar 2026
  • Fixed: Live streaming decorations now show deleted lines as inline red ghost rows (view zones) and new lines as green — matching a standard diff view instead of a red gutter triangle
v1.9.18 Mar 2026
  • Fixed: Context history now stores the full AI response — removed the 1,000 token (~4,000 character) cap so the complete reply is always available for review and condensing
v1.9.17 Mar 2026
  • Improved: Multi-file generation now works sequentially — each file gets its own streaming request with full context of previously generated files, ensuring correct cross-references in every file
v1.9.16 Mar 2026
  • New: AI can create project folders — @MKDIR directives are executed automatically after code is written
  • New: AI can propose Windows shell commands — shown as approve/deny cards in the chat before any command runs
  • Fixed: Live streaming decorations now correctly show green (new lines), red gutter markers (deleted lines) and unchanged lines uncoloured
v1.9.15 Mar 2026
  • Fixed: Context history panel now pops out as a fixed-position modal — no longer clipped by the chat panel's overflow:hidden container
v1.9.14 Mar 2026
  • Improved: Context history panel — wider, shows full message content in scrollable blocks, improved layout
  • New: Files are now written to disk as each code block completes during streaming — tabs open progressively as the AI generates each file
v1.9.13 Mar 2026
  • Improved: Multi-file generation now injects all existing project files into the prompt so cross-references (link, script tags) are always correct in one shot — no second fixup request
  • Improved: Context history now stores full generated code (up to 4,000 chars) so the AI knows exactly what it previously built
  • New: Context Condense — button in context dropdown asks AI to summarise chat history into a compact brief, replacing all turns so you can continue well past the token limit without losing continuity
  • New: Auto-condense triggers automatically when context exceeds 85% of the token budget
v1.9.12 Mar 2026
  • Fix: Live streaming decorations restored — reverted heavy LCS DP (was blocking JS thread) back to O(n) multiset algorithm; full-precision diff shown post-stream in Monaco diff editor
  • Fix: Context reliability — AI now receives the current file content in full_site mode and edits existing sites instead of generating from scratch
v1.9.11 Mar 2026
  • Fix: Live streaming decorations now use LCS diff — green=added, amber=changed, red gutter marker=deleted lines
  • Fix: Critical context reliability bug — AI was generating entirely new sites instead of editing the current file because full_site mode never injected the existing file content into the prompt
v1.9.10 Mar 2026
  • Fix: Live streaming decorations now correctly highlight only genuinely new lines green — replaced broken index-based comparison with a multiset frequency-map algorithm so unchanged lines are never falsely marked as changed
v1.9.9 Mar 2026
  • New: Context history dropdown — click the token counter in the chat input to see all stored turns, their content and estimated token cost; includes a Clear button
v1.9.8 Mar 2026
  • Fix: Critical crash on startup — _updateTokenHint called before variables were initialized, preventing send button from working
  • Fix: Check for updates button now always visible in Settings panel (moved to static HTML, no longer dynamically injected)
v1.9.7 Mar 2026
  • New: Live streaming line decorations — green highlights show added/changed lines as AI types into an existing file
  • Improve: Multi-file fixup pass now auto-saves index.html directly instead of showing a diff review step
v1.9.6 Mar 2026
  • New: After generating multiple files, Mizzy automatically runs a fixup pass to wire up cross-references in index.html (CSS links, script tags)
  • Improve: Context mode dropdown removed — Mizzy is focused on building websites; mode is always full site generation
  • Fix: Context token budget now always visible in chat input (shows 0 / N on first load)
v1.9.5 Mar 2026
  • Fix: Revert broken live-diff-during-streaming — partial code streamed live now correctly updates the editor; full diff shown after generation completes
v1.9.4 Mar 2026
  • Improve: Post-stream diff no longer recreates the editor when diff was already active — final content applied in-place via updateDiffModified
v1.9.3 Mar 2026
  • Fix: Accept diff now reloads the file into the editor so the updated content is visible immediately after applying AI changes
  • Fix: Accept diff no longer fails with "path argument must be of type string. Received null" — file path captured before diff close
v1.9.1 Mar 2026
  • Improve: Replace fixed history turns with a context token budget — history is pruned by estimated tokens, not turn count
  • New: Live context token usage counter shown in chat input (yellow at 70%, red at 90% of budget)
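A minimal sketch of token-budget pruning, assuming the rough 4-characters-per-token estimate mentioned elsewhere in this changelog; pruneHistory and estimateTokens are illustrative names, not Mizzy's API:

```javascript
// Estimate tokens with the common ~4-characters-per-token heuristic
// (an assumption for this sketch, matching the "~4,000 character" note above).
const estimateTokens = (text) => Math.ceil(text.length / 4);

// Keep as many of the most recent turns as fit in the token budget,
// instead of a fixed turn count. Older turns are dropped first.
function pruneHistory(turns, budgetTokens) {
  const kept = [];
  let used = 0;
  for (let i = turns.length - 1; i >= 0; i--) { // walk newest-first
    const cost = estimateTokens(turns[i]);
    if (used + cost > budgetTokens) break;
    kept.unshift(turns[i]); // restore chronological order
    used += cost;
  }
  return kept;
}
```

With a budget of 2 tokens, pruneHistory(["aaaa", "bbbb", "cccc"], 2) keeps only the two newest one-token turns.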
v1.9.0 Mar 2026
  • New: Top-P, Frequency penalty, Presence penalty controls in Settings
  • New: Thinking budget tokens — limit how many tokens reasoning models spend thinking (0 = disabled)
  • New: Chat history turns setting — control how many message pairs are kept as context (0 = no history)
  • Improve: All new AI settings persisted to disk and passed to every chat stream request
v1.8.15 Mar 2026
  • Fix: Recover unclosed final code fence when model hits token limit — last code block no longer silently dropped
v1.8.14 Mar 2026
  • Improve: Always show Monaco diff view when AI edits an existing file, regardless of context mode — no longer limited to edit/current modes only
v1.8.13 Mar 2026
  • Fix: Enhance Prompt — increase max_tokens from 1024 to 2048 to prevent mid-output truncation of detailed prompts
v1.8.12 Mar 2026
  • Fix: Enhance Prompt — detect imperative prompt start ("Act as", "Create a", etc.) within numbered chain-of-thought output and extract from that point forward, discarding all reasoning preamble
v1.8.11 Mar 2026
  • Fix: Enhance Prompt — strip trailing stray step numbers left over from model chain-of-thought output
v1.8.10 Mar 2026
  • Fix: Enhance Prompt — strip plain-text numbered reasoning output and extract final prompt paragraph when model outputs chain-of-thought instead of using think tags
  • Fix: Rewrote Enhance Prompt system message to discourage chain-of-thought preamble in output
v1.8.9 March 2026
  • 🧠
    Chat history context — The AI now remembers everything it wrote in the current session. When you ask for edits, it has full context of prior exchanges so changes are accurate and consistent.
  • 🔀
    Monaco diff view — When the AI edits an existing file in your project, changes are shown side-by-side using Monaco's built-in diff editor. Accept to save, or Discard to keep the original — full review before anything is written to disk.
  • 🐛
    Stats card description fix — // FILE: index.html hint lines no longer appear in the AI response summary card below each chat message.
v1.7.0 March 2026
  • Enhance Prompt — New ✨ Enhance button in the chat input area rewrites your rough idea into a detailed, production-quality prompt using the AI. Streams the result directly back into the input so you can review and edit before sending.
v1.6.1 February 2026
  • 🐛
    Fixed absolute path saving error — When AI models output full paths like C:\Users\...\index.html instead of relative filenames, Mizzy now strips the absolute prefix and saves files correctly to the project folder.
  • 🖥️
    Preview full-height fix — Preview webview now sized explicitly via ResizeObserver in JavaScript, fixing the long-standing issue where only a small portion rendered.
v1.5.9 February 2026
  • 🐛
    Preview full-height fix — The preview webview now uses position: absolute; inset: 0 so it always fills the entire preview panel rather than collapsing to a small region.
v1.5.8 February 2026
  • 🐛
    File path parsing fix (subfolders) — Fixed a bug where paths like css/styles.css were truncated to just css because the filename regex excluded the / character. Subdirectory-based file paths in multi-file projects now save correctly.
v1.5.7 February 2026
  • 🧠
    Live thinking stream — AI reasoning (<think> tokens from DeepSeek-R1 and compatible models) now streams live into the chat as it is generated, with a real-time character counter. The thinking block auto-collapses when the model switches to code generation.
  • 🐛
    File path parsing fix — Fixed a bug where code fence hints like // FILE: test/test.html were being used as literal filenames (including the comment prefix), causing ENOENT errors. Mizzy now correctly strips the comment marker and FILE: prefix before saving.
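A hedged sketch of the corrected hint parsing — the regex here is illustrative, not Mizzy's actual one. Note the capture class includes the / character, so subfolder paths like css/styles.css survive (the v1.5.8 fix) while the comment marker and FILE: prefix are stripped (the fix above):

```javascript
// Strip an optional comment marker (//, #, or <!--) and the FILE: prefix
// from a code-fence hint line, returning the relative path — or null if
// the line carries no FILE: hint. Illustrative regex, not Mizzy's exact one.
function parseFileHint(hintLine) {
  const m = hintLine.match(/^\s*(?:\/\/|#|<!--)?\s*FILE:\s*([\w./-]+)/i);
  return m ? m[1] : null;
}
```

For example, parseFileHint("// FILE: test/test.html") yields "test/test.html" rather than the literal comment-prefixed string that previously caused ENOENT errors.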
v1.4.0 February 2026
  • Immersive live-typing experience — As the AI generates code, it types directly into the Monaco editor in real time — you watch your files being written character by character.
  • 🧠
    Thinking block — AI reasoning (from models that support <think> tokens, like DeepSeek-R1) is shown in a collapsible block in the chat, keeping the interface clean while still letting you inspect the model's reasoning.
  • 📊
    Stats card instead of raw code — The chat shows a compact stats card after generation (character count, block count, file names). You can expand a "View code" section if you want to inspect the raw output, but the focus stays on the editor.
v1.2.0 February 2026
  • AI auto-writes to the editor and project — When AI finishes generating code, it is now automatically applied. Named files are saved directly to your open project folder and the file tree refreshes instantly. If no named files are detected, the active editor tab is updated and saved. No copy-paste needed.
  • 🗂️
    Project-aware AI context — When a project folder is open, the AI is told the project root and instructed to label every code block with // FILE: path/to/file.ext so Mizzy can save each file in the right place automatically.
v1.1.0 February 2026
  • Real-time AI streaming — Responses now stream token-by-token directly into the chat panel with a live blinking cursor, so you see the AI writing instead of waiting for the full reply.
  • 🔌
    LM Studio connection pill — A status indicator in the bottom bar shows whether Mizzy is connected to your local LM Studio server. Click it at any time to re-check the connection and refresh the model list without restarting the app.
  • 🖊️
    Editor placeholder fix — The "open or create a file" hint no longer overlays existing code. It now correctly hides as soon as content is present in the active file.
  • 🔄
    Improved auto-updater UI — The in-app update banner now shows a live progress bar while downloading. A manual "Check for updates" button was added to the settings panel so you can trigger a check on demand.
  • 🐛
    IPv6 / ECONNREFUSED fix — Fixed a connection error on Windows where localhost resolved to the IPv6 loopback (::1) instead of 127.0.0.1, causing Mizzy to fail to reach LM Studio.

Ready to build something?

Download Mizzy for free and start building with local AI today.

Download Mizzy v1.9.23 for Windows