Unleash Your Imagination with aiMultiFool

Whether you’re a storyteller, roleplay fan, or just love exploring what AI can do, aiMultiFool is your offline creative sandbox. Run powerful language models directly on your PC—no internet, no data tracking, just pure, private AI interaction.

Create. Roleplay. Experiment. It’s all up to you.
Are you ready to take your creativity to the next level?

[Character card gallery: The Frozen World, Alien Invasion, Scarlett the Goth]

Latest News

Step Into Your Favorite Story

Vector Chat now lets you import books and documents in PDF, EPUB, TXT, or DOCX formats.

Want to be a character in your favorite novel? Just upload the book and choose a role. While the story won’t follow the original word-for-word, your role-play will stay true to the world, tone, and key details of the story—making it feel like you’re really part of it.

Why Choose aiMultiFool?

Privacy First

Your content, your rules.
aiMultiFool runs 100% offline, meaning your data never leaves your device. Unlike cloud-based alternatives, all interactions happen locally—giving you full control over your privacy. Nothing is written to disk without your permission, except for Vector Chat, which stores local databases to support persistent memory.


Endless Customization and Possibilities

Shape your AI experience your way.
Design custom Character Cards, scenarios, and fully configurable Action Menus using the built-in editor. Need quick access to a specific weapon like an MP5? Create a menu item for it. The possibilities are limitless—you’re in control.


Local Roleplay at Its Best

Dive into immersive, offline storytelling.
aiMultiFool supports creating, editing, and using SillyTavern v2 Character Cards, letting you build rich roleplay worlds. Whether it’s high-stakes alien invasions or deep, emotional narratives, your imagination sets the boundaries.


Virtual Friends with Memory

Create persistent AI companions.
Vector Chat lets you build virtual friends who remember your preferences, past conversations, and story developments. Whether it’s a helpful assistant, a virtual girlfriend, or even Donald Trump, your AI evolves with every chat.


Play a Role in Your Favorite Novel

Import your favorite books and become a character.
Vector Chat supports PDF, EPUB, TXT, and DOCX formats. Choose a role from any story and start roleplaying. It won’t follow the book word-for-word, but it will preserve the feel, theme, and key facts—giving you a personalized spin on classic tales.


RAG (Retrieval-Augmented Generation) Support

Import multiple documents and have natural conversations with them using AI. With RAG, the assistant doesn’t just guess answers — it retrieves relevant information from your uploaded content in real time to provide accurate, context-aware responses.


Seamless GPU Acceleration

Have an NVIDIA GPU with CUDA cores?
Ollama will automatically leverage it for faster, smoother performance. Enjoy the full power of your hardware.

[Character card gallery: Aria and the End of the World, Akane Hoshino, Zombie Outbreak]
[Screenshot: the aiMultiFool interface is powerful and configurable]

What can you do?

Epic Adventures

Battle Chuck Norris, explore alien worlds, or become the hero of your own blockbuster movie. We’ve even included a few ready-made scenarios to kickstart your journey.

Custom Scenarios

Create rich, immersive stories using the built-in Character Card tools. Whether you’re crafting sci-fi thrillers or fantasy epics, aiMultiFool acts as your private, offline writing assistant—no cloud, no compromise.

Collaborative Storytelling

Easily share your Character Cards, Action Menus, or even full chat sessions with friends or the wider aiMultiFool community. Storytelling is better together.

Your Personal AI Assistant

Need help with work, planning, or daily tasks? Use aiMultiFool as your private, local ChatGPT alternative—powerful, flexible, and always offline.

[Character card gallery: Roxy the Cultist Advocate, The Conscript, The Night Club, Island Encounter, Lilith the Witch, The Ruined Church Ritual]

Download & Installation

⚠️ Important Notice:
aiMultiFool supports uncensored AI models via Ollama, ideal for open-ended roleplay. These models may generate sensitive, explicit, or NSFW content. As such, the app is strictly intended for users 18 years and older.
By downloading or using aiMultiFool, you confirm that you are over 18 and agree to the terms outlined in the End User License Agreement.

System Requirements

  • Operating System: Windows 11 or Linux x64
  • Disk Space: At least 400 MB for aiMultiFool; allow several GB more for Ollama models.
  • Dependencies:
    • Ollama and required models must be installed and running locally or on your network.

Step 1 – Install Ollama

  • Download and install Ollama.
  • To verify it’s working, open your browser and visit: http://localhost:11434
  • You should see a confirmation that Ollama is running.
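
If you'd rather check from a script, a tiny Python sketch like the one below does the same thing (purely illustrative; it assumes Ollama's default address of http://localhost:11434):

# Quick check that the local Ollama server is reachable (illustrative only).
import urllib.request

try:
    with urllib.request.urlopen("http://localhost:11434", timeout=5) as resp:
        print(resp.read().decode())  # Ollama normally replies "Ollama is running"
except OSError as err:
    print(f"Ollama does not appear to be running: {err}")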

Step 2 – Install Roleplay AI Models

Open a Command Prompt or Terminal and run the following commands:

ollama pull hf.co/bartowski/L3-8B-Stheno-v3.2-GGUF:Q4_K_M
ollama pull hf.co/bartowski/Llama-3.1-8B-Lexi-Uncensored-V2-GGUF:Q4_K_M
ollama pull hf.co/bartowski/Llama-3.2-3B-Instruct-uncensored-GGUF:Q4_K_M

  • The first model is strictly 18+. The last one is ideal for users without an NVIDIA GPU.

Step 3 – Install the Embeddings Model for aiMultiFool Vector Chat

This model enables memory and context features in Vector Chat:

ollama pull nomic-embed-text
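
To confirm the embeddings model is ready, you can ask Ollama for a test embedding through its local REST API. The sketch below is only an illustration and assumes the default address of http://localhost:11434:

# Request a test embedding from nomic-embed-text (illustrative only).
import json, urllib.request

payload = json.dumps({"model": "nomic-embed-text", "prompt": "Hello, Vector Chat!"}).encode()
req = urllib.request.Request(
    "http://localhost:11434/api/embeddings",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=30) as resp:
    embedding = json.loads(resp.read())["embedding"]
print(f"Received an embedding with {len(embedding)} dimensions")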

Step 4 – aiMultiFool Installation

aiMultiFool is simple and portable. It comes in a zipped folder.

To get started:

  • Download the ZIP file (~150MB).
  • Extract the contents by right-clicking and selecting “Extract.”

Windows

  • Navigate to the windows folder.
  • Run the file named aimultifool.exe.
⚠️ Security Note: If Windows Defender SmartScreen blocks the program, click “More info”, then select “Run anyway”.

Linux

  • Open a terminal window inside the linux folder.
  • Run the following command:
chmod +x aimultifool && ./aimultifool
 

Current Version

  • Current Version: aiMultiFool 0.3.0 – Many tweaks and changes, most under the hood.
  • Release Date: June 16, 2025

Source Code

Though aiMultiFool is not open source, all of the Python source code is included in the ZIP. This means you can inspect the code and verify its privacy claims for yourself, or just be nosy.

Join the aiMultiFool Community

Connect with like-minded AI enthusiasts on Discord! Share tips, tricks, and character cards, or simply hang out and exchange ideas. Happy to add ANY feature you want!

Support us With Coffee

aiMultiFool is COFFEEWARE. Love what aiMultiFool offers? Show your support by buying the developer a coffee! 🍻 Your contribution will help make aiMultiFool even better, and coffee makes code good!

Frequently Asked Questions

Why does aiMultiFool need Ollama?

Previously, aiMultiFool included its own built-in LLM inference engine written in C# with LLamaSharp. While powerful, it was difficult to maintain. We’ve simplified things: aiMultiFool is now a lightweight, Python-based front end for Ollama, making everything faster, cleaner, and easier to update.


How does the new Vector Chat tool work? It seems shhh magic!

Vector Chat uses Qdrant, a vector database, to store and retrieve past chats as embeddings—mathematical representations of text. These embeddings are created using a small worker model running inside Ollama.

This enables a technique called retrieval-augmented generation (RAG), where similar past messages are pulled in to give context to new ones. This makes the AI feel more consistent and personal over time—it “remembers” you.

Vector Chat Workflow

  • You send a message.
  • The app creates an embedding.
  • Qdrant searches for similar past exchanges.
  • Those are sent to Ollama along with your message.
  • Ollama responds using the retrieved context.
  • The new exchange is stored as a vector in Qdrant, ready for the next retrieval (a rough code sketch of this loop follows below).
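
In code, that loop looks roughly like the sketch below. This is not aiMultiFool’s actual implementation, just a simplified illustration that assumes the ollama and qdrant-client Python packages, an in-memory Qdrant collection, and whatever chat model you have pulled (llama3.2 here is only a placeholder):

# Simplified RAG loop: embed, retrieve, generate, store (illustrative only).
import ollama
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct

client = QdrantClient(":memory:")  # Vector Chat uses a persistent local database instead
client.create_collection(
    collection_name="chat_memory",
    vectors_config=VectorParams(size=768, distance=Distance.COSINE),  # nomic-embed-text vectors are 768-dimensional
)

def embed(text: str) -> list[float]:
    # The small worker model turns text into an embedding
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

def chat(user_message: str, turn_id: int) -> str:
    vector = embed(user_message)                                   # 1. embed the new message
    hits = client.search(collection_name="chat_memory",            # 2. find similar past exchanges
                         query_vector=vector, limit=3)
    context = "\n".join(hit.payload["text"] for hit in hits)
    reply = ollama.generate(                                       # 3. answer using the retrieved context
        model="llama3.2",
        prompt=f"Relevant past exchanges:\n{context}\n\nUser: {user_message}",
    )["response"]
    client.upsert(                                                 # 4. store the new exchange for next time
        collection_name="chat_memory",
        points=[PointStruct(id=turn_id, vector=vector,
                            payload={"text": f"User: {user_message}\nAI: {reply}"})],
    )
    return reply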

How can Vector Chat import a novel? Have you lost your mind?

This is an experimental feature still in early development. When you import an EPUB file or book, we scan for chapters and paragraphs, breaking the book into manageable chunks and turning them into vectorized entries. While it’s not perfect yet, it gives the AI searchable context to simulate the theme and tone of the book.
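
To give a feel for the chunking step, here is a simplified sketch. It is not the actual importer; it assumes the text has already been extracted from the EPUB or document into a plain string, after which each chunk is embedded and stored in Qdrant just like a chat message:

# Split extracted book text into paragraph-sized chunks (illustrative only).
def chunk_book(text: str, max_chars: int = 1200) -> list[str]:
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):                # paragraphs are separated by blank lines
        if current and len(current) + len(paragraph) > max_chars:
            chunks.append(current.strip())              # close the current chunk
            current = ""
        current += paragraph + "\n\n"
    if current.strip():
        chunks.append(current.strip())                  # keep the final partial chunk
    return chunks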


Why does the AI get confused about which role it’s playing in my scenario?

When using the Card Tools’ Action Menu to create a new scenario, make sure to set the first_mes field—the first line your character says. This is crucial for establishing your role.

Example:
"Barman, give me a whiskey please."
Without it, the AI may try to control the scene instead of letting you take the lead.
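
For reference, here is an abridged character card showing where first_mes sits (field names follow the SillyTavern v2 layout; the values are just an example):

{
  "spec": "chara_card_v2",
  "spec_version": "2.0",
  "data": {
    "name": "The Tavern",
    "description": "A smoky tavern on the edge of town.",
    "first_mes": "Barman, give me a whiskey please."
  }
}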


Why aren’t the export or play buttons working after I create a custom scenario JSON?

Sometimes, the AI might produce an invalid JSON due to formatting errors or lazy outputs. If this happens, go to Card Tools → Fix Character Card JSON to automatically repair it.

This issue is more likely to occur when a model is overloaded or has been running a very long chat. Starting fresh with a clean model and a new chat usually prevents it.
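
If you want to check a card yourself before reaching for the repair tool, a quick parse attempt will tell you whether the JSON is valid at all. This is a generic Python check, not the app’s repair logic, and my_card.json is just a placeholder file name:

# Sanity check: does the exported card parse as JSON? (illustrative only)
import json

with open("my_card.json", encoding="utf-8") as f:
    try:
        card = json.load(f)
        print("Card JSON is valid:", card.get("data", {}).get("name", "unnamed"))
    except json.JSONDecodeError as err:
        print(f"Invalid JSON - run Card Tools → Fix Character Card JSON: {err}")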


What’s the fastest way to create and edit Character Cards with AI help?

Use a small, fast model—like a 3B parameter model—for building and editing your Character Cards. Once everything is set up, save the card and switch to a larger, more powerful model for actual roleplaying. It saves time and keeps things smooth.

About the Author

Hi, I’m Dan Bailey from Bournemouth, UK. By day I work in IT, but coding has always been a lifelong passion. From the humble VIC20 to today, I’ve built projects like EmotiPad, SmileyPad, and MSFS Aircraft Pinner.

This Ollama edition of aiMultiFool was developed with Python Tkinter as a standalone desktop app. We chose this route because, unlike browser-based UIs, thick clients are better for privacy and local control. AI coding assistants are also at their strongest with Python, and it’s cross-platform!

The codebase was largely written with the help of GitHub Copilot, using GPT-4o, GPT-4.1, Claude 3.7 Sonnet, and Gemini 2.5 Pro.

The app is still actively in development, with plenty of new features and ideas in the pipeline. I genuinely hope you enjoy using aiMultiFool as much as I’ve enjoyed building it.

Feel free to join our Discord to share ideas, chat, or provide feedback. I’m always open to suggestions and happy to add features you’d like to see.

Huge thanks to the Ollama team for building what I believe is the best AI backend out there.

Contact me: dan@aiMultiFool.com

Disclaimer

aiMultiFool is a locally run AI application created by Dan Bailey. By using this software, you agree to the following terms:

User Responsibility

The owner assumes no responsibility for content generated, shared, or used within the aiMultiFool application. Users are solely accountable for ensuring that their use of the app complies with all applicable laws, regulations, and ethical standards.

Generated Content

aiMultiFool provides tools for content generation and interaction, but it does not monitor, validate, or endorse any user-generated material. Since the app uses artificial intelligence, it may occasionally produce inaccurate, unintended, or inappropriate results.

Privacy and Data Handling

Your privacy is a priority.
aiMultiFool operates entirely offline, and no chat data or user input is transmitted to external servers—except in cases where Ollama is hosted on a different machine, in which case data is sent only to that local server.

The app does not keep logs or store user inputs, except for the Vector Chat tool, which writes to a local vector database solely to support context and memory features. No data is ever uploaded or tracked externally.

Limitation of Liability

By using aiMultiFool, you acknowledge and agree that Dan Bailey and any contributors are not liable for any consequences—legal, financial, or otherwise—that may result from your use or misuse of the application.

Users are strongly encouraged to use the software responsibly, apply discretion, and review generated content before sharing or relying on it.