
AI Sidebar for Ignis

A local AI chat interface for the Ignis desktop environment with streaming responses, conversation management, and reasoning mode support.

Features

  • Streaming Responses: Real-time token-by-token display with smooth scrolling
  • Reasoning Mode: Toggle between standard and thinking models for enhanced problem-solving
  • Conversation Management: Auto-archiving, conversation history, and resume capabilities
  • Slash Commands: Quick actions for managing conversations and models
  • Graceful Degradation: Automatic Ollama availability monitoring with user notifications
  • Material Design 3: Seamless integration with Ignis theming

Requirements

  • Ignis desktop environment
  • Python 3.10+
  • Ollama running locally with models installed
  • GTK4 (provided by Ignis)

Installation

  1. Clone or copy this module to your Ignis modules directory:

    ~/.config/ignis/modules/aisidebar/
    
  2. Install Ollama and pull the required models:

    # Install Ollama (if not already installed)
    curl -fsSL https://ollama.com/install.sh | sh
    
    # Pull the default models
    ollama pull hf.co/unsloth/Qwen3-4B-Instruct-2507-GGUF:Q8_K_XL
    ollama pull hf.co/unsloth/Qwen3-4B-Thinking-2507-GGUF:Q8_K_XL
    
  3. Start Ollama:

    ollama serve
    
  4. Reload Ignis:

    ignis reload
    

Usage

Keyboard Shortcuts

Bind a key in your window manager configuration:

Niri (~/.config/niri/config.kdl):

binds {
    Mod+G { spawn "ignis" "toggle-window" "AISidebar"; }
}

Chat Interface

  • Enter: Send message
  • Shift+Enter: New line in input
  • Click outside: Close sidebar

Reasoning Mode

Toggle the reasoning mode button (🧠) to switch between:

  • Standard Mode: Fast responses using Qwen3-4B-Instruct
  • Reasoning Mode: Detailed thinking process using Qwen3-4B-Thinking

When reasoning mode is enabled, responses include a collapsible thinking section showing the model's internal reasoning process.

Slash Commands

Execute commands by typing them in the chat:

  • /new or /clear - Start a new conversation (archives current)
  • /models - List available Ollama models
  • /model <name> - Switch to a specific model
  • /list - Show archived conversations
  • /resume <id> - Resume an archived conversation
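The command flow above can be sketched as a small dispatcher. This is an illustrative model of how a slash-command registry could work, not the exact `command_processor.py` implementation:

```python
class CommandProcessor:
    """Minimal sketch of a slash-command registry (hypothetical API)."""

    def __init__(self):
        self._commands = {}

    def register_command(self, name, handler):
        self._commands[name] = handler

    def process(self, text):
        """Dispatch '/cmd args'; return None so plain text goes to the model."""
        if not text.startswith("/"):
            return None
        name, _, args = text.partition(" ")
        handler = self._commands.get(name)
        if handler is None:
            return f"Unknown command: {name}"
        return handler(args.strip())


processor = CommandProcessor()
processor.register_command("/model", lambda args: f"Switched to {args}")
```

Anything that does not begin with `/` falls through to normal chat handling, which is why the dispatcher returns `None` rather than an error for plain messages.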

Architecture

aisidebar/
├── __init__.py              # Module entry point
├── aisidebar.py             # RevealerWindow implementation
├── chat_widget.py           # Main chat UI and message handling
├── ollama_client.py         # HTTP client for Ollama REST API
├── ollama_monitor.py        # Availability monitoring with callbacks
├── conversation_manager.py  # Active conversation persistence
├── conversation_archive.py  # Multi-conversation management
├── command_processor.py     # Slash command system
├── reasoning_controller.py  # Reasoning mode state management
├── streaming_handler.py     # Token-by-token streaming display
├── style.css                # GTK4 CSS styling
└── data/
    └── conversations/       # JSON conversation files
        ├── default.json     # Active conversation
        └── archive_*.json   # Archived conversations

Core Components

AISidebar (aisidebar.py)

  • RevealerWindow that slides in from the left
  • Manages window visibility and keyboard focus
  • Integrates with Ignis WindowManager

ChatWidget (chat_widget.py)

  • Complete chat interface with multi-line input
  • Message list with auto-scrolling
  • Background threading for AI requests
  • Command processing and reasoning mode toggle

OllamaClient (ollama_client.py)

  • Direct HTTP calls to Ollama REST API
  • Streaming and non-streaming chat support
  • Model listing with caching
  • Graceful error handling
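Ollama's `/api/chat` endpoint streams newline-delimited JSON chunks, each carrying a `message.content` fragment and a `done` flag. A minimal parser for that format might look like this (the real client reads the lines from the HTTP response body; the sample lines here are stand-ins):

```python
import json

def iter_stream_tokens(lines):
    """Yield content tokens from Ollama-style NDJSON chat chunks."""
    for raw in lines:
        if not raw.strip():
            continue
        chunk = json.loads(raw)
        token = chunk.get("message", {}).get("content", "")
        if token:
            yield token
        if chunk.get("done"):
            break


# Sample chunks shaped like Ollama's streaming /api/chat output
sample = [
    '{"message": {"role": "assistant", "content": "Hel"}, "done": false}',
    '{"message": {"role": "assistant", "content": "lo"}, "done": false}',
    '{"message": {"role": "assistant", "content": ""}, "done": true}',
]
reply = "".join(iter_stream_tokens(sample))
```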

StreamingHandler (streaming_handler.py)

  • Token buffering for smooth UI updates
  • Separate handling for thinking and response content
  • Thread-safe GLib.idle_add integration
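The buffering idea can be illustrated with a plain-Python sketch: tokens accumulate and are flushed to a UI callback in batches. In the real handler that callback would run on the GTK main loop via `GLib.idle_add`; here it is just a function call, and the class name and batch size are hypothetical:

```python
class TokenBuffer:
    """Batch streamed tokens so the UI updates in chunks, not per token."""

    def __init__(self, flush_cb, batch_size=8):
        self._flush_cb = flush_cb  # would be scheduled with GLib.idle_add
        self._batch_size = batch_size
        self._pending = []

    def push(self, token):
        self._pending.append(token)
        if len(self._pending) >= self._batch_size:
            self.flush()

    def flush(self):
        if self._pending:
            self._flush_cb("".join(self._pending))
            self._pending.clear()


chunks = []
buf = TokenBuffer(chunks.append, batch_size=3)
for tok in ["Str", "eam", "ing", " ro", "cks"]:
    buf.push(tok)
buf.flush()  # flush the final partial batch
```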

ConversationManager (conversation_manager.py)

  • JSON-based persistence with atomic writes
  • Message validation and timestamp tracking
  • Auto-trimming to keep recent messages
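The atomic-write pattern mentioned above is the standard write-to-temp-then-rename dance; a sketch (paths and data shape are illustrative):

```python
import json
import os
import tempfile

def atomic_write_json(path, data):
    """Write JSON to a temp file, then os.replace() it into place, so a
    crash mid-write never leaves a truncated conversation file."""
    dirname = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f, indent=2)
        os.replace(tmp_path, path)  # atomic on POSIX filesystems
    except BaseException:
        os.unlink(tmp_path)
        raise


# Demo against a throwaway directory
workdir = tempfile.mkdtemp()
conv_path = os.path.join(workdir, "default.json")
atomic_write_json(conv_path, {"messages": [{"role": "user", "content": "hi"}]})
with open(conv_path) as f:
    loaded = json.load(f)
```

`os.replace` is atomic on the same filesystem, so readers always see either the old file or the complete new one.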

ConversationArchive (conversation_archive.py)

  • Multi-conversation support with unique IDs
  • Metadata extraction for conversation lists
  • Archive and resume functionality
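The metadata-extraction step can be sketched as pulling a preview and a count from a conversation dict. The field names and the `archive_*` id format mirror the file layout shown above, but the exact schema is an assumption:

```python
def archive_metadata(conversation):
    """Build list-view metadata for /list from a conversation dict
    (illustrative shape; the real archive format may differ)."""
    messages = conversation.get("messages", [])
    first_user = next(
        (m["content"] for m in messages if m.get("role") == "user"), "")
    return {
        "id": conversation.get("id", ""),
        "preview": first_user[:60],
        "message_count": len(messages),
    }


meta = archive_metadata({
    "id": "archive_1730000000",
    "messages": [
        {"role": "user", "content": "Explain GTK4 revealers"},
        {"role": "assistant", "content": "Sure..."},
    ],
})
```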

OllamaMonitor (ollama_monitor.py)

  • Periodic availability checking (30s interval)
  • Callback-based state change notifications
  • Non-blocking GLib timeout integration
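The callback-on-state-change behaviour can be modelled without GLib: fire the callback only when availability flips, not on every check. In the real monitor, `update()` would be driven by a `GLib.timeout_add` timer every 30 seconds; the class below is a hypothetical sketch:

```python
class AvailabilityMonitor:
    """Track Ollama availability and notify only on state changes."""

    def __init__(self, on_change):
        self._on_change = on_change
        self._available = None  # unknown until the first check

    def update(self, available):
        """Record one check result; fire the callback only on transitions."""
        if available != self._available:
            self._available = available
            self._on_change(available)


events = []
monitor = AvailabilityMonitor(events.append)
for result in [True, True, False, False, True]:
    monitor.update(result)
```

Deduplicating transitions this way is what keeps the UI from re-showing "Ollama not running" on every 30-second poll.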

CommandProcessor (command_processor.py)

  • Extensible slash command system
  • Command registration and execution
  • Structured result handling

ReasoningController (reasoning_controller.py)

  • Reasoning mode state persistence
  • Model selection based on mode
  • Model-specific parameter optimization
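Mode-based model selection can be sketched as a lookup table. The model tags match the ones pulled during installation, but the temperature and top_p values here are placeholders, not the controller's actual tuning:

```python
# Mode -> model mapping; option values are illustrative placeholders
MODES = {
    False: {  # standard mode
        "model": "hf.co/unsloth/Qwen3-4B-Instruct-2507-GGUF:Q8_K_XL",
        "options": {"temperature": 0.7, "top_p": 0.8},
    },
    True: {  # reasoning mode
        "model": "hf.co/unsloth/Qwen3-4B-Thinking-2507-GGUF:Q8_K_XL",
        "options": {"temperature": 0.6, "top_p": 0.95},
    },
}

def select_model(reasoning_enabled):
    """Return the model and sampling options for the current mode."""
    return MODES[reasoning_enabled]


choice = select_model(True)
```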

Configuration

Data Storage

Conversations are stored in:

~/.config/ignis/modules/aisidebar/data/conversations/

User preferences are stored in:

~/.config/aisidebar/preferences.json

Auto-Archiving

The sidebar automatically archives old messages when the conversation exceeds 20 messages, keeping only the most recent ones in the active conversation. Archived messages are saved with timestamps for later retrieval.
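The trimming step described above amounts to splitting the message list at a threshold; a sketch with the 20-message limit (the function name is illustrative):

```python
def trim_conversation(messages, keep=20):
    """Split messages into (to_archive, active), keeping only the most
    recent `keep` messages in the active conversation."""
    if len(messages) <= keep:
        return [], list(messages)
    return list(messages[:-keep]), list(messages[-keep:])


msgs = [{"role": "user", "content": f"msg {i}"} for i in range(25)]
archived, active = trim_conversation(msgs)
```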

Customization

Change sidebar width (edit aisidebar.py):

self.content_box.width_request = 400  # Default: 400px

Change animation speed (edit aisidebar.py):

transition_duration=300,  # Default: 300ms

Modify styling (edit style.css):

  • TextView background and colors
  • Thinking box appearance
  • Message bubble styling

Troubleshooting

Sidebar doesn't appear:

# Reload Ignis
ignis reload

# Check for errors
ignis

No AI responses:

# Verify Ollama is running
curl http://127.0.0.1:11434/api/tags

# Check installed models
ollama list

# Start Ollama if not running
ollama serve

Ollama connection issues:

  • The sidebar will display "Ollama not running" in the model label
  • Input will be disabled until connection is restored
  • Automatic monitoring checks every 30 seconds
  • A notification appears when connection is restored

Reasoning mode not working:

  • Ensure both models are installed (see Installation)
  • Check that Ollama is running
  • Try toggling reasoning mode off and on

Development

Adding New Commands

Register commands in chat_widget.py:

def _register_commands(self):
    self._command_processor.register_command("/mycommand", self._cmd_my_handler)

def _cmd_my_handler(self, args: str) -> CommandResult:
    # Implementation
    return CommandResult(success=True, message="Done!")

Modifying Model Parameters

Edit reasoning_controller.py to adjust temperature, top_p, and other parameters for each model mode.

Custom Styling

The sidebar uses GTK4 CSS classes:

  • .ai-sidebar - Main container
  • .ai-sidebar-content - Content area
  • .thinking-box - Reasoning display
  • .thinking-header - Collapsible header
  • .thinking-content - Reasoning text

License

This project is provided as-is for use with the Ignis desktop environment.
