refactor(aisidebar): restructure project and implement reasoning mode toggle

- Reorganize project structure and file locations
- Add ReasoningController to manage model selection and reasoning mode
- Update design and requirements for reasoning mode toggle
- Implement model switching between Qwen3-4B-Instruct and Qwen3-4B-Thinking models
- Remove deprecated files and consolidate project layout
- Add new steering and specification documentation
- Clean up and remove unnecessary files and directories
- Prepare for enhanced AI sidebar functionality with more flexible model handling
Melvin Ragusa
2025-10-26 09:10:31 +01:00
parent 58bd935af0
commit 239242e2fc
73 changed files with 3094 additions and 2348 deletions

.kiro/steering/product.md Normal file

@@ -0,0 +1,23 @@
---
inclusion: always
---
# Product Overview
AI Sidebar is a slide-in chat interface for the Ignis desktop environment that provides local AI assistance through Ollama integration.
## Core Features
- Slide-in sidebar from the left side with smooth animations
- Local AI chat using Ollama models
- Automatic conversation persistence across sessions
- Material Design 3 theming that matches Ignis
- Keyboard shortcut toggle support
- Automatic Ollama availability monitoring with graceful degradation
## User Experience
- Clicking outside the sidebar closes it (same as QuickCenter)
- Conversations are automatically saved to `~/.config/ignis/modules/aisidebar/data/conversations/`
- The UI gracefully handles Ollama being unavailable and notifies users when connectivity is restored
- Default width is 400px to match QuickCenter


@@ -0,0 +1,70 @@
---
inclusion: always
---
# Project Structure
## File Organization
```
aisidebar/
├── __init__.py # Module exports (AISidebar class)
├── aisidebar.py # Main RevealerWindow implementation
├── chat_widget.py # Chat UI widget with message handling
├── ollama_client.py # HTTP client for Ollama REST API
├── ollama_monitor.py # Availability monitoring with callbacks
├── conversation_manager.py # Conversation persistence layer
└── data/
└── conversations/ # JSON conversation files (auto-created)
└── default.json # Default conversation transcript
```
## Module Responsibilities
### `aisidebar.py`
- Main window class extending `widgets.RevealerWindow`
- Handles slide-in animation from left side
- Manages window visibility and keyboard focus
- Integrates with Ignis WindowManager
### `chat_widget.py`
- Complete chat UI implementation
- Message list rendering and scrolling
- Input handling and submission
- Background thread management for AI requests
- Ollama availability monitoring integration
### `ollama_client.py`
- Low-level HTTP client for Ollama API
- Model listing with caching
- Blocking chat API calls
- Connection health checking
- Graceful error handling without exceptions
### `ollama_monitor.py`
- Periodic availability checking (30s interval)
- Callback-based state change notifications
- GLib timeout integration for non-blocking checks
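The periodic check with callback-based notifications could be sketched as below. The class and method names are illustrative, not the module's actual API; the probe is injectable so the transition logic is testable without a running server:

```python
import urllib.request
import urllib.error


class AvailabilityPoller:
    """Tracks Ollama availability; fires a callback only on state changes."""

    def __init__(self, on_change, url="http://127.0.0.1:11434/api/tags"):
        self._on_change = on_change
        self._url = url
        self._available = None  # unknown until the first check

    def check_once(self, probe=None):
        """Probe the server and notify on transitions.

        Returns True so the method can be passed directly to
        GLib.timeout_add_seconds(), which repeats a callback while it
        keeps returning True.
        """
        if probe is None:
            probe = self._default_probe
        available = probe()
        if available != self._available:
            self._available = available
            self._on_change(available)
        return True

    def _default_probe(self):
        try:
            urllib.request.urlopen(self._url, timeout=2)  # health-check timeout
            return True
        except (urllib.error.URLError, OSError):
            return False


# Wiring into the main loop (inside Ignis, where GLib is available):
#   GLib.timeout_add_seconds(30, poller.check_once)
```

Injecting the probe keeps GLib out of the hot path, so only the timer registration touches the main loop.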
### `conversation_manager.py`
- JSON-based conversation persistence
- Atomic file writes for data safety
- Message validation (system/user/assistant roles)
- Timestamp tracking for messages
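The atomic-write-plus-validation pattern described above might look like this minimal sketch (function name and JSON shape are assumptions, not the module's actual interface):

```python
import json
import os
import tempfile

VALID_ROLES = {"system", "user", "assistant"}


def save_conversation(path, messages):
    """Atomically persist a conversation transcript as JSON.

    The data is written to a temporary sibling file first and then
    swapped into place with os.replace(), so a crash mid-write can
    never leave a half-written transcript behind.
    """
    for msg in messages:
        if msg.get("role") not in VALID_ROLES:
            raise ValueError(f"invalid role: {msg.get('role')!r}")
    directory = os.path.dirname(path) or "."
    os.makedirs(directory, exist_ok=True)
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump({"messages": messages}, f, indent=2)
        os.replace(tmp_path, path)  # atomic rename on POSIX filesystems
    except BaseException:
        os.unlink(tmp_path)  # don't leave stray temp files on failure
        raise
```

Creating the temp file in the same directory as the target matters: `os.replace()` is only atomic when both paths live on the same filesystem.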
## Naming Conventions
- Private methods/attributes: `_method_name`, `_attribute_name`
- Widget references: `self._widget_name` (e.g., `self._entry`, `self._message_list`)
- CSS classes: kebab-case (e.g., `ai-sidebar`, `ai-sidebar-content`)
- Constants: UPPER_SNAKE_CASE (e.g., `VALID_ROLES`, `DEFAULT_CONVERSATION_ID`)
## Code Style
- Type hints on function signatures
- Docstrings for classes and public methods
- Dataclasses for structured data (`ConversationState`)
- Context managers for file operations
- Property decorators for computed attributes
- Threading: daemon threads for background work
- Error messages: user-friendly with actionable instructions
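Several of these conventions (dataclasses, type hints, property decorators) combine in a shape like the following. The field and method names are hypothetical, assumed only for illustration of the style:

```python
import time
from dataclasses import dataclass, field


@dataclass
class ConversationState:
    """Structured conversation state (illustrative fields, not the
    module's actual dataclass)."""

    conversation_id: str = "default"
    messages: list = field(default_factory=list)

    def add_message(self, role: str, content: str) -> None:
        # Timestamp tracking per message, as described above
        self.messages.append(
            {"role": role, "content": content, "timestamp": time.time()}
        )

    @property
    def message_count(self) -> int:
        """Computed attribute via the property-decorator convention."""
        return len(self.messages)
```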

.kiro/steering/tech.md Normal file

@@ -0,0 +1,63 @@
---
inclusion: always
---
# Technology Stack
## Framework & Environment
- **Platform**: Ignis desktop environment (Python-based GTK4 framework)
- **Python Version**: 3.10+
- **UI Framework**: GTK4 via Ignis widgets
- **Async/Threading**: GLib for main loop, Python threading for background tasks
## Key Dependencies
- `ignis` - Desktop environment framework providing widgets and window management
- `ollama` - Python package for Ollama API integration
- GTK4 (`gi.repository.GLib`) - UI toolkit and event loop
## Architecture Patterns
### Widget System
- Uses Ignis widget abstractions (`widgets.Box`, `widgets.RevealerWindow`, etc.)
- Material Design 3 styling via CSS classes
- Revealer-based slide animations
### API Communication
- Direct HTTP calls to Ollama REST API (no external HTTP library)
- Uses `urllib.request` for HTTP operations
- Timeout handling: 2s for health checks, 5s for model lists, 120s for chat
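A stdlib-only chat call in this style could look like the sketch below. The function name and error-handling shape are assumptions; the endpoint and payload follow Ollama's documented `/api/chat` contract:

```python
import json
import urllib.error
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434"


def chat(model, messages, timeout=120, url=OLLAMA_URL):
    """Blocking call to Ollama's /api/chat endpoint via urllib.request.

    Returns the assistant reply text, or None on any failure, matching
    the graceful no-exceptions error style described above.
    """
    payload = json.dumps(
        {"model": model, "messages": messages, "stream": False}
    ).encode("utf-8")
    request = urllib.request.Request(
        f"{url}/api/chat",
        data=payload,  # presence of data makes this a POST
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            body = json.load(response)
        return body.get("message", {}).get("content")
    except (urllib.error.URLError, OSError, ValueError):
        return None  # unreachable server, timeout, or malformed JSON
```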
### State Management
- Conversation persistence via JSON files
- Atomic file writes using `tempfile` and `os.replace()`
- In-memory caching for model lists
### Threading Model
- UI operations on GLib main thread
- AI requests in background daemon threads
- `GLib.idle_add()` for thread-safe UI updates
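The daemon-thread-plus-`idle_add` round trip can be sketched as follows. The helper name and the injectable `schedule` parameter are illustrative; in the real module the scheduler is `GLib.idle_add`, which runs the callback on the GTK main thread:

```python
import threading


def run_in_background(work, on_done, schedule=None):
    """Run `work` on a daemon thread; deliver its result to the UI
    thread via `schedule` (GLib.idle_add in production)."""
    if schedule is None:
        from gi.repository import GLib  # deferred so tests can inject a stub
        schedule = GLib.idle_add

    def worker():
        result = work()
        # idle_add callbacks must return False to run exactly once
        schedule(lambda: (on_done(result), False)[1])

    thread = threading.Thread(target=worker, daemon=True)
    thread.start()
    return thread
```

Daemon threads ensure a pending AI request never blocks Ignis from shutting down, while `idle_add` keeps all widget mutation on the main loop.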
### Error Handling
- Graceful degradation when Ollama is unavailable
- Availability monitoring with 30-second polling interval
- User-facing error messages instead of exceptions
## Common Commands
Since this is an Ignis module, there are no build/test commands. The module is loaded directly by Ignis:
```bash
# Reload Ignis to apply changes
ignis reload
# Run Ignis with console output for debugging
ignis
# Check Ollama status
curl http://127.0.0.1:11434/api/tags
# List installed Ollama models
ollama list
```