feat(aisidebar): implement Ollama availability handling and graceful startup

- Add comprehensive Ollama connection error handling strategy
- Implement OllamaClient with non-blocking initialization and connection checks
- Create OllamaAvailabilityMonitor for periodic Ollama connection tracking (both sketched below)
- Update design and requirements to support graceful Ollama unavailability
- Add new project structure for AI sidebar module with initial implementation
- Enhance error handling to prevent application crashes when Ollama is not running
- Prepare for future improvements in AI sidebar interaction and resilience
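
The sketch below is a rough illustration of the non-blocking startup path described above, assuming Ollama's stock HTTP endpoint at http://localhost:11434 and its /api/tags model-listing route; the class and method names are illustrative and do not necessarily match the module's actual API.

```python
import threading
import urllib.error
import urllib.request

# Assumed default: Ollama's stock HTTP listen address.
OLLAMA_URL = "http://localhost:11434"


class OllamaClient:
    """Illustrative client: construction never blocks and never raises."""

    def __init__(self, base_url: str = OLLAMA_URL) -> None:
        self.base_url = base_url
        self.available = False
        # Probe in the background so Ignis startup is never delayed
        # by a dead or slow socket.
        threading.Thread(target=self._initial_probe, daemon=True).start()

    def _initial_probe(self) -> None:
        self.available = self.check_availability()

    def check_availability(self, timeout: float = 1.0) -> bool:
        """Return True if Ollama answers /api/tags, False otherwise (never raises)."""
        try:
            with urllib.request.urlopen(f"{self.base_url}/api/tags", timeout=timeout):
                return True
        except (urllib.error.URLError, OSError):
            return False
```

Because the probe runs off the main thread and swallows connection errors, a missing Ollama instance costs one short timeout rather than a crash or a hung startup.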
Author: Melvin Ragusa
Date: 2025-10-25 22:28:54 +02:00
parent 935b800221
commit 58bd935af0
11 changed files with 895 additions and 11 deletions


@@ -71,3 +71,15 @@ This document outlines the requirements for enhancing the AI sidebar module for
3. WHEN reasoning mode is disabled, THE AI Sidebar SHALL request and display only the final answer without intermediate reasoning
4. THE AI Sidebar SHALL persist the reasoning mode preference across conversation sessions
5. THE AI Sidebar SHALL visually distinguish reasoning content from final answer content when reasoning mode is enabled

### Requirement 6: Graceful Ollama Unavailability Handling

**User Story:** As a user, I want the AI Sidebar to start and function even when Ollama is not running, so that Ignis can launch successfully and I can start Ollama when I'm ready to use the AI features.

#### Acceptance Criteria

1. WHEN Ollama is not running at startup, THE AI Sidebar SHALL initialize successfully without blocking Ignis startup
2. WHEN Ollama is unavailable, THE AI Sidebar SHALL display a clear message instructing the user to start Ollama
3. WHEN the user attempts to send a message while Ollama is unavailable, THE AI Sidebar SHALL display a helpful error message instead of crashing
4. WHEN Ollama becomes available after startup, THE AI Sidebar SHALL detect the availability and enable chat functionality without requiring a restart
5. THE AI Sidebar SHALL handle Ollama connection failures gracefully during model listing, switching, and chat operations
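
One way to satisfy criteria 1 through 4 is a lightweight polling monitor that only reports state transitions. The sketch below is an assumption-heavy illustration rather than the OllamaAvailabilityMonitor shipped in this commit: the five-second interval and the callback shapes are invented, and it plugs into whichever availability check and UI hook the sidebar actually exposes.

```python
import threading
from typing import Callable, Optional


class OllamaAvailabilityMonitor:
    """Illustrative monitor: polls an availability check and reports state changes."""

    def __init__(
        self,
        check: Callable[[], bool],          # e.g. OllamaClient.check_availability
        on_change: Callable[[bool], None],  # e.g. toggle the chat input / hint banner
        interval: float = 5.0,              # polling period in seconds (assumed)
    ) -> None:
        self._check = check
        self._on_change = on_change
        self._interval = interval
        self._available: Optional[bool] = None
        self._stop = threading.Event()

    def start(self) -> None:
        threading.Thread(target=self._loop, daemon=True).start()

    def stop(self) -> None:
        self._stop.set()

    def _loop(self) -> None:
        # Event.wait doubles as a sleep that can be interrupted by stop().
        while not self._stop.wait(self._interval):
            available = self._check()
            if available != self._available:  # notify only on transitions
                self._available = available
                self._on_change(available)
```

In a GTK-based shell such as Ignis, the same polling would more naturally be driven by GLib.timeout_add on the main loop; the plain thread here only keeps the sketch self-contained.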