feat(aisidebar): implement Ollama availability handling and graceful startup
- Add comprehensive Ollama connection error handling strategy
- Implement OllamaClient with non-blocking initialization and connection checks
- Create OllamaAvailabilityMonitor for periodic Ollama connection tracking
- Update design and requirements to support graceful Ollama unavailability
- Add new project structure for AI sidebar module with initial implementation
- Enhance error handling to prevent application crashes when Ollama is not running
- Prepare for future improvements in AI sidebar interaction and resilience
@@ -329,6 +329,31 @@ class PreferencesState:
## Error Handling

### Ollama Unavailability

- **Startup Without Ollama**: Initialize all components successfully, show status message in UI
- **Model List Failure**: Return empty list, display "Ollama not running" in model label
- **Chat Request Without Ollama**: Display friendly message: "Please start Ollama to use AI features"
- **Connection Lost Mid-Stream**: Display partial response + reconnection instructions
- **Periodic Availability Check**: Attempt to reconnect every 30s when unavailable (non-blocking)

#### Implementation Strategy

```python
class OllamaClient:
    def __init__(self, host: str | None = None) -> None:
        # Never raise exceptions during initialization
        # Set _available = False if connection fails
        ...

    def list_models(self) -> list[str]:
        # Return empty list instead of raising on connection failure
        # Log warning but don't crash
        ...

    def chat(self, ...) -> dict[str, str] | None:
        # Return error message dict instead of raising
        # {"role": "assistant", "content": "Ollama unavailable..."}
        ...
```

### Streaming Errors

- **Connection Lost**: Display partial response + error message, allow retry

@@ -422,6 +447,32 @@ class PreferencesState:

- Preferences file is optional; defaults work without it
- Graceful degradation if gtk4-layer-shell unavailable

### Ollama Availability Detection

Add a periodic checking mechanism to detect when Ollama becomes available:
```python
class OllamaAvailabilityMonitor:
    """Monitors Ollama availability and notifies the UI of state changes."""

    def __init__(self, client: OllamaClient, callback: Callable[[bool], None]) -> None:
        self._client = client
        self._callback = callback
        self._last_state = False
        self._check_interval = 30  # seconds

    def start_monitoring(self) -> None:
        """Begin periodic availability checks via GLib.timeout_add."""
        ...

    def _check_availability(self) -> bool:
        """Check if Ollama is available and notify on state change."""
        ...
```

Integration in SidebarWindow:

- Initialize monitor on startup
- Update UI state when availability changes (enable/disable input, update status message)
- Show notification when Ollama becomes available: "Ollama connected - AI features enabled"
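
The state-change logic can be sketched without GTK, assuming the client exposes a `check_connection() -> bool` probe (an assumption; any availability check works). Only a transition triggers the callback, so the UI is not notified on every poll; in the real widget, `start_monitoring` would schedule `_check_availability` with `GLib.timeout_add_seconds`:

```python
from typing import Callable, Protocol


class SupportsCheck(Protocol):
    """Anything with a check_connection() -> bool method (e.g. the Ollama client)."""

    def check_connection(self) -> bool: ...


class OllamaAvailabilityMonitor:
    """Invokes the callback only when availability flips between up and down."""

    def __init__(self, client: SupportsCheck, callback: Callable[[bool], None]) -> None:
        self._client = client
        self._callback = callback
        self._last_state = False
        self._check_interval = 30  # seconds

    def start_monitoring(self) -> None:
        # In the real widget this would be:
        #   GLib.timeout_add_seconds(self._check_interval, self._check_availability)
        # which re-runs the check for as long as it returns True.
        raise NotImplementedError("wire up to the GTK main loop")

    def _check_availability(self) -> bool:
        """Run one check; notify the callback only on a state change."""
        current = self._client.check_connection()
        if current != self._last_state:
            self._last_state = current
            self._callback(current)
        return True  # keep the GLib timeout alive
```

Driving `_check_availability` by hand with a fake client makes the edge-triggered behavior easy to unit-test without a running Ollama.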

### Future Enhancements

- Command history with up/down arrow navigation

@@ -71,3 +71,15 @@ This document outlines the requirements for enhancing the AI sidebar module for
3. WHEN reasoning mode is disabled, THE AI Sidebar SHALL request and display only the final answer without intermediate reasoning
4. THE AI Sidebar SHALL persist the reasoning mode preference across conversation sessions
5. THE AI Sidebar SHALL visually distinguish reasoning content from final answer content when reasoning mode is enabled

### Requirement 6: Graceful Ollama Unavailability Handling

**User Story:** As a user, I want the AI Sidebar to start and function even when Ollama is not running, so that Ignis can launch successfully and I can start Ollama when I'm ready to use the AI features.

#### Acceptance Criteria

1. WHEN Ollama is not running at startup, THE AI Sidebar SHALL initialize successfully without blocking Ignis startup
2. WHEN Ollama is unavailable, THE AI Sidebar SHALL display a clear message instructing the user to start Ollama
3. WHEN the user attempts to send a message while Ollama is unavailable, THE AI Sidebar SHALL display a helpful error message instead of crashing
4. WHEN Ollama becomes available after startup, THE AI Sidebar SHALL detect the availability and enable chat functionality without requiring a restart
5. THE AI Sidebar SHALL handle Ollama connection failures gracefully during model listing, switching, and chat operations
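
Criterion 3 amounts to a guard in the send path. A minimal sketch, assuming a hypothetical `send_message` helper and a client exposing `check_connection()` and `chat()` methods (names are illustrative, not the module's actual API):

```python
def send_message(client, text: str) -> dict[str, str]:
    """Route a user message, falling back to a friendly notice when Ollama is down."""
    # `client` is assumed to expose check_connection() -> bool and
    # chat(model, messages) -> dict; both names are assumptions.
    if not client.check_connection():
        # Criterion 3: helpful message instead of an exception
        return {"role": "system", "content": "Please start Ollama to use AI features"}
    return client.chat("llama3", [{"role": "user", "content": text}])
```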
@@ -97,7 +97,37 @@
- Update message rendering to handle reasoning metadata
- _Requirements: 5.5_

- [ ] 8. Add error handling and edge cases
- [-] 8. Implement graceful Ollama unavailability handling
- [ ] 8.1 Update OllamaClient initialization
  - Modify `__init__()` to never raise exceptions during initialization
  - Add connection check that sets internal availability flag
  - Update `list_models()` to return empty list instead of raising on connection failure
  - Update `chat()` and `stream_chat()` to return error messages instead of raising
  - _Requirements: 6.1, 6.3, 6.5_

- [ ] 8.2 Create OllamaAvailabilityMonitor
  - Create `ollama_monitor.py` with OllamaAvailabilityMonitor class
  - Implement periodic availability checking using GLib.timeout_add (30s interval)
  - Add callback mechanism to notify UI of state changes
  - Ensure checks are non-blocking and don't impact UI responsiveness
  - _Requirements: 6.4_

- [ ] 8.3 Update SidebarWindow for Ollama unavailability
  - Initialize OllamaAvailabilityMonitor in SidebarWindow
  - Display "Ollama not running" status message when unavailable at startup
  - Update model label to show connection status
  - Disable input field when Ollama unavailable, show helpful message
  - Add callback to re-enable features when Ollama becomes available
  - _Requirements: 6.1, 6.2, 6.4_

- [ ] 8.4 Add user-friendly error messages
  - Display clear instructions when user tries to chat without Ollama
  - Show notification when Ollama connection is restored
  - Update all command handlers to check Ollama availability
  - Provide actionable error messages (e.g., "Start Ollama with: ollama serve")
  - _Requirements: 6.2, 6.3_
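
The actionable messages in task 8.4 could be centralized in one lookup so every command handler reports failures consistently; a sketch with a hypothetical `ollama_error_message` helper (the helper name and operation keys are assumptions, only the `ollama serve` command is real):

```python
def ollama_error_message(operation: str) -> str:
    """Map a failed Ollama operation to actionable, user-facing text."""
    hints = {
        "chat": "Please start Ollama to use AI features (run: ollama serve)",
        "list_models": "Ollama not running - start it with: ollama serve",
        "switch_model": "Cannot switch models while Ollama is offline - run: ollama serve",
    }
    # Fall back to a generic but still actionable message for unknown operations
    return hints.get(operation, "Ollama is unavailable - start it with: ollama serve")
```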

- [ ] 9. Add error handling and edge cases
  - Implement stream timeout handling (60s limit) with cancellation
  - Add connection error recovery for streaming failures
  - Handle command execution during active streaming
@@ -105,10 +135,11 @@
  - Implement graceful degradation for missing preferences file
  - _Requirements: 1.4, 3.5, 4.3, 4.4_
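
The 60-second stream timeout in task 9 could be approximated by wrapping the chunk iterator with a wall-clock deadline. This is a sketch only; the real implementation would also need to cancel the underlying HTTP request and keep the partial response on screen:

```python
import time
from collections.abc import Iterable, Iterator


def stream_with_timeout(chunks: Iterable[str], limit: float = 60.0) -> Iterator[str]:
    """Yield chunks until the stream finishes or the total time limit is hit."""
    deadline = time.monotonic() + limit
    for chunk in chunks:
        if time.monotonic() > deadline:
            # Cancel the stream but surface a note so the UI can show
            # the partial response already rendered (task 9 behavior)
            yield "\n[stream timed out - partial response shown]"
            return
        yield chunk
```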

- [ ] 9. Polish and integration
- [ ] 10. Polish and integration
  - Add CSS styling for system messages, reasoning content, and streaming indicator
  - Implement `/help` command to display available commands
  - Add visual feedback for command execution (loading states)
  - Ensure all UI updates maintain smooth scrolling behavior
  - Test keyboard focus management across all new widgets
  - _Requirements: 1.3, 2.3, 3.5, 5.5_
  - Add status indicator in UI showing Ollama connection state
  - _Requirements: 1.3, 2.3, 3.5, 5.5, 6.2_