feat(aisidebar): implement Ollama availability handling and graceful startup
- Add comprehensive Ollama connection error handling strategy
- Implement OllamaClient with non-blocking initialization and connection checks
- Create OllamaAvailabilityMonitor for periodic Ollama connection tracking
- Update design and requirements to support graceful Ollama unavailability
- Add new project structure for AI sidebar module with initial implementation
- Enhance error handling to prevent application crashes when Ollama is not running
- Prepare for future improvements in AI sidebar interaction and resilience
@@ -329,6 +329,31 @@ class PreferencesState:

## Error Handling

### Ollama Unavailability

- **Startup Without Ollama**: Initialize all components successfully; show a status message in the UI
- **Model List Failure**: Return an empty list; display "Ollama not running" in the model label
- **Chat Request Without Ollama**: Display a friendly message: "Please start Ollama to use AI features"
- **Connection Lost Mid-Stream**: Display the partial response plus reconnection instructions
- **Periodic Availability Check**: Attempt to reconnect every 30s while unavailable (non-blocking)

#### Implementation Strategy

```python
class OllamaClient:
    def __init__(self, host: str | None = None) -> None:
        # Never raise exceptions during initialization.
        # Set _available = False if the connection fails.
        ...

    def list_models(self) -> list[str]:
        # Return an empty list instead of raising on connection failure.
        # Log a warning but don't crash.
        ...

    def chat(self, ...) -> dict[str, str] | None:
        # Return an error-message dict instead of raising:
        # {"role": "assistant", "content": "Ollama unavailable..."}
        ...
```

### Streaming Errors

- **Connection Lost**: Display partial response + error message, allow retry
@@ -422,6 +447,32 @@ class PreferencesState:
- Preferences file is optional; defaults work without it
- Graceful degradation if gtk4-layer-shell is unavailable

### Ollama Availability Detection

Add a periodic checking mechanism to detect when Ollama becomes available:

```python
class OllamaAvailabilityMonitor:
    """Monitors Ollama availability and notifies UI of state changes."""

    def __init__(self, client: OllamaClient, callback: Callable[[bool], None]):
        self._client = client
        self._callback = callback
        self._last_state = False
        self._check_interval = 30  # seconds

    def start_monitoring(self) -> None:
        """Begin periodic availability checks via GLib.timeout_add."""

    def _check_availability(self) -> bool:
        """Check if Ollama is available and notify on state change."""
```
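The interesting part of the monitor is the edge-triggered notification: the callback fires only on transitions, not on every poll. A sketch of that logic with the GLib wiring left out (the "empty model list means unavailable" probe is an assumption; `start_monitoring` would pass `self._check_interval * 1000` and `self._check_availability` to `GLib.timeout_add`, whose `True` return value keeps the timeout source alive):

```python
from collections.abc import Callable


class OllamaAvailabilityMonitor:
    """State-change logic only; periodic scheduling omitted."""

    def __init__(self, client, callback: Callable[[bool], None]) -> None:
        self._client = client
        self._callback = callback
        self._last_state = False
        self._check_interval = 30  # seconds, used by start_monitoring()

    def _check_availability(self) -> bool:
        # Cheap probe: treat an empty model list as "unavailable".
        available = bool(self._client.list_models())
        if available != self._last_state:
            self._last_state = available
            self._callback(available)  # notify the UI only on transitions
        return True  # tell GLib to keep the timeout source running
```

Because the callback is edge-triggered, the UI is not spammed with a status update every 30 seconds while Ollama stays down; it hears only "came up" and "went down".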

Integration in SidebarWindow:

- Initialize the monitor on startup
- Update UI state when availability changes (enable/disable input, update status message)
- Show a notification when Ollama becomes available: "Ollama connected - AI features enabled"
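The three integration steps above reduce to a single handler passed as the monitor's callback. A hypothetical stand-in for SidebarWindow (attribute names like `input_enabled` and `status` are illustrative, not the real widget API):

```python
class SidebarWindowSketch:
    """Stand-in for SidebarWindow showing only the availability callback."""

    def __init__(self) -> None:
        self.input_enabled = False
        self.status = "Ollama not running"

    def on_availability_changed(self, available: bool) -> None:
        # Enable/disable the chat input and refresh the status line.
        self.input_enabled = available
        self.status = (
            "Ollama connected - AI features enabled"
            if available
            else "Please start Ollama to use AI features"
        )
```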

### Future Enhancements

- Command history with up/down arrow navigation