feat(aisidebar): implement Ollama availability handling and graceful startup
- Add comprehensive Ollama connection error handling strategy
- Implement OllamaClient with non-blocking initialization and connection checks
- Create OllamaAvailabilityMonitor for periodic Ollama connection tracking
- Update design and requirements to support graceful Ollama unavailability
- Add new project structure for AI sidebar module with initial implementation
- Enhance error handling to prevent application crashes when Ollama is not running
- Prepare for future improvements in AI sidebar interaction and resilience
@@ -329,6 +329,31 @@ class PreferencesState:
## Error Handling

### Ollama Unavailability

- **Startup Without Ollama**: Initialize all components successfully, show status message in UI
- **Model List Failure**: Return empty list, display "Ollama not running" in model label
- **Chat Request Without Ollama**: Display friendly message: "Please start Ollama to use AI features"
- **Connection Lost Mid-Stream**: Display partial response + reconnection instructions
- **Periodic Availability Check**: Attempt to reconnect every 30s when unavailable (non-blocking)

#### Implementation Strategy

```python
class OllamaClient:
    def __init__(self, host: str | None = None) -> None:
        # Never raise exceptions during initialization
        # Set _available = False if connection fails

    def list_models(self) -> list[str]:
        # Return empty list instead of raising on connection failure
        # Log warning but don't crash

    def chat(self, ...) -> dict[str, str] | None:
        # Return error message dict instead of raising
        # {"role": "assistant", "content": "Ollama unavailable..."}
```
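To make the intent concrete, here is a small caller-side sketch (illustrative only; it assumes nothing beyond the non-raising `OllamaClient` contract outlined above) showing how the UI layer degrades instead of crashing:

```python
# Sketch: assumes the OllamaClient behaviour described in the strategy above.
client = OllamaClient()  # never raises, even with Ollama stopped

# An empty model list signals "Ollama not running" without an exception.
models = client.list_models()
model_label = models[0] if models else "Ollama not running"

# chat() hands back an error dict instead of raising, so the caller can
# simply render whatever comes back.
history = [{"role": "user", "content": "Hello"}]
if models:
    reply = client.chat(model=models[0], messages=history)
else:
    reply = {"role": "assistant", "content": "Please start Ollama to use AI features"}
```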
### Streaming Errors

- **Connection Lost**: Display partial response + error message, allow retry
@@ -422,6 +447,32 @@ class PreferencesState:
- Preferences file is optional; defaults work without it
- Graceful degradation if gtk4-layer-shell unavailable

### Ollama Availability Detection

Add a periodic checking mechanism to detect when Ollama becomes available:

```python
class OllamaAvailabilityMonitor:
    """Monitors Ollama availability and notifies UI of state changes."""

    def __init__(self, client: OllamaClient, callback: Callable[[bool], None]):
        self._client = client
        self._callback = callback
        self._last_state = False
        self._check_interval = 30  # seconds

    def start_monitoring(self) -> None:
        """Begin periodic availability checks via GLib.timeout_add."""

    def _check_availability(self) -> bool:
        """Check if Ollama is available and notify on state change."""
```

Integration in SidebarWindow:

- Initialize monitor on startup
- Update UI state when availability changes (enable/disable input, update status message)
- Show notification when Ollama becomes available: "Ollama connected - AI features enabled"
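A minimal sketch of how those two stubs could be filled in, assuming PyGObject's `GLib.timeout_add_seconds` for the timer and the `OllamaClient.is_available` check from this design (the exact wiring is illustrative, not final):

```python
from gi.repository import GLib

# Sketch of the OllamaAvailabilityMonitor method bodies (assumed wiring).
def start_monitoring(self) -> None:
    """Begin periodic availability checks via GLib.timeout_add_seconds."""
    GLib.timeout_add_seconds(self._check_interval, self._check_availability)

def _check_availability(self) -> bool:
    """Check if Ollama is available and notify on state change."""
    current = self._client.is_available
    if current != self._last_state:
        self._last_state = current
        self._callback(current)  # e.g. enable input and update the status label
    return True  # returning True keeps the GLib timeout running
```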
### Future Enhancements

- Command history with up/down arrow navigation
@@ -71,3 +71,15 @@ This document outlines the requirements for enhancing the AI sidebar module for
3. WHEN reasoning mode is disabled, THE AI Sidebar SHALL request and display only the final answer without intermediate reasoning
4. THE AI Sidebar SHALL persist the reasoning mode preference across conversation sessions
5. THE AI Sidebar SHALL visually distinguish reasoning content from final answer content when reasoning mode is enabled

### Requirement 6: Graceful Ollama Unavailability Handling

**User Story:** As a user, I want the AI Sidebar to start and function even when Ollama is not running, so that Ignis can launch successfully and I can start Ollama when I'm ready to use the AI features.

#### Acceptance Criteria

1. WHEN Ollama is not running at startup, THE AI Sidebar SHALL initialize successfully without blocking Ignis startup
2. WHEN Ollama is unavailable, THE AI Sidebar SHALL display a clear message instructing the user to start Ollama
3. WHEN the user attempts to send a message while Ollama is unavailable, THE AI Sidebar SHALL display a helpful error message instead of crashing
4. WHEN Ollama becomes available after startup, THE AI Sidebar SHALL detect the availability and enable chat functionality without requiring a restart
5. THE AI Sidebar SHALL handle Ollama connection failures gracefully during model listing, switching, and chat operations
@@ -97,7 +97,37 @@
  - Update message rendering to handle reasoning metadata
  - _Requirements: 5.5_

- [-] 8. Implement graceful Ollama unavailability handling

- [ ] 8.1 Update OllamaClient initialization
  - Modify `__init__()` to never raise exceptions during initialization
  - Add connection check that sets internal availability flag
  - Update `list_models()` to return empty list instead of raising on connection failure
  - Update `chat()` and `stream_chat()` to return error messages instead of raising
  - _Requirements: 6.1, 6.3, 6.5_

- [ ] 8.2 Create OllamaAvailabilityMonitor
  - Create `ollama_monitor.py` with OllamaAvailabilityMonitor class
  - Implement periodic availability checking using GLib.timeout_add (30s interval)
  - Add callback mechanism to notify UI of state changes
  - Ensure checks are non-blocking and don't impact UI responsiveness
  - _Requirements: 6.4_

- [ ] 8.3 Update SidebarWindow for Ollama unavailability
  - Initialize OllamaAvailabilityMonitor in SidebarWindow
  - Display "Ollama not running" status message when unavailable at startup
  - Update model label to show connection status
  - Disable input field when Ollama unavailable, show helpful message
  - Add callback to re-enable features when Ollama becomes available (see the sketch after this task list)
  - _Requirements: 6.1, 6.2, 6.4_

- [ ] 8.4 Add user-friendly error messages
  - Display clear instructions when user tries to chat without Ollama
  - Show notification when Ollama connection is restored
  - Update all command handlers to check Ollama availability
  - Provide actionable error messages (e.g., "Start Ollama with: ollama serve")
  - _Requirements: 6.2, 6.3_

- [ ] 9. Add error handling and edge cases
  - Implement stream timeout handling (60s limit) with cancellation
  - Add connection error recovery for streaming failures
  - Handle command execution during active streaming
@@ -105,10 +135,11 @@
  - Implement graceful degradation for missing preferences file
  - _Requirements: 1.4, 3.5, 4.3, 4.4_

- [ ] 10. Polish and integration
  - Add CSS styling for system messages, reasoning content, and streaming indicator
  - Implement `/help` command to display available commands
  - Add visual feedback for command execution (loading states)
  - Ensure all UI updates maintain smooth scrolling behavior
  - Test keyboard focus management across all new widgets
  - Add status indicator in UI showing Ollama connection state
  - _Requirements: 1.3, 2.3, 3.5, 5.5, 6.2_
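As a rough illustration of what task 8.3's callback could look like, here is a hedged sketch; the names below (`_on_ollama_availability_changed`, `_status_label`) are placeholders introduced for this example and are not code from this commit:

```python
# Sketch only: hypothetical SidebarWindow callback for task 8.3.
def _on_ollama_availability_changed(self, available: bool) -> None:
    """React to the OllamaAvailabilityMonitor state-change callback."""
    self._entry.set_sensitive(available)        # enable/disable the input field
    self._send_button.set_sensitive(available)
    if available:
        self._status_label.label = "Ollama connected - AI features enabled"
    else:
        self._status_label.label = "Ollama not running. Start it with: ollama serve"
```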
aisidebar/README.md (new file, 124 lines)
@@ -0,0 +1,124 @@
## AI Sidebar for Ignis

A sleek AI chat sidebar that integrates with your Ignis desktop, sliding in from the left side with Ollama AI integration.

### Features

- **Slide-in Animation**: Smoothly slides in from the left side (opposite of QuickCenter)
- **Ollama Integration**: Chat with local AI models via Ollama
- **Conversation Persistence**: Your conversations are automatically saved and restored
- **Material Design 3**: Matches your existing Ignis theme perfectly
- **Keyboard Toggle**: Bind a key to toggle the sidebar visibility

### How to Use

#### Open/Close the Sidebar

You can toggle the sidebar using:

1. **Python/Script**: Call `window_manager.toggle_window("AISidebar")`
2. **Keyboard Shortcut**: Add a binding in your window manager config
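For example, from Python running inside Ignis (a minimal sketch, assuming the same `WindowManager` API this module uses internally):

```python
from ignis.window_manager import WindowManager

# Toggle the sidebar programmatically (same call the module itself uses)
WindowManager.get_default().toggle_window("AISidebar")
```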
#### Setting up a Keyboard Shortcut

For **Niri**, add this to your `~/.config/niri/config.kdl`:

```kdl
binds {
    // ... your other bindings

    // Toggle AI Sidebar with Super+A (or any key you prefer)
    Mod+A { spawn "ignis" "run" "ignis.window_manager.WindowManager.get_default().toggle_window('AISidebar')"; }
}
```

For **Hyprland**, add this to your `~/.config/hypr/hyprland.conf`:

```conf
# Toggle AI Sidebar with Super+A
bind = SUPER, A, exec, ignis run "ignis.window_manager.WindowManager.get_default().toggle_window('AISidebar')"
```

For **Sway**, add this to your `~/.config/sway/config`:

```
# Toggle AI Sidebar with Super+A
bindsym $mod+A exec ignis run "ignis.window_manager.WindowManager.get_default().toggle_window('AISidebar')"
```

### Requirements

- **Ignis** desktop environment
- **Python 3.10+**
- **Ollama** with at least one model installed
- **ollama Python package**: `pip install ollama`

### Configuration

The sidebar will automatically:

- Detect your default Ollama model
- Store conversations in `~/.config/ignis/modules/aisidebar/data/conversations/`
- Apply your current Ignis theme colors

### Customization

#### Change Width

Edit `aisidebar.py` line 19:

```python
self.content_box.width_request = 400  # Change to desired width
```

#### Change Animation Speed

Edit `aisidebar.py` line 24:

```python
transition_duration=300,  # Change to desired milliseconds
```

#### Custom CSS Styling

Edit `~/.config/ignis/styles/aisidebar.scss` to customize:

- Colors (uses Material Design 3 color tokens)
- Border radius
- Padding/margins
- Message bubble styling

### Troubleshooting

**Sidebar doesn't appear:**

- Restart Ignis: `ignis reload`
- Check Ollama is running: `curl http://127.0.0.1:11434/api/tags`
- Check console for errors by running `ignis` in a terminal

**No AI responses:**

- Ensure Ollama is running
- Ensure the `ollama` Python package is installed in Ignis's Python environment
- Check that you have at least one model: `ollama list`

**CSS not applying:**

- Restart Ignis: `ignis reload`
- Check SCSS compilation: look for errors in the Ignis console output

### Architecture

```
~/.config/ignis/modules/aisidebar/
├── __init__.py              # Module exports
├── aisidebar.py             # Main RevealerWindow class
├── chat_widget.py           # Chat UI widget
├── ollama_client.py         # Ollama API wrapper
├── conversation_manager.py  # Conversation persistence
└── data/
    └── conversations/       # Saved conversations (auto-created)
```

### Visual Design

The AI Sidebar follows the same visual language as QuickCenter:

- Material Design 3 color system
- 20px border radius on container
- Surface elevation with shadows
- Smooth slide-in transitions
- Translucent overlay backdrop

Clicking outside the sidebar will close it (same as QuickCenter behavior).
aisidebar/__init__.py (new file, 3 lines)
@@ -0,0 +1,3 @@
from .aisidebar import AISidebar

__all__ = ["AISidebar"]
aisidebar/aisidebar.py (new file, 73 lines)
@@ -0,0 +1,73 @@
from ignis import widgets
from ignis.window_manager import WindowManager
from ignis.services.niri import NiriService

from .chat_widget import ChatWidget

window_manager = WindowManager.get_default()


class AISidebar(widgets.RevealerWindow):
    """AI Chat Sidebar that slides in from the left side"""

    def __init__(self):
        # Create chat interface
        self.chat_widget = ChatWidget()

        # Content box - 400px wide to match QuickCenter
        self.content_box = widgets.Box(
            vertical=True,
            spacing=0,
            hexpand=False,
            css_classes=["ai-sidebar"],
            child=[self.chat_widget],
        )
        self.content_box.width_request = 400
        self.content_box.set_halign("start")  # Align to left side

        # Revealer for slide animation
        revealer = widgets.Revealer(
            child=self.content_box,
            transition_duration=300,
            transition_type="slide_right",  # Slide in from left
            halign="start",  # Align revealer to left
        )

        # Close button overlay (click outside to close)
        close_button = widgets.Button(
            vexpand=True,
            hexpand=True,
            can_focus=False,
            on_click=lambda x: window_manager.close_window("AISidebar"),
        )

        main_overlay = widgets.Overlay(
            css_classes=["popup-close"],
            child=close_button,
            overlays=[revealer],
        )

        super().__init__(
            revealer=revealer,
            child=main_overlay,
            css_classes=["popup-close"],
            hide_on_close=True,
            visible=False,
            namespace="AISidebar",
            popup=True,
            layer="overlay",  # Same as QuickCenter
            kb_mode="exclusive",  # Same as QuickCenter
            anchor=["left", "right", "top", "bottom"],  # Anchor to ALL edges like QuickCenter
        )

        self.window_manager = window_manager
        self.revealer = revealer
        self.niri = NiriService.get_default()

        self.connect("notify::visible", self._toggle_revealer)

    def _toggle_revealer(self, *_):
        """Toggle revealer when window visibility changes"""
        self.revealer.reveal_child = self.visible
        if self.visible:
            # Focus on input when opened
            self.chat_widget.focus_input()
aisidebar/chat_widget.py (new file, 192 lines)
@@ -0,0 +1,192 @@
import threading

from ignis import widgets, app
from gi.repository import GLib

from .ollama_client import OllamaClient
from .conversation_manager import ConversationManager


class ChatWidget(widgets.Box):
    """Chat interface widget with Ollama integration"""

    def __init__(self):
        self._conversation_manager = ConversationManager()
        self._ollama_client = OllamaClient()
        self._current_model = self._ollama_client.default_model

        # Header with title and model
        header_title = widgets.Label(
            label="AI Sidebar",
            halign="start",
            css_classes=["title-2"],
        )

        model_name = self._current_model or "No local model detected"

        self._model_label = widgets.Label(
            label=f"Model: {model_name}",
            halign="start",
            css_classes=["dim-label"],
        )

        header_box = widgets.Box(
            vertical=True,
            spacing=4,
            child=[header_title, self._model_label],
        )

        # Message list
        self._message_list = widgets.Box(
            vertical=True,
            spacing=8,
            hexpand=True,
            vexpand=True,
            valign="start",
        )

        # Scrolled window for messages
        self._scroller = widgets.Scroll(
            hexpand=True,
            vexpand=True,
            min_content_height=300,
            child=self._message_list,
        )

        # Input entry
        self._entry = widgets.Entry(
            hexpand=True,
            placeholder_text="Ask a question…",
            on_accept=lambda x: self._on_submit(),
        )

        # Send button
        self._send_button = widgets.Button(
            label="Send",
            on_click=lambda x: self._on_submit(),
        )

        # Input box
        input_box = widgets.Box(
            spacing=8,
            hexpand=True,
            child=[self._entry, self._send_button],
        )

        # Main container
        super().__init__(
            vertical=True,
            spacing=12,
            hexpand=True,
            vexpand=True,
            child=[header_box, self._scroller, input_box],
            css_classes=["ai-sidebar-content"],
        )

        # Set margins
        self.set_margin_top(16)
        self.set_margin_bottom(16)
        self.set_margin_start(16)
        self.set_margin_end(16)

        # Load initial messages
        self._populate_initial_messages()

    def _populate_initial_messages(self):
        """Load conversation history"""
        for message in self._conversation_manager.messages:
            self._append_message(message["role"], message["content"], persist=False)

        if not self._conversation_manager.messages:
            self._append_message(
                "assistant",
                "Welcome! Ask a question to start a conversation.",
                persist=True,
            )

    def _append_message(self, role: str, content: str, *, persist: bool = True):
        """Add a message bubble to the chat"""
        label_prefix = "You" if role == "user" else "Assistant"

        message_label = widgets.Label(
            label=f"{label_prefix}: {content}",
            halign="start",
            xalign=0.0,
            wrap=True,
            wrap_mode="word_char",  # Fixed: use underscore not hyphen
            justify="left",
        )

        self._message_list.append(message_label)
        self._scroll_to_bottom()

        if persist and self._conversation_manager:
            self._conversation_manager.append_message(role, content)

    def _scroll_to_bottom(self):
        """Scroll to the latest message"""
        def _scroll():
            adjustment = self._scroller.get_vadjustment()
            if adjustment:
                adjustment.set_value(adjustment.get_upper() - adjustment.get_page_size())
            return False

        GLib.idle_add(_scroll)

    def _set_input_enabled(self, enabled: bool):
        """Enable/disable input controls"""
        self._entry.set_sensitive(enabled)
        self._send_button.set_sensitive(enabled)

    def _on_submit(self):
        """Handle message submission"""
        text = self._entry.text.strip()
        if not text:
            return

        self._entry.text = ""
        self._append_message("user", text, persist=True)
        self._request_response()

    def _request_response(self):
        """Request AI response in background thread"""
        model = self._current_model or self._ollama_client.default_model
        if not model:
            self._append_message(
                "assistant",
                "No Ollama models are available. Install a model to continue.",
                persist=True,
            )
            return

        history = self._conversation_manager.chat_messages
        self._set_input_enabled(False)

        def _worker(messages):
            response = self._ollama_client.chat(model=model, messages=list(messages))
            GLib.idle_add(self._handle_response, response, priority=GLib.PRIORITY_DEFAULT)

        thread = threading.Thread(target=_worker, args=(history,), daemon=True)
        thread.start()

    def _handle_response(self, response):
        """Handle AI response"""
        self._set_input_enabled(True)

        if not response:
            self._append_message(
                "assistant",
                "The model returned an empty response.",
                persist=True,
            )
            return False

        role = response.get("role", "assistant")
        content = response.get("content") or ""
        if not content:
            content = "[No content received from Ollama]"

        self._append_message(role, content, persist=True)
        return False

    def focus_input(self):
        """Focus the input entry"""
        self._entry.grab_focus()
aisidebar/conversation_manager.py (new file, 173 lines)
@@ -0,0 +1,173 @@
"""Conversation state management and persistence helpers."""

from __future__ import annotations

import json
import os
import tempfile
from dataclasses import dataclass, field
from datetime import datetime, timezone
from pathlib import Path
from typing import ClassVar, Dict, Iterable, List, MutableMapping

DEFAULT_CONVERSATION_ID = "default"


@dataclass
class ConversationState:
    """In-memory representation of a conversation transcript."""

    conversation_id: str
    created_at: str
    updated_at: str
    messages: List[Dict[str, str]] = field(default_factory=list)


class ConversationManager:
    """Load and persist conversation transcripts as JSON files."""

    VALID_ROLES: ClassVar[set[str]] = {"system", "user", "assistant"}

    def __init__(
        self,
        storage_dir: str | Path | None = None,
        conversation_id: str | None = None,
    ) -> None:
        module_root = Path(__file__).resolve().parent
        default_storage = module_root / "data" / "conversations"
        self._storage_dir = Path(storage_dir) if storage_dir else default_storage
        self._storage_dir.mkdir(parents=True, exist_ok=True)

        self._conversation_id = conversation_id or DEFAULT_CONVERSATION_ID
        self._path = self._storage_dir / f"{self._conversation_id}.json"

        self._state = self._load_state()

    # ------------------------------------------------------------------ properties
    @property
    def conversation_id(self) -> str:
        return self._state.conversation_id

    @property
    def messages(self) -> List[Dict[str, str]]:
        return list(self._state.messages)

    @property
    def chat_messages(self) -> List[Dict[str, str]]:
        """Return messages formatted for the Ollama chat API."""
        return [
            {"role": msg["role"], "content": msg["content"]}
            for msg in self._state.messages
        ]

    # ------------------------------------------------------------------ public API
    def append_message(self, role: str, content: str) -> Dict[str, str]:
        """Append a new message and persist the updated transcript."""
        normalized_role = role.lower()
        if normalized_role not in self.VALID_ROLES:
            raise ValueError(f"Invalid role '{role}'. Expected one of {self.VALID_ROLES}.")

        timestamp = datetime.now(timezone.utc).isoformat()
        message = {
            "role": normalized_role,
            "content": content,
            "timestamp": timestamp,
        }

        self._state.messages.append(message)
        self._state.updated_at = timestamp
        self._write_state()
        return message

    def replace_messages(self, messages: Iterable[Dict[str, str]]) -> None:
        """Replace the transcript contents. Useful for loading fixtures."""
        normalized: List[Dict[str, str]] = []
        for item in messages:
            role = item.get("role", "").lower()
            content = item.get("content", "")
            if role not in self.VALID_ROLES:
                continue
            normalized.append(
                {
                    "role": role,
                    "content": content,
                    "timestamp": item.get("timestamp")
                    or datetime.now(timezone.utc).isoformat(),
                }
            )

        now = datetime.now(timezone.utc).isoformat()
        self._state.messages = normalized
        self._state.created_at = self._state.created_at or now
        self._state.updated_at = now
        self._write_state()

    # ------------------------------------------------------------------ persistence
    def _load_state(self) -> ConversationState:
        """Load the transcript from disk or create a fresh default."""
        if self._path.exists():
            try:
                with self._path.open("r", encoding="utf-8") as fh:
                    payload = json.load(fh)
                return self._state_from_payload(payload)
            except (json.JSONDecodeError, OSError):
                pass

        timestamp = datetime.now(timezone.utc).isoformat()
        return ConversationState(
            conversation_id=self._conversation_id,
            created_at=timestamp,
            updated_at=timestamp,
            messages=[],
        )

    def _state_from_payload(self, payload: MutableMapping[str, object]) -> ConversationState:
        """Normalize persisted data into ConversationState instances."""
        conversation_id = str(payload.get("id") or self._conversation_id)
        created_at = str(payload.get("created_at") or datetime.now(timezone.utc).isoformat())
        updated_at = str(payload.get("updated_at") or created_at)

        messages_payload = payload.get("messages", [])
        messages: List[Dict[str, str]] = []
        if isinstance(messages_payload, list):
            for item in messages_payload:
                if not isinstance(item, dict):
                    continue
                role = str(item.get("role", "")).lower()
                content = str(item.get("content", ""))
                if role not in self.VALID_ROLES:
                    continue
                timestamp = str(
                    item.get("timestamp") or datetime.now(timezone.utc).isoformat()
                )
                messages.append({"role": role, "content": content, "timestamp": timestamp})

        return ConversationState(
            conversation_id=conversation_id,
            created_at=created_at,
            updated_at=updated_at,
            messages=messages,
        )

    def _write_state(self) -> None:
        """Persist the conversation state atomically."""
        payload = {
            "id": self._state.conversation_id,
            "created_at": self._state.created_at,
            "updated_at": self._state.updated_at,
            "messages": self._state.messages,
        }

        with tempfile.NamedTemporaryFile(
            "w",
            encoding="utf-8",
            dir=self._storage_dir,
            delete=False,
            prefix=f"{self._conversation_id}.",
            suffix=".tmp",
        ) as tmp_file:
            json.dump(payload, tmp_file, indent=2, ensure_ascii=False)
            tmp_file.flush()
            os.fsync(tmp_file.fileno())

        os.replace(tmp_file.name, self._path)
aisidebar/data/conversations/default.json (new file, 52 lines)
File diff suppressed because one or more lines are too long
aisidebar/ollama_client.py (new file, 130 lines)
@@ -0,0 +1,130 @@
"""Client utilities for interacting with the Ollama API via direct HTTP calls."""

from __future__ import annotations

import json
from typing import Any, Dict, Iterable, Iterator
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError


class OllamaClientError(RuntimeError):
    """Base exception raised when Ollama operations fail."""


class OllamaUnavailableError(OllamaClientError):
    """Raised when the Ollama server is not available."""


class OllamaClient:
    """HTTP client for interacting with Ollama's REST API."""

    def __init__(self, host: str | None = None) -> None:
        self._host = host or "http://localhost:11434"
        self._cached_models: list[str] | None = None

    # ------------------------------------------------------------------ helpers
    @property
    def is_available(self) -> bool:
        """Check if Ollama server is reachable."""
        try:
            req = Request(f"{self._host}/api/tags", method="GET")
            with urlopen(req, timeout=2) as response:
                return response.status == 200
        except (URLError, HTTPError, TimeoutError):
            return False

    @property
    def default_model(self) -> str | None:
        """Get the first available model."""
        models = self.list_models()
        return models[0] if models else None

    def list_models(self, force_refresh: bool = False) -> list[str]:
        """Return the available model names, caching the result for quick reuse."""
        if self._cached_models is not None and not force_refresh:
            return list(self._cached_models)

        try:
            req = Request(f"{self._host}/api/tags", method="GET")
            with urlopen(req, timeout=5) as response:
                data = json.loads(response.read().decode())
        except (URLError, HTTPError, TimeoutError) as exc:
            raise OllamaClientError(f"Failed to list models: {exc}") from exc

        models: list[str] = []
        for item in data.get("models", []):
            name = item.get("name") or item.get("model")
            if name:
                models.append(name)

        self._cached_models = models
        return list(models)

    # ------------------------------------------------------------------ chat APIs
    def chat(
        self,
        *,
        model: str,
        messages: Iterable[Dict[str, str]],
    ) -> dict[str, str] | None:
        """Execute a blocking chat call against Ollama."""
        payload = {
            "model": model,
            "messages": list(messages),
            "stream": False,
        }

        try:
            req = Request(
                f"{self._host}/api/chat",
                data=json.dumps(payload).encode("utf-8"),
                headers={"Content-Type": "application/json"},
                method="POST",
            )
            with urlopen(req, timeout=120) as response:
                result = json.loads(response.read().decode())
        except (URLError, HTTPError, TimeoutError) as exc:
            return {
                "role": "assistant",
                "content": f"Unable to reach Ollama: {exc}",
            }

        # Parse the response
        message = result.get("message")
        if not message:
            return {"role": "assistant", "content": ""}

        role = message.get("role", "assistant")
        content = message.get("content", "")

        return {"role": role, "content": content}

    def stream_chat(
        self, *, model: str, messages: Iterable[Dict[str, str]]
    ) -> Iterator[dict[str, Any]]:
        """Placeholder for streaming API - not yet implemented."""
        raise NotImplementedError("Streaming chat is not yet implemented")

    # ------------------------------------------------------------------ internals
    def _make_request(
        self, endpoint: str, method: str = "GET", data: dict | None = None
    ) -> dict:
        """Make an HTTP request to the Ollama API."""
        url = f"{self._host}{endpoint}"

        if data:
            req = Request(
                url,
                data=json.dumps(data).encode("utf-8"),
                headers={"Content-Type": "application/json"},
                method=method,
            )
        else:
            req = Request(url, method=method)

        try:
            with urlopen(req, timeout=30) as response:
                return json.loads(response.read().decode())
        except (URLError, HTTPError) as exc:
            raise OllamaClientError(f"Request failed: {exc}") from exc
@@ -25,17 +25,39 @@ class OllamaClient:
         self._host = host
         self._client = None
         self._cached_models: list[str] | None = None
+        self._is_available = False

         if ollama is None:
             return

-        if host and hasattr(ollama, "Client"):
-            self._client = ollama.Client(host=host)  # type: ignore[call-arg]
+        # Try to initialize client and check connection
+        try:
+            if host and hasattr(ollama, "Client"):
+                self._client = ollama.Client(host=host)  # type: ignore[call-arg]
+
+            # Test connection by attempting to list models
+            self._check_connection()
+        except Exception:
+            # Silently fail - availability flag remains False
+            pass

     # ------------------------------------------------------------------ helpers
+    def _check_connection(self) -> None:
+        """Check if Ollama is available and update internal flag."""
+        if ollama is None:
+            self._is_available = False
+            return
+
+        try:
+            # Attempt a simple list call to verify connection
+            self._call_sdk("list")  # type: ignore[arg-type]
+            self._is_available = True
+        except Exception:
+            self._is_available = False
+
     @property
     def is_available(self) -> bool:
-        return ollama is not None
+        return self._is_available

     @property
     def default_model(self) -> str | None:
@@ -52,7 +74,13 @@ class OllamaClient:
         try:
             response = self._call_sdk("list")  # type: ignore[arg-type]
+            # Update availability flag on successful call
+            self._is_available = True
         except OllamaClientError:
+            self._is_available = False
+            return []
+        except Exception:
+            self._is_available = False
             return []

         models: list[str] = []
@@ -84,10 +112,16 @@ class OllamaClient:
     ) -> dict[str, str] | None:
         """Execute a blocking chat call against Ollama."""
         if not self.is_available:
-            return {
-                "role": "assistant",
-                "content": "Ollama SDK is not installed; install `ollama` to enable responses.",
-            }
+            if ollama is None:
+                return {
+                    "role": "assistant",
+                    "content": "Ollama SDK is not installed; install `ollama` to enable responses.",
+                }
+            else:
+                return {
+                    "role": "assistant",
+                    "content": "Ollama is not running. Start Ollama with: ollama serve",
+                }

         try:
             result = self._call_sdk(
@@ -96,10 +130,19 @@ class OllamaClient:
                 messages=list(messages),
                 stream=False,
             )
+            # Update availability flag on successful call
+            self._is_available = True
         except OllamaClientError as exc:
+            self._is_available = False
             return {
                 "role": "assistant",
-                "content": f"Unable to reach Ollama: {exc}",
+                "content": f"Unable to reach Ollama: {exc}\n\nStart Ollama with: ollama serve",
+            }
+        except Exception as exc:
+            self._is_available = False
+            return {
+                "role": "assistant",
+                "content": f"Unable to reach Ollama: {exc}\n\nStart Ollama with: ollama serve",
             }

         # Handle both dict responses (old SDK) and Pydantic objects (new SDK)
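The normalization code referenced by that last comment is outside this hunk. Purely as an illustration, one way to handle both shapes might look like the following; this is a hedged sketch, not the repository's implementation, and it assumes the newer SDK exposes `message.role` / `message.content` attributes:

```python
def _normalize_message(result):
    # result may be a plain dict (older ollama SDK) or a typed object (newer SDK)
    if isinstance(result, dict):
        message = result.get("message") or {}
        role = message.get("role", "assistant")
        content = message.get("content", "")
    else:
        message = getattr(result, "message", None)
        role = getattr(message, "role", "assistant")
        content = getattr(message, "content", "") or ""
    return {"role": role, "content": content}
```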