refactor(aisidebar): restructure project and implement reasoning mode toggle
- Reorganize project structure and file locations
- Add ReasoningController to manage model selection and reasoning mode
- Update design and requirements for reasoning mode toggle
- Implement model switching between Qwen3-4B-Instruct and Qwen3-4B-Thinking models
- Remove deprecated files and consolidate project layout
- Add new steering and specification documentation
- Clean up and remove unnecessary files and directories
- Prepare for enhanced AI sidebar functionality with more flexible model handling
@@ -233,7 +233,11 @@ class ConversationArchive:
```python
class ReasoningController:
-    """Manages reasoning mode state and API parameters."""
+    """Manages reasoning mode state and model selection."""
+
+    # Model names for reasoning toggle
+    INSTRUCT_MODEL = "hf.co/unsloth/Qwen3-4B-Instruct-2507-GGUF:Q8_K_XL"
+    THINKING_MODEL = "hf.co/unsloth/Qwen3-4B-Thinking-2507-GGUF:Q8_K_XL"

    def __init__(self):
        self._enabled = False
@@ -245,8 +249,9 @@ class ReasoningController:
|
||||
def toggle(self) -> bool:
|
||||
"""Toggle reasoning mode and persist preference."""
|
||||
|
||||
def get_chat_options(self) -> dict:
|
||||
"""Return Ollama API options for reasoning mode."""
|
||||
def get_model_name(self) -> str:
|
||||
"""Return the appropriate model name based on reasoning mode."""
|
||||
return self.THINKING_MODEL if self._enabled else self.INSTRUCT_MODEL
|
||||
```
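Taken together, the hunks above can be sketched as one runnable class. This is an illustrative reconstruction, not the project's actual code: the `prefs_path` constructor parameter and the JSON shape written by `toggle` are assumptions added here so the persistence behavior can be exercised.

```python
import json
from pathlib import Path


class ReasoningController:
    """Manages reasoning mode state and model selection."""

    # Model names for reasoning toggle
    INSTRUCT_MODEL = "hf.co/unsloth/Qwen3-4B-Instruct-2507-GGUF:Q8_K_XL"
    THINKING_MODEL = "hf.co/unsloth/Qwen3-4B-Thinking-2507-GGUF:Q8_K_XL"

    def __init__(self, prefs_path=None):
        # prefs_path is injectable for testing; the real app would point at
        # ~/.config/aisidebar/preferences.json (assumption for this sketch).
        self._prefs_path = Path(prefs_path) if prefs_path else None
        self._enabled = False

    def toggle(self) -> bool:
        """Toggle reasoning mode and persist preference."""
        self._enabled = not self._enabled
        if self._prefs_path is not None:
            # Hypothetical flat-JSON persistence; key name is an assumption.
            self._prefs_path.write_text(json.dumps({"reasoning": self._enabled}))
        return self._enabled

    def get_model_name(self) -> str:
        """Return the appropriate model name based on reasoning mode."""
        return self.THINKING_MODEL if self._enabled else self.INSTRUCT_MODEL
```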

#### UI Components

@@ -254,41 +259,35 @@ class ReasoningController:
Add toggle button to header area:

```python
-self._reasoning_toggle = Gtk.ToggleButton(label="🧠 Reasoning")
-self._reasoning_toggle.connect("toggled", self._on_reasoning_toggled)
+self._reasoning_toggle = widgets.Button(label="🧠 Reasoning: OFF")
+self._reasoning_toggle.connect("clicked", self._on_reasoning_toggled)
```
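The `clicked` handler itself is not shown in the diff. A minimal sketch of the label logic it implies, assuming the label mirrors the ON/OFF state; the `reasoning_label` helper, the handler body, and the `set_label` call are all hypothetical, not from the diff:

```python
def reasoning_label(enabled: bool) -> str:
    """Compute the toggle-button label for the current reasoning state."""
    return f"🧠 Reasoning: {'ON' if enabled else 'OFF'}"


# Hypothetical handler wiring, mirroring the snippet above:
# def _on_reasoning_toggled(self, _button):
#     enabled = self._reasoning_controller.toggle()
#     self._reasoning_toggle.set_label(reasoning_label(enabled))
```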

#### Ollama Integration

-When reasoning mode is enabled, pass additional options to Ollama:
+When reasoning mode is toggled, switch between models:

```python
-# Standard mode
-ollama.chat(model=model, messages=messages)
+# Get model based on reasoning mode
+model = self._reasoning_controller.get_model_name()

-# Reasoning mode (model-dependent)
-ollama.chat(
-    model=model,
-    messages=messages,
-    options={
-        "temperature": 0.7,
-        # Model-specific reasoning parameters
-    }
-)
+# Use the selected model for chat
+ollama.chat(model=model, messages=messages)
```

#### Message Formatting

-When reasoning is enabled and model supports it:
+When using the thinking model:

- Display thinking process in distinct style (italic, gray text)
- Separate reasoning from final answer with visual divider
- Use expandable/collapsible section for reasoning (optional)
+- Parse `<think>` tags from model output to extract reasoning content
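The `<think>`-tag parsing called for above could be sketched as follows; the helper name and regex are illustrative, not from the diff:

```python
import re


def split_reasoning(text: str) -> tuple[str, str]:
    """Split model output into (reasoning, answer) using <think> tags."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        # No reasoning block: the whole output is the answer.
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer
```

The reasoning half would go into the gray, italic (optionally collapsible) section; the answer half is rendered normally.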

#### Persistence

- Save reasoning preference to `~/.config/aisidebar/preferences.json`
- Load preference on startup
- Apply to all new conversations
+- Automatically switch models when preference changes
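A minimal sketch of the save/load behavior described above. The path comes from the diff; the function names and the `reasoning_enabled` key are assumptions for illustration:

```python
import json
from pathlib import Path

# Path from the design doc; expanded at import time.
PREFS_PATH = Path.home() / ".config" / "aisidebar" / "preferences.json"


def save_preferences(enabled: bool, path: Path = PREFS_PATH) -> None:
    """Persist the reasoning preference as JSON (key name is an assumption)."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps({"reasoning_enabled": enabled}))


def load_preferences(path: Path = PREFS_PATH) -> bool:
    """Load the reasoning preference; default to False if the file is missing."""
    try:
        return bool(json.loads(path.read_text()).get("reasoning_enabled", False))
    except FileNotFoundError:
        return False
```

On startup the app would call `load_preferences()` and hand the result to the controller, so the matching model is selected before the first conversation.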

## Data Models