diff --git a/PROVIDER_DEBUGGING.md b/PROVIDER_DEBUGGING.md
new file mode 100644
index 0000000..dd32747
--- /dev/null
+++ b/PROVIDER_DEBUGGING.md
@@ -0,0 +1,75 @@
+# Grunty AI Multi-Provider Debugging Report
+
+## Issue Summary
+The Grunty AI application was experiencing problems with its multi-provider support, particularly when switching between Anthropic and OpenAI providers. The issues included:
+
+1. Error handling during provider switching
+2. Lack of proper error feedback to users
+3. Initialization issues with the OpenAI provider
+4. Missing log functionality in the UI
+
+## Implemented Fixes
+
+### 1. Enhanced Error Logging
+- Added detailed logging throughout the application with file names and line numbers
+- Added console logging for immediate feedback during development
+- Added stack trace logging for better debugging
+- Improved log formatting for better readability
+
+### 2. Improved Provider Initialization
+- Added proper initialization checks in the OpenAI provider
+- Added verification of API key availability
+- Added API test call during initialization to verify connectivity
+- Better error handling during provider creation and initialization
+
+### 3. Enhanced Provider Switching
+- Added more robust provider switching logic in the store
+- Only recreate provider instances when necessary
+- Proper error handling and recovery during provider switching
+- Added user feedback through error dialogs when provider switching fails
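The switching logic above can be sketched as follows. The `switch_provider` helper and the store shape are illustrative, with `create_provider` standing in for the `AIProviderManager.create_provider` factory added in `src/ai_providers.py` (it returns an initialized provider or `None` on failure):

```python
import logging

logger = logging.getLogger(__name__)

def switch_provider(store, provider_name, create_provider):
    """Swap the active provider, keeping the old one on failure.

    `create_provider` stands in for AIProviderManager.create_provider:
    it returns an initialized provider instance or None.
    """
    # Only recreate the instance when the provider actually changes
    if store.provider_name == provider_name and store.provider is not None:
        return True
    new_provider = create_provider(provider_name)
    if new_provider is None:
        # Keep the previous provider active; the caller surfaces an error dialog
        logger.error("Failed to switch to provider %r; keeping %r",
                     provider_name, store.provider_name)
        return False
    store.provider = new_provider
    store.provider_name = provider_name
    return True
```

Because the old provider is only replaced after the new one initializes successfully, a failed switch leaves the application in a usable state.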
+
+### 4. OpenAI Provider Improvements
+- Implemented proper computer control support
+- Fixed message handling for the OpenAI API responses
+- Added robust error handling for tool calls
+- Improved response handling for different message formats
+
+### 5. UI Improvements
+- Added missing log method to MainWindow class
+- Improved error message display in the UI
+- Added better user feedback during provider operations
+
+### 6. Dependency Management
+- Better handling of optional dependencies
+- Clear error messages when required packages are missing
+- Graceful degradation when non-essential packages are unavailable
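The optional-dependency guard used in `src/openai_provider.py` illustrates the pattern: import at module load, record availability, and raise an actionable message only when the feature is actually used (`require_openai` is an illustrative helper, not part of the codebase):

```python
# Guard an optional import at module load; surface a clear error
# only when the optional feature is actually exercised.
try:
    import openai  # optional: only needed for OpenAI provider support
    OPENAI_AVAILABLE = True
except ImportError:
    OPENAI_AVAILABLE = False

def require_openai() -> None:
    """Raise a descriptive error if the optional 'openai' package is missing."""
    if not OPENAI_AVAILABLE:
        raise ImportError(
            "OpenAI support requires the optional 'openai' package: pip install openai"
        )
```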
+
+## Configuration
+The application requires proper configuration in a `.env` file:
+
+```
+ANTHROPIC_API_KEY=your_anthropic_key
+OPENAI_API_KEY=your_openai_key
+DEFAULT_AI_PROVIDER=anthropic
+```
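At startup these values reach the providers via `python-dotenv` (which copies `.env` into the process environment) followed by plain `os.getenv` lookups. A minimal, stdlib-only sketch of that lookup (the helper name is illustrative):

```python
import os

def read_provider_config(env=None):
    """Read the provider settings shown above from the environment.

    python-dotenv's load_dotenv() populates os.environ from .env before
    this lookup runs; DEFAULT_AI_PROVIDER falls back to "anthropic".
    """
    env = os.environ if env is None else env
    return {
        "provider": env.get("DEFAULT_AI_PROVIDER", "anthropic"),
        "anthropic_api_key": env.get("ANTHROPIC_API_KEY"),
        "openai_api_key": env.get("OPENAI_API_KEY"),
    }
```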
+
+## Testing
+
+A new test script `test_providers.py` has been created to validate the provider functionality independently of the main application. This script tests:
+- Anthropic provider creation and initialization
+- OpenAI provider creation and initialization
+- Provider manager functionality
+
+All tests are passing, confirming that both providers are working correctly.
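`test_providers.py` itself is not reproduced here, but a factory-level check follows this shape (names below are illustrative), exercising the create-or-`None` contract of `AIProviderManager.create_provider` with a stub instead of a live API call:

```python
def check_factory(create_provider, name):
    """Report whether the factory yields an initialized provider for `name`."""
    provider = create_provider(name)
    if provider is None:
        return f"{name}: creation/initialization failed"
    return f"{name}: ok"

# Stub standing in for AIProviderManager.create_provider, which returns
# an initialized provider instance or None on any failure.
def stub_factory(name):
    return object() if name in ("anthropic", "openai") else None
```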
+
+## Recommendations for Future Work
+
+1. **Comprehensive Error Handling**: Add more specific error checks for different API errors
+2. **Provider Configuration UI**: Add a dedicated settings page for provider configuration
+3. **API Key Management**: Implement secure storage and management of API keys
+4. **Automated Testing**: Expand the test coverage to include more complex scenarios
+5. **New Providers**: Create a template for adding new AI providers easily
+
+## Conclusion
+
+The multi-provider support in Grunty AI is now working correctly. Users can switch between Anthropic and OpenAI providers with proper error handling and feedback. The application is more robust and user-friendly.
diff --git a/README.md b/README.md
index a06d51b..83dc5f2 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # 👨🏽‍💻 Grunty
 
-Self-hosted desktop app to have AI control your computer, powered by the new Claude [computer use](https://www.anthropic.com/news/3-5-models-and-computer-use) capability. Allow Claude to take over your laptop and do your tasks for you (or at least attempt to, lol). Written in Python, using PyQt.
+Self-hosted desktop app to have AI control your computer, powered by the Claude [computer use](https://www.anthropic.com/news/3-5-models-and-computer-use) capability and OpenAI's GPT models. Allow AI to take over your laptop and do your tasks for you (or at least attempt to, lol). Written in Python, using PyQt.
 
 ## Demo
 Here, I asked it to use [vim](https://vim.rtorr.com/) to create a game in Python, run it, and play it.
@@ -15,17 +15,22 @@ Video was sped up 8x btw. [Computer use](https://www.anthropic.com/news/3-5-mode
 
 2. **Tread Lightly** - If it wipes your computer, sends weird emails, or orders 100 pizzas... that's on you. 
 
-Anthropic can see your screen through screenshots during actions. Hide sensitive information or private stuff.
+AI providers can see your screen through screenshots during actions. Hide sensitive information or private stuff.
 
 ## 🎯 Features
 - Literally ask AI to do ANYTHING on your computer that you do with a mouse and keyboard. Browse the web, write code, blah blah.
+- **Multiple AI providers support**: Switch between Anthropic Claude and OpenAI models
+- **Model selection**: Choose from various models for each provider
+- **Theme toggling**: Light/Dark mode support
+- **System tray integration**: Minimize to tray and run in background
+- **Optional voice control**: Experimental voice input and text-to-speech support
 
 # 💻 Platforms
 - Anything you can run Python on: MacOS, Windows, Linux, etc.
 
 ## 🛠️ Setup
 
-Get an Anthropic API key [here]([https://console.anthropic.com/keys](https://console.anthropic.com/dashboard)).
+Get an Anthropic API key [here](https://console.anthropic.com/dashboard) and/or an OpenAI API key [here](https://platform.openai.com/api-keys).
 
 ```bash
 # Python 3.10+ recommended
@@ -33,13 +38,28 @@ python -m venv venv
 source venv/bin/activate  # or `venv\Scripts\activate` on Windows
 pip install -r requirements.txt
 
-# Add API key to .env
+# Add API keys to .env
 echo "ANTHROPIC_API_KEY=your-key-here" > .env
+echo "OPENAI_API_KEY=your-key-here" >> .env
+echo "DEFAULT_AI_PROVIDER=anthropic" >> .env  # or "openai"
 
 # Run
 python run.py
 ```
 
+## 🧠 Supported AI Providers and Models
+
+### Anthropic
+- Claude 3.5 Sonnet
+- Claude 3 Opus
+- Claude 3 Sonnet
+- Claude 3 Haiku
+
+### OpenAI
+- GPT-4o
+- GPT-4 Turbo
+- GPT-4
+
 ## 🔑 Productivity Keybindings
 - `Ctrl + Enter`: Execute the current instruction
 - `Ctrl + C`: Stop the current agent action
@@ -50,10 +70,13 @@ python run.py
 - Claude really loves Firefox. You might want to install it for better UI detection and accurate mouse clicks.
 - Be specific and explicit, help it out a bit
 - Always monitor the agent's actions
+- Different models have different capabilities for computer control - experiment to find the best one for your tasks
 
 ## 🐛 Known Issues
 
-- Sometimes, it doesn't take a screenshot to validate that the input is selected, and types stuff in the wrong place.. Press CMD+C to end the action when this happens, and quit and restart the agent. I'm working on a fix.
+- Sometimes, the AI doesn't take a screenshot to validate that the input is selected, and types stuff in the wrong place. Press Ctrl+C (Cmd+C on macOS) to end the action when this happens, then quit and restart the agent.
+- Not all models support full computer control with the same level of capability
+- Voice control is experimental and may not work reliably on all platforms
 
 ## 🤝 Contributing
 
diff --git a/requirements.txt b/requirements.txt
index a2de0b9..2704735 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,12 +1,18 @@
+# Core dependencies
 PyQt6
 pyautogui
-requests
-anthropic
 python-dotenv
 pillow
 numpy
 qtawesome
-SpeechRecognition
-pyttsx3
-keyboard
-pyaudio
+requests
+
+# AI Provider dependencies
+anthropic>=0.15.0  # Required for Anthropic Claude
+openai>=1.17.0    # Optional for OpenAI support
+
+# Voice control dependencies (optional)
+SpeechRecognition  # Optional for voice input
+pyttsx3           # Optional for text-to-speech
+pyaudio           # Optional for voice recording
+keyboard          # For keyboard shortcuts
diff --git a/src/ai_providers.py b/src/ai_providers.py
new file mode 100644
index 0000000..8bf34c7
--- /dev/null
+++ b/src/ai_providers.py
@@ -0,0 +1,172 @@
+import os
+import logging
+from abc import ABC, abstractmethod
+from typing import List, Dict, Any, Optional
+from dotenv import load_dotenv
+
+logger = logging.getLogger(__name__)
+
+class AIProvider(ABC):
+    """Base abstract class for AI providers that can control the computer."""
+    
+    def __init__(self, api_key: Optional[str] = None):
+        self.api_key = api_key
+    
+    @abstractmethod
+    def initialize(self) -> bool:
+        """Initialize the client with API key and any needed setup.
+        Returns True if successful, False otherwise."""
+        pass
+    
+    @abstractmethod
+    def get_next_action(self, run_history: List[Dict[str, Any]]) -> Any:
+        """Get the next action from the AI based on the conversation history.
+        
+        Args:
+            run_history: List of conversation messages.
+            
+        Returns:
+            Response object from the AI provider.
+        """
+        pass
+    
+    @abstractmethod
+    def extract_action(self, response: Any) -> Dict[str, Any]:
+        """Extract the action from the AI response.
+        
+        Args:
+            response: Response object from the AI provider.
+            
+        Returns:
+            Dict with the parsed action.
+        """
+        pass
+    
+    @abstractmethod
+    def display_assistant_message(self, message: Any, update_callback: callable) -> None:
+        """Format and display the assistant's message.
+        
+        Args:
+            message: The message from the assistant.
+            update_callback: Callback function to update the UI with the message.
+        """
+        pass
+    
+    @abstractmethod
+    def get_prompt_for_model(self, model_id: str) -> str:
+        """Get the prompt formatted for the specific model.
+        
+        Args:
+            model_id: The model ID to get the prompt for.
+            
+        Returns:
+            Formatted prompt string.
+        """
+        pass
+    
+    @staticmethod
+    def get_available_models() -> List[Dict[str, str]]:
+        """Get a list of available models for this provider.
+        
+        Returns:
+            List of dictionaries with model information.
+        """
+        return []
+    
+    @staticmethod
+    def default_model() -> str:
+        """Get the default model ID for this provider.
+        
+        Returns:
+            Default model ID string.
+        """
+        return ""
+
+# Manager class to handle multiple AI providers
+class AIProviderManager:
+    """Manager for different AI provider integrations."""
+    
+    PROVIDERS = {
+        "anthropic": "AnthropicProvider",
+        "openai": "OpenAIProvider"
+        # Add more providers here as they are implemented
+    }
+    
+    @staticmethod
+    def get_provider_names() -> List[str]:
+        """Get a list of available provider names.
+        
+        Returns:
+            List of provider name strings.
+        """
+        return list(AIProviderManager.PROVIDERS.keys())
+    
+    @staticmethod
+    def create_provider(provider_name: str, **kwargs) -> Optional[AIProvider]:
+        """Factory method to create an AI provider.
+        
+        Args:
+            provider_name: Name of the provider to create.
+            **kwargs: Additional arguments to pass to the provider constructor.
+            
+        Returns:
+            AIProvider instance or None if creation failed.
+        """
+        logger.info(f"Creating AI provider: {provider_name}")  # don't log kwargs, which may contain API keys
+        
+        # Dynamically import providers without circular imports
+        if provider_name == "anthropic":
+            try:
+                from .anthropic_provider import AnthropicProvider
+                provider = AnthropicProvider(**kwargs)
+                success = provider.initialize()
+                if success:
+                    logger.info("Successfully created and initialized AnthropicProvider")
+                    return provider
+                else:
+                    logger.error("Failed to initialize AnthropicProvider")
+                    return None
+            except ImportError as e:
+                logger.error(f"Failed to import AnthropicProvider: {str(e)}")
+                return None
+            except Exception as e:
+                import traceback
+                logger.error(f"Error creating AnthropicProvider: {str(e)}\n{traceback.format_exc()}")
+                return None
+        elif provider_name == "openai":
+            try:
+                # First check if openai package is installed
+                try:
+                    import openai
+                    logger.info("OpenAI package found")
+                except ImportError as e:
+                    logger.error(f"OpenAI package not installed: {str(e)}")
+                    return None
+                    
+                # Then try to import our provider
+                from .openai_provider import OpenAIProvider
+                logger.info("Creating OpenAIProvider instance")
+                provider = OpenAIProvider(**kwargs)
+                
+                # Initialize the provider
+                logger.info("Initializing OpenAIProvider")
+                success = provider.initialize()
+                
+                if success:
+                    logger.info("Successfully created and initialized OpenAIProvider")
+                    return provider
+                else:
+                    logger.error("Failed to initialize OpenAIProvider")
+                    return None
+            except ImportError as e:
+                logger.error(f"Failed to import OpenAIProvider: {str(e)}")
+                return None
+            except Exception as e:
+                import traceback
+                logger.error(f"Error creating OpenAIProvider: {str(e)}\n{traceback.format_exc()}")
+                return None
+        
+        # Add more provider imports here as they are implemented
+        
+        logger.error(f"Unknown provider name: {provider_name}")
+        return None
diff --git a/src/main.py b/src/main.py
index 8c145fc..a92a797 100644
--- a/src/main.py
+++ b/src/main.py
@@ -1,25 +1,72 @@
 import sys
 import logging
-from PyQt6.QtWidgets import QApplication
+import traceback
+from PyQt6.QtWidgets import QApplication, QMessageBox
 from .window import MainWindow
 from .store import Store
 from .anthropic import AnthropicClient
 
-logging.basicConfig(filename='agent.log', level=logging.DEBUG, 
-                    format='%(asctime)s - %(levelname)s - %(message)s')
+# Set up more detailed logging
+LOG_FORMAT = '%(asctime)s - %(levelname)s - [%(filename)s:%(lineno)d] - %(message)s'
+logging.basicConfig(
+    filename='agent.log',
+    level=logging.DEBUG,
+    format=LOG_FORMAT,
+    force=True
+)
+
+# Add console handler for immediate feedback during development
+console = logging.StreamHandler()
+console.setLevel(logging.DEBUG)
+console.setFormatter(logging.Formatter(LOG_FORMAT))
+logging.getLogger().addHandler(console)
+
+logger = logging.getLogger(__name__)
 
 def main():
+    logger.info("Starting Grunty application")
     app = QApplication(sys.argv)
     
     app.setQuitOnLastWindowClosed(False)  # Prevent app from quitting when window is closed
     
-    store = Store()
-    anthropic_client = AnthropicClient()
+    # Check for required dependencies
+    try:
+        import anthropic
+        logger.info("Anthropic package found")
+    except ImportError:
+        error_msg = "The anthropic package is required. Please install it with: pip install anthropic"
+        logger.error(error_msg)
+        QMessageBox.critical(None, "Missing Dependency", error_msg)
+        return
     
-    window = MainWindow(store, anthropic_client)
-    window.show()  # Just show normally, no maximize
+    # Optional dependency for OpenAI
+    try:
+        import openai
+        logger.info("OpenAI package found")
+    except ImportError:
+        logger.warning("OpenAI package not installed. OpenAI models will not be available.")
     
-    sys.exit(app.exec())
+    try:
+        logger.info("Initializing store")
+        store = Store()
+        logger.info("Initializing Anthropic client")
+        anthropic_client = AnthropicClient()
+        
+        logger.info("Creating main window")
+        window = MainWindow(store, anthropic_client)
+        logger.info("Showing main window")
+        window.show()  # Just show normally, no maximize
+        
+        logger.info("Starting application event loop")
+        sys.exit(app.exec())
+    except Exception as e:
+        error_msg = f"Error starting application: {str(e)}"
+        stack_trace = traceback.format_exc()
+        logger.error(f"{error_msg}\n{stack_trace}")
+        QMessageBox.critical(None, "Application Error", 
+                           f"{error_msg}\n\nCheck agent.log for details.")
+        sys.exit(1)
 
 if __name__ == "__main__":
     main()
diff --git a/src/openai_provider.py b/src/openai_provider.py
new file mode 100644
index 0000000..d77a902
--- /dev/null
+++ b/src/openai_provider.py
@@ -0,0 +1,669 @@
+import os
+import json
+import base64
+import logging
+from typing import Dict, Any, List, Optional, Union
+
+try:
+    import openai
+    from openai.types.chat import ChatCompletionMessage
+    OPENAI_AVAILABLE = True
+except ImportError:
+    OPENAI_AVAILABLE = False
+
+from dotenv import load_dotenv
+from .prompt_manager import OpenAIPromptManager
+from .ai_providers import AIProvider
+
+logger = logging.getLogger(__name__)
+
+class OpenAIProvider(AIProvider):
+    """OpenAI provider implementation."""
+    
+    # Available models with display names and IDs
+    AVAILABLE_MODELS = [
+        {
+            "id": "gpt-4o",
+            "name": "GPT-4o",
+            "provider": "openai",
+            "features": ["computer_use"]
+        },
+        {
+            "id": "gpt-4-turbo",
+            "name": "GPT-4 Turbo",
+            "provider": "openai",
+            "features": []
+        },
+        {
+            "id": "gpt-4",
+            "name": "GPT-4",
+            "provider": "openai",
+            "features": []
+        }
+    ]
+    
+    def __init__(self, api_key: Optional[str] = None, model_id: Optional[str] = None):
+        """Initialize the OpenAI provider.
+        
+        Args:
+            api_key: OpenAI API key (optional, will use env var if not provided)
+            model_id: Model ID to use (optional, will use default if not provided)
+        """
+        load_dotenv()  # Load environment variables from .env file
+        self.api_key = api_key or os.getenv("OPENAI_API_KEY")
+        self.model_id = model_id or self.default_model()
+        self.client = None
+        self.prompt_manager = OpenAIPromptManager()
+        self.last_tool_use_id = None
+    
+    def initialize(self) -> bool:
+        """Initialize the OpenAI client.
+        
+        Returns:
+            True if initialization was successful, False otherwise.
+        """
+        # Add more detailed error logging
+        logger.info(f"Initializing OpenAI provider with model: {self.model_id}")
+        
+        # Check if OpenAI package is available
+        if not OPENAI_AVAILABLE:
+            error_msg = "OpenAI package not installed. Please install with 'pip install openai'"
+            logger.error(error_msg)
+            return False
+            
+        # Check if API key is available
+        if not self.api_key:
+            error_msg = "OPENAI_API_KEY not found in environment variables"
+            logger.error(error_msg)
+            return False
+        
+        # Try to initialize client
+        try:
+            logger.info("Creating OpenAI client")
+            self.client = openai.OpenAI(api_key=self.api_key)
+            
+            # Try a simple API call to verify the client works
+            logger.info("Testing OpenAI client with a models list request")
+            models = self.client.models.list()
+            logger.info(f"Successfully initialized OpenAI client, models available: {len(models.data)}")
+            
+            return True
+        except Exception as e:
+            import traceback
+            stack_trace = traceback.format_exc()
+            error_msg = f"Failed to initialize OpenAI client: {str(e)}"
+            logger.error(f"{error_msg}\n{stack_trace}")
+            return False
+    
+    def get_prompt_for_model(self, model_id: str) -> str:
+        """Get the prompt formatted for the specific OpenAI model.
+        
+        Args:
+            model_id: The model ID to get the prompt for.
+            
+        Returns:
+            Formatted prompt string.
+        """
+        current_prompt = self.prompt_manager.get_current_prompt()
+        return self.prompt_manager.format_prompt_for_model(current_prompt, model_id)
+    
+    def get_next_action(self, run_history: List[Dict[str, Any]]) -> Any:
+        """Get the next action from OpenAI.
+        
+        Args:
+            run_history: List of conversation messages.
+            
+        Returns:
+            Response object from OpenAI.
+        """
+        if not OPENAI_AVAILABLE:
+            raise ImportError("OpenAI package not installed. Please install with 'pip install openai'")
+            
+        if not self.client:
+            if not self.initialize():
+                raise ValueError("OpenAI client not initialized")
+        
+        try:
+            # Convert history to OpenAI format
+            messages = []
+            
+            # Add system message
+            messages.append({
+                "role": "system", 
+                "content": self.get_prompt_for_model(self.model_id)
+            })
+            
+            # Convert history messages
+            for message in run_history:
+                if message.get("role") == "user":
+                    # Handle user messages with potential images
+                    content = message.get("content", [])
+                    if isinstance(content, str):
+                        messages.append({"role": "user", "content": content})
+                    elif isinstance(content, list):
+                        # Format multi-part content (text and images)
+                        formatted_content = []
+                        for item in content:
+                            if item.get("type") == "text":
+                                formatted_content.append({"type": "text", "text": item.get("text", "")})
+                            elif item.get("type") == "image":
+                                # Handle base64 images
+                                if item.get("source", {}).get("type") == "base64":
+                                    formatted_content.append({
+                                        "type": "image_url",
+                                        "image_url": {
+                                            "url": f"data:image/png;base64,{item['source']['data']}",
+                                        }
+                                    })
+                            elif item.get("type") == "tool_result":
+                                # Handle tool results
+                                tool_content = []
+                                for tool_item in item.get("content", []):
+                                    if tool_item.get("type") == "text":
+                                        tool_content.append({"type": "text", "text": tool_item.get("text", "")})
+                                    elif tool_item.get("type") == "image":
+                                        if tool_item.get("source", {}).get("type") == "base64":
+                                            tool_content.append({
+                                                "type": "image_url",
+                                                "image_url": {
+                                                    "url": f"data:image/png;base64,{tool_item['source']['data']}",
+                                                }
+                                            })
+                                # Add tool message; OpenAI expects string content
+                                # for "tool"-role messages, so serialize the parts
+                                messages.append({
+                                    "role": "tool",
+                                    "tool_call_id": item.get("tool_use_id", "tool_1"),
+                                    "content": json.dumps(tool_content)
+                                })
+                                continue
+                        
+                        if formatted_content:
+                            messages.append({"role": "user", "content": formatted_content})
+                elif message.get("role") == "assistant":
+                    # Handle assistant messages
+                    content = message.get("content", [])
+                    if isinstance(content, str):
+                        messages.append({"role": "assistant", "content": content})
+                    elif isinstance(content, list):
+                        # Look for tool use
+                        tool_calls = []
+                        text_content = ""
+                        
+                        for item in content:
+                            if item.get("type") == "text":
+                                text_content += item.get("text", "")
+                            elif item.get("type") == "tool_use":
+                                tool_calls.append({
+                                    "id": item.get("id", f"tool_{len(tool_calls)}"),
+                                    "type": "function",
+                                    "function": {
+                                        "name": item.get("name", ""),
+                                        "arguments": json.dumps(item.get("input", {}))
+                                    }
+                                })
+                        
+                        if tool_calls:
+                            messages.append({
+                                "role": "assistant",
+                                "content": text_content if text_content else None,
+                                "tool_calls": tool_calls
+                            })
+                        elif text_content:
+                            messages.append({"role": "assistant", "content": text_content})
+            
+            # Check if the selected model supports computer use
+            model_info = next((m for m in self.AVAILABLE_MODELS if m["id"] == self.model_id), None)
+            supports_computer_use = model_info and "computer_use" in model_info.get("features", [])
+            
+            # Define tools based on model capabilities
+            tools = []
+            
+            # Add computer use tool if supported
+            if supports_computer_use:
+                tools.append({
+                    "type": "function",
+                    "function": {
+                        "name": "computer",
+                        "description": "Control the computer with mouse and keyboard actions",
+                        "parameters": {
+                            "type": "object",
+                            "properties": {
+                                "action": {
+                                    "type": "string",
+                                    "enum": ["mouse_move", "left_click", "right_click", "middle_click", 
+                                            "double_click", "left_click_drag", "type", "key", 
+                                            "screenshot", "cursor_position"],
+                                    "description": "The action to perform"
+                                },
+                                "coordinate": {
+                                    "type": "array",
+                                    "items": {"type": "number"},
+                                    "description": "The x,y coordinates for mouse actions"
+                                },
+                                "text": {
+                                    "type": "string",
+                                    "description": "The text to type or key to press"
+                                }
+                            },
+                            "required": ["action"]
+                        }
+                    }
+                })
+            
+            # Always add finish_run tool
+            tools.append({
+                "type": "function",
+                "function": {
+                    "name": "finish_run",
+                    "description": "Call this function when you have achieved the goal of the task.",
+                    "parameters": {
+                        "type": "object",
+                        "properties": {
+                            "success": {
+                                "type": "boolean",
+                                "description": "Whether the task was successful"
+                            },
+                            "error": {
+                                "type": "string",
+                                "description": "The error message if the task was not successful"
+                            }
+                        },
+                        "required": ["success"]
+                    }
+                }
+            })
+            
+            # Create the completion request
+            response = self.client.chat.completions.create(
+                model=self.model_id,
+                messages=messages,
+                tools=tools,
+                temperature=0.7,
+                max_tokens=1024,
+                tool_choice="auto"
+            )
+            
+            return response.choices[0].message
+        
+        except Exception as e:
+            # Chain the original exception so callers can log the full traceback
+            raise RuntimeError(f"OpenAI API Error: {e}") from e
+    
+    def generate_response(self, message: str, screenshot_path: Optional[str] = None, **kwargs) -> str:
+        """Generate a response from the OpenAI model.
+        
+        Args:
+            message: User message to respond to
+            screenshot_path: Path to screenshot to include (optional)
+            **kwargs: Additional arguments
+            
+        Returns:
+            Response string from the model
+        """
+        if not self.client:
+            logger.error("OpenAI client not initialized")
+            return "Error: OpenAI client not initialized. Please check your API key and connectivity."
+            
+        try:
+            logger.info(f"Generating response using model {self.model_id}")
+            
+            messages = self._prepare_messages(message, screenshot_path)
+            tools = self._prepare_tools()
+            
+            logger.info(f"Calling OpenAI API with {len(messages)} messages and {len(tools)} tools")
+            
+            # Call the OpenAI API with the prepared messages and tools
+            completion = self.client.chat.completions.create(
+                model=self.model_id,
+                messages=messages,
+                tools=tools,
+                tool_choice="auto",
+                max_tokens=kwargs.get("max_tokens", 2048),
+                temperature=kwargs.get("temperature", 0.7)
+            )
+            
+            # Process the response
+            response_message = completion.choices[0].message
+            logger.info(f"Received response with {len(response_message.content or '')} chars")
+            
+            # Check for tool calls
+            if hasattr(response_message, "tool_calls") and response_message.tool_calls:
+                tool_calls = response_message.tool_calls
+                logger.info(f"Response contains {len(tool_calls)} tool call(s)")
+                
+                for tool_call in tool_calls:
+                    if tool_call.function.name == "computer_control":
+                        return self._handle_computer_control(tool_call, messages)
+                
+                # If we get here, no computer control was performed
+                return "The AI attempted to use tools but none were for computer control."
+            
+            # Return the plain text response if there were no tool calls
+            return response_message.content or "No response generated."
+        
+        except Exception as e:
+            import traceback
+            logger.error(f"Error generating response: {str(e)}\n{traceback.format_exc()}")
+            return f"Error generating response: {str(e)}"
+            
+    def _handle_computer_control(self, tool_call, messages) -> str:
+        """Handle a computer control tool call.
+        
+        Args:
+            tool_call: The tool call object from OpenAI
+            messages: The current conversation messages
+            
+        Returns:
+            Response string from the model after executing the computer control
+        """
+        try:
+            # Extract the command from the tool call
+            function_args = json.loads(tool_call.function.arguments)
+            command = function_args.get("command")
+            
+            if not command:
+                return "Error: No command found in computer control request"
+                
+            logger.info(f"Executing computer control command: {command}")
+            
+            # Execute the command
+            result = self._execute_computer_control(command)
+            
+            if result.get("error"):
+                return f"Error executing command: {result['error']}"
+                
+            # Add the tool response to messages and generate a new response
+            tool_id = tool_call.id
+            tool_response = {
+                "tool_call_id": tool_id,
+                "role": "tool",
+                "name": "computer_control",
+                "content": json.dumps(result)
+            }
+            
+            # Rebuild the assistant message that issued this tool call instead
+            # of making a second API call here: a fresh call could return
+            # different tool_call ids, which would not match the tool_call_id
+            # in tool_response and would be rejected by the API
+            assistant_message = {
+                "role": "assistant",
+                "content": None,
+                "tool_calls": [{
+                    "id": tool_id,
+                    "type": "function",
+                    "function": {
+                        "name": tool_call.function.name,
+                        "arguments": tool_call.function.arguments
+                    }
+                }]
+            }
+            
+            # Create a new messages list with the tool response
+            new_messages = messages + [assistant_message, tool_response]
+            
+            # Generate a follow-up response
+            followup_completion = self.client.chat.completions.create(
+                model=self.model_id,
+                messages=new_messages,
+                max_tokens=1024,
+            )
+            
+            # Return the follow-up response
+            return followup_completion.choices[0].message.content or "No response after computer control."
+        
+        except Exception as e:
+            import traceback
+            logger.error(f"Error in computer control: {str(e)}\n{traceback.format_exc()}")
+            return f"Error processing computer control: {str(e)}"
+    
+    def extract_action(self, response: Any) -> Dict[str, Any]:
+        """Extract the action from the OpenAI response.
+        
+        Args:
+            response: Response message from OpenAI.
+            
+        Returns:
+            Dict with the parsed action.
+        """
+        if not response:
+            logger.error("Received empty response from OpenAI")
+            return {'type': 'error', 'message': 'Empty response from OpenAI'}
+        
+        # Check for tool calls
+        if hasattr(response, 'tool_calls') and response.tool_calls:
+            for tool_call in response.tool_calls:
+                function_name = tool_call.function.name
+                # Record the id so run_agent can attach the matching tool_result
+                self.last_tool_use_id = tool_call.id
+                
+                if function_name == 'finish_run':
+                    return {'type': 'finish'}
+                
+                if function_name != 'computer':
+                    logger.error(f"Unexpected tool: {function_name}")
+                    return {'type': 'error', 'message': f"Unexpected tool: {function_name}"}
+                
+                try:
+                    # Parse arguments
+                    args = json.loads(tool_call.function.arguments)
+                    action_type = args.get('action')
+                    
+                    if action_type in ['mouse_move', 'left_click_drag']:
+                        if 'coordinate' not in args or len(args['coordinate']) != 2:
+                            logger.error(f"Invalid coordinate for mouse action: {args}")
+                            return {'type': 'error', 'message': 'Invalid coordinate for mouse action'}
+                        return {
+                            'type': action_type,
+                            'x': args['coordinate'][0],
+                            'y': args['coordinate'][1]
+                        }
+                    elif action_type in ['left_click', 'right_click', 'middle_click', 'double_click', 'screenshot', 'cursor_position']:
+                        return {'type': action_type}
+                    elif action_type in ['type', 'key']:
+                        if 'text' not in args:
+                            logger.error(f"Missing text for keyboard action: {args}")
+                            return {'type': 'error', 'message': 'Missing text for keyboard action'}
+                        return {'type': action_type, 'text': args['text']}
+                    else:
+                        logger.error(f"Unsupported action: {action_type}")
+                        return {'type': 'error', 'message': f"Unsupported action: {action_type}"}
+                except json.JSONDecodeError:
+                    logger.error(f"Failed to parse tool arguments: {tool_call.function.arguments}")
+                    return {'type': 'error', 'message': 'Failed to parse tool arguments'}
+        
+        # If no tool calls, return error
+        return {'type': 'error', 'message': 'No tool use found in message'}
+    
+    def display_assistant_message(self, message: Any, update_callback: callable) -> None:
+        """Format and display the assistant's message.
+        
+        Args:
+            message: The message from OpenAI.
+            update_callback: Callback function to update the UI with the message.
+        """
+        # Display content text if present
+        if hasattr(message, 'content') and message.content:
+            update_callback(f"Assistant: {message.content}")
+        
+        # Display tool calls
+        if hasattr(message, 'tool_calls') and message.tool_calls:
+            for tool_call in message.tool_calls:
+                function_name = tool_call.function.name
+                self.last_tool_use_id = tool_call.id
+                
+                try:
+                    args = json.loads(tool_call.function.arguments)
+                    
+                    if function_name == 'computer':
+                        action = {
+                            'type': args.get('action'),
+                            'x': args.get('coordinate', [0, 0])[0] if 'coordinate' in args else None,
+                            'y': args.get('coordinate', [0, 0])[1] if 'coordinate' in args else None,
+                            'text': args.get('text')
+                        }
+                        update_callback(f"Performed action: {json.dumps(action)}")
+                    elif function_name == 'finish_run':
+                        update_callback("Assistant: Task completed! ")
+                    else:
+                        update_callback(f"Assistant action: {function_name} - {tool_call.function.arguments}")
+                except json.JSONDecodeError:
+                    update_callback(f"Assistant action: {function_name} - (invalid JSON)")
+    
+    @staticmethod
+    def get_available_models() -> List[Dict[str, str]]:
+        """Get a list of available OpenAI models.
+        
+        Returns:
+            List of dictionaries with model information.
+        """
+        return OpenAIProvider.AVAILABLE_MODELS
+    
+    @staticmethod
+    def default_model() -> str:
+        """Get the default OpenAI model ID.
+        
+        Returns:
+            Default model ID string.
+        """
+        # Return the first model that supports computer use
+        for model in OpenAIProvider.AVAILABLE_MODELS:
+            if "computer_use" in model.get("features", []):
+                return model["id"]
+        
+        # Fallback to the first model if none support computer use
+        return OpenAIProvider.AVAILABLE_MODELS[0]["id"] if OpenAIProvider.AVAILABLE_MODELS else "gpt-4o"
+
+    def _prepare_messages(self, message: str, screenshot_path: Optional[str] = None) -> List[Dict[str, Any]]:
+        """Prepare the messages for the OpenAI API.
+        
+        Args:
+            message: User message to respond to
+            screenshot_path: Path to screenshot to include (optional)
+            
+        Returns:
+            List of messages in OpenAI format
+        """
+        messages = []
+        
+        # Add system message
+        messages.append({
+            "role": "system", 
+            "content": self.get_prompt_for_model(self.model_id)
+        })
+        
+        # Add user message
+        messages.append({
+            "role": "user", 
+            "content": message
+        })
+        
+        # Add screenshot if provided (the chat completions API expects a
+        # content list containing an "image_url" part, not a bare dict)
+        if screenshot_path:
+            with open(screenshot_path, "rb") as image_file:
+                encoded_image = base64.b64encode(image_file.read()).decode("utf-8")
+                messages.append({
+                    "role": "user",
+                    "content": [
+                        {
+                            "type": "image_url",
+                            "image_url": {
+                                "url": f"data:image/png;base64,{encoded_image}"
+                            }
+                        }
+                    ]
+                })
+        
+        return messages
+    
+    def _prepare_tools(self) -> List[Dict[str, Any]]:
+        """Prepare the tools for the OpenAI API.
+        
+        Returns:
+            List of tools in OpenAI format
+        """
+        tools = []
+        
+        # Add computer use tool
+        tools.append({
+            "type": "function",
+            "function": {
+                "name": "computer_control",
+                "description": "Control the computer with mouse and keyboard actions",
+                "parameters": {
+                    "type": "object",
+                    "properties": {
+                        "command": {
+                            "type": "string",
+                            "description": "The command to execute"
+                        }
+                    },
+                    "required": ["command"]
+                }
+            }
+        })
+        
+        return tools
+    
+    def _execute_computer_control(self, command: str) -> Dict[str, Any]:
+        """Execute a computer control command.
+        
+        Args:
+            command: The command to execute
+            
+        Returns:
+            Result of the command execution
+        """
+        try:
+            logger.info(f"Executing computer control command: {command}")
+            
+            # Import the computer control module
+            from .computer import ComputerControl
+            
+            # Create a computer control instance
+            computer = ComputerControl()
+            
+            # Execute the command
+            result = computer.execute_command(command)
+            
+            logger.info(f"Computer control execution result: {result}")
+            return result
+        except Exception as e:
+            import traceback
+            error_message = f"Error executing computer control: {str(e)}"
+            logger.error(f"{error_message}\n{traceback.format_exc()}")
+            return {"error": error_message}
+    
+    def response_message_to_dict(self, message) -> Dict[str, Any]:
+        """Convert a response message to a dictionary.
+        
+        Args:
+            message: Response message from OpenAI (can be various types)
+            
+        Returns:
+            Dictionary representation of the message
+        """
+        try:
+            # If it's already a dict, return it
+            if isinstance(message, dict):
+                return message
+                
+            # Handle ChatCompletionMessage objects
+            if hasattr(message, 'model_dump'):
+                # New OpenAI SDK returns objects with model_dump
+                return message.model_dump()
+                
+            # Handle API response object
+            result = {
+                "role": getattr(message, "role", "assistant"),
+                "content": getattr(message, "content", "")
+            }
+            
+            # Add tool calls if present
+            if hasattr(message, "tool_calls") and message.tool_calls:
+                result["tool_calls"] = []
+                for tool_call in message.tool_calls:
+                    tc_dict = {
+                        "id": tool_call.id,
+                        "type": "function",
+                        "function": {
+                            "name": tool_call.function.name,
+                            "arguments": tool_call.function.arguments
+                        }
+                    }
+                    result["tool_calls"].append(tc_dict)
+                    
+            return result
+        except Exception as e:
+            logger.error(f"Error converting message to dict: {str(e)}")
+            # Fall back to a basic message
+            return {"role": "assistant", "content": str(message)}
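The tool-call round trip in `_handle_computer_control` and `response_message_to_dict` above follows a fixed message shape: the assistant message that issued the call must precede a `tool` message whose `tool_call_id` matches it. A minimal sketch of that sequencing, with illustrative names (`build_tool_round_trip` is not part of the patch):

```python
import json

def build_tool_round_trip(messages, tool_call_id, tool_name, tool_args, tool_result):
    """Return the message list the chat completions API expects after a tool
    call has been executed: the assistant message that issued the call,
    followed by a 'tool' message carrying the result."""
    assistant_message = {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": tool_call_id,
            "type": "function",
            "function": {"name": tool_name, "arguments": json.dumps(tool_args)},
        }],
    }
    tool_message = {
        "role": "tool",
        "tool_call_id": tool_call_id,  # must match the id in assistant_message
        "name": tool_name,
        "content": json.dumps(tool_result),
    }
    return messages + [assistant_message, tool_message]

history = [{"role": "user", "content": "Click the OK button"}]
new_history = build_tool_round_trip(
    history, "call_123", "computer_control",
    {"command": "left_click"}, {"status": "ok"},
)
print([m["role"] for m in new_history])  # → ['user', 'assistant', 'tool']
```

If the ids do not match, the follow-up completion request is rejected, which is why the patch keeps `tool_call_id` and the assistant message's `tool_calls[0].id` in sync.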
diff --git a/src/store.py b/src/store.py
index 5173906..8cb6ce3 100644
--- a/src/store.py
+++ b/src/store.py
@@ -1,11 +1,12 @@
+import os
+import json
 import logging
-from .anthropic import AnthropicClient
+from typing import Dict, Any, Optional, List
+from dotenv import load_dotenv
 from .computer import ComputerControl
-from anthropic.types.beta import BetaMessage, BetaToolUseBlock, BetaTextBlock
-import json
+from .ai_providers import AIProviderManager
+from .ai_providers import AIProvider
 
-
-logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')
 logger = logging.getLogger(__name__)
 
 class Store:
@@ -15,23 +16,169 @@ def __init__(self):
         self.running = False
         self.error = None
         self.run_history = []
-        self.last_tool_use_id = None
+        
+        # Load environment variables
+        load_dotenv()
+        
+        # Initialize AI provider
+        self.current_provider_name = os.getenv("DEFAULT_AI_PROVIDER", "anthropic")
+        self.current_model_id = None
+        self.ai_provider = self._create_provider(self.current_provider_name)
+        self.computer_control = ComputerControl()
+        
+    def _create_provider(self, provider_name: str) -> Optional[AIProvider]:
+        """Create an AI provider instance.
+        
+        Args:
+            provider_name: Name of the provider to create.
+            
+        Returns:
+            AIProvider instance or None if creation failed.
+        """
+        logger.info(f"Creating AI provider in store: {provider_name}")
         
         try:
-            self.anthropic_client = AnthropicClient()
-        except ValueError as e:
+            # Try to get the API key from environment
+            load_dotenv()
+            api_key_env_var = f"{provider_name.upper()}_API_KEY"
+            api_key = os.getenv(api_key_env_var)
+            
+            if not api_key:
+                error_msg = f"No API key found for {provider_name} provider. "
+                error_msg += f"Please set {api_key_env_var} in your .env file."
+                logger.error(error_msg)
+                self.error = error_msg
+                return None
+                
+            logger.info(f"Found API key for {provider_name}")
+            
+            # Create provider instance through manager
+            provider = AIProviderManager.create_provider(provider_name, api_key=api_key)
+            
+            if not provider:
+                error_msg = f"Failed to create {provider_name} provider. "
+                error_msg += "Check that you have the required dependencies installed."
+                logger.error(error_msg)
+                self.error = error_msg
+                return None
+                
+            logger.info(f"Successfully created {provider_name} provider")
+            return provider
+            
+        except Exception as e:
+            import traceback
             self.error = str(e)
-            logger.error(f"AnthropicClient initialization error: {self.error}")
-        self.computer_control = ComputerControl()
+            logger.error(f"AI provider initialization error: {self.error}")
+            logger.error(traceback.format_exc())
+            return None
         
     def set_instructions(self, instructions):
         self.instructions = instructions
         logger.info(f"Instructions set: {instructions}")
+    
+    def set_ai_provider(self, provider_name: str, model_id: Optional[str] = None) -> bool:
+        """Change the AI provider.
         
+        Args:
+            provider_name: Name of the provider to use.
+            model_id: Specific model ID to use (optional).
+            
+        Returns:
+            True if successful, False otherwise.
+        """
+        try:
+            logger.info(f"Setting AI provider to {provider_name} with model {model_id}")
+            
+            # Only recreate provider if it's different
+            if provider_name != self.current_provider_name:
+                logger.info(f"Creating new provider instance for {provider_name}")
+                self.current_provider_name = provider_name
+                self.ai_provider = self._create_provider(provider_name)
+                
+                if not self.ai_provider:
+                    logger.error(f"Failed to create provider: {self.error}")
+                    return False
+            else:
+                logger.info(f"Provider {provider_name} is already active, no need to recreate")
+                
+            # Set model ID if provided or use current
+            if model_id:
+                logger.info(f"Setting model to {model_id}")
+                self.current_model_id = model_id
+                if self.ai_provider:
+                    self.ai_provider.model_id = model_id
+                    
+            return self.ai_provider is not None
+        except Exception as e:
+            import traceback
+            self.error = str(e)
+            logger.error(f"Failed to set AI provider: {self.error}")
+            logger.error(traceback.format_exc())
+            return False
+    
+    def get_available_providers(self) -> List[str]:
+        """Get a list of available AI providers.
+        
+        Returns:
+            List of provider name strings.
+        """
+        return AIProviderManager.get_provider_names()
+    
+    def get_available_models(self, provider_name: Optional[str] = None) -> List[Dict[str, Any]]:
+        """Get a list of available models for a provider.
+        
+        Args:
+            provider_name: Name of the provider to get models for (uses current if None).
+            
+        Returns:
+            List of model info dictionaries.
+        """
+        name = provider_name or self.current_provider_name
+        provider = AIProviderManager.create_provider(name)
+        
+        if provider:
+            return provider.get_available_models()
+        return []
+        
+    def get_prompt_manager(self):
+        """Get the current provider's prompt manager.
+        
+        Returns:
+            PromptManagerBase instance for the current provider.
+        """
+        if self.ai_provider:
+            return self.ai_provider.prompt_manager
+        return None
+    
+    def update_prompt(self, prompt: str) -> bool:
+        """Update the system prompt for the current provider.
+        
+        Args:
+            prompt: New system prompt.
+            
+        Returns:
+            True if successful, False otherwise.
+        """
+        if not self.ai_provider or not hasattr(self.ai_provider, 'prompt_manager'):
+            return False
+            
+        return self.ai_provider.prompt_manager.save_prompt(prompt)
+    
+    def reset_prompt_to_default(self) -> bool:
+        """Reset the system prompt to the default for the current provider.
+        
+        Returns:
+            True if successful, False otherwise.
+        """
+        if not self.ai_provider or not hasattr(self.ai_provider, 'prompt_manager'):
+            return False
+            
+        return self.ai_provider.prompt_manager.reset_to_default()
+    
     def run_agent(self, update_callback):
-        if self.error:
-            update_callback(f"Error: {self.error}")
-            logger.error(f"Agent run failed due to initialization error: {self.error}")
+        if not self.ai_provider:
+            update_callback("Error: AI provider not initialized")
+            logger.error("Agent run failed due to missing AI provider")
             return
 
         self.running = True
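The key lookup in `_create_provider` above derives the environment variable name from the provider name. A minimal standalone sketch of that convention (`resolve_api_key` is an illustrative name, not part of the patch):

```python
import os

def resolve_api_key(provider_name, env=None):
    """Look up '<PROVIDER>_API_KEY', mirroring the lookup _create_provider
    performs, e.g. 'anthropic' -> ANTHROPIC_API_KEY, 'openai' -> OPENAI_API_KEY."""
    env = os.environ if env is None else env
    var = f"{provider_name.upper()}_API_KEY"
    key = env.get(var)
    if not key:
        raise KeyError(
            f"No API key found for {provider_name} provider. "
            f"Please set {var} in your .env file."
        )
    return key

print(resolve_api_key("openai", {"OPENAI_API_KEY": "sk-test"}))  # → sk-test
```

Because the variable name is derived, anything written to `.env` must use the provider-specific form; a bare `API_KEY=` entry would never be found by this lookup.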
@@ -41,14 +188,14 @@ def run_agent(self, update_callback):
         
         while self.running:
             try:
-                message = self.anthropic_client.get_next_action(self.run_history)
+                message = self.ai_provider.get_next_action(self.run_history)
                 self.run_history.append(message)
-                logger.debug(f"Received message from Anthropic: {message}")
+                logger.debug(f"Received message from AI: {message}")
                 
                 # Display assistant's message in the chat
-                self.display_assistant_message(message, update_callback)
+                self.ai_provider.display_assistant_message(message, update_callback)
                 
-                action = self.extract_action(message)
+                action = self.ai_provider.extract_action(message)
                 logger.info(f"Extracted action: {action}")
                 
                 if action['type'] == 'error':
@@ -73,7 +220,7 @@ def run_agent(self, update_callback):
                             "content": [
                                 {
                                     "type": "tool_result",
-                                    "tool_use_id": self.last_tool_use_id,
+                                    "tool_use_id": self.ai_provider.last_tool_use_id,
                                     "content": [
                                         {"type": "text", "text": "Here is a screenshot after the action was executed"},
                                         {"type": "image", "source": {"type": "base64", "media_type": "image/png", "data": screenshot}}
@@ -112,77 +259,6 @@ def stop_run(self):
             "content": [{"type": "text", "text": "Agent run stopped by user."}]
         })
         
-    def extract_action(self, message):
-        logger.debug(f"Extracting action from message: {message}")
-        if not isinstance(message, BetaMessage):
-            logger.error(f"Unexpected message type: {type(message)}")
-            return {'type': 'error', 'message': 'Unexpected message type'}
-        
-        for item in message.content:
-            if isinstance(item, BetaToolUseBlock):
-                tool_use = item
-                logger.debug(f"Found tool use: {tool_use}")
-                self.last_tool_use_id = tool_use.id
-                if tool_use.name == 'finish_run':
-                    return {'type': 'finish'}
-                
-                if tool_use.name != 'computer':
-                    logger.error(f"Unexpected tool: {tool_use.name}")
-                    return {'type': 'error', 'message': f"Unexpected tool: {tool_use.name}"}
-                
-                input_data = tool_use.input
-                action_type = input_data.get('action')
-                
-                if action_type in ['mouse_move', 'left_click_drag']:
-                    if 'coordinate' not in input_data or len(input_data['coordinate']) != 2:
-                        logger.error(f"Invalid coordinate for mouse action: {input_data}")
-                        return {'type': 'error', 'message': 'Invalid coordinate for mouse action'}
-                    return {
-                        'type': action_type,
-                        'x': input_data['coordinate'][0],
-                        'y': input_data['coordinate'][1]
-                    }
-                elif action_type in ['left_click', 'right_click', 'middle_click', 'double_click', 'screenshot', 'cursor_position']:
-                    return {'type': action_type}
-                elif action_type in ['type', 'key']:
-                    if 'text' not in input_data:
-                        logger.error(f"Missing text for keyboard action: {input_data}")
-                        return {'type': 'error', 'message': 'Missing text for keyboard action'}
-                    return {'type': action_type, 'text': input_data['text']}
-                else:
-                    logger.error(f"Unsupported action: {action_type}")
-                    return {'type': 'error', 'message': f"Unsupported action: {action_type}"}
-        
-        logger.error("No tool use found in message")
-        return {'type': 'error', 'message': 'No tool use found in message'}
-
-    def display_assistant_message(self, message, update_callback):
-        if isinstance(message, BetaMessage):
-            for item in message.content:
-                if isinstance(item, BetaTextBlock):
-                    # Clean and format the text
-                    text = item.text.strip()
-                    if text:  # Only send non-empty messages
-                        update_callback(f"Assistant: {text}")
-                elif isinstance(item, BetaToolUseBlock):
-                    # Format tool use in a more readable way
-                    tool_name = item.name
-                    tool_input = item.input
-                    
-                    # Convert tool use to a more readable format
-                    if tool_name == 'computer':
-                        action = {
-                            'type': tool_input.get('action'),
-                            'x': tool_input.get('coordinate', [0, 0])[0] if 'coordinate' in tool_input else None,
-                            'y': tool_input.get('coordinate', [0, 0])[1] if 'coordinate' in tool_input else None,
-                            'text': tool_input.get('text')
-                        }
-                        update_callback(f"Performed action: {json.dumps(action)}")
-                    elif tool_name == 'finish_run':
-                        update_callback("Assistant: Task completed! ")
-                    else:
-                        update_callback(f"Assistant action: {tool_name} - {json.dumps(tool_input)}")
-
     def cleanup(self):
         if hasattr(self, 'computer_control'):
             self.computer_control.cleanup()
diff --git a/src/window.py b/src/window.py
index 9944cb9..4be7bed 100644
--- a/src/window.py
+++ b/src/window.py
@@ -1,11 +1,11 @@
 from PyQt6.QtWidgets import (QMainWindow, QVBoxLayout, QHBoxLayout, QWidget, QTextEdit, 
-                             QPushButton, QLabel, QProgressBar, QSystemTrayIcon, QMenu, QApplication, QDialog, QLineEdit, QMenuBar, QStatusBar)
+                             QPushButton, QLabel, QProgressBar, QSystemTrayIcon, QMenu, QApplication, QDialog, QLineEdit, QMenuBar, QStatusBar, QComboBox)
 from PyQt6.QtCore import Qt, QPoint, pyqtSignal, QThread, QUrl, QSettings
 from PyQt6.QtGui import QFont, QKeySequence, QShortcut, QAction, QTextCursor, QDesktopServices
 from .store import Store
 from .anthropic import AnthropicClient  
 from .voice_control import VoiceController
-from .prompt_manager import PromptManager
+from .prompt_manager import create_prompt_manager
 import logging
 import qtawesome as qta
 
@@ -24,16 +24,17 @@ def run(self):
         self.finished_signal.emit()
 
 class SystemPromptDialog(QDialog):
-    def __init__(self, parent=None, prompt_manager=None):
+    def __init__(self, parent=None, prompt_manager=None, provider_name=None):
         super().__init__(parent)
         self.prompt_manager = prompt_manager
-        self.setWindowTitle("Edit System Prompt")
+        self.provider_name = provider_name or "AI"
+        self.setWindowTitle(f"Edit {self.provider_name.capitalize()} System Prompt")
         self.setFixedSize(800, 600)
         
         layout = QVBoxLayout()
         
         # Description
-        desc_label = QLabel("Edit the system prompt that defines the agent's behavior. Be careful with changes as they may affect functionality.")
+        desc_label = QLabel(f"Edit the system prompt for {self.provider_name.capitalize()} models. Be careful with changes as they may affect functionality.")
         desc_label.setWordWrap(True)
         desc_label.setStyleSheet("color: #666; margin: 10px 0;")
         layout.addWidget(desc_label)
@@ -110,8 +111,8 @@ class MainWindow(QMainWindow):
     def __init__(self, store, anthropic_client):
         super().__init__()
         self.store = store
+        # Keep anthropic_client for backward compatibility 
         self.anthropic_client = anthropic_client
-        self.prompt_manager = PromptManager()
         
         # Initialize theme settings
         self.settings = QSettings('Grunty', 'Preferences')
@@ -128,7 +129,7 @@ def __init__(self, store, anthropic_client):
         self.status_bar.showMessage("Voice control ready")
         
         # Check if API key is missing
-        if self.store.error and "ANTHROPIC_API_KEY not found" in self.store.error:
+        if self.store.error and "No API key found" in self.store.error:
             self.show_api_key_dialog()
         
         self.setWindowTitle("Grunty 👨💻")
@@ -208,7 +209,7 @@ def save_api_key(self, dialog):
             
         # Save to .env file
         with open('.env', 'w') as f:
-            f.write(f'ANTHROPIC_API_KEY={api_key}')
+            provider_var = f"{self.store.current_provider_name.upper()}_API_KEY"
+            f.write(f'{provider_var}={api_key}')
             
         # Reinitialize the store and anthropic client
         self.store = Store()
@@ -216,74 +217,107 @@ def save_api_key(self, dialog):
         dialog.accept()
 
     def setup_ui(self):
-        central_widget = QWidget()
-        self.setCentralWidget(central_widget)
-        
-        # Create main layout
-        main_layout = QVBoxLayout()
-        main_layout.setContentsMargins(15, 15, 15, 15)
-        central_widget.setLayout(main_layout)
-        
-        # Container widget for rounded corners
-        self.container = QWidget()  # Make it an instance variable
+        # Main container with padding
+        self.container = QWidget()
         self.container.setObjectName("container")
-        container_layout = QVBoxLayout()
-        container_layout.setSpacing(0)  # Remove spacing between elements
-        self.container.setLayout(container_layout)
-        
-        # Create title bar
-        title_bar = QWidget()
-        title_bar.setObjectName("titleBar")
-        title_bar_layout = QHBoxLayout(title_bar)
-        title_bar_layout.setContentsMargins(10, 5, 10, 5)
-        
-        # Add Grunty title with robot emoji
-        title_label = QLabel("Grunty 🤖")
-        title_label.setObjectName("titleLabel")
-        title_bar_layout.addWidget(title_label)
-        
-        # Add File Menu
-        file_menu = QMenu("File")
-        new_task_action = QAction("New Task", self)
-        new_task_action.setShortcut("Ctrl+N")
-        edit_prompt_action = QAction("Edit System Prompt", self)
-        edit_prompt_action.setShortcut("Ctrl+E")
-        edit_prompt_action.triggered.connect(self.show_prompt_dialog)
-        quit_action = QAction("Quit", self)
-        quit_action.setShortcut("Ctrl+Q")
-        quit_action.triggered.connect(self.quit_application)
-        file_menu.addAction(new_task_action)
-        file_menu.addAction(edit_prompt_action)
-        file_menu.addSeparator()
-        file_menu.addAction(quit_action)
-        
-        file_button = QPushButton("File")
-        file_button.setObjectName("menuButton")
-        file_button.clicked.connect(lambda: file_menu.exec(file_button.mapToGlobal(QPoint(0, file_button.height()))))
-        title_bar_layout.addWidget(file_button)
-        
-        # Add spacer to push remaining items to the right
-        title_bar_layout.addStretch()
-        
-        # Theme toggle button
-        self.theme_button = QPushButton()
-        self.theme_button.setObjectName("titleBarButton")
-        self.theme_button.clicked.connect(self.toggle_theme)
-        self.update_theme_button()
-        title_bar_layout.addWidget(self.theme_button)
         
-        # Minimize and close buttons
-        minimize_button = QPushButton("−")
-        minimize_button.setObjectName("titleBarButton")
-        minimize_button.clicked.connect(self.showMinimized)
-        title_bar_layout.addWidget(minimize_button)
+        self.setCentralWidget(self.container)
+        
+        # Main layout
+        main_layout = QVBoxLayout(self.container)
+        main_layout.setContentsMargins(10, 10, 10, 10)  # Padding inside the window
         
-        close_button = QPushButton("×")
-        close_button.setObjectName("titleBarButton")
-        close_button.clicked.connect(self.close)
-        title_bar_layout.addWidget(close_button)
+        # Header
+        header_layout = QHBoxLayout()
         
-        container_layout.addWidget(title_bar)
+        # App title and info
+        title_layout = QVBoxLayout()
+        title_label = QLabel("Grunty")
+        title_label.setObjectName("titleLabel")
+        subtitle_label = QLabel("AI Computer Control")
+        subtitle_label.setObjectName("subtitleLabel")
+        title_layout.addWidget(title_label)
+        title_layout.addWidget(subtitle_label)
+        
+        # Add AI Provider/Model selector
+        ai_selector_layout = QHBoxLayout()
+        
+        # Provider dropdown
+        provider_layout = QVBoxLayout()
+        provider_label = QLabel("AI Provider:")
+        provider_label.setObjectName("selectorLabel")
+        self.provider_dropdown = QComboBox()
+        self.provider_dropdown.setObjectName("providerDropdown")
+        self.provider_dropdown.addItems(self.store.get_available_providers())
+        self.provider_dropdown.setCurrentText(self.store.current_provider_name)
+        self.provider_dropdown.currentTextChanged.connect(self.change_provider)
+        provider_layout.addWidget(provider_label)
+        provider_layout.addWidget(self.provider_dropdown)
+        
+        # Model dropdown
+        model_layout = QVBoxLayout()
+        model_label = QLabel("Model:")
+        model_label.setObjectName("selectorLabel")
+        self.model_dropdown = QComboBox()
+        self.model_dropdown.setObjectName("modelDropdown")
+        self.update_model_dropdown()
+        self.model_dropdown.currentTextChanged.connect(self.change_model)
+        model_layout.addWidget(model_label)
+        model_layout.addWidget(self.model_dropdown)
+        
+        ai_selector_layout.addLayout(provider_layout)
+        ai_selector_layout.addLayout(model_layout)
+        
+        # Settings button
+        self.settings_btn = QPushButton()
+        self.settings_btn.setIcon(qta.icon('fa5s.cog'))
+        self.settings_btn.setObjectName("iconButton")
+        self.settings_btn.setToolTip("Settings")
+        self.settings_btn.clicked.connect(self.show_settings_menu)
+        
+        # Buttons layout
+        buttons_layout = QHBoxLayout()
+        buttons_layout.addStretch()
+        
+        # Voice control button (mic icon)
+        self.voice_btn = QPushButton()
+        self.voice_btn.setIcon(qta.icon('fa5s.microphone'))
+        self.voice_btn.setCheckable(True)
+        self.voice_btn.setObjectName("iconButton")
+        self.voice_btn.setToolTip("Voice Control (Experimental)")
+        self.voice_btn.clicked.connect(self.toggle_voice_control)
+        buttons_layout.addWidget(self.voice_btn)
+        
+        self.theme_btn = QPushButton()
+        self.theme_btn.setIcon(qta.icon('fa5s.moon' if self.dark_mode else 'fa5s.sun'))
+        self.theme_btn.setObjectName("iconButton")
+        self.theme_btn.setToolTip("Toggle Light/Dark Mode")
+        self.theme_btn.clicked.connect(self.toggle_theme)
+        buttons_layout.addWidget(self.theme_btn)
+        
+        # Settings button
+        buttons_layout.addWidget(self.settings_btn)
+        
+        # Minimize button
+        min_btn = QPushButton()
+        min_btn.setIcon(qta.icon('fa5s.window-minimize'))
+        min_btn.setObjectName("titlebarButton")
+        min_btn.clicked.connect(self.showMinimized)
+        buttons_layout.addWidget(min_btn)
+        
+        # Close button
+        close_btn = QPushButton()
+        close_btn.setIcon(qta.icon('fa5s.times'))
+        close_btn.setObjectName("titlebarButton")
+        close_btn.clicked.connect(self.minimize_to_tray)
+        buttons_layout.addWidget(close_btn)
+        
+        header_layout.addLayout(title_layout)
+        header_layout.addStretch()
+        header_layout.addLayout(buttons_layout)
+        
+        main_layout.addLayout(header_layout)
+        main_layout.addLayout(ai_selector_layout)
         
         # Action log with modern styling
         self.action_log = QTextEdit()
@@ -299,7 +333,7 @@ def setup_ui(self):
                 font-size: 13px;
             }
         """)
-        container_layout.addWidget(self.action_log, stretch=1)  # Give it flexible space
+        main_layout.addWidget(self.action_log, stretch=1)  # Give it flexible space
         
         # Progress bar - Now above input area
         self.progress_bar = QProgressBar()
@@ -317,7 +351,7 @@ def setup_ui(self):
             }
         """)
         self.progress_bar.hide()
-        container_layout.addWidget(self.progress_bar)
+        main_layout.addWidget(self.progress_bar)
 
         # Input section container - Fixed height at bottom
         input_section = QWidget()
@@ -442,21 +476,18 @@ def setup_ui(self):
         input_layout.addLayout(control_layout)
 
         # Add input section to main container
-        container_layout.addWidget(input_section)
+        main_layout.addWidget(input_section)
 
-        # Add the container to the main layout
-        main_layout.addWidget(self.container)
-        
         # Apply theme after all widgets are set up
         self.apply_theme()
         
     def update_theme_button(self):
         if self.dark_mode:
-            self.theme_button.setIcon(qta.icon('fa5s.sun', color='white'))
-            self.theme_button.setToolTip("Switch to Light Mode")
+            self.theme_btn.setIcon(qta.icon('fa5s.sun', color='white'))
+            self.theme_btn.setToolTip("Switch to Light Mode")
         else:
-            self.theme_button.setIcon(qta.icon('fa5s.moon', color='black'))
-            self.theme_button.setToolTip("Switch to Dark Mode")
+            self.theme_btn.setIcon(qta.icon('fa5s.moon', color='black'))
+            self.theme_btn.setToolTip("Switch to Dark Mode")
 
     def toggle_theme(self):
         self.dark_mode = not self.dark_mode
@@ -555,17 +586,28 @@ def apply_theme(self):
         """
 
         # Apply to all window control buttons
-        for button in [self.theme_button, 
-                      self.findChild(QPushButton, "menuButton"),
-                      self.findChild(QPushButton, "titleBarButton")]:
+        for button in [self.theme_btn,
+                       self.settings_btn,   # Direct button references are used here
+                       self.voice_btn]:     # because findChild may return None
             if button:
-                button.setStyleSheet(window_control_style)
-
+                button.setStyleSheet(f"""
+                    QPushButton {{
+                        background-color: {colors['button_bg']};
+                        color: {colors['button_text']};
+                        border: none;
+                        border-radius: 4px;
+                        padding: 8px;
+                    }}
+                    QPushButton:hover {{
+                        background-color: {colors['button_hover']};
+                    }}
+                """)
+        
         # Update theme button icon
         if self.dark_mode:
-            self.theme_button.setIcon(qta.icon('fa5s.sun', color=colors['button_text']))
+            self.theme_btn.setIcon(qta.icon('fa5s.sun', color=colors['button_text']))
         else:
-            self.theme_button.setIcon(qta.icon('fa5s.moon', color=colors['button_text']))
+            self.theme_btn.setIcon(qta.icon('fa5s.moon', color=colors['button_text']))
 
         # Update tray menu style if needed
         if hasattr(self, 'tray_icon') and self.tray_icon.contextMenu():
@@ -672,6 +714,17 @@ def toggle_window(self):
             self.raise_()
             self.activateWindow()
 
+    def minimize_to_tray(self):
+        """Minimize the window to the system tray."""
+        self.hide()
+        # Show a notification
+        self.tray_icon.showMessage(
+            "Grunty is still running",
+            "The application is minimized to the system tray.",
+            QSystemTrayIcon.MessageIcon.Information,
+            2000
+        )
+    
     def run_agent(self):
         instructions = self.input_area.toPlainText()
         if not instructions:
@@ -837,6 +890,26 @@ def update_log(self, message):
             self.action_log.verticalScrollBar().maximum()
         )
         
+    def log(self, message: str):
+        """Log a message to the log area.
+        
+        Args:
+            message: Message to log
+        """
+        if hasattr(self, 'action_log'):
+            import datetime
+            timestamp = datetime.datetime.now().strftime('%H:%M:%S')
+            log_message = f"[{timestamp}] {message}"
+            
+            # Add message to log area
+            if self.action_log.toPlainText():
+                self.action_log.append(log_message)
+            else:
+                self.action_log.setPlainText(log_message)
+        else:
+            # Fallback to console logging if action_log doesn't exist
+            logging.info(f"UI Log: {message}")
+        
     def handle_voice_input(self, text):
         """Handle voice input by setting it in the input area and running the agent"""
         self.input_area.setText(text)
@@ -946,5 +1019,110 @@ def quit_application(self):
         QApplication.quit()
 
     def show_prompt_dialog(self):
-        dialog = SystemPromptDialog(self, self.prompt_manager)
-        dialog.exec()
\ No newline at end of file
+        # Get the current provider
+        provider_name = self.provider_dropdown.currentText().lower()
+        
+        # Create a prompt manager instance for the current provider
+        prompt_manager = self.store.get_prompt_manager(provider_name)
+        
+        # Show dialog
+        dialog = SystemPromptDialog(self, prompt_manager, provider_name)
+        dialog.exec()
+    
+    def show_settings_menu(self):
+        """Show the settings menu when the settings button is clicked."""
+        # Create a menu
+        menu = QMenu(self)
+        
+        # Add system prompt action
+        prompt_action = QAction("Edit System Prompt", self)
+        prompt_action.triggered.connect(self.show_prompt_dialog)
+        menu.addAction(prompt_action)
+        
+        # Add a separator
+        menu.addSeparator()
+        
+        # Add about action
+        about_action = QAction("About Grunty", self)
+        about_action.triggered.connect(lambda: QMessageBox.about(self, "About Grunty", 
+                                                             "Grunty v1.0\nAI Computer Control\n\nAllows AI to control your computer."))
+        menu.addAction(about_action)
+        
+        # Show the menu at the button's position
+        menu.exec(self.settings_btn.mapToGlobal(QPoint(0, self.settings_btn.height())))
+    
+    def update_model_dropdown(self):
+        """Update the model dropdown with models from the current provider."""
+        current_provider = self.provider_dropdown.currentText()
+        self.model_dropdown.clear()
+        
+        models = self.store.get_available_models(current_provider)
+        model_names = [model['name'] for model in models]
+        model_ids = [model['id'] for model in models]
+        
+        self.model_dropdown.addItems(model_names)
+        
+        # Store model IDs as user data
+        for i, model_id in enumerate(model_ids):
+            self.model_dropdown.setItemData(i, model_id)
+        
+        # Select the current model if it exists
+        if self.store.current_model_id:
+            for i in range(self.model_dropdown.count()):
+                if self.model_dropdown.itemData(i) == self.store.current_model_id:
+                    self.model_dropdown.setCurrentIndex(i)
+                    break
+    
+    def change_provider(self, provider_name):
+        """Change the AI provider."""
+        import logging
+        logger = logging.getLogger(__name__)
+        
+        try:
+            logger.info(f"Attempting to change AI provider to {provider_name}")
+            
+            if provider_name != self.store.current_provider_name:
+                logger.info(f"Current provider is {self.store.current_provider_name}, changing to {provider_name}")
+                
+                if self.store.set_ai_provider(provider_name):
+                    logger.info(f"Successfully changed provider to {provider_name}")
+                    self.update_model_dropdown()
+                    self.log(f"AI provider changed to {provider_name}")
+                else:
+                    logger.error(f"Failed to change AI provider: {self.store.error}")
+                    self.log(f"Failed to change AI provider: {self.store.error}")
+                    # Show error message box
+                    from PyQt6.QtWidgets import QMessageBox
+                    QMessageBox.critical(self, "Provider Change Error", 
+                                      f"Failed to change AI provider to {provider_name}.\n\nReason: {self.store.error}")
+                    # Revert selection
+                    self.provider_dropdown.setCurrentText(self.store.current_provider_name)
+            else:
+                logger.info(f"Provider {provider_name} already selected, no change needed")
+        except Exception as e:
+            import traceback
+            logger.error(f"Exception when changing provider: {str(e)}\n{traceback.format_exc()}")
+            self.log(f"Error changing provider: {str(e)}")
+            # Show error message box
+            from PyQt6.QtWidgets import QMessageBox
+            QMessageBox.critical(self, "Provider Change Error", 
+                             f"An error occurred when changing to {provider_name}.\n\nError: {str(e)}")
+            # Revert selection
+            self.provider_dropdown.setCurrentText(self.store.current_provider_name)
+    
+    def change_model(self, model_name):
+        """Change the model for the current provider."""
+        current_index = self.model_dropdown.currentIndex()
+        if current_index >= 0:
+            model_id = self.model_dropdown.itemData(current_index)
+            if self.store.current_model_id != model_id:
+                if self.store.set_ai_provider(self.store.current_provider_name, model_id):
+                    self.log(f"Model changed to {model_name} ({model_id})")
+                else:
+                    self.log(f"Failed to change model: {self.store.error}")
+                    # Find and select the current model
+                    for i in range(self.model_dropdown.count()):
+                        if self.model_dropdown.itemData(i) == self.store.current_model_id:
+                            self.model_dropdown.setCurrentIndex(i)
+                            break
\ No newline at end of file
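
The revert-on-failure pattern that `change_provider` follows in the diff above can be sketched without Qt. The `FakeStore` below is a stand-in with the same surface as the calls in the patch (`set_ai_provider` returning a bool, `current_provider_name`, `error`); it is illustrative, not the real `src.store` module:

```python
class FakeStore:
    """Minimal stand-in for the app's Store (illustrative only)."""
    def __init__(self):
        self.current_provider_name = "anthropic"
        self.error = None

    def set_ai_provider(self, name, model_id=None):
        # Pretend only these two providers can be initialized.
        if name in ("anthropic", "openai"):
            self.current_provider_name = name
            self.error = None
            return True
        self.error = f"{name.upper()}_API_KEY not found"
        return False


def switch_provider(store, requested):
    """Switch providers; on failure, keep the previous one selected."""
    previous = store.current_provider_name
    if requested == previous:
        return previous                  # already selected, nothing to do
    if store.set_ai_provider(requested):
        return requested                 # success: UI keeps the new value
    # Failure: surface store.error to the user, then revert the selection,
    # just as change_provider resets the dropdown to current_provider_name.
    return previous
```

Note that in the real UI, resetting the dropdown re-fires `currentTextChanged`, so `change_provider` runs a second time; the `provider_name != self.store.current_provider_name` guard makes that second call a no-op.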
diff --git a/test_providers.py b/test_providers.py
new file mode 100644
index 0000000..ce4b99d
--- /dev/null
+++ b/test_providers.py
@@ -0,0 +1,120 @@
+"""
+Test script for AI providers.
+
+This script tests the provider creation and switching functionality.
+"""
+
+import os
+import sys
+import logging
+from dotenv import load_dotenv
+
+# Configure logging
+logging.basicConfig(
+    level=logging.DEBUG,
+    format='%(asctime)s - %(levelname)s - [%(filename)s:%(lineno)d] - %(message)s'
+)
+logger = logging.getLogger(__name__)
+
+# Load environment variables
+load_dotenv()
+
+# Add the project directory to the path
+sys.path.append(os.path.dirname(os.path.abspath(__file__)))
+
+# Import the providers
+from src.ai_providers import AIProviderManager
+from src.openai_provider import OpenAIProvider
+from src.anthropic_provider import AnthropicProvider
+
+def test_anthropic_provider():
+    """Test Anthropic provider creation and initialization."""
+    logger.info("Testing Anthropic provider...")
+    
+    # Get the API key
+    api_key = os.getenv("ANTHROPIC_API_KEY")
+    if not api_key:
+        logger.error("ANTHROPIC_API_KEY not found in environment variables")
+        return False
+        
+    # Create the provider
+    provider = AnthropicProvider(api_key=api_key)
+    
+    # Initialize the provider
+    if provider.initialize():
+        logger.info("Anthropic provider initialized successfully")
+        logger.info(f"Using model: {provider.model_id}")
+        return True
+    else:
+        logger.error("Failed to initialize Anthropic provider")
+        return False
+        
+def test_openai_provider():
+    """Test OpenAI provider creation and initialization."""
+    logger.info("Testing OpenAI provider...")
+    
+    # Get the API key
+    api_key = os.getenv("OPENAI_API_KEY")
+    if not api_key:
+        logger.error("OPENAI_API_KEY not found in environment variables")
+        return False
+        
+    # Create the provider
+    provider = OpenAIProvider(api_key=api_key)
+    
+    # Initialize the provider
+    if provider.initialize():
+        logger.info("OpenAI provider initialized successfully")
+        logger.info(f"Using model: {provider.model_id}")
+        return True
+    else:
+        logger.error("Failed to initialize OpenAI provider")
+        return False
+        
+def test_provider_manager():
+    """Test the provider manager."""
+    logger.info("Testing provider manager...")
+    
+    # Get the available providers
+    providers = AIProviderManager.get_provider_names()
+    logger.info(f"Available providers: {providers}")
+    
+    # Test creating each provider
+    for provider_name in providers:
+        logger.info(f"Testing provider: {provider_name}")
+        
+        # Get the API key
+        api_key_env = f"{provider_name.upper()}_API_KEY"
+        api_key = os.getenv(api_key_env)
+        
+        if not api_key:
+            logger.error(f"{api_key_env} not found in environment variables")
+            continue
+            
+        # Create the provider
+        provider = AIProviderManager.create_provider(provider_name, api_key=api_key)
+        
+        if provider:
+            logger.info(f"Provider {provider_name} created successfully")
+            logger.info(f"Using model: {provider.model_id}")
+        else:
+            logger.error(f"Failed to create provider {provider_name}")
+            
+def main():
+    """Run all provider tests."""
+    logger.info("Starting provider tests...")
+    
+    # Test individual providers
+    anthropic_result = test_anthropic_provider()
+    openai_result = test_openai_provider()
+    
+    # Test the provider manager
+    test_provider_manager()
+    
+    # Print summary
+    logger.info("Provider test summary:")
+    logger.info(f"Anthropic provider: {'SUCCESS' if anthropic_result else 'FAILED'}")
+    logger.info(f"OpenAI provider: {'SUCCESS' if openai_result else 'FAILED'}")
+    
+if __name__ == "__main__":
+    main()
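
`test_provider_manager` above derives each provider's environment variable as `f"{provider_name.upper()}_API_KEY"`. A small helper makes that naming convention explicit and reusable; `resolve_api_key` is an illustrative sketch, not part of `src.ai_providers`:

```python
import os


def api_key_env_name(provider_name: str) -> str:
    """Map a provider name to its expected environment variable name."""
    return f"{provider_name.upper()}_API_KEY"


def resolve_api_key(provider_name: str, env=os.environ):
    """Return (env_var_name, key_or_None) for a provider."""
    name = api_key_env_name(provider_name)
    return name, env.get(name)
```

With this, the test loop could read `env_name, api_key = resolve_api_key(provider_name)` and reuse `env_name` in its error message.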