27 changes: 23 additions & 4 deletions changelog.md
@@ -2,16 +2,35 @@

All notable changes to this project will be documented in this file.

## [2025.12.21] - 2025-12-21

### 🐛 Fixed
- **fix**: Conducted an extensive codebase cleanup using Ruff, resolving over 200 linting issues and potential bugs.
- **fix**: Standardized error handling by replacing bare `except:` blocks with `except Exception:` or specific exception types across multiple modules (Bing search, GGUF converter, SwiftCLI, and various AI providers); a before/after sketch follows this list.
- **fix**: Resolved numerous `F821` (undefined name) and `F405` (name may be undefined from star imports) errors:
- Restored missing `get_item` method in `YTdownloader.py`.
- Defined missing variables (`result`, `content`, `tool_calls`) in `TextPollinationsAI.py` response processing.
- Fixed missing `ic` imports from `litprinter` in multiple TTS providers (`MurfAI`, `OpenAI.fm`, `Parler`, `Qwen`, `Sherpa`, `FreeTTS`).
- Fixed missing `exceptions` import in `FreeTTS`.
- Resolved undefined `CLI` reference in SwiftCLI `Context` using `TYPE_CHECKING` and explicit imports (pattern sketched after this list).
- Added missing type hint imports (`Optional`, `Any`, `Union`, `Generator`, `Response`) across 30+ AI provider modules including Cohere, Gemini, Groq, HuggingFace, and more.
- Fixed undefined `LitAgent` reference in `GitToolkit/utils.py` and missing `Union` in `YTdownloader.py`.
- Resolved a `Response` naming conflict in `Ayle.py` by aliasing the `curl_cffi` response class.
- **fix**: Corrected syntax errors and corrupted logic blocks in `YTdownloader.py` and `iask_search.py`.
- **fix**: Improved project adherence to PEP 8:
- Moved module-level imports to the top of files in `server` and `aihumanizer`.
- Replaced incorrect type comparisons (e.g., `type(x) == bool`) with the idiomatic identity check (`type(x) is bool`).
- Split multiple statements on single lines (E701, E702) across the entire project for better readability.
- **refactor**: Replaced star imports (`from ... import *`) with explicit imports in `GitToolkit` and `samurai` provider to eliminate name shadowing and improve static analysis.
- **refactor**: Added dynamic model fetching to both DeepInfra providers (`webscout/Provider/Deepinfra.py`, `webscout/Provider/OPENAI/deepinfra.py`), following the Groq provider pattern. Implemented `get_models()` and `update_available_models()` class methods that fetch from the `https://api.deepinfra.com/v1/models` endpoint, with a fallback to the default models on failure. Providers now automatically refresh their available models list during initialization (sketched after this list).
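
A minimal sketch of the bare-`except:` replacement described in the first fix above; the `fetch_results` helper is hypothetical and stands in for any call that may fail:

```python
def fetch_results(query: str) -> list:
    """Hypothetical helper standing in for any call that may fail."""
    raise TimeoutError("upstream timed out")

# Before: a bare except swallows everything, including
# KeyboardInterrupt and SystemExit.
try:
    results = fetch_results("webscout")
except:  # noqa: E722 -- the pattern this release removes
    results = []

# After: only genuine runtime errors are caught; interpreter-level
# signals still propagate.
try:
    results = fetch_results("webscout")
except Exception:
    results = []
```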
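
The SwiftCLI `Context` fix uses the standard `TYPE_CHECKING` pattern sketched below; the `swiftcli` import path here is illustrative, not the actual webscout module layout:

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated only by static type checkers, so the circular
    # import never happens at runtime (path is illustrative).
    from swiftcli import CLI


class Context:
    def __init__(self, cli: CLI) -> None:
        # With `from __future__ import annotations`, the CLI
        # annotation stays a string and is never evaluated here.
        self.cli = cli
```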
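
And a sketch of the dynamic model fetching added to the DeepInfra providers, assuming each entry in the JSON payload carries a `model_name` field (the actual response shape may differ; the class and model names are illustrative):

```python
from typing import List

import requests


class DeepInfraSketch:
    """Illustrative skeleton, not the actual webscout provider class."""

    # Fallback defaults used when the API call fails.
    AVAILABLE_MODELS: List[str] = ["meta-llama/Meta-Llama-3.1-8B-Instruct"]

    @classmethod
    def get_models(cls) -> List[str]:
        """Fetch live model ids, falling back to the defaults on any failure."""
        try:
            resp = requests.get("https://api.deepinfra.com/v1/models", timeout=10)
            resp.raise_for_status()
            models = [m["model_name"] for m in resp.json() if "model_name" in m]
            return models or cls.AVAILABLE_MODELS
        except Exception:
            return cls.AVAILABLE_MODELS

    @classmethod
    def update_available_models(cls) -> None:
        """Refresh the class-level list; the providers call this during __init__."""
        cls.AVAILABLE_MODELS = cls.get_models()
```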

### 🚮 Removed
- **removed**: `yep.py` - Removed the YEPCHAT provider and related files.

## [2025.12.20] - 2025-12-20

### 📝 Documentation Updates
- **docs**: litprinter.md - Completely rewrote documentation to be comprehensive and consistent with other Webscout docs. Added detailed sections for IceCream debugging, Rich Console, Panels & Layouts, Traceback Enhancement, Themes & Styling, Advanced Usage, Integration with Webscout, API Reference, Dependencies, and Supported Python Versions. Enhanced with professional formatting, extensive code examples, and parameter tables.

---

## [2025.12.20] - 2025-12-20

### ✨ Added

- **feat**: webscout/Provider/Nvidia.py - New standalone Nvidia NIM provider with dynamic model fetching and advanced stream sanitization.
4 changes: 3 additions & 1 deletion pyproject.toml
@@ -118,5 +118,7 @@ version = {attr = "webscout.version.__version__"}
[tool.ruff]
line-length = 100
target-version = "py39"

[tool.ruff.lint]
select = ["E", "F", "W", "I"]
ignore = ["E501"]
ignore = ["E501", "F403", "F401"]
129 changes: 71 additions & 58 deletions webscout/AIauto.py
@@ -1,36 +1,38 @@
"""
This module provides the AUTO provider, which automatically selects and uses
an available LLM provider from the webscout library that doesn't require
API keys or cookies.
"""

from webscout.AIbase import Provider
from webscout.exceptions import AllProvidersFailure
from typing import Union, Any, Dict, Generator, Optional, List, Tuple, Set, Type
import difflib
import importlib
import inspect
import pkgutil
import random
import inspect
import difflib
from typing import Any, Dict, Generator, List, Optional, Tuple, Type, Union

from webscout.AIbase import Provider, Response
from webscout.exceptions import AllProvidersFailure


def load_providers():
def load_providers() -> Tuple[Dict[str, Type[Provider]], set]:
"""
Dynamically loads all Provider classes from the `webscout.Provider` package.

This function iterates through the modules in the `webscout.Provider` package,
imports each module, and inspects its attributes to identify classes that
inherit from the `Provider` base class. It also identifies providers that
require special authentication parameters.

Returns:
tuple: A tuple containing two elements:
- provider_map (dict): A dictionary mapping uppercase provider names to their classes.
Tuple[Dict[str, Type[Provider]], set]: A tuple containing two elements:
- provider_map (Dict[str, Type[Provider]]): A dictionary mapping uppercase provider names to their classes.
- api_key_providers (set): A set of uppercase provider names requiring special authentication.
"""
provider_map = {}
api_key_providers = set()
provider_map: Dict[str, Type[Provider]] = {}
api_key_providers: set = set()
provider_package = importlib.import_module("webscout.Provider")

for _, module_name, _ in pkgutil.iter_modules(provider_package.__path__):
try:
module = importlib.import_module(f"webscout.Provider.{module_name}")
@@ -39,7 +41,7 @@ def load_providers():
if isinstance(attr, type) and issubclass(attr, Provider) and attr != Provider:
p_name = attr_name.upper()
provider_map[p_name] = attr

if hasattr(attr, "required_auth") and attr.required_auth:
api_key_providers.add(p_name)
else:
@@ -49,7 +51,7 @@
api_key_providers.add(p_name)
except (ValueError, TypeError):
pass
except Exception as e:
except Exception:
pass
return provider_map, api_key_providers

@@ -73,15 +75,18 @@ def _get_models_safely(provider_cls: type) -> List[str]:
val = getattr(provider_cls, "AVAILABLE_MODELS")
if isinstance(val, list):
models.extend(val)

if hasattr(provider_cls, "get_models"):
try:
res = provider_cls.get_models()
if isinstance(res, list):
models.extend(res)
except:
# Use getattr to call the class method safely
get_models_method = getattr(provider_cls, "get_models")
if callable(get_models_method):
res = get_models_method()
if isinstance(res, list):
models.extend(res)
except Exception:
pass
except:
except Exception:
pass
return list(set(models))

@@ -91,31 +96,31 @@ class AUTO(Provider):
"""
An automatic provider that intelligently selects and utilizes an available
LLM provider from the webscout library.

It cycles through available free providers
until one successfully processes the request. Excludes providers
requiring API keys or cookies by default.
"""
def __init__(
self,
model: str = "auto",
api_key: str = None,
api_key: Optional[str] = None,
is_conversation: bool = True,
max_tokens: int = 600,
timeout: int = 30,
intro: str = None,
filepath: str = None,
intro: Optional[str] = None,
filepath: Optional[str] = None,
update_file: bool = True,
proxies: dict = {},
history_offset: int = 10250,
act: str = None,
exclude: Optional[list[str]] = None,
act: Optional[str] = None,
exclude: Optional[List[str]] = None,
print_provider_info: bool = False,
**kwargs: Any,
):
"""
Initializes the AUTO provider, setting up the parameters for provider selection and request handling.

This constructor initializes the AUTO provider with various configuration options,
including conversation settings, request limits, and provider exclusions.

@@ -157,7 +162,7 @@ def __init__(
def last_response(self) -> dict[str, Any]:
"""
Retrieves the last response dictionary from the successfully used provider.

Returns:
dict[str, Any]: The last response dictionary, or an empty dictionary if no provider has been used yet.
"""
@@ -167,7 +172,7 @@ def last_response(self) -> dict[str, Any]:
def conversation(self) -> object:
"""
Retrieves the conversation object from the successfully used provider.

Returns:
object: The conversation object, or None if no provider has been used yet.
"""
@@ -188,14 +193,14 @@ def _fuzzy_resolve_provider_and_model(
model (str): The model name to search for.

Returns:
Optional[Tuple[Type[Provider], str]]: A tuple containing the provider class
Optional[Tuple[Type[Provider], str]]: A tuple containing the provider class
and the resolved model name, or None if no match is found.
"""
available = [
(name, cls) for name, cls in provider_map.items()
if name not in self.exclude
]

if not self.api_key:
available = [p for p in available if p[0] not in api_key_providers]

@@ -261,7 +266,7 @@ def _resolve_provider_and_model(
(name, cls) for name, cls in provider_map.items()
if name not in self.exclude
]

if not self.api_key:
available = [p for p in available if p[0] not in api_key_providers]

@@ -281,13 +286,13 @@ def ask(
prompt: str,
stream: bool = False,
raw: bool = False,
optimizer: str = None,
optimizer: Optional[str] = None,
conversationally: bool = False,
**kwargs: Any,
) -> Union[Dict, Generator]:
) -> Response:
"""
Sends the prompt to available providers, attempting to get a response from each until one succeeds.

This method iterates through a prioritized list of available providers based on the requested model
and attempts to send the prompt to each provider until a successful response is received.

@@ -316,20 +321,20 @@ def ask(
queue = []
if resolved_provider:
queue.append((resolved_provider.__name__.upper(), resolved_provider, resolved_model))

all_available = [
(name, cls) for name, cls in provider_map.items()
if name not in self.exclude and (resolved_provider is None or cls != resolved_provider)
]

if not self.api_key:
all_available = [p for p in all_available if p[0] not in api_key_providers]

random.shuffle(all_available)

model_prio = []
others = []

for name, cls in all_available:
p_models = _get_models_safely(cls)
if resolved_model != "auto" and resolved_model in p_models:
Expand All @@ -342,7 +347,7 @@ def ask(
m = random.choice(p_models)
queue_model = m if m else "auto"
others.append((name, cls, queue_model))

queue.extend(model_prio)
queue.extend(others)

@@ -360,12 +365,12 @@
"history_offset": self.history_offset,
"act": self.act,
}

if 'model' in sig:
init_kwargs['model'] = model_to_use
if 'api_key' in sig and self.api_key:
init_kwargs['api_key'] = self.api_key

for k, v in self.kwargs.items():
if k in sig or any(p.kind == inspect.Parameter.VAR_KEYWORD for p in sig.values()):
init_kwargs[k] = v
@@ -385,7 +390,7 @@
except Exception:
continue

def chained_gen():
def chained_gen() -> Any:
if self.print_provider_info:
model = getattr(self.provider, "model", None)
provider_class_name = self.provider.__class__.__name__
@@ -423,9 +428,10 @@ def chat(
self,
prompt: str,
stream: bool = False,
optimizer: str = None,
optimizer: Optional[str] = None,
conversationally: bool = False,
) -> Union[str, Generator[str, None, None]]:
**kwargs: Any,
) -> Union[str, Generator[str, None, None]]:
"""
Provides a simplified chat interface, returning the message string or a generator of message strings.

@@ -444,13 +450,13 @@ def chat(
else:
return self._chat_non_stream(prompt, optimizer, conversationally)

def _chat_stream(self, prompt: str, optimizer: str, conversationally: bool) -> Generator[str, None, None]:
def _chat_stream(self, prompt: str, optimizer: Optional[str], conversationally: bool) -> Generator[str, None, None]:
"""
Internal helper for streaming chat responses.

Args:
prompt (str): The user's prompt.
optimizer (str): Name of the optimizer.
optimizer (Optional[str]): Name of the optimizer.
conversationally (bool): Whether to apply optimizer conversationally.

Yields:
@@ -462,16 +468,19 @@ def _chat_stream(self, prompt: str, optimizer: str, conversationally: bool) -> G
optimizer=optimizer,
conversationally=conversationally,
)
for chunk in response:
yield self.get_message(chunk)
if hasattr(response, "__iter__") and not isinstance(response, (str, bytes, dict)):
for chunk in response:
yield self.get_message(chunk)
elif isinstance(response, dict):
yield self.get_message(response)

def _chat_non_stream(self, prompt: str, optimizer: str, conversationally: bool) -> str:
def _chat_non_stream(self, prompt: str, optimizer: Optional[str], conversationally: bool) -> str:
"""
Internal helper for non-streaming chat responses.

Args:
prompt (str): The user's prompt.
optimizer (str): Name of the optimizer.
optimizer (Optional[str]): Name of the optimizer.
conversationally (bool): Whether to apply optimizer conversationally.

Returns:
@@ -483,23 +492,27 @@ def _chat_non_stream(self, prompt: str, optimizer: str, conversationally: bool)
optimizer=optimizer,
conversationally=conversationally,
)
return self.get_message(response)
if isinstance(response, dict):
return self.get_message(response)
return str(response)

def get_message(self, response: dict) -> str:
def get_message(self, response: Response) -> str:
"""
Extracts the message text from the provider's response dictionary.

Args:
response (dict): The response dictionary obtained from the `ask` method.
response (Response): The response obtained from the `ask` method.

Returns:
str: The extracted message string.
"""
assert self.provider is not None, "Chat with AI first"
if not isinstance(response, dict):
return str(response)
return self.provider.get_message(response)

if __name__ == "__main__":
auto = AUTO(print_provider_info=True)
response = auto.chat("Hello, how are you?", stream=True)
for chunk in response:
print(chunk, end="", flush=True)