
[Feat] Improve Proxy Logging #1356

Merged: 11 commits into main on Jan 8, 2024

Conversation

ishaan-jaff (Contributor) commented Jan 8, 2024

Goal of this PR:

When a user runs the proxy with `--debug`, ONLY show: the original request; if a request failed, why it failed (e.g. a timeout error); the next model it fell back to; and the success response.

Proxy Logs after this Feature

[Screenshot (2024-01-08, 12:36 PM): proxy logs after this feature]

Notes

  • Moving to the Python built-in `logging` module
    -> it lets users specify a logging level (info, warn, debug) per logger: litellm-proxy, litellm-router, litellm
    -> users can pipe logs to a file
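As a sketch of what the move to stdlib `logging` enables: levels can be set per named logger. The logger names below (`litellm`, `litellm-router`, `litellm-proxy`) are taken from this PR description and are assumptions about what the proxy actually registers, not a confirmed API:

```python
import logging

# Logger names come from the PR description; treat them as assumptions
# about what the final implementation registers.
for name in ("litellm", "litellm-router", "litellm-proxy"):
    logging.getLogger(name).setLevel(logging.DEBUG)  # or INFO / WARNING

# A plain stream handler keeps the output readable.
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(name)s - %(levelname)s - %(message)s"))
logging.getLogger("litellm").addHandler(handler)
```

Because child loggers propagate to their parents by default, a handler on `litellm` would also see records from any `litellm.*` child loggers.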


ishaan-jaff (Contributor, Author)

addressing: #1338

ishaan-jaff (Contributor, Author)

Currently, here is everything you see when using the proxy debug logs (this example uses an invalid OpenAI key). It's not great:

See all Router/Swagger docs on http://0.0.0.0:8000

LiteLLM Proxy: INITIALIZING LITELLM CALLBACKS!
callback: <litellm.proxy.hooks.parallel_request_limiter.MaxParallelRequestsHandler object at 0x10fa62da0>
callback: <litellm.proxy.hooks.max_budget_limiter.MaxBudgetLimiter object at 0x10fa62dd0>
callback: <bound method Router.deployment_callback_on_failure of <litellm.router.Router object at 0x10fc362c0>>
prisma client - None
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
Request Headers: Headers({'host': '0.0.0.0:8000', 'user-agent': 'curl/7.88.1', 'accept': '*/*', 'content-type': 'application/json', 'content-length': '144'})
receiving data: {'model': 'gpt-3.5-turbo', 'messages': [{'role': 'user', 'content': 'what llm are you'}], 'proxy_server_request': {'url': 'http://0.0.0.0:8000/chat/completions', 'method': 'POST', 'headers': {'host': '0.0.0.0:8000', 'user-agent': 'curl/7.88.1', 'accept': '*/*', 'content-type': 'application/json', 'content-length': '144'}, 'body': {'model': 'gpt-3.5-turbo', 'messages': [{'role': 'user', 'content': 'what llm are you'}]}}}
Inside Max Parallel Request Pre-Call Hook
Inside Max Budget Limiter Pre-Call Hook
get cache: cache key: None_user_api_key_user_id; local_only: False
in_memory_result: None
get cache: cache result: None
LiteLLM Proxy: final data being sent to completion call: {'model': 'gpt-3.5-turbo', 'messages': [{'role': 'user', 'content': 'what llm are you'}], 'proxy_server_request': {'url': 'http://0.0.0.0:8000/chat/completions', 'method': 'POST', 'headers': {'host': '0.0.0.0:8000', 'user-agent': 'curl/7.88.1', 'accept': '*/*', 'content-type': 'application/json', 'content-length': '144'}, 'body': {'model': 'gpt-3.5-turbo', 'messages': [{'role': 'user', 'content': 'what llm are you'}]}}, 'metadata': {'user_api_key': None, 'headers': {'host': '0.0.0.0:8000', 'user-agent': 'curl/7.88.1', 'accept': '*/*', 'content-type': 'application/json', 'content-length': '144'}, 'user_api_key_user_id': None}, 'request_timeout': 600}
LiteLLM.Router: Inside async function with retries: args - (); kwargs - {'proxy_server_request': {'url': 'http://0.0.0.0:8000/chat/completions', 'method': 'POST', 'headers': {'host': '0.0.0.0:8000', 'user-agent': 'curl/7.88.1', 'accept': '*/*', 'content-type': 'application/json', 'content-length': '144'}, 'body': {'model': 'gpt-3.5-turbo', 'messages': [{'role': 'user', 'content': 'what llm are you'}]}}, 'metadata': {'user_api_key': None, 'headers': {'host': '0.0.0.0:8000', 'user-agent': 'curl/7.88.1', 'accept': '*/*', 'content-type': 'application/json', 'content-length': '144'}, 'user_api_key_user_id': None, 'model_group': 'gpt-3.5-turbo'}, 'request_timeout': 600, 'specific_deployment': True, 'model': 'gpt-3.5-turbo', 'messages': [{'role': 'user', 'content': 'what llm are you'}], 'original_function': <bound method Router._acompletion of <litellm.router.Router object at 0x10fc362c0>>, 'num_retries': 3}
LiteLLM.Router: async function w/ retries: original_function - <bound method Router._acompletion of <litellm.router.Router object at 0x10fc362c0>>
LiteLLM.Router: Inside _acompletion()- model: gpt-3.5-turbo; kwargs: {'proxy_server_request': {'url': 'http://0.0.0.0:8000/chat/completions', 'method': 'POST', 'headers': {'host': '0.0.0.0:8000', 'user-agent': 'curl/7.88.1', 'accept': '*/*', 'content-type': 'application/json', 'content-length': '144'}, 'body': {'model': 'gpt-3.5-turbo', 'messages': [{'role': 'user', 'content': 'what llm are you'}]}}, 'metadata': {'user_api_key': None, 'headers': {'host': '0.0.0.0:8000', 'user-agent': 'curl/7.88.1', 'accept': '*/*', 'content-type': 'application/json', 'content-length': '144'}, 'user_api_key_user_id': None, 'model_group': 'gpt-3.5-turbo'}, 'request_timeout': 600, 'specific_deployment': True}
get cache: cache key: b6b4bf0a-57c5-412f-868c-7de181609430_async_client; local_only: True
in_memory_result: <openai.AsyncOpenAI object at 0x10fce2440>
get cache: cache result: <openai.AsyncOpenAI object at 0x10fce2440>
Initialized litellm callbacks, Async Success Callbacks: [<litellm.proxy.hooks.parallel_request_limiter.MaxParallelRequestsHandler object at 0x10fa62da0>, <litellm.proxy.hooks.max_budget_limiter.MaxBudgetLimiter object at 0x10fa62dd0>]
callback: <litellm.proxy.hooks.parallel_request_limiter.MaxParallelRequestsHandler object at 0x10fa62da0>
callback: <litellm.proxy.hooks.max_budget_limiter.MaxBudgetLimiter object at 0x10fa62dd0>
callback: <bound method Router.deployment_callback_on_failure of <litellm.router.Router object at 0x10fc362c0>>
litellm.cache: None
kwargs[caching]: False; litellm.cache: None
kwargs[caching]: False; litellm.cache: None

LiteLLM completion() model= gpt-3.5-turbo; provider = openai

LiteLLM: Params passed to completion() {'functions': None, 'function_call': None, 'temperature': None, 'top_p': None, 'stream': None, 'max_tokens': None, 'presence_penalty': None, 'frequency_penalty': None, 'logit_bias': None, 'user': None, 'response_format': None, 'seed': None, 'tools': None, 'tool_choice': None, 'max_retries': 0, 'logprobs': None, 'top_logprobs': None, 'custom_llm_provider': 'openai', 'model': 'gpt-3.5-turbo', 'n': None, 'stop': None}

LiteLLM: Non-Default params passed to completion() {'max_retries': 0}
self.optional_params: {'max_retries': 0}
PRE-API-CALL ADDITIONAL ARGS: {'headers': {'Authorization': 'Bearer sk-qnWGUIW9knaJ7Mxclg5qT3BlbkFJ********************'}, 'api_base': ParseResult(scheme='https', userinfo='', host='api.openai.com', port=None, path='/v1/', query=None, fragment=None), 'acompletion': True, 'complete_input_dict': {'model': 'gpt-3.5-turbo', 'messages': [{'role': 'user', 'content': 'what llm are you'}]}}


POST Request Sent from LiteLLM:
curl -X POST \
https://api.openai.com/v1/ \
-H 'Authorization: Bearer sk-qnWGUIW9knaJ7Mxclg5qT3BlbkFJ********************' \
-d '{'model': 'gpt-3.5-turbo', 'messages': [{'role': 'user', 'content': 'what llm are you'}]}'


INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 401 Unauthorized"
Logging Details: logger_fn - None | callable(logger_fn) - False
Logging Details LiteLLM-Failure Call
LiteLLM.Router: Attempting to add b6b4bf0a-57c5-412f-868c-7de181609430 to cooldown list. updated_fails: 1; self.allowed_fails: 0
get cache: cache key: 08-51:cooldown_models; local_only: False
in_memory_result: None
get cache: cache result: None
LiteLLM.Router: adding b6b4bf0a-57c5-412f-868c-7de181609430 to cooldown models
set cache: key: 08-51:cooldown_models; value: ['b6b4bf0a-57c5-412f-868c-7de181609430']
LiteLLM.Router:
EXCEPTION FOR DEPLOYMENTS

LiteLLM.Router: {'gpt-3.5-turbo': ["<class 'litellm.exceptions.AuthenticationError'>Status: 401Message: NoneOpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-qnWGU***************************************Rvfb. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}Full exceptionOpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-qnWGU***************************************Rvfb. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}"]}
LiteLLM.Router: Model gpt-3.5-turbo had 1 exception
Custom Logger - final response object: None
LiteLLM.Router: An exception occurs: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-qnWGU***************************************Rvfb. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

Traceback (most recent call last):
  File "/Users/ishaanjaffer/Github/litellm/litellm/main.py", line 210, in acompletion
    response = await init_response
  File "/Users/ishaanjaffer/Github/litellm/litellm/llms/openai.py", line 402, in acompletion
    raise e
  File "/Users/ishaanjaffer/Github/litellm/litellm/llms/openai.py", line 387, in acompletion
    response = await openai_aclient.chat.completions.create(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 1295, in create
    return await self._post(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/_base_client.py", line 1536, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/_base_client.py", line 1315, in request
    return await self._request(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/_base_client.py", line 1392, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-qnWGU***************************************Rvfb. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 763, in async_function_with_fallbacks
    response = await self.async_function_with_retries(*args, **kwargs)
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 891, in async_function_with_retries
    raise original_exception
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 849, in async_function_with_retries
    response = await original_function(*args, **kwargs)
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 370, in _acompletion
    raise e
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 355, in _acompletion
    response = await litellm.acompletion(
  File "/Users/ishaanjaffer/Github/litellm/litellm/utils.py", line 2354, in wrapper_async
    raise e
  File "/Users/ishaanjaffer/Github/litellm/litellm/utils.py", line 2246, in wrapper_async
    result = await original_function(*args, **kwargs)
  File "/Users/ishaanjaffer/Github/litellm/litellm/main.py", line 227, in acompletion
    raise exception_type(
  File "/Users/ishaanjaffer/Github/litellm/litellm/utils.py", line 6585, in exception_type
    raise e
  File "/Users/ishaanjaffer/Github/litellm/litellm/utils.py", line 5553, in exception_type
    raise AuthenticationError(
litellm.exceptions.AuthenticationError: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-qnWGU***************************************Rvfb. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

LiteLLM.Router: Trying to fallback b/w models
LiteLLM.Router: inside model fallbacks: [{'openai-gpt-3.5': ['azure-gpt-3.5']}]
LiteLLM.Router: No fallback model group found for original model_group=gpt-3.5-turbo. Fallbacks=[{'openai-gpt-3.5': ['azure-gpt-3.5']}]
LiteLLM.Router: An exception occurred - OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-qnWGU***************************************Rvfb. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
Traceback (most recent call last):
  File "/Users/ishaanjaffer/Github/litellm/litellm/main.py", line 210, in acompletion
    response = await init_response
  File "/Users/ishaanjaffer/Github/litellm/litellm/llms/openai.py", line 402, in acompletion
    raise e
  File "/Users/ishaanjaffer/Github/litellm/litellm/llms/openai.py", line 387, in acompletion
    response = await openai_aclient.chat.completions.create(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 1295, in create
    return await self._post(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/_base_client.py", line 1536, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/_base_client.py", line 1315, in request
    return await self._request(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/_base_client.py", line 1392, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-qnWGU***************************************Rvfb. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 813, in async_function_with_fallbacks
    raise original_exception
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 763, in async_function_with_fallbacks
    response = await self.async_function_with_retries(*args, **kwargs)
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 891, in async_function_with_retries
    raise original_exception
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 849, in async_function_with_retries
    response = await original_function(*args, **kwargs)
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 370, in _acompletion
    raise e
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 355, in _acompletion
    response = await litellm.acompletion(
  File "/Users/ishaanjaffer/Github/litellm/litellm/utils.py", line 2354, in wrapper_async
    raise e
  File "/Users/ishaanjaffer/Github/litellm/litellm/utils.py", line 2246, in wrapper_async
    result = await original_function(*args, **kwargs)
  File "/Users/ishaanjaffer/Github/litellm/litellm/main.py", line 227, in acompletion
    raise exception_type(
  File "/Users/ishaanjaffer/Github/litellm/litellm/utils.py", line 6585, in exception_type
    raise e
  File "/Users/ishaanjaffer/Github/litellm/litellm/utils.py", line 5553, in exception_type
    raise AuthenticationError(
litellm.exceptions.AuthenticationError: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-qnWGU***************************************Rvfb. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
Traceback (most recent call last):
  File "/Users/ishaanjaffer/Github/litellm/litellm/main.py", line 210, in acompletion
    response = await init_response
  File "/Users/ishaanjaffer/Github/litellm/litellm/llms/openai.py", line 402, in acompletion
    raise e
  File "/Users/ishaanjaffer/Github/litellm/litellm/llms/openai.py", line 387, in acompletion
    response = await openai_aclient.chat.completions.create(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 1295, in create
    return await self._post(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/_base_client.py", line 1536, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/_base_client.py", line 1315, in request
    return await self._request(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/_base_client.py", line 1392, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-qnWGU***************************************Rvfb. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/ishaanjaffer/Github/litellm/litellm/proxy/proxy_server.py", line 1435, in chat_completion
    response = await llm_router.acompletion(**data, specific_deployment=True)
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 314, in acompletion
    raise e
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 310, in acompletion
    response = await self.async_function_with_fallbacks(**kwargs)
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 832, in async_function_with_fallbacks
    raise original_exception
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 813, in async_function_with_fallbacks
    raise original_exception
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 763, in async_function_with_fallbacks
    response = await self.async_function_with_retries(*args, **kwargs)
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 891, in async_function_with_retries
    raise original_exception
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 849, in async_function_with_retries
    response = await original_function(*args, **kwargs)
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 370, in _acompletion
    raise e
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 355, in _acompletion
    response = await litellm.acompletion(
  File "/Users/ishaanjaffer/Github/litellm/litellm/utils.py", line 2354, in wrapper_async
    raise e
  File "/Users/ishaanjaffer/Github/litellm/litellm/utils.py", line 2246, in wrapper_async
    result = await original_function(*args, **kwargs)
  File "/Users/ishaanjaffer/Github/litellm/litellm/main.py", line 227, in acompletion
    raise exception_type(
  File "/Users/ishaanjaffer/Github/litellm/litellm/utils.py", line 6585, in exception_type
    raise e
  File "/Users/ishaanjaffer/Github/litellm/litellm/utils.py", line 5553, in exception_type
    raise AuthenticationError(
litellm.exceptions.AuthenticationError: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-qnWGU***************************************Rvfb. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
An error occurred: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-qnWGU***************************************Rvfb. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

 Debug this by setting `--debug`, e.g. `litellm --model gpt-3.5-turbo --debug`
Traceback (most recent call last):
  File "/Users/ishaanjaffer/Github/litellm/litellm/main.py", line 210, in acompletion
    response = await init_response
  File "/Users/ishaanjaffer/Github/litellm/litellm/llms/openai.py", line 402, in acompletion
    raise e
  File "/Users/ishaanjaffer/Github/litellm/litellm/llms/openai.py", line 387, in acompletion
    response = await openai_aclient.chat.completions.create(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 1295, in create
    return await self._post(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/_base_client.py", line 1536, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/_base_client.py", line 1315, in request
    return await self._request(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/openai/_base_client.py", line 1392, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-qnWGU***************************************Rvfb. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/ishaanjaffer/Github/litellm/litellm/proxy/proxy_server.py", line 1435, in chat_completion
    response = await llm_router.acompletion(**data, specific_deployment=True)
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 314, in acompletion
    raise e
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 310, in acompletion
    response = await self.async_function_with_fallbacks(**kwargs)
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 832, in async_function_with_fallbacks
    raise original_exception
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 813, in async_function_with_fallbacks
    raise original_exception
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 763, in async_function_with_fallbacks
    response = await self.async_function_with_retries(*args, **kwargs)
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 891, in async_function_with_retries
    raise original_exception
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 849, in async_function_with_retries
    response = await original_function(*args, **kwargs)
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 370, in _acompletion
    raise e
  File "/Users/ishaanjaffer/Github/litellm/litellm/router.py", line 355, in _acompletion
    response = await litellm.acompletion(
  File "/Users/ishaanjaffer/Github/litellm/litellm/utils.py", line 2354, in wrapper_async
    raise e
  File "/Users/ishaanjaffer/Github/litellm/litellm/utils.py", line 2246, in wrapper_async
    result = await original_function(*args, **kwargs)
  File "/Users/ishaanjaffer/Github/litellm/litellm/main.py", line 227, in acompletion
    raise exception_type(
  File "/Users/ishaanjaffer/Github/litellm/litellm/utils.py", line 6585, in exception_type
    raise e
  File "/Users/ishaanjaffer/Github/litellm/litellm/utils.py", line 5553, in exception_type
    raise AuthenticationError(
litellm.exceptions.AuthenticationError: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: sk-qnWGU***************************************Rvfb. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
INFO:     127.0.0.1:64141 - "POST /chat/completions HTTP/1.1" 401 Unauthorized

@ishaan-jaff ishaan-jaff changed the title [Draft-Feat] Improve Proxy Logging [Feat] Improve Proxy Logging Jan 8, 2024
ishaan-jaff (Contributor, Author)

Waiting for tests to pass, then will merge this. It adds no dependencies; it just cleans up the proxy logs, using the Python built-in `logging` library.
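Since this is the stdlib `logging` module, piping proxy logs to a file amounts to attaching a `FileHandler`. A minimal sketch (the `litellm-proxy` logger name and the `proxy.log` path are illustrative assumptions, not confirmed details of this PR):

```python
import logging

# Hypothetical logger name, taken from the PR description.
logger = logging.getLogger("litellm-proxy")
logger.setLevel(logging.INFO)

# Route proxy logs into a file instead of (or alongside) stdout.
file_handler = logging.FileHandler("proxy.log")
file_handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
)
logger.addHandler(file_handler)

logger.info("received POST /chat/completions")
```

The same handler-based approach also supports rotation via `logging.handlers.RotatingFileHandler` if log files grow large.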

@ishaan-jaff ishaan-jaff merged commit a70626d into main Jan 8, 2024
4 checks passed