Problems with login from Home Assistant - http 403 #190
Thanks. This is already fixed and waiting for merging. |
Reopening as there is a 403 block we need to work around. It affects the oauth login too. |
Yup, I've been trying to figure out why and am coming up completely blank. I can use curl to grab the same URL with the same exact headers (including user agent) and it works perfectly. Once I try from within the existing HA integration (so no proxy), that 403 comes up. Every. Time. I can't for the life of me figure out why the response is different. |
Yah, welcome to my world. 😉 Right now the same code running on my dev server on my MacBook works, but the same code on my rpi gets the 403. |
It's really frustrating. I even tried running curl from inside my HA container and it passed. It's impressively painful to debug. |
I'm concerned there may be something up with aiohttp. It's not the first time I've seen weird behavior there. |
Was thinking the same thing. Using the requests library does work. A simple |
Be aware that aiohttp autogenerates headers, so that could be coming into play. The proxy code allows you to turn it off. |
I was manually setting the headers, and |
From a data and header perspective, I can't find a difference between aiohttp and requests/curl. I spun up a simple webserver to point all three at so I could dump the traffic more easily and it was exactly the same. My best guess at this point is it has to do with a lower level implementation, perhaps aiohttp is a bit more aggressive about how it connects which triggers the WAF's scanning protection? Or perhaps it's something entirely different.... |
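For anyone who wants to reproduce that comparison, a minimal header-dump server along these lines works. This is an assumed reconstruction (the original server wasn't shared), using only the standard library, with urllib as a stand-in client; point aiohttp, requests, and curl at the same port and diff what lands in `seen`.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

seen = []  # one dict of request headers per incoming request

class DumpHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Record every header exactly as the client sent it.
        seen.append(dict(self.headers))
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), DumpHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Point any client here instead of auth.tesla.com and diff the dumps.
req = urllib.request.Request(f"http://127.0.0.1:{port}/",
                             headers={"User-Agent": "python-requests/2.25.1"})
with urllib.request.urlopen(req) as resp:
    print(resp.status)
print(seen[0]["User-Agent"])
server.shutdown()
```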
Thanks for investigating that. This is perplexing. I'm debating whether we include requests just for the auth but it seems to be a step backwards to include two libraries. |
And my dev server still works in case you want to see the response when it works.
root ➜ /workspaces $ pip list | egrep "teslajsonpy|aiohttp|authcaptureproxy"
aiohttp 3.7.4.post0
aiohttp-cors 0.7.0
authcaptureproxy 0.8.1
pytest-aiohttp 0.3.0
teslajsonpy 0.17.1

The only difference right now is it's directly overwriting the tesla component instead of using custom_components. I guess I can try that route too.

EDIT: Doesn't matter if it's a custom component. I used pr_custom_component on my dev server and it still works there too. For good measure, here's my failing production server:
bash-5.0# pip list | egrep "teslajsonpy|aiohttp|authcaptureproxy"
aiohttp 3.7.4.post0
aiohttp-cors 0.7.0
authcaptureproxy 0.8.1
teslajsonpy 0.17.1 |
Is there a difference in OS and/or hardware specs between your dev and production servers? I'm struggling to explain why the behavior is different. |
Yes, there's a difference, but I'm struggling to understand why it would matter. The dev server is a Home Assistant docker container on my macOS M1 host.
The production is a raspberry pi4 running the latest version of raspbian.
Please note the production server had previously logged in using the PR code and has a valid token session (for now). The new 403 is happening when I try to add the integration again (which is supported for multiple accounts). |
I was just wondering if it could have to do with performance somewhere in the stack. Faster machine, faster requests, WAF unhappy. But that M1 host should be snappier than the rpi4 for such things so that theory is right out. |
I wish I could contribute, but my python skills are at the hello world stage. Could you try posting the authentication request to your own server and diff the requests from production and development to see if there are indeed changes being made by aiohttp (or somewhere else)? The headers being sent in your two examples above are quite different. Are the sec-fetch headers necessary for the API? |
@mattsch actually did that with his browser, curl and aiohttp and didn't see any differences. It's a bit of work to set up a fake Tesla server to intercept the traffic.
Who knows. I can remove them and test but mattsch also played with the headers too. They're coming from the browser but I'm literally using the same browser with different tabs to test. |
I've removed the headers and it doesn't matter. Also downgraded to aiohttp 3.7.3 in case something crazy happened there. Still no luck. I just realized the error page is saying:
My logs indicate we're doing https in the get request. @mattsch Are you seeing the same weird message about http? I also tried this toy example.

import aiohttp
import asyncio

async def main():
    async with aiohttp.ClientSession() as session:
        async with session.get('http://auth.tesla.com/oauth2/v3/authorize?client_id=ownerapi&code_challenge=NjRiN5ZmJkNTIyN2UwNzE4ZGExZTZkNmEwOGMyOTE4NzIyN2QyMjhiOWNkZWY2ZjI0YTRkZjhlNTVhMzQxOQ%3D%3D&code_challenge_method=S256&redirect_uri=https://auth.tesla.com/void/callback&response_type=code&scope=openid+email+offline_access&state=EtpSQ8Wt4uw1fN5_mMF_t2I7T1JVGMW4T8gwzulHGATZuoZXbhdLZkAX53-pvYwN9X4siVIoNNWzD0BDbmMVQ') as resp:
            print(resp.status)
            print(await resp.text())

loop = asyncio.get_event_loop()
loop.run_until_complete(main())

It gets the 403 page on my production server but not my dev server. On the host for my dev server, I get the 403. |
This toy example with requests is even weirder.

import aiohttp
import asyncio
import requests

URL = "http://auth.tesla.com/oauth2/v3/authorize?client_id=ownerapi&code_challenge=NjRiN5ZmJkNTIyN2UwNzE4ZGExZTZkNmEwOGMyOTE4NzIyN2QyMjhiOWNkZWY2ZjI0YTRkZjhlNTVhMzQxOQ%3D%3D&code_challenge_method=S256&redirect_uri=https://auth.tesla.com/void/callback&response_type=code&scope=openid+email+offline_access&state=EtpSQ8Wt4uw1fN5_mMF_t2I7T1JVGMW4T8gwzulHGATZuoZXbhdLZkAX53-pvYwN9X4siVIoNNWzD0BDbmMVQ"

async def main(requests_header):
    async with aiohttp.ClientSession() as session:
        async with session.get(URL, headers=requests_header) as resp:
            print(resp.status)
            print(resp.request_info.headers)
            # print(await resp.text())

req = requests.get(URL)
requests_header = req.request.headers
print(req.status_code)
print(requests_header)

loop = asyncio.get_event_loop()
loop.run_until_complete(main(requests_header))

EDIT: |
I'm using
Running that last script I get:
HTTP 200 is OK. Doesn't get less strange :) |
@alandtse I did notice that the response said

Here's my slightly tweaked copy of the toy script. I am using

As you can see, the user-agent is the same in both, and I set the connection header to match what requests sets, but I get the same behavior either way. The only other variable I see here is python versions. My aiohttp versions are all the same, so that's probably the next best place to start poking. I'll have some time a bit later to dig, but for now here's what I'm running:

#!/usr/bin/env python
import aiohttp
import asyncio
import requests

head = {
    "User-Agent": "python-requests/2.25.1",
    "Connection": "keep-alive"
}
url = 'https://auth.tesla.com/oauth2/v3/authorize?client_id=ownerapi&code_challenge=NDgwM2RlMGE1NTEwZWI1NWIzM2Q2NzM3YTRkYTBlZWNjYWMyOGUzZGZiNDJkNmZkNWE3ZDkxNmQ1MzI5YTg0OQ&code_challenge_method=S256&redirect_uri=https://auth.tesla.com/void/callback&response_type=code&scope=openid+email+offline_access&state=9_MVz16nNle7FSB8-O50bKZId0XNAgTrrIg9agarIiPBV9GnMtsw3uAHeC3jXNjLjs4CSYrqQ5EQBIy-_fmoVQ'

async def fetch(client):
    async with client.get(
            url,
            headers=head) as resp:
        return resp

async def main():
    async with aiohttp.ClientSession() as client:
        r = await fetch(client)
        print("AIOHTTP")
        print("Request headers:", r.request_info.headers)
        print("Response Headers:", r.headers)
        print("Response code:", r.status)
        print("Full response:", r)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())

sess = requests.Session()
req = sess.get(url)
print("REQUESTS")
print("Request headers:", req.request.headers)
print("Response headers:", req.headers)
print("Response code:", req.status_code)

# vim:syntax=python
# vim:sw=4:softtabstop=4:expandtab |
Here's a fun one. On my MacBook host (python 3.9.4), which fails normally, if I enable mitmproxy it works. It's definitely aiohttp being rejected. You'll have to set the proxy value to False below to disable the proxy. I set the proxy in my network connection and realized that aiohttp was going around the network setting. I had to use the proxy keyword to force it to use my proxy.

#!/usr/bin/env python
import aiohttp
import asyncio
import ssl
import requests

custom_context = ssl.SSLContext()
head = {"User-Agent": "python-requests/2.25.1", "Connection": "keep-alive"}
url = "https://auth.tesla.com/oauth2/v3/authorize?client_id=ownerapi&code_challenge=NDgwM2RlMGE1NTEwZWI1NWIzM2Q2NzM3YTRkYTBlZWNjYWMyOGUzZGZiNDJkNmZkNWE3ZDkxNmQ1MzI5YTg0OQ&code_challenge_method=S256&redirect_uri=https://auth.tesla.com/void/callback&response_type=code&scope=openid+email+offline_access&state=9_MVz16nNle7FSB8-O50bKZId0XNAgTrrIg9agarIiPBV9GnMtsw3uAHeC3jXNjLjs4CSYrqQ5EQBIy-_fmoVQ"

async def fetch(client, proxy=False):
    if proxy:
        async with client.get(
            url, headers=head, ssl=custom_context, proxy="http://localhost:8080"
        ) as resp:
            return resp
    async with client.get(url, headers=head) as resp:
        return resp

async def main():
    async with aiohttp.ClientSession() as client:
        r = await fetch(client, proxy=True)
        print("AIOHTTP")
        print("Request headers:", r.request_info.headers)
        print("Response Headers:", r.headers)
        print("Response code:", r.status)
        print("Full response:", r)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())

sess = requests.Session()
req = sess.get(url, verify=False)
print("REQUESTS")
print("Request headers:", req.request.headers)
print("Response headers:", req.headers)
print("Response code:", req.status_code)

# vim:syntax=python
# vim:sw=4:softtabstop=4:expandtab |
As for Python versions, I tested 3.8.5, 3.8.7 and 3.9.4 (using pyenv) on Ubuntu 20 and they all worked with aiohttp and requests. The 403/forbidden is served by AkamaiGHost, so maybe the request is blocked before reaching Tesla? Maybe someone else also has this problem? |
@alandtse Good find. Can confirm the same behavior across my test hosts. With mitmproxy it works just fine, without it 403s. Now my question is why in the !*@&# does it not fail consistently?? If it were a blanket block somehow for the aiohttp client, it would fail everywhere and not just some places. I might just have to dig up docs on the WAF to see if there's any clues. |
I'm going to open an issue with aiohttp. I don't think mitmproxy uses aiohttp under the hood, but I think we've demonstrated it's isolated to aiohttp. The question, of course, is whether they can reproduce the 403. Is docker the common issue here? Edit: I did see ALPN is set differently with requests. Maybe that is related? |
Are you able to test the actual Tesla PR? If the toy script works, the actual PR should be working for you. If you're saying it doesn't that's another weird case to consider. |
Normally I'd agree. But that's like the only difference I can see between the aiohttp get vs the requests get. It's also something a WAF could see, so maybe that's the trigger. I can't seem to find any documentation on setting an ALPN on aiohttp. |
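As a quick look at that difference from the client side: urllib3 (which requests sits on) calls `set_alpn_protocols` on its SSL context, while a default context sends no ALPN extension until you opt in. A minimal sketch of the API (the "urllib3 does this" note is my reading of its defaults; confirming what a server actually negotiates needs a live TLS handshake):

```python
import ssl

# A default context sends no ALPN extension until you opt in.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["http/1.1"])  # what urllib3 sets by default

# HAS_ALPN reports whether the linked OpenSSL supports ALPN at all.
print(ssl.HAS_ALPN)
```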
If it was the culprit I'd expect that would be consistent, but it's not. And the server connection from mitmproxy also shows it's not setting the header and it succeeds. |
Unfortunately I don't know how to merge the pull request from github into my docker container :- I guess I missed something with the hot-patching :) |
And you said the toy script worked inside your docker? Can you please confirm because that is an additional weird case. |
That's a negative - the toy script (#190 (comment)) does not work in the container: aiohttp gets a 403 Forbidden response, requests gets a 200 OK. The container is set up with host networking, and on the host the toy script gets a 200 OK response with both methods. |
Well, that got closed quick. They said they'll assume it's not an aiohttp bug and asked if we did any netcat. I don't know how to use netcat. Do you? I may just rip out aiohttp and use another library. Since I'm not going to support HA's core integration anymore, I don't have to stay with aiohttp on the component either. |
I'm not sure netcat would gain us much here really. Tcpdump is probably the next best option to show the actual network layer differences. 😞 |
Btw, thanks for your help in tracking this down. It helps that someone else technical can reproduce it so I'm not just crazy. ;) If we're really at the tcpdump level, I think your hypothesis about response time may be right. I did some more searching on the Akamai GHost and 403s, and you'll see this issue pop up for random items. I'm wondering if Akamai has created some logic, such as DDOS protection, that is getting tripped up by aiohttp. It's also possible that aiohttp has been used in such attacks in the past, so it's already close to a trigger. |
I came across some similar cases where DDOS protection triggered because the ordering of the request headers was off. If the Host header wasn't first it would not get through. I will look into it in more depth tomorrow. |
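That theory is easy to exercise locally: the stdlib `http.client` emits headers in the exact order `putheader()` is called, so you can force Host first and diff the raw bytes against what aiohttp sends. A sketch against a throwaway local server (the Tesla host name here is purely illustrative):

```python
import http.client
import threading
import socketserver

captured = []  # raw request line + header lines, in wire order

class RawHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Read the request line and headers exactly as they arrive.
        while True:
            line = self.rfile.readline()
            if not line or line == b"\r\n":
                break
            captured.append(line.decode().rstrip())
        self.wfile.write(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")

server = socketserver.TCPServer(("127.0.0.1", 0), RawHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", port)
# skip_host/skip_accept_encoding suppress the automatic headers so we
# fully control the order: Host first, then the rest.
conn.putrequest("GET", "/", skip_host=True, skip_accept_encoding=True)
conn.putheader("Host", "auth.tesla.com")
conn.putheader("User-Agent", "python-requests/2.25.1")
conn.putheader("Connection", "keep-alive")
conn.endheaders()
resp = conn.getresponse()
print(resp.status)   # 200 from the local echo server
print(captured[1])   # first header line after the request line
conn.close()
server.shutdown()
```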
If you want to stick with an async library with a requests-style API, httpx may be worth a look. |
Thanks. That was my planned replacement route. It's less performant than aiohttp but given the use case it's probably fine. |
httpx works perfectly across my environments. My vote is to ditch aiohttp if it fixes things for others as well.

#!/usr/bin/env python
# $Id$
import aiohttp
import asyncio
import requests
import ssl
import httpx

head = {
    "User-Agent": "python-requests/2.25.1",
    "Connection": "keep-alive"
}
url = 'https://auth.tesla.com/oauth2/v3/authorize?client_id=ownerapi&code_challenge=NDgwM2RlMGE1NTEwZWI1NWIzM2Q2NzM3YTRkYTBlZWNjYWMyOGUzZGZiNDJkNmZkNWE3ZDkxNmQ1MzI5YTg0OQ&code_challenge_method=S256&redirect_uri=https://auth.tesla.com/void/callback&response_type=code&scope=openid+email+offline_access&state=9_MVz16nNle7FSB8-O50bKZId0XNAgTrrIg9agarIiPBV9GnMtsw3uAHeC3jXNjLjs4CSYrqQ5EQBIy-_fmoVQ'
nossl_conn = aiohttp.TCPConnector(ssl=False)
proxy = "http://localhost:8080"

async def fetch(client):
    async with client.get(
        url,
        #proxy=proxy
    ) as resp:
        return resp

async def main():
    async with aiohttp.ClientSession() as client:
        r = await fetch(client)
        print("AIOHTTP")
        print("Request headers:", r.request_info.headers)
        print("Response Headers:", r.headers)
        print("Response code:", r.status)
        print("Cookies:", r.cookies)
    async with httpx.AsyncClient() as clientx:
        rx = await clientx.get(url)
        print("HTTPX")
        print("Request headers:", rx.request.headers)
        print("Response Headers:", rx.headers)
        print("Response code:", rx.status_code)
        print("Cookies:", rx.cookies)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())

#sess = requests.Session()
##sess.proxies = {"http": proxy, "https": proxy}
#sess.verify = False
#req = sess.get(url)
#print("REQUESTS")
#print("Request headers:", req.request.headers)
#print("Response headers:", req.headers)
#print("Response code:", req.status_code)

# vim:syntax=python
# vim:sw=4:softtabstop=4:expandtab

Edit: According to mitmproxy, the request timings of httpx are slightly faster (~10-15ms) than the requests library. |
Tested: httpx works both on bare metal and in the docker container (where aiohttp fails) |
aiohttp appears to have issues related to Akamai Global Host, and the developers do not seem interested in resolving it as a bug. aio-libs/aiohttp#5643 BREAKING CHANGE: API has changed due to use of httpx. Modifiers, test_url, and other items that access aiohttp ClientResponse will need to be fixed. Closes zabuldon#190
The swap to httpx is complete, and the issues are gone in my testing. With any major rework, I may have missed something, so I need testers. You will need the last rejected PR to HA. If you're on HA, you will have to manually take the PRs for authcaptureproxy and teslajsonpy. I won't provide any support on installing the above. An easier installation will probably become available later this week. |
@alandtse Can confirm it works! |
One problem I just ran into is that it fails to set up properly if there's an error during startup. httpx doesn't look like it has any native support for retries, so we may need some custom logic here to retry. And setting the connection timeout doesn't seem to actually help either, for some reason. Edit: |
So you're saying the default of 5s doesn't work for you consistently? Or it never works? 60 seems like a big jump up though. I guess aiohttp was at 300 but that seems way too much. |
Yup, 5s doesn't work at all with one car online and the other asleep. It's not a problem with both asleep since it doesn't go grabbing all the data, but seems to be a bit slower doing that call than the initial vehicles list. Without some retry logic, we may want to go even longer than 60 seconds just to be on the safe side. |
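If httpx really offers nothing built in here, a small backoff wrapper is one option. A minimal sketch with a stand-in flaky coroutine; `with_retries` and its parameters are made-up names for illustration, not part of any library:

```python
import asyncio

async def with_retries(coro_factory, attempts=3, base_delay=1.0):
    # Retry an async operation with exponential backoff between attempts.
    for attempt in range(attempts):
        try:
            return await coro_factory()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            await asyncio.sleep(base_delay * 2 ** attempt)

# Demo with a flaky operation that fails twice, then succeeds.
calls = {"n": 0}

async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = asyncio.run(with_retries(lambda: flaky(), attempts=3, base_delay=0))
print(result)
```

In the real integration the factory would be a lambda wrapping the httpx call, with a non-zero `base_delay`.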
aiohttp appears to have issues related to Akamai Global Host. aio-libs/aiohttp#5643 Closes zabuldon#190
This is resolved in https://github.com/alandtse/home-assistant/tree/tesla_oauth_callback. I will be packaging it as a custom component later this week. |
Thanks for your efforts in keeping my car connected to Home Assistant 😊 |
That would be really appreciated |
I deleted the integration when errors started occurring, restarted Hass, and then tried to re-add. Logs attached from my attempt to re-add. FWIW, had the same problem with and without MFA enabled.
Reported here as well.
home-assistant.log