[QUESTION] How to do logging in a FastApi container, any logging does not appear #19
Comments
I have a similar issue. Using this image, I don't see any FastAPI logs on STDOUT in Docker Compose. When I run the app outside of Docker Compose, the log output shows up, but when I run it in Docker Compose with this image, it doesn't.
Umair, I'm not sure if that is the same issue. When a Docker container writes to STDOUT and is running in the background (that is, detached), you can only see that output via `docker logs`.
Any news on this issue? I'm having the same problem and can't really find a solution anywhere.
@JCHHeilmann have you had a look at this issue?
@tyler46 yes, and I've just tried it that way again. The info log doesn't show up without setting `LOG_LEVEL` to debug. But the default is info, so it should. Or am I missing something?
I finally solved it! First, make sure you set the `LOG_LEVEL` environment variable. Now in your actual FastAPI app, add this code below the imports:

```python
import logging

from fastapi.logger import logger
# ... other imports

gunicorn_logger = logging.getLogger('gunicorn.error')
logger.handlers = gunicorn_logger.handlers

if __name__ != "__main__":
    logger.setLevel(gunicorn_logger.level)
else:
    logger.setLevel(logging.DEBUG)
```

This way, if your app is loaded via gunicorn, you can tell the logger to use gunicorn's log level instead of the default one, because when gunicorn loads your app, FastAPI does not know about the environment variable directly; you have to override the log level manually. I tested this with the standard start command of this image.
PS: I got this idea from this blog post: https://medium.com/@trstringer/logging-flask-and-gunicorn-the-manageable-way-2e6f0b8beb2f
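For reference, with this image the level is typically set through the environment, e.g. in `docker-compose.yml` (the service name and image tag below are placeholders):

```yaml
services:
  api:
    image: tiangolo/uvicorn-gunicorn-fastapi:python3.9
    environment:
      - LOG_LEVEL=debug
```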
Hmm... I'm having a similar problem, and can't really figure out how to deal with this. This is my script:

```python
import logging

from fastapi import FastAPI

app = FastAPI()
logger = logging.getLogger("gunicorn.error")


@app.get("/")
async def root():
    logger.info("Hello!")
    return "Hello, world!"
```

Running this directly with `uvicorn main:app` gives the expected output, including the access log lines for the HTTP requests. Running it in the Docker image, however, I'm missing the actual HTTP requests. I don't know if this is a big deal, as I'm not very experienced with building web services yet. Is there a particular reason for not showing the HTTP requests in the console output?
@janheindejong You have not set the logging handler to the gunicorn one (see the solution above).
Hmm... am I not using the gunicorn logger? This line basically makes the `logger` variable point to the gunicorn one, right?

```python
logger = logging.getLogger("gunicorn.error")
```

It's not that I'm not getting any output from the logger (see the Hello! line). It's just that the HTTP requests are not shown, which they are if I run the app outside the container.
Well, if you set `logger` to be the gunicorn error logger, you also need to attach its handlers and level to the FastAPI logger.
I've tried adding the gunicorn handlers and level to the `fastapi_logger`, but that didn't work (see code below).

```python
import logging

from fastapi import FastAPI
from fastapi.logger import logger as fastapi_logger

app = FastAPI()

logger = logging.getLogger("gunicorn.error")
fastapi_logger.handlers = logger.handlers
fastapi_logger.setLevel(logger.level)


@app.get("/")
async def root():
    logger.info("Hello!")
    return "Hello, world!"
```
Hm, do you need the explicit import of the gunicorn logger at all? What if you just use `logger` directly?
Hmm... yeah, I'm afraid that doesn't work. There's no instance of `logger` defined then.
That would be:

```python
from fastapi.logger import logger
```

Sorry for missing that.
Ah yes... well... if I use your code, and if I use `logger` in my code, it does indeed work. The problem is that I'm still not seeing the FastAPI logger output (e.g. `GET / HTTP/1.1 200`).
I also have this problem... any news?
@itaymining Have you tried my solution from above?
Yes, but it won't show the HTTP GET/POST request lines from the FastAPI routes...
I've had the same problem occurring since yesterday. Before that, I could see everything in the logs (routes called, response codes, etc.); now most of that no longer shows up. EDIT: I was importing the "logging" module to use it in a few endpoints, and I guess that configuration was messing with the FastAPI logger. Removing it fixed my issue.
I'm also still having this issue and have stopped using this Docker image as a result. It might actually be an issue with uvicorn: I built my own minimal "start FastAPI with uvicorn" Docker image and it had the same problem.
I'm not using this repo, but adding an extra option to the gunicorn command worked for me.
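A sketch of what that option could be, assuming standard Gunicorn CLI flags and a `main:app` entry point (both assumptions):

```bash
gunicorn main:app -k uvicorn.workers.UvicornWorker --access-logfile - --error-logfile -
```

Here `--access-logfile -` sends the access log to STDOUT, which matches the explanation of Gunicorn's defaults further down in this thread.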
I also think it is a gunicorn thing... I also posted it here: |
I've struggled with this for the past few days as well, and only just figured it out. The HTTP request info is stored in the `uvicorn.access` logger, so that logger needs to be given the gunicorn handlers as well. This will allow the HTTP request lines to show up alongside your application logs.
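A minimal sketch of that wiring, assuming a single `main.py` module (the logger names follow the other examples in this thread):

```python
import logging

from fastapi import FastAPI
from fastapi.logger import logger as fastapi_logger

app = FastAPI()

# Reuse gunicorn's error-log handlers so output reaches the container's console
gunicorn_error_logger = logging.getLogger("gunicorn.error")
gunicorn_logger = logging.getLogger("gunicorn")

# The HTTP request lines come from uvicorn's access logger
uvicorn_access_logger = logging.getLogger("uvicorn.access")
uvicorn_access_logger.handlers = gunicorn_error_logger.handlers

fastapi_logger.handlers = gunicorn_error_logger.handlers
fastapi_logger.setLevel(gunicorn_logger.level)
```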
Thanks a lot! It works for me!
Above solution doesn't work for me. Also, this solution ties the application code too closely to the deployment method. We shouldn't be referencing gunicorn/uvicorn in the code.
@bcb Maybe this implementation could help.
The above solution does not work for me. Does anyone have another solution? Thanks! My setup is just a `docker-compose.yml` and a `main.py`.
The problem with the solution going around here is that it breaks logging when gunicorn isn't being used. It also doesn't affect the root handler, which is what a lot of modules are going to be using. Here's the version I use, which tries to resolve those issues as well:

```python
import logging
import os

from fastapi.logger import logger as fastapi_logger

if "gunicorn" in os.environ.get("SERVER_SOFTWARE", ""):
    '''
    When running with gunicorn the log handlers get suppressed instead of
    passed along to the container manager. This forces the gunicorn handlers
    to be used throughout the project.
    '''
    gunicorn_logger = logging.getLogger("gunicorn")
    log_level = gunicorn_logger.level

    root_logger = logging.getLogger()
    gunicorn_error_logger = logging.getLogger("gunicorn.error")
    uvicorn_access_logger = logging.getLogger("uvicorn.access")

    # Use gunicorn error handlers for root, uvicorn, and fastapi loggers
    root_logger.handlers = gunicorn_error_logger.handlers
    uvicorn_access_logger.handlers = gunicorn_error_logger.handlers
    fastapi_logger.handlers = gunicorn_error_logger.handlers

    # Pass on logging levels for root, uvicorn, and fastapi loggers
    root_logger.setLevel(log_level)
    uvicorn_access_logger.setLevel(log_level)
    fastapi_logger.setLevel(log_level)
```
You can also take advantage of a YAML configuration to propagate logs to the root handler. Basically, use the root logger along with a file handler and a stream handler, then propagate the uvicorn logger to it (I propagate `uvicorn.error`, but you can also propagate `uvicorn.access`):

```yaml
version: 1
disable_existing_loggers: false
formatters:
  standard:
    format: "%(asctime)s - %(levelname)s - %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    formatter: standard
    level: INFO
    stream: ext://sys.stdout
  file:
    class: logging.handlers.WatchedFileHandler
    formatter: standard
    filename: mylog.log
    level: INFO
loggers:
  uvicorn.error:
    propagate: true
root:
  level: INFO
  handlers: [console, file]
  propagate: no
```

Then, at app startup, load this configuration.
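A minimal way to do that, assuming the YAML above is saved as `config.yml` (the file name and loader choice are assumptions):

```python
import logging.config

import yaml  # pip install PyYAML

with open("config.yml") as f:
    logging.config.dictConfig(yaml.safe_load(f))
```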
Any easy way to do it?
Use this script to run your app in a Docker container?
A small modification, using `dictConfig` with the YAML file:

```python
# pip install PyYAML
import logging.config

import yaml

with open('config.yml') as f:
    config = yaml.load(f, Loader=yaml.FullLoader)

logging.config.dictConfig(config)
```
OK, I found that most of the solutions here do not work for me unmodified, but I figured out what works for me. Here it is:

```python
import logging
import os

from fastapi.logger import logger as fastapi_logger

root_logger = logging.getLogger()

if "gunicorn" in os.environ.get("SERVER_SOFTWARE", ""):
    gunicorn_error_logger = logging.getLogger("gunicorn.error")
    gunicorn_logger = logging.getLogger("gunicorn")

    fastapi_logger.setLevel(gunicorn_logger.level)
    fastapi_logger.handlers = gunicorn_error_logger.handlers
    root_logger.setLevel(gunicorn_logger.level)

    uvicorn_logger = logging.getLogger("uvicorn.access")
    uvicorn_logger.handlers = gunicorn_error_logger.handlers
else:
    # https://github.com/tiangolo/fastapi/issues/2019
    LOG_FORMAT2 = "[%(asctime)s %(process)d:%(threadName)s] %(name)s - %(levelname)s - %(message)s | %(filename)s:%(lineno)d"
    logging.basicConfig(level=logging.INFO, format=LOG_FORMAT2)
```

This works for both uvicorn standalone and gunicorn with uvicorn workers.
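For reference, the two launch modes mentioned above look roughly like this (module and app names are assumptions):

```bash
# uvicorn standalone
uvicorn main:app

# gunicorn with uvicorn workers
gunicorn main:app -k uvicorn.workers.UvicornWorker
```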
Hey, brilliant work! I also want the request time and request byte length, which gunicorn provides. My gunicorn conf file:

```python
errorlog = "gunicorn_error.log"
syslog = True
accesslog = "-"
```

Please help!
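Not from this thread, but as a sketch: Gunicorn's `access_log_format` setting supports format codes for request time (`%(T)s` seconds, `%(D)s` microseconds) and response size (`%(B)s` bytes), so something like the following in the conf file may give you those fields. Whether the format is honored can depend on the worker class in use.

```python
# gunicorn conf file (sketch)
accesslog = "-"
errorlog = "gunicorn_error.log"

# remote addr, time, request line, status, response bytes, request seconds
access_log_format = '%(h)s %(t)s "%(r)s" %(s)s %(B)s %(T)ss'
```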
You can make use of logging configuration via YAML, as shown above.
Sorry if I'm posting in this issue after several months, but using this approach I can now properly see all the logs in the console. However, there is no mylog.log in my folder. I'm running this container in a VM with Ubuntu; is it possible that something else could be blocking the creation of this log file?
This might seem super obvious to somebody else, but I didn't find the specific mylog.log file. However, I did find the information I needed in the Docker logs located at /var/lib/docker/containers.
It worked! Thank you!
Thanks everyone for the discussion and help here! 🙇 Some of these things were solved in some recent(ish) Uvicorn versions, and for other cases the tricks in the comments here might be what you need. If you are still having problems, please create a new issue. If @PunkDork21's problem was solved, you can close the issue now. Also, just a reminder: you probably don't need this Docker image anymore: https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker#-warning-you-probably-dont-need-this-docker-image
That said, if you're looking for a uvicorn container image that's always up to date, supports multiple versions of Python, and also supports ARM, then you should check out the multi-py uvicorn image.
Assuming the original issue was solved, it will be automatically closed now. But feel free to add more comments or create new issues.
@jacob-vincent thanks for sharing this. I am struggling to get my log setup working as well, and this has got me 90% of the way there. I am facing one more issue I was hoping you may be able to shed some light on... I can't seem to get the logs working for tasks that are kicked off via an `AsyncIOScheduler()`.

```python
from fastapi import FastAPI

from models.db import init_db
from routers.portfolio import router as portfolio_router
from routers.balance import router as balance_router
from routers.transaction import router as transaction_router
from idom import component, html
from idom.backend.fastapi import configure
import os

from tasks import manage_portfolio_task, fetch_liquidity_changes
from apscheduler.schedulers.asyncio import AsyncIOScheduler

# Configure logs
import logging
from fastapi.logger import logger as fastapi_logger

gunicorn_error_logger = logging.getLogger("gunicorn.error")
gunicorn_logger = logging.getLogger("gunicorn")
uvicorn_access_logger = logging.getLogger("uvicorn.access")
uvicorn_access_logger.handlers = gunicorn_error_logger.handlers
fastapi_logger.handlers = gunicorn_error_logger.handlers

if __name__ != "__main__":
    fastapi_logger.setLevel(gunicorn_logger.level)
else:
    fastapi_logger.setLevel(logging.DEBUG)

# Configure endpoints
app = FastAPI()
app.include_router(portfolio_router, prefix="/portfolio", tags=["Portfolio"])
app.include_router(balance_router, prefix="/balance", tags=["Balance"])
app.include_router(transaction_router, prefix="/transaction", tags=["Transaction"])

# Setup Scheduler
scheduler = AsyncIOScheduler()
scheduler.add_job(manage_portfolio_task, "interval", seconds=10)
scheduler.add_job(fetch_liquidity_changes, "interval", minutes=5)
scheduler.start()

# Configure DB
@app.on_event("startup")
async def on_startup():
    # Startup db
    await init_db()

# For local dev
if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```

I have a separate issue open here as well, but I think it's very closely related to this one.
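One thing worth trying here, sketched from the root-logger variant earlier in this thread (an assumption, not something confirmed by this comment): jobs in `tasks.py` usually log through their own module-level logger, so the root logger also needs gunicorn's handlers, otherwise those records never reach the console.

```python
# Added to the "Configure logs" block above (sketch)
root_logger = logging.getLogger()
root_logger.handlers = gunicorn_error_logger.handlers
root_logger.setLevel(gunicorn_logger.level)
```

Inside `tasks.py`, logging via `logging.getLogger(__name__)` then propagates up to that root logger by default.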
This issue is still pretty popular. It took me a while to figure this out, so in case it helps other folks:

The default setting for Uvicorn access logs is to write them to STDOUT. However, the default setting for Gunicorn access logs is to discard them without writing them anywhere, while the default setting for Gunicorn error logs is to stream to STDERR. When running Uvicorn as a worker class for Gunicorn, Uvicorn will attach its logs properly to Gunicorn's access and error loggers, but unless you have configured the access log to write somewhere, you will never see them. It is easy to be confused by this because the default behaviors for Uvicorn and Gunicorn are opposite, so the logs just seem to disappear.

Given all of this, just re-wiring the handlers in application code may not be enough. Instead, configure the Gunicorn access logger location and your uvicorn/app logs should start showing up. If you are trying to get them to show up in your console, use "-" to send them to STDOUT.
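A minimal sketch of that configuration (the file name `gunicorn_conf.py` is an assumption; the same values can also be passed as command-line flags):

```python
# gunicorn_conf.py (sketch)
accesslog = "-"   # write the access log to STDOUT instead of discarding it
errorlog = "-"    # error/application log to STDERR (already the default)
loglevel = "info"
```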
My team just switched to hypercorn from gunicorn to get HTTP/2 and ran into a ton of logging issues. Because the documentation isn't great on that project right now and this discussion is still up, I'll put our solution here in case anyone is searching for a solution and comes across this. Not sure why hypercorn doesn't respect the log config the same way as gunicorn or uvicorn, but it doesn't today, at least when used in conjunction with FastAPI. We needed to apply the config in the logging module before other imports to make sure they all inherited the same settings. This worked for us both at the CLI and in Docker. Here are our files and what we did.

Put this `log_config.json` somewhere accessible to whichever module is your entry point:

```json
{
"version": 1,
"disable_existing_loggers": false,
"formatters": {
"standard": {
"format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
},
"minimal": {
"format": "%(message)s"
}
},
"handlers": {
"console": {
"level": "INFO",
"class": "logging.StreamHandler",
"formatter": "standard",
"stream": "ext://sys.stdout"
},
"hypercorn": {
"level": "INFO",
"class": "logging.StreamHandler",
"formatter": "minimal",
"stream": "ext://sys.stdout"
}
},
"root": {
"handlers": [
"console"
],
"level": "INFO"
},
"loggers": {
"": {
"handlers": [
"console"
],
"level": "INFO",
"propagate": false
},
"hypercorn.error": {
"handlers": [
"hypercorn"
],
"level": "INFO",
"propagate": false
},
"hypercorn.access": {
"handlers": [
"hypercorn"
],
"level": "INFO",
"propagate": false
}
}
}
```

Here's the most important part:
```python
import json
import logging
import logging.config
import os

# Need to set up loggers before importing other modules that may use loggers.
# Use whatever path you need to grab the log_config.json file.
with open(os.path.join(os.path.dirname(__file__), "log_config.json")) as f:
    logging.config.dictConfig(json.load(f))

# All other imports go below here
_LOGGER = logging.getLogger(__name__)
```

To run with Hypercorn:

```bash
hypercorn src.main:app --bind 0.0.0.0:${PORT} --workers 4 --access-logfile - --error-logfile - --worker-class trio
```
Description
I have another project that uses FastAPI with gunicorn running uvicorn workers, and supervisor to keep the API up. Recently I came across the issue that none of my logs from files other than the FastAPI app are coming through. Initially I tried making an ad-hoc script to see if it works, as well as changing the logging levels; I only had success if I set the logging to the DEBUG level.
I put together another small project to test whether I would run into this problem with a clean slate, and I still couldn't get logging working with a standard setup.
Other steps I took included chmod-ing the /var/log/ directory in case it was a permissions issue, but I had no luck. Has anyone else run into this, or does anyone have recommendations on how they implemented logging?
Additional context
For context, I put up the testing repo here: https://github.com/PunkDork21/fastapi-git-test
Testing it would be like:
Most of the files are similar to what I have in my real project.