
type of mtime columns in oc_filecache can be too small #7637

Closed
ryhaberecht opened this issue Dec 28, 2017 · 10 comments

Comments

@ryhaberecht

ryhaberecht commented Dec 28, 2017

mtime columns in oc_filecache are of type integer, usually signed, which allows timestamps up to 2147483647. This translates to January 2038, which is, in itself, not a problem. However, sometimes filesystems or applications mess up and give your shiny new PDF a timestamp of 4294967295, i.e. all 32 bits set to 1. Such a messed-up file cannot be uploaded to Nextcloud, but I strongly believe that such a simple error should be handled automatically, see below.

Steps to reproduce

  1. create a file with a modified time of 2147483647 (if signed) or 4294967295 (if unsigned, all 32 bits set to 1); see the sketch below this list
  2. try to upload the file
  3. insertion into oc_filecache fails with out of range error
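
For the reproduction, a minimal PHP sketch (the path and content are arbitrary examples):

    <?php
    // Reproduction sketch: give a file the maximum signed 32-bit mtime.
    $path = '/tmp/future-file.pdf';
    file_put_contents($path, 'dummy content');
    touch($path, 2147483647);               // 2^31 - 1 = 2038-01-19 03:14:07 UTC
    echo date('c', filemtime($path)), "\n"; // confirm the mtime was applied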

Expected behaviour

Solution 1)
Accept any file from any common filesystem, no matter how unreal and stupid the timestamp might seem. This needs a change from type integer to bigint for all timestamp columns. Might still not cover all possible timestamps from all common filesystems.
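
For illustration only, a migration step in the style of Nextcloud's migration framework might look roughly like this (a hedged sketch, not an actual patch; the class name is made up):

    <?php
    use Doctrine\DBAL\Types\Type;
    use OCP\DB\ISchemaWrapper;
    use OCP\Migration\IOutput;
    use OCP\Migration\SimpleMigrationStep;

    // Hypothetical migration: widen the filecache time columns to BIGINT.
    class WidenFilecacheTimeColumns extends SimpleMigrationStep {
        public function changeSchema(IOutput $output, \Closure $schemaClosure, array $options) {
            /** @var ISchemaWrapper $schema */
            $schema = $schemaClosure();
            $table = $schema->getTable('filecache');
            foreach (['mtime', 'storage_mtime'] as $name) {
                $table->getColumn($name)->setType(Type::getType(Type::BIGINT));
            }
            return $schema;
        }
    }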

Solution 2)
Cap excess timestamps to the maximum value allowed in the mtime column. Might annoy users starting in January 2038, or users who, for some reason, want to use timestamps of the distant future.

I believe it would be best to apply both solutions together.
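
A minimal sketch of the capping idea, assuming there is one central place where the scanner records the mtime (the helper name is made up):

    <?php
    // Hypothetical helper: clamp an incoming mtime into the range a signed
    // 32-bit INTEGER column can store, instead of failing the insert.
    function clampMtime(int $mtime): int {
        $max = 2147483647;                // largest value a signed 32-bit column accepts
        return max(0, min($mtime, $max)); // negative mtimes are equally suspect
    }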

Actual behaviour

The file is not uploaded. An error is reported, but so quietly that it is impossible for non-programmers to see and fix the issue.

Server configuration

Operating system:
Debian 9

Database:
PostgreSQL 9.6.6

Nextcloud version: (see Nextcloud admin page)
12.0.4

@rullzer
Member

rullzer commented Dec 28, 2017

So we could fix this now with our new fancy migration command. However, keep in mind that a lot of clients might start doing weird stuff if you have way off timestamps.

IMO it is perfectly valid for the webdav endpoint to error if you for some reason get a timestamp from the year 2200. It probably means there are other issues on your system.

@ryhaberecht
Author

If you throw an error at the user, at least explain what the error means and how to recover:

File supposedly last modified in the far future. This is very unusual, please check if your file is corrupt. See this workaround: [support page link].

instead of

some stuff with numbers in something, more weird things, error, because of some weird word nobody knows
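
Purely as a sketch, such a guard could look like this (the constant, function name, and wording are all invented):

    <?php
    // Hypothetical check producing a readable error instead of a raw
    // database "out of range" failure.
    const MAX_MTIME = 2147483647;

    function assertSaneMtime(string $name, int $mtime): void {
        if ($mtime > MAX_MTIME) {
            throw new \InvalidArgumentException(sprintf(
                'File "%s" supposedly last modified in the far future (mtime %d). ' .
                'This is very unusual; please check whether the file is corrupt.',
                $name, $mtime
            ));
        }
    }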

@nalt

nalt commented Jan 2, 2018

I regularly had this problem on a number of files on which the mtime was 2147483648 or "Jan 19 04:14:08 2038". (These files were actually previously created by the Nextcloud client, so I guess there must have been another bug.)

Is an mtime in 2038 really that wrong? After all, you are free to set it like that. Should the user really be punished by losing a file? He probably did not even actively set the mtime.

I think this is a bug. Even a severe one, since you will eventually lose files if you do not notice. In addition, the error message is totally unclear: you have to count the "?" placeholders in the SQL query, map the failing one to 2147483648, and then know that this number is critical. Show me one normal user who can do that ;-)

@rullzer
Member

rullzer commented Jan 3, 2018

Fair enough. It is not nice.

@nickvergessen should be easy to add to the db:convert-filecache-bigint command right?

@nickvergessen
Member

Hmm, actually I'd prefer to then fix the time columns completely by converting them to datetime 🙊

But I guess we could do the quick fix and make it bigint, until we convert all time columns to datetime one day.

@agates
Contributor

agates commented Feb 16, 2018

I'm still getting this in Nextcloud 13 on PostgreSQL 9.5. The mtime field is still an integer after upgrading from Nextcloud 12.

I've run the command "sudo -u www-data php occ db:convert-filecache-bigint", but mtime is still an integer.

@agates
Contributor

agates commented Feb 16, 2018

Looks like mtime and storage_mtime were both left out of the conversion. I modified core/Command/Db/ConvertFilecacheBigInt.php by adding them to the 'filecache' list on my server and ran the conversion command, which seemed to work.
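
The change amounts to roughly this (a sketch only; the actual structure of ConvertFilecacheBigInt.php may differ):

    // In core/Command/Db/ConvertFilecacheBigInt.php: add the two time
    // columns to the list that the command converts to BIGINT.
    protected function getColumnsByTable() {
        return [
            // ... other tables unchanged ...
            'filecache' => ['fileid', 'storage', 'parent', 'mtime', 'storage_mtime'],
        ];
    }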

Is there anything else I need to touch?

@ryhaberecht
Author

ryhaberecht commented Feb 17, 2018

Nope, all set. As far as I know, these two columns were the only culprits.
Maybe your patch should make it into 13.1? This issue could be closed then.

Alternatively, someone may know where to apply the capping method mentioned in solution 2. If you know where to implement this centrally, just write me and I can program it.

Thank you!

@agates
Contributor

agates commented Feb 17, 2018

I agree that a datetime (timestamp in postgresql) would be a better solution, but I certainly don't know the level of effort required there :).

PostgreSQL's "timestamp with time zone" is a fantastic type and an example of instantaneous time

@MorrisJobke
Member

Fixed by #8412
