
Stray locks not being cleaned: server replied: Locked (ajax cron) #20380

Closed

klausguenter opened this issue Nov 7, 2015 · 104 comments

Comments

@klausguenter

klausguenter commented Nov 7, 2015

Hi there,

I have problems uploading some files via the Windows 7 client (version 2.0.2, build 5569) connected to an ownCloud 8.2 stable server.

The files exist on the client, not on the server. The log file on the client says:

06.11.2015 23:07:35 folder1/xxx.MDB   F:\Cloud1                     Error downloading http://xxx/owncloud/remote.php/webdav/folder1/xxx.MDB - server replied: Locked ("folder1/xxx.MDB" is locked)4,3 MB

I wonder why the client reports a problem downloading; it should be trying to upload.

At first I thought that the file on the client could be in use by another program. But it is the server that says the file is locked, not the client.

Can anyone help me please?

Regards,
klausguenter

@MorrisJobke
Contributor

cc @icewind1991 for the locking topic

@PVince81
Contributor

PVince81 commented Nov 9, 2015

I believe the error message in the client says "download" even when uploading; that's a separate issue.

The question here is why the file is locked in the first place. Are there other users accessing that folder?

I suspect a stray lock.

@klausguenter
Author

It's possible that the files to upload were in use by another program when the sync client tried to upload them for the first time.
When the problem occurred I restarted the client PC to make sure these files were no longer in use by another program. But the files still could not be uploaded.

@apramhaas
Contributor

I have exactly the same problem. It suddenly occurred for one file, for the first time. I'm the only one syncing to this directory (3 PCs, 2 mobile devices). I cannot overwrite or delete it.
I came here from https://forum.owncloud.org/viewtopic.php?t=31270&p=100790 and tried this procedure:

  • deleting via the client
  • deleting via the web page
  • deleting the physical files from the data folder and the corresponding encryption keys
  • running occ files:scan and occ files:cleanup (see the sketch after this list)
  • truncating the oc_filecache table
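For reference, the occ steps above would look roughly like this (the web-server user and install path are assumptions for a typical setup):

cd /var/www/owncloud
# rescan every user's files, then remove orphaned filecache entries
sudo -u www-data php occ files:scan --all
sudo -u www-data php occ files:cleanup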

Server configuration

Operating system:
Raspbian 8
Web server:
Nginx
Database:
MySQL
PHP version:
5.6.14
ownCloud version: (see ownCloud admin page)
8.2.0.12
List of activated apps:

  • activity: 2.1.3
  • files: 1.2.0
  • files_pdfviewer: 0.7
  • files_sharing: 0.7.0
  • files_texteditor: 2.0
  • files_trashbin: 0.7.0
  • files_versions: 1.1.0
  • gallery: 14.2.0

The content of config/config.php:

"system": {
        "instanceid": "oc788abd2781",
        "passwordsalt": "***REMOVED SENSITIVE VALUE***",
        "datadirectory": "\/var\/ocdata",
        "dbtype": "mysql",
        "version": "8.2.0.12",
        "installed": true,
        "config_is_read_only": false,
        "forcessl": true,
        "loglevel": 2,
        "theme": "",
        "maintenance": false,
        "trashbin_retention_obligation": "30, auto",
        "trusted_domains": [
            "***REMOVED SENSITIVE VALUE***"
        ],
        "mail_smtpmode": "php",
        "dbname": "owncloud",
        "dbhost": "localhost",
        "dbuser": "***REMOVED SENSITIVE VALUE***",
        "dbpassword": "***REMOVED SENSITIVE VALUE***",
        "secret": "***REMOVED SENSITIVE VALUE***",
        "forceSSLforSubdomains": true,
        "memcache.local": "\\OC\\Memcache\\APCu"
    }

Error message from logfile:

{"reqId":"5h4sJPhlw0mjlWNp5wdl","remoteAddr":"94.87.129.34","app":"webdav","message":"Exception: {\"Message\":\"HTTP\\\/1.1 423 \\\"safe.kdbx\\\" is locked\",\"Exception\":\"OC\\\\Connector\\\\Sabre\\\\Exception\\\\FileLocked\",\"Code\":0,\"Trace\":\"#0 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Tree.php(179): OC\\\\Connector\\\\Sabre\\\\File->delete()\\n#1 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/CorePlugin.php(287): Sabre\\\\DAV\\\\Tree->delete('safe.kdbx')\\n#2 [internal function]: Sabre\\\\DAV\\\\CorePlugin->httpDelete(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#3 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#4 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(469): Sabre\\\\Event\\\\EventEmitter->emit('method:DELETE', Array)\\n#5 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(254): Sabre\\\\DAV\\\\Server->invokeMethod(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#6 \\\/var\\\/www\\\/owncloud\\\/apps\\\/files\\\/appinfo\\\/remote.php(55): Sabre\\\\DAV\\\\Server->exec()\\n#7 \\\/var\\\/www\\\/owncloud\\\/remote.php(137): require_once('\\\/var\\\/www\\\/ownclo...')\\n#8 {main}\",\"File\":\"\\\/var\\\/www\\\/owncloud\\\/lib\\\/private\\\/connector\\\/sabre\\\/file.php\",\"Line\":300}","level":4,"time":"2015-11-09T22:34:35+00:00","method":"DELETE","url":"\/remote.php\/webdav\/safe.kdbx"}

@klausguenter
Author

Server configuration

Operating system:
Debian 7 stable
Web server:
Apache 2.2.22
Database:
MySQL 5.5.46
PHP version:
5.4.45
ownCloud version:
8.2.0.12 (stable)
List of activated apps:

  • activity: 2.1.3
  • deleted files: 0.7.0
  • first run wizard: 1.1
  • Gallery: 14.2.0
  • Mail Template Editor: 0.1
  • Notifications: 0.1.0
  • Provisioning API: 0.3.0
  • Share Files: 0.7.0
  • Text Editor: 2.0
  • Updater: 0.6
  • Versions: 1.1.0
  • Video Viewer: 0.1.3

The content of config/config.php:

$CONFIG = array (
  'instanceid' => '***REMOVED***',
  'passwordsalt' => '***REMOVED***',
  'secret' => '***REMOVED***',
  'trusted_domains' =>
  array (
    0 => '***REMOVED***',
    1 => '***REMOVED***',
  ),
  'datadirectory' => '***REMOVED***',
  'overwrite.cli.url' => '',
  'dbtype' => 'mysql',
  'version' => '8.2.0.12',
  'dbname' => 'owncloud1',
  'dbhost' => 'localhost',
  'dbtableprefix' => 'oc_',
  'dbuser' => '***REMOVED***',
  'dbpassword' => '***REMOVED***',
  'logtimezone' => 'UTC',
  'installed' => true,
  'filelocking.enabled' => 'true',
  'memcache.locking' => '\OC\Memcache\Redis',
  'memcache.local' => '\OC\Memcache\Redis',
  'redis' =>
  array (
    'host' => 'localhost',
    'port' => 6379,
    'timeout' => 0,
  ),
);

@icewind1991
Contributor

Do you also get file locking errors when trying to upload through the web interface?

@apramhaas
Contributor

Yes

@Siddius

Siddius commented Nov 10, 2015

I had the same problem. My workaround:

  • Enable maintenance mode
  • Delete every entry in the "oc_file_locks" table in the database
  • Disable maintenance mode

Dirty, but it solved the problem ... for now
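A minimal sketch of that workaround, assuming the default oc_ table prefix and local MySQL credentials:

# stop clients from taking new locks while cleaning up
sudo -u www-data php occ maintenance:mode --on
mysql -u owncloud -p owncloud -e 'DELETE FROM oc_file_locks;'
sudo -u www-data php occ maintenance:mode --off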

@icewind1991 icewind1991 changed the title Problems uploading some files via PC-Client: server replied: Locked Stray locks not being cleaned: server replied: Locked Nov 10, 2015
@apramhaas
Contributor

I've found some additional files which cannot be deleted because they are locked. If you need additional debug data, let me know...

@icewind1991
Contributor

Are there any errors in the logs before the locking error shows up?

@apramhaas
Contributor

I see no other errors before the locking error. It occurs just at the moment I want to modify or delete a file.
All these "problem" files were present before the update to ownCloud 8.2. Maybe the error came with this version.

Here is my owncloud.log
https://gist.github.com/unclejamal3000/2aba05cd32cc53771256

@jbouv55151

I do have the same problem with a fresh installation of 8.2.
I did not have this problem with older versions on the same server.

@PVince81 PVince81 added this to the 9.0-current milestone Nov 19, 2015
@DavidShepherdson

This is happening to me (on both 8.2 and 8.2.1, with MySQL), particularly (I think) since I added Dropbox external storage to one of my users (another user already had Dropbox set up previously with no problems).

Possibly of note: I just tried cleaning things up by turning on maintenance mode, deleting everything from oc_file_locks, then running occ files:scan --all. After doing the latter, and with maintenance mode still turned on, there are now 10002 rows in oc_file_locks. Is that expected? I assumed there would only be locks if something was still using the files (which no clients would be, since it's in maintenance mode; and since the files:scan process finished, it wouldn't still be holding onto locks, would it?).

@icewind1991
Contributor

After doing the latter, and with maintenance mode still turned on, there are now 10002 rows in oc_file_locks. Is that expected?

For performance reasons (since 8.2.1), rows are not cleaned up directly but are re-used by further requests.

@DavidShepherdson

Fair enough, so that's probably not related to the issue, then. For what it's worth, I've removed the Dropbox external storage from this particular user and haven't had any file locking problems since. That may be coincidence, of course, or just that the particular files being synced with the Dropbox folder were the ones likely to cause the locking issue.

@stevespaw

All of our S3 files are locked. We cannot delete or rename any files that were there prior to the 8.2 update.
UGGG, is this fixable? We have thousands of files on S3.

@bcutter

bcutter commented Nov 23, 2015

Same on OC v8.2.1 with Transactional File Locking and memcaching via Redis, as recommended. Anyway, there are a few entries in oc_file_locks (although when using Redis there shouldn't be any locks there?). No idea how to fix this. Only one specific file is affected, driving me and the never-ending, logfile-filling desktop clients crazy.

Thankful for every tip or workaround! No idea how to "unlock" the file...

@PVince81
Contributor

@icewind1991 are you able to reproduce this issue?

For DB-based locking it might be possible to remove the locks by cleaning the "oc_file_locks" table.
If you're using Redis exclusively for ownCloud, you might be able to clear it with the flushall command in redis-cli.
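A hedged sketch of both options (database name, credentials and the default oc_ prefix are assumptions; flushall wipes the whole Redis instance, so only use it if nothing but ownCloud stores data there):

# DB-based locking: clear the lock table
mysql -u owncloud -p owncloud -e 'DELETE FROM oc_file_locks;'
# Redis-based locking: flush all keys
redis-cli flushall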

@PVince81
Contributor

Are you guys using php-fpm? I suspect that if the PHP process gets killed due to timeouts, the locks might not get cleared properly. However, I thought locks now have a TTL, @icewind1991?

@bcutter

bcutter commented Nov 24, 2015

Yes, php-fpm is in the game too. @PVince81, perfect! That was what I was looking for (at http://redis.io/commands). For the moment, syncing works fine again.

Do you know the redis-cli command for listing all keys/locked files too?

And I still don't get why oc_file_locks has entries although I'm using Redis...

@AntonioGMuriana

I've been experiencing the same issue.

Operating system: Ubuntu 14.04.3 LTS
Web server: Apache 2.4.7
Database: MySQL 5.5.46
PHP version: 5.5.9 (running as Apache Module)
ownCloud version: 8.2.1-1.1
Memcache: APCu 4.0.7

After entering maintenance mode, I saw that the oc_file_locks table had lots of entries with lock > 0 (some even > 10) and about 150 entries with a future ttl value.

Solved by deleting all rows and leaving maintenance mode.

@pdesanex

Same issue here.

all-inkl.com shared hosting
PHP 5.6.13
MySQL 5.6.27
ownCloud 8.2.1 stable

Flushing oc_file_locks resolves all issues.

@Cybertinus

I was hit by this bug too. My system:

PHP 5.6.14
MariaDB 10.0.21
Nginx 1.9.5 (thus using php-fpm)
FreeBSD 10.2-RELEASE-p8
OwnCloud 8.2.1 stable

Flushing oc_file_locks does indeed seem to fix this issue. So I wrote a little script to remove all the stale locks from the file_locks table:

#!/usr/bin/env bash

##########
# CONFIG #
##########

# CentOS 6: /usr/bin/mysql
# FreeBSD: /usr/local/bin/mysql
mysqlbin='/usr/local/bin/mysql'

# The location where OwnCloud is installed
ownclouddir='/var/www/owncloud'

#################
# ACTUAL SCRIPT #
#################

dbhost=$(grep dbhost "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
dbuser=$(grep dbuser "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
dbpass=$(grep dbpassword "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
dbname=$(grep dbname "${ownclouddir}/config/config.php" | cut -d "'" -f 4)
dbprefix=$(grep dbtableprefix "${ownclouddir}/config/config.php" | cut -d "'" -f 4)

"${mysqlbin}" --silent --host="${dbhost}" --user="${dbuser}" --password="${dbpass}" --execute="DELETE FROM ${dbprefix}file_locks WHERE ttl < UNIX_TIMESTAMP();" "${dbname}"

Just configure where the mysql command can be found (hint: which mysql will tell you) and where OwnCloud itself is installed.
It needs this location in order to find the config.php inside your OwnCloud install. It extracts the needed database information from it and uses that to connect to MySQL. This has the advantage that when you change the password of the OwnCloud MySQL user, this script automatically uses the new information. And it saves you from having another file on your filesystem containing your password.
You don't need to edit anything below the "ACTUAL SCRIPT" comment.
Once it has a connection to MySQL, it removes all locks from the database that have already expired. It doesn't remove all locks as suggested elsewhere in this issue, because there can be valid locks in the database whose expiry is still in the future. This script leaves those alone, to prevent bad things from happening.

And of course you can run this script as a cron job every night, so you don't have to think about these stale locks anymore.
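For example, a hedged crontab line (the script location is an assumption):

# clean expired ownCloud locks nightly at 03:00
0 3 * * * /usr/local/sbin/clean-owncloud-locks.sh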

Hopefully this workaround script is useful for someone else besides me :)

@Atanamo

Atanamo commented Dec 6, 2015

Hi, recently I had the same problem (using the database as the locking backend).
The file_locks table was full of stray locks (>10k). Most rows had the "lock" field set to 1, some hundreds to 2, and so on.

As I read the post of @PVince81 here, the "ttl" was introduced for removing old or stray locks?
But...
The "ttl" of most of the entries in my table was more than 12 hours old.
So the locks should have expired, right?

Well, I tested the expiry mechanism, and it does not seem to work as expected.

  • I cleaned up the locks table
  • Edited one file, so a new entry appeared in the locks table
    • After the edit: the entry had "lock"=0 --> fine
    • Successfully tested that the file can be renamed
  • Then I incremented the lock manually via the database
    • Having this, I tried to rename the file using the web UI (again)
    • Result: the file could not be renamed because of a lock error --> also fine
  • Then I set the "ttl" to a timestamp one month ago
    • Having this and lock=1, I tried to rename the file again
    • Result:
      • The file could not be renamed because of a lock error
      • The table entry showed up with an updated "ttl", set to one hour in the future

In the last case I would expect the file to be renamed successfully. But the file lock is respected even though it has expired.

Looking into the code of the DBLockingProvider, I cannot find anything that checks the ttl of the locks - except the method cleanEmptyLocks().
But this method only removes expired entries having "lock"=0.

So I wonder if this is the only purpose of the ttl: just to clean up old and fully released locks?
If not, this might cause the bug.
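For illustration, a hedged query to list such stray locks (default oc_ prefix assumed; backticks because lock and key are reserved words in MySQL):

-- expired rows that are still marked as locked:
-- exactly the entries cleanEmptyLocks() never removes
SELECT `key`, `lock`, `ttl`
FROM oc_file_locks
WHERE `lock` <> 0 AND `ttl` < UNIX_TIMESTAMP();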

In any case, it seems useful to introduce a timestamp like the ttl that is checked whenever a lock is about to be acquired. For example, let's call this timestamp "stray_timeout":

  • If the stray_timeout lies in the future, the table entry is valid and the lock field is incremented on acquire.
  • Otherwise the lock is corrupted/stray and should be ignored. On acquire, the lock field must be set to 1.

Well, I hope these thoughts are not complete nonsense and may help ;-)


ownCloud version: 8.2.1 (stable)
Operating system: Raspbian 8
Web server: Nginx
Database: MySQL 5.5.44
PHP version: 5.6.14 - Using PHP-FPM

@PVince81
Contributor

PVince81 commented Dec 7, 2015

The "ttl" of most of the entries in my table was more than 12 hours old.
So the locks should have been expired, right?

@icewind1991 can you have a look at why the expiration is not working?

@e-alfred

e-alfred commented Nov 9, 2016

I cleaned the oc_file_locks table; it is empty, but I still get error 423 for certain files:

{"reqId":"8TlQgrXtV+is28PODYIV","remoteAddr":"1.1.1.1","app":"webdav","message":"Exception: {\"Message\":\"HTTP\\\/1.1 423 \\\"workspace\\\/.metadata\\\/.plugins\\\/org.eclipse.core.resources\\\/.projects\\\/sudoku\\\/org.eclipse.jdt.core\\\/state.dat\\\" is locked\",\"Exception\":\"OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Exception\\\\FileLocked\",\"Code\":0,\"Trace\":\"#0 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/Directory.php(136): OCA\\\\DAV\\\\Connector\\\\Sabre\\\\File->put(Resource id #271)\\n#1 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(1036): OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Directory->createFile('state.dat', Resource id #271)\\n#2 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/CorePlugin.php(523): Sabre\\\\DAV\\\\Server->createFile('workspace\\\/.meta...', Resource id #271, NULL)\\n#3 [internal function]: Sabre\\\\DAV\\\\CorePlugin->httpPut(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#4 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#5 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(459): Sabre\\\\Event\\\\EventEmitter->emit('method:PUT', Array)\\n#6 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(248): Sabre\\\\DAV\\\\Server->invokeMethod(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#7 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/appinfo\\\/v1\\\/webdav.php(56): Sabre\\\\DAV\\\\Server->exec()\\n#8 \\\/var\\\/www\\\/owncloud\\\/remote.php(164): require_once('\\\/var\\\/www\\\/ownclo...')\\n#9 {main}\",\"File\":\"\\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/File.php\",\"Line\":174,\"User\":\"user\"}","level":4,"time":"2016-11-09T12:55:53+00:00","method":"PUT","url":"\/remote.php\/webdav\/workspace\/.metadata\/.plugins\/org.eclipse.core.resources\/.projects\/sudoku\/org.eclipse.jdt.core\/state.dat","user":"user"}

I am running Redis as a memcache.

Cron.php is run every 15 minutes by a crontab entry; the web interface says it was run a few minutes ago.

@PVince81
Contributor

PVince81 commented Nov 9, 2016

@e-alfred are you using Redis for locking too? If yes, then oc_file_locks is not used; Redis is. You might want to clear the Redis cache too, then.

@e-alfred

e-alfred commented Nov 9, 2016

Yes, Redis for both caching and locking. I flushed the Redis cache and will see what happens.

@e-alfred

e-alfred commented Nov 9, 2016

@PVince81 The problem persists. Interestingly, it's only for one user with synced hidden files (Git repositories and Eclipse configuration) that I am getting 423 responses for certain files.

Here are two examples:

{"reqId":"EguF+W00ZGHGG+M\/L3+Y","remoteAddr":"1.1.1.1","app":"webdav","message":"Exception: {\"Message\":\"HTTP\\\/1.1 423 \\\"workspace\\\/.metadata\\\/.plugins\\\/org.eclipse.core.resources\\\/.projects\\\/sudoku\\\/org.eclipse.jdt.core\\\/state.dat\\\" is locked\",\"Exception\":\"OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Exception\\\\FileLocked\",\"Code\":0,\"Trace\":\"#0 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/Directory.php(136): OCA\\\\DAV\\\\Connector\\\\Sabre\\\\File->put(Resource id #271)\\n#1 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(1036): OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Directory->createFile('state.dat', Resource id #271)\\n#2 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/CorePlugin.php(523): Sabre\\\\DAV\\\\Server->createFile('workspace\\\/.meta...', Resource id #271, NULL)\\n#3 [internal function]: Sabre\\\\DAV\\\\CorePlugin->httpPut(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#4 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#5 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(459): Sabre\\\\Event\\\\EventEmitter->emit('method:PUT', Array)\\n#6 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(248): Sabre\\\\DAV\\\\Server->invokeMethod(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#7 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/appinfo\\\/v1\\\/webdav.php(56): Sabre\\\\DAV\\\\Server->exec()\\n#8 \\\/var\\\/www\\\/owncloud\\\/remote.php(164): require_once('\\\/var\\\/www\\\/ownclo...')\\n#9 {main}\",\"File\":\"\\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/File.php\",\"Line\":174,\"User\":\"user\"}","level":4,"time":"2016-11-09T14:52:29+00:00","method":"PUT","url":"\/remote.php\/webdav\/workspace\/.metadata\/.plugins\/org.eclipse.core.resources\/.projects\/sudoku\/org.eclipse.jdt.core\/state.dat","user":"user"}

{"reqId":"la+uRl6ZkWB6uigOyEfH","remoteAddr":"1.1.1.1","app":"webdav","message":"Exception: {\"Message\":\"HTTP\\\/1.1 423 \\\"test\\\/test\\\/test\\\/.git\\\/refs\\\/heads\\\/test\\\" is locked\",\"Exception\":\"OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Exception\\\\FileLocked\",\"Code\":0,\"Trace\":\"#0 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/Directory.php(136): OCA\\\\DAV\\\\Connector\\\\Sabre\\\\File->put(Resource id #271)\\n#1 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(1036): OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Directory->createFile('test', Resource id #271)\\n#2 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/CorePlugin.php(523): Sabre\\\\DAV\\\\Server->createFile('teaching\\\/test...', Resource id #271, NULL)\\n#3 [internal function]: Sabre\\\\DAV\\\\CorePlugin->httpPut(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#4 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#5 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(459): Sabre\\\\Event\\\\EventEmitter->emit('method:PUT', Array)\\n#6 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(248): Sabre\\\\DAV\\\\Server->invokeMethod(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#7 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/appinfo\\\/v1\\\/webdav.php(56): Sabre\\\\DAV\\\\Server->exec()\\n#8 \\\/var\\\/www\\\/owncloud\\\/remote.php(164): require_once('\\\/var\\\/www\\\/ownclo...')\\n#9 {main}\",\"File\":\"\\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/File.php\",\"Line\":174,\"User\":\"user\"}","level":4,"time":"2016-11-09T14:52:28+00:00","method":"PUT","url":"\/remote.php\/webdav\/test\/test\/test\/.git\/refs\/heads\/test","user":"user"}

@e-alfred

Okay, I ran an occ files:scan --all and now I no longer get any locking messages like the above.

@Germano0

owncloud-9.1.1-1.fc24.noarch here. The command occ files:scan --all did not solve the problem.

@ghost ghost mentioned this issue Dec 19, 2016
@PVince81 PVince81 removed the 8.2.2 label Dec 22, 2016
@PVince81 PVince81 modified the milestones: 9.1.4, 9.1.3 Dec 22, 2016
@moscicki

moscicki commented Jan 18, 2017

We have a similar problem on a small auxiliary installation running ownCloud 9.1.3. Is this issue already understood and in the pipeline for fixing?

Files are sometimes locked, and neither cleaning up the oc_file_locks table nor occ files:scan --all solves the problem. On this particular server the crons had not run for a long time due to misconfiguration. We corrected that and ran the cron job a few times by hand while trying to resolve the problem. It did not help.

The TTL entries in oc_file_locks are set to some insanely high values. What should they normally be? 3600s?

The problem appears for a folder which is an "External Mount" pointing to the local disk on the same server and then shared with a user by the administrator.

Transactional File Locking is not enabled -- should it be?

Here are the server error messages.

{"reqId":"anLv5s55dWcTgOOydVOV","remoteAddr":"ip.address","app":"webdav","message":"Exception: {\"Message\":\"HTTP\\\/1.1 423 \\\"Data\\\/AutoGen\\\/GpsPhotoDb\\\/PhotoListByLatLng\\\/6.3 81.2.dat\\\" is locked\",\"Exception\":\"OCA\\\\DAV\\\\Connector\\\\Sabre\\\\Exception\\\\FileLocked\",\"Code\":0,\"Trace\":\"#0 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(1070): OCA\\\\DAV\\\\Connector\\\\Sabre\\\\File->put(Resource id #57)\\n#1 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/CorePlugin.php(511): Sabre\\\\DAV\\\\Server->updateFile('Data\\\/AutoGen\\\/Gp...', Resource id #57, NULL)\\n#2 [internal function]: Sabre\\\\DAV\\\\CorePlugin->httpPut(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#3 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/event\\\/lib\\\/EventEmitterTrait.php(105): call_user_func_array(Array, Array)\\n#4 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(459): Sabre\\\\Event\\\\EventEmitter->emit('method:PUT', Array)\\n#5 \\\/var\\\/www\\\/owncloud\\\/3rdparty\\\/sabre\\\/dav\\\/lib\\\/DAV\\\/Server.php(248): Sabre\\\\DAV\\\\Server->invokeMethod(Object(Sabre\\\\HTTP\\\\Request), Object(Sabre\\\\HTTP\\\\Response))\\n#6 \\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/appinfo\\\/v1\\\/webdav.php(56): Sabre\\\\DAV\\\\Server->exec()\\n#7 \\\/var\\\/www\\\/owncloud\\\/remote.php(164): require_once('\\\/var\\\/www\\\/ownclo...')\\n#8 {main}\",\"File\":\"\\\/var\\\/www\\\/owncloud\\\/apps\\\/dav\\\/lib\\\/Connector\\\/Sabre\\\/File.php\",\"Line\":174,\"User\":\"service\"}","level":4,"time":"2017-01-17T18:11:10+00:00","method":"PUT","url":"\/owncloud\/remote.php\/webdav\/Data\/AutoGen\/GpsPhotoDb\/PhotoListByLatLng\/6.3%2081.2.dat","user":"service"}

@PVince81
Contributor

@moscicki this issue here is about people using ajax cron, where ajax cron doesn't run often enough to trigger the oc_file_locks cleaning background job.
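For reference, a hedged sketch of switching from ajax to system cron so that the cleanup job actually runs (install path and web-server user are assumptions):

# as root: crontab -u www-data -e, then add a line to run
# ownCloud background jobs every 15 minutes
*/15 * * * * php -f /var/www/owncloud/cron.php
# and tell ownCloud to use cron instead of ajax for background jobs:
sudo -u www-data php occ config:app:set core backgroundjobs_mode --value="cron"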

If you say that even clearing that table doesn't solve the problem, then it's a problem that has not been reproduced and understood yet. The default TTL is set to 3600, see https://github.com/owncloud/core/blob/v9.1.3/lib/private/Lock/DBLockingProvider.php#L100.

From my understanding, it's not that the lock isn't cleared when clearing the table. The problem is that the lock reappears after clearing and stays there.

The posted exception is about an upload (WebDAV PUT).

Usually, unlocking is triggered by the fclose() after the upload. If that fails, it happens during PHP garbage collection.
However, it was observed that if the upload is aborted and the PHP connection is lost, the fclose() code is likely never reached and the GC doesn't run any more, or can't. This was discovered with PHP 7 in #22370.

See #22370 (comment) for possible fixes.

If no connection aborts or timeouts were involved, then the problem might lie somewhere else.

@moscicki

@PVince81: I created a new issue for this because it looks like the exception causing this is different: #26980

@PVince81 PVince81 modified the milestones: 9.1.5, 9.1.4 Feb 6, 2017
@ulfklose

I have had the same problem since today. I already cleared the database table

mysql> DELETE FROM oc_file_locks WHERE 1;
Query OK, 0 rows affected (0.00 sec)

but when I try to move a file to another directory using the web interface, I get the lock error message and see this in my log:

OCA\DAV\Connector\Sabre\Exception\FileLocked: HTTP/1.1 423 "Atzumer Teich.mp4" is locked

Repeating the move procedure results in the same error; moving is not possible.

@joseluis

joseluis commented Mar 23, 2017

We also had multiple folders locked for multiple users, which couldn't be used or deleted... This was on CentOS 6 with cPanel. The only thing that worked for us was configuring Redis, as explained here and here.

@PVince81
Contributor

PVince81 commented Apr 5, 2017

Closing in favor of a more generic ticket about ajax cron: #27574

Please discuss possible approaches there.

@janvlug

janvlug commented Jun 7, 2017

Should this issue really be closed in favor of #27574?
I have locked files even though I use cron (not ajax) for background jobs.
Furthermore, a solution for how to unlock the files is still needed.
See also: https://help.nextcloud.com/t/file-is-locked-how-to-unlock/1883/10

@lock

lock bot commented Aug 2, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@lock lock bot locked as resolved and limited conversation to collaborators Aug 2, 2019