diff --git a/docs/design/NSFSGlacierStorageClass.md b/docs/design/NSFSGlacierStorageClass.md
index 3508464e3f..2ff80663c4 100644
--- a/docs/design/NSFSGlacierStorageClass.md
+++ b/docs/design/NSFSGlacierStorageClass.md
@@ -15,31 +15,30 @@ There are 3 primary flows of concern and this document will discuss all 3 of the
 2. Restore object that are uploaded to `GLACIER` storage class (API: `RestoreObject`).
 3. Copy objects where source is an object stored in `GLACIER` (API: `PutObject`).
 
-### WAL
-Important component of all the flows is the write ahead log (WAL). NooBaa has a `SimpleWAL` which as name states
-is extremely simple in some senses. It does not deal with fsync issues, partial writes, holes, etc. rather just
-appends data seperated by a new line character.
+### Persistent Log
+An important component of all the flows is the persistent log. NooBaa has a `PersistentLogger` which is extremely simple in some senses.
+It does not deal with fsync issues, partial writes, holes, etc. rather just appends data separated by a new line character.
 
-`SimpleWAL` features:
+`PersistentLogger` features:
 1. Exposes an `append` method which adds data to the file.
-2. Can perform auto rotation of the file which makes sure that a single WAL is never too huge for the
-WAL consumer to consume.
-3. Exposes a `process` method which allows "safe" iteration on the previous WAL files.
+2. Can perform auto rotation of the file which makes sure that a single log is never too huge for the
+log consumer to consume.
+3. Exposes a `process_inactive` method which allows "safe" iteration on the previous log files.
 4. Tries to make sure that no data loss happens due to process level races.
 
 #### Races which are handled by the current implementation
-1. `n` processes open the WAL file while a "consumer" swoops and tries to process the file affectively losing the
+1. `n` processes open the log file while a "consumer" swoops in and tries to process the file, effectively losing the
 current writes (due to processing partially written file and ultimately invoking `unlink` on the file) - This isn't
-possible as `process` method makes sure that it doesn't iterate over the "current active file".
-2. `k` processes out of `n` (such that `k < n`) open the WAL while a "consumer" swoops and tries to process the
-file affectively losing the current writes (due to unliking the file others hold reference to) - Although `process`
+possible as the `process_inactive` method makes sure that it doesn't iterate over the "current active file".
+2. `k` processes out of `n` (such that `k < n`) open the persistent log while a "consumer" swoops in and tries to process the
+file, effectively losing the current writes (due to unlinking the file others hold a reference to) - Although the `process_inactive`
 method will not protect against this as technically "current active file" is a different file but this is still **not**
 possible as the "consumer" need to have an "EXCLUSIVE" lock on the files before it can process the file this makes
 sure that for as long as any process is writing on the file, the "consumer" cannot consume the file and will block.
-3. `k` processes out of `n` (such that `k < n`) open the WAL but before the NSFS process could get a "SHARED" lock on
+3. `k` processes out of `n` (such that `k < n`) open the persistent log but before the NSFS process could get a "SHARED" lock on
 the file the "consumer" process swoops in and process the files and then issues `unlink` on the file.
 The unlink will not delete the file as `k` processes have open FD to the file but as soon as those processes will be done writing to
-it and will close the FD, the file will be deleted which will result in lost writes - This isn't possible as `SimpleWAL`
+it and will close the FD, the file will be deleted which will result in lost writes - This isn't possible as `PersistentLogger`
 does not allow writing to a file till it can get a lock on the file and ensure that there are `> 0` links to the
 file. If there are no links then it tries to open file the again assuming that the consumer has issued `unlink` on
 the file it holds the FD to.
@@ -66,7 +65,7 @@ which manages the actual movements of the file.
 2. NooBaa rejects the request if NooBaa isn't configured to support the given storage class. This is **not** enabled
 by default and needs to be enabled via `config-local.js` by setting `config.NSFS_GLACIER_ENABLED = true` and
 `config.NSFS_GLACIER_LOGS_ENABLED = true`.
 3. NooBaa will set the storage class to `GLACIER` by setting `user.storage_class` extended attribute.
-4. NooBaa creates a simple WAL (Write Ahead Log) and appends the filename to the log file.
+4. NooBaa creates a persistent log and appends the filename to the log file.
 5. Completes the upload.
 
 Once the upload is complete, the file sits on the disk till the second process kicks in and actually does the movement
@@ -79,19 +78,19 @@ does as well).
 1. A scheduler (eg. Cron, human, script, etc) issues `node src/cmd/manage_nsfs glacier migrate --interval `.
 2. The command will first acquire an "EXCLUSIVE" lock so as to ensure that only one tape management command is running at once.
 3. Once the process has the lock it will start to iterate over the potentially currently inactive files.
-4. Before processing a WAL file, the proceess will get an "EXCLUSIVE" lock to the file ensuring that it is indeed the only
+4. Before processing a log file, the process will get an "EXCLUSIVE" lock on the file ensuring that it is indeed the only
 process processing the file.
-5. It will read the WAL one line at a time and will ensure the following:
+5. It will read the log one line at a time and will ensure the following:
     1. The file still exists.
     2. The file is still has `GLACIER` storage class. (This is can happen if the user uploads another object with `STANDARD` storage class).
     3. The file doesn't have any of the `RestoreObject` extended attributes. This is to ensure that if the file was marked for restoration as soon as it was uploaded then we don't perform the migration at all. This is to avoid unnecessary work and also make sure that we don't end up racing with ourselves.
-6. Once a file name passes through all the above criterions then we add its name to a temporary WAL and handover the file
+6. Once a file name passes through all the above criteria, we add its name to a temporary log and hand over the file
 name to `migrate` script which should be in `config.NSFS_GLACIER_TAPECLOUD_BIN_DIR` directory. We expect that the script will take the file name as its first parameter and will perform the migration. If the `config.NSFS_GLACIER_BACKEND` is set to `TAPECLOUD` (default) then we expect the script to output data in compliance with `eeadm migrate` command.
-7. We delete the temporary WAL that we created.
-8. We delete the WAL created by NSFS process **iff** there were no failures in `migrate`. In case of failures we skip the WAL
+7. We delete the temporary log that we created.
+8. We delete the log created by the NSFS process **iff** there were no failures in `migrate`. In case of failures we skip the log
 deletion as a way to retry during the next trigger of the script. It should be noted that NooBaa's `migrate`
 (`TAPECLOUD` backend) invocation does **not** consider `DUPLICATE TASK` an error.
 
 ### Flow 2: Restore Object
@@ -105,21 +104,21 @@
 by default and needs to be enabled via `config-local.js` by setting `config.NSFS_GLACIER_ENABLED = true` and
 `config.NSFS_GLACIER_LOGS_ENABLED = true`.
 3. NooBaa performs a number of checks to ensure that the operation is valid (for example there is no already ongoing
 restore request going on etc).
-4. NooBaa saves the filename to a simple WAL (Write Ahead Log).
+4. NooBaa saves the filename to a persistent log.
 5. Returns the request with success indicating that the restore request has been accepted.
 
 #### Phase 2
 1. A scheduler (eg. Cron, human, script, etc) issues `node src/cmd/manage_nsfs glacier restore --interval `.
 2. The command will first acquire an "EXCLUSIVE" lock so as to ensure that only one tape management command is running at once.
 3. Once the process has the lock it will start to iterate over the potentially currently inactive files.
-4. Before processing a WAL file, the proceess will get an "EXCLUSIVE" lock to the file ensuring that it is indeed the only
+4. Before processing a log file, the process will get an "EXCLUSIVE" lock on the file ensuring that it is indeed the only
 process processing the file.
-5. It will read the WAL one line at a time and will store the names of the files that we expect to fail during an eeadm restore
+5. It will read the log one line at a time and will store the names of the files that we expect to fail during an eeadm restore
 (this can happen for example because a `RestoreObject` was issued for a file but later on that file was deleted before we
 could actually process the file).
-6. The WAL is handed over to `recall` script which should be present in `config.NSFS_GLACIER_TAPECLOUD_BIN_DIR` directory. We expect that the script will take the file name as its first parameter and will perform the recall. If the `config.NSFS_GLACIER_BACKEND` is set to `TAPECLOUD` (default) then we expect the script to output data in compliance with `eeadm recall` command.
-7. If we get any unexpected failures then we mark it a failure and make sure we do not delete the WAL file (so as to retry later).
-8. We iterate over the WAL again to set the final extended attributes. This is to make sure that we can communicate the latest with
+6. The log is handed over to the `recall` script which should be present in the `config.NSFS_GLACIER_TAPECLOUD_BIN_DIR` directory. We expect that the script will take the file name as its first parameter and will perform the recall. If `config.NSFS_GLACIER_BACKEND` is set to `TAPECLOUD` (default) then we expect the script to output data in compliance with the `eeadm recall` command.
+7. If we get any unexpected failures then we mark it as a failure and make sure we do not delete the log file (so as to retry later).
+8. We iterate over the log again to set the final extended attributes. This is to make sure that we can communicate the latest state to
 the NSFS processes.
 ### Flow 3: Copy Object with Glacier Object as copy source
 
diff --git a/src/manage_nsfs/manage_nsfs_glacier.js b/src/manage_nsfs/manage_nsfs_glacier.js
index 0469a5b51c..b0b9af0f8a 100644
--- a/src/manage_nsfs/manage_nsfs_glacier.js
+++ b/src/manage_nsfs/manage_nsfs_glacier.js
@@ -20,11 +20,13 @@ async function process_migrations() {
     const fs_context = native_fs_utils.get_process_fs_context();
 
     await lock_and_run(fs_context, CLUSTER_LOCK, async () => {
+        const backend = getGlacierBackend();
+
         if (
-            await low_free_space() ||
+            await backend.low_free_space() ||
             await time_exceeded(fs_context, config.NSFS_GLACIER_MIGRATE_INTERVAL, MIGRATE_TIMESTAMP_FILE)
         ) {
-            await run_glacier_migrations(fs_context);
+            await run_glacier_migrations(fs_context, backend);
             await record_current_time(fs_context, MIGRATE_TIMESTAMP_FILE);
         }
     });
@@ -33,9 +35,10 @@
 /**
  * run_tape_migrations reads the migration WALs and attempts to migrate the
  * files mentioned in the WAL.
- * @param {nb.NativeFSContext} fs_context
+ * @param {nb.NativeFSContext} fs_context
+ * @param {import('../sdk/nsfs_glacier_backend/backend').GlacierBackend} backend
  */
-async function run_glacier_migrations(fs_context) {
+async function run_glacier_migrations(fs_context, backend) {
     // This WAL is getting opened only so that we can process all the prcess WAL entries
     const wal = new PersistentLogger(
         config.NSFS_GLACIER_LOGS_DIR,
@@ -43,7 +46,6 @@
         { disable_rotate: true, locking: 'EXCLUSIVE' },
     );
 
-    const backend = getGlacierBackend();
     await wal.process_inactive(async file => backend.migrate(fs_context, file));
 }
 
@@ -51,13 +53,15 @@ async function process_restores() {
     const fs_context = native_fs_utils.get_process_fs_context();
 
     await lock_and_run(fs_context, CLUSTER_LOCK, async () => {
+        const backend = getGlacierBackend();
+
         if (
-            await low_free_space() ||
+            await backend.low_free_space() ||
             !(await time_exceeded(fs_context, config.NSFS_GLACIER_RESTORE_INTERVAL, RESTORE_TIMESTAMP_FILE))
         ) return;
 
-        await run_glacier_restore(fs_context);
+        await run_glacier_restore(fs_context, backend);
         await record_current_time(fs_context, RESTORE_TIMESTAMP_FILE);
     });
 }
 
@@ -66,8 +70,9 @@
 /**
  * run_tape_restore reads the restore WALs and attempts to restore the
  * files mentioned in the WAL.
  * @param {nb.NativeFSContext} fs_context
+ * @param {import('../sdk/nsfs_glacier_backend/backend').GlacierBackend} backend
  */
-async function run_glacier_restore(fs_context) {
+async function run_glacier_restore(fs_context, backend) {
     // This WAL is getting opened only so that we can process all the prcess WAL entries
     const wal = new PersistentLogger(
         config.NSFS_GLACIER_LOGS_DIR,
@@ -75,7 +80,6 @@
         { disable_rotate: true, locking: 'EXCLUSIVE' },
     );
 
-    const backend = getGlacierBackend();
     await wal.process_inactive(async file => backend.restore(fs_context, file));
 }
 
@@ -86,16 +90,11 @@ async function process_expiry() {
         if (!(await time_exceeded(fs_context, config.NSFS_GLACIER_EXPIRY_INTERVAL, EXPIRY_TIMESTAMP_FILE))) return;
 
-        await run_glacier_expiry(fs_context);
+        await getGlacierBackend().expiry(fs_context);
         await record_current_time(fs_context, EXPIRY_TIMESTAMP_FILE);
     });
 }
 
-async function run_glacier_expiry(fs_context) {
-    const backend = getGlacierBackend();
-    await backend.expiry(fs_context);
-}
-
 /**
  * time_exceeded returns true if the time between last run recorded in the given
  * timestamp_file and now is greater than the given interval.
@@ -120,15 +119,6 @@ async function time_exceeded(fs_context, interval, timestamp_file) {
     return false;
 }
 
-/**
- * low_free_space returns true if the default backend has low disk space
- * @returns {Promise}
- */
-async function low_free_space() {
-    const backend = getGlacierBackend();
-    return backend.low_free_space();
-}
-
 /**
  * record_current_time stores the current timestamp in ISO format into
  * the given timestamp file
diff --git a/src/util/file_reader.js b/src/util/file_reader.js
index 3d17726dac..b4f009968a 100644
--- a/src/util/file_reader.js
+++ b/src/util/file_reader.js
@@ -91,10 +91,17 @@
     }
 
     async init() {
-        const fh = await nb_native().fs.open(this.fs_context, this.path, 'r');
-        if (this.lock) await fh.flock(this.fs_context, this.lock);
-
-        this.fh = fh;
+        let fh = null;
+        try {
+            fh = await nb_native().fs.open(this.fs_context, this.path, 'r');
+            if (this.lock) await fh.flock(this.fs_context, this.lock);
+
+            this.fh = fh;
+        } catch (error) {
+            // Close the handle only on failure; on success ownership moves to `this.fh`.
+            if (fh) await fh.close(this.fs_context);
+            throw error;
+        }
     }
 
     async close() {
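Taken together, the races section of the doc changes pins down a small writer/consumer protocol: writers hold a SHARED `flock` and refuse to write to a file with zero links, while the consumer takes an EXCLUSIVE `flock` before reading and only unlinks what it has fully processed. The sketch below is a minimal stand-alone rendering of that protocol, not NooBaa's actual `PersistentLogger`: it assumes plain Node.js plus the third-party `fs-ext` package for `flock()`, and the helper names `safe_append` and `consume_log` are hypothetical.

```js
'use strict';

const fs = require('fs');
const { promisify } = require('util');
// `fs-ext` provides flock(); a stand-in for NooBaa's native fh.flock() binding.
const flock = promisify(require('fs-ext').flock);

// Writer side: take a SHARED lock, then verify that a consumer did not unlink
// the file between our open() and flock(). If it did, retry on a fresh file.
async function safe_append(log_path, entry) {
    for (;;) {
        const fh = await fs.promises.open(log_path, 'a');
        await flock(fh.fd, 'sh');
        const { nlink } = await fh.stat();
        if (nlink > 0) {
            await fh.appendFile(entry + '\n');
            await fh.close(); // also releases the SHARED lock
            return;
        }
        // Stale FD: the consumer already unlinked this file. Drop it and retry.
        await fh.close();
    }
}

// Consumer side: an EXCLUSIVE lock blocks until every writer has released its
// SHARED lock, so we never read (or unlink) a file that is still being written.
async function consume_log(log_path, on_entry) {
    const fh = await fs.promises.open(log_path, 'r');
    try {
        await flock(fh.fd, 'ex');
        const data = await fh.readFile({ encoding: 'utf8' });
        for (const line of data.split('\n')) {
            if (line) await on_entry(line); // e.g. hand the path to migrate/recall
        }
        await fs.promises.unlink(log_path); // reached only if no entry threw
    } finally {
        await fh.close();
    }
}
```

The `nlink` check is what closes race 3 above: a writer that loses the race to the consumer's `unlink` sees a zero link count, discards the stale FD, and retries against a freshly created log file instead of writing data nobody will ever consume.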
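Both Phase 2 flows end by handing a batch log to an executable in `config.NSFS_GLACIER_TAPECLOUD_BIN_DIR`. The design only promises that the script takes the log's path as its first argument and prints output compatible with `eeadm migrate`/`eeadm recall`; under just those assumptions the handover could look like the hypothetical sketch below (`run_tapecloud_script` is not NooBaa's backend code).

```js
'use strict';

const path = require('path');
const { promisify } = require('util');
const exec_file = promisify(require('child_process').execFile);

// Hand a batch log to the `migrate` or `recall` executable, as the design
// doc describes: the first argument is the path of the log listing the files.
async function run_tapecloud_script(bin_dir, script_name, batch_log_path) {
    const bin = path.join(bin_dir, script_name); // 'migrate' or 'recall'
    const { stdout } = await exec_file(bin, [batch_log_path]);
    return stdout; // the caller parses this in eeadm-compatible format
}
```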
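Finally, the `init()` fix in `src/util/file_reader.js` above is an instance of a general acquire-then-publish shape: release the resource only when a later setup step fails, and rethrow so the caller still sees the original error; on success, leave it alone because ownership has been published. A generic sketch, where `open_resource` and `setup` are hypothetical stand-ins for `nb_native().fs.open()` and `fh.flock()`:

```js
// Open a resource and run a setup step against it; on any failure after the
// open, close the resource and rethrow. On success the caller owns the result.
async function acquire(open_resource, setup) {
    let res = null;
    try {
        res = await open_resource();
        if (setup) await setup(res);
        return res; // success: nothing is closed here
    } catch (err) {
        if (res) await res.close(); // never leak the handle on a failed setup
        throw err;
    }
}
```

Closing in a `finally` block instead would also close the handle on the success path, which is exactly the bug the `catch`-and-rethrow form avoids.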