diff --git a/_data/destinations/bigquery/loading-errors.yml b/_data/destinations/bigquery/loading-errors.yml
index a0429e924..244cf1b51 100644
--- a/_data/destinations/bigquery/loading-errors.yml
+++ b/_data/destinations/bigquery/loading-errors.yml
@@ -6,9 +6,12 @@
 #   "Primary key change is not permitted"
 
+numeric-out-of-range: &numeric-out-of-range "Numeric out of range for BigQuery on [NUMERIC]"
+primary-key-change: &primary-key-change "Primary key change is not permitted"
+
 all:
   ## Primary Key change not allowed
-  - message: "Primary key change is not permitted"
+  - message: *primary-key-change
    id: "pk-change-not-permitted"
    applicable-to: "Google BigQuery v2 destinations"
    level: "critical"
@@ -25,7 +28,7 @@ all:
    fix-it: |
      Reset the table(s) mentioned in the error. This will queue a full re-replication of the table(s), which will ensure Primary Keys are correctly captured and used to de-dupe data when loading.
 
-  - message: "Numeric out of range for BigQuery on [NUMERIC]"
+  - message: *numeric-out-of-range
    id: "numeric-out-of-range"
    applicable-to: "All Google BigQuery destination versions"
    level: "warning"
diff --git a/_data/errors/extraction/databases/mongo.yml b/_data/errors/extraction/databases/mongo.yml
index 26c60b0c3..ec6741be4 100644
--- a/_data/errors/extraction/databases/mongo.yml
+++ b/_data/errors/extraction/databases/mongo.yml
@@ -18,6 +18,11 @@ raw-error:
 #   '[VALUE]' is not a valid ObjectId, it must be a 12-byte input or a 24-character hex string
 
+  oplog-age-out: &oplog-age-out |
+    Clearing state because Oplog has aged out
+    Must complete full table sync before starting oplog replication for [COLLECTION_NAME]
+
+
 documentation:
   projection-queries: &projection-queries
    category: "Projection queries"
@@ -147,4 +152,20 @@ all:
      {% for integration in applicable-integrations %}
      - [{{ integration.display_name }}]({{ integration.url | prepend: site.baseurl | append: "/v2" | append: "#create-a-database-user" }})
-      {% endfor %}
\ No newline at end of file
+      {% endfor %}
+
+  - message: *oplog-age-out
+    id: "oplog-age-out-full-table-replication"
+    applicable-to: *all-mongo
+    level: "info"
+    category: "Log-based Incremental Replication"
+    category-doc: |
+      {{ link.replication.log-based-incremental | prepend: site.baseurl | append: "#limitation--log-retention" }}
+    version: "1,2"
+    summary: "Insufficient maximum oplog size"
+    cause: |
+      The oplog's maximum size is insufficient, causing log files to age out before Stitch can replicate them. When this occurs, Stitch will clear the saved log position ID for any affected collection(s) and re-replicate them in full.
+    fix-it: |
+      Increase the maximum size of the oplog using the [replSetResizeOplog](https://docs.mongodb.com/v4.0/reference/command/replSetResizeOplog/#dbcmd.replSetResizeOplog){:target="new"} command.
+
+      **Note**: As the maximum size you need depends on your database, it may take some experimentation to identify the best setting. MongoDB doesn't currently provide a recommended oplog size.
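+
+      For example, a minimal check-and-resize sequence run from the mongo shell might look like the following. The `16000` MB target size here is only an illustration; choose a value based on your own write volume and retention needs:
+
+      {% capture code %}// Check the configured oplog size and the time window its entries cover
+      rs.printReplicationInfo()
+
+      // Resize the oplog (size is in megabytes; 16000 is illustrative)
+      db.adminCommand( { replSetResizeOplog: 1, size: 16000 } )
+      {% endcapture %}
+
+      {% include layout/code-snippet.html code=code language="shell" %}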
\ No newline at end of file
diff --git a/_destinations/redshift/guides/redshift-apply-encodings-sort-dist-keys.md b/_destinations/redshift/guides/redshift-apply-encodings-sort-dist-keys.md
index e5c3c38ce..72ac9c20c 100644
--- a/_destinations/redshift/guides/redshift-apply-encodings-sort-dist-keys.md
+++ b/_destinations/redshift/guides/redshift-apply-encodings-sort-dist-keys.md
@@ -25,15 +25,15 @@ use-tutorial-sidebar: false
 # --------------------------
 #
 intro: |
-  {% include important.html type="single-line" content="The process we outline in this tutorial - which includes dropping tables - can lead to data corruption and other issues if done incorrectly. **Please proceed with caution or reach out to Stitch support if you have questions.**" %}
+  {% include important.html type="single-line" content="The process we outline in this tutorial - which includes dropping tables - can lead to data corruption and other issues if done incorrectly. **Proceed with caution or reach out to Stitch support if you have questions.**" %}
 
-  Want to improve your query performance? In this article, we’ll walk you through how to use encoding, Sort, and Distribution keys to streamline query processing.
+  Want to improve your query performance? In this guide, we’ll walk you through how to use encoding, SORT, and DIST (distribution) keys to streamline query processing.
 
   Before we dive into their application, here's a quick overview of each of these performance-enhancing tools.
 
   - **Encodings**, or [compression types](http://docs.aws.amazon.com/redshift/latest/dg/t_Compressing_data_on_disk.html), are used to reduce the amount of required storage space and the size of data that’s read from storage. This in turn can lead to a reduction in processing time for queries.
-  - **[Sort keys](http://docs.aws.amazon.com/redshift/latest/dg/t_Sorting_data.html)** determine the order in which rows in a table are stored. When properly applied, Sort Keys allow large chunks of data to be skipped during query processing. Less data to scan means a shorter processing time, thus improving the query’s performance.
+  - **[SORT keys](http://docs.aws.amazon.com/redshift/latest/dg/t_Sorting_data.html)** determine the order in which rows in a table are stored. When properly applied, SORT keys allow large chunks of data to be skipped during query processing. Less data to scan means a shorter processing time, thus improving the query’s performance.
   - **[Distribution, or DIST keys](http://docs.aws.amazon.com/redshift/latest/dg/t_Distributing_data.html)** determine where data is stored in Redshift. When data is replicated into your data warehouse, it’s stored across the compute nodes that make up the cluster. If data is heavily skewed - meaning a large amount is placed on a single node - query performance will suffer. Even distribution prevents these bottlenecks by ensuring that nodes equally share the processing load.
@@ -57,18 +57,18 @@ steps:
 
      We’ll use a table called `orders`, which is contained in the `rep_sales` schema.
 
-      Log into your Redshift database using your SQL client to get started.
+      To get started, log into your Redshift database using [psql](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-from-psql.html){:target="new"}.
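+
+      For reference, a psql connection command has the following shape. The host, user, and database names below are placeholders; substitute your own cluster's values:
+
+      {% capture code %}psql -h example-cluster.abc123.us-east-1.redshift.amazonaws.com -p 5439 -U stitch_user -d analytics
+      {% endcapture %}
+
+      {% include layout/code-snippet.html code=code language="shell" %}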
-      Use this command to retrieve the table schema, replacing `rep_sales` and `orders` with the names of your schema and table, respectively:
+      Use this command to retrieve the table schema, replacing `rep_sales` and `orders` with the names of your schema and table, respectively:
 
-      ```sql
-      \d+ rep_sales.orders
-      ```
+      {% capture code %}\d+ rep_sales.orders
+      {% endcapture %}
+
+      {% include layout/code-snippet.html code=code language="sql" %}
 
      For the `rep_sales.orders` table, the result looks like this:
 
-      ```
-      | Column              | Data Type                  |
+      {% capture code %}| Column              | Data Type                  |
      | --------------------+----------------------------|
      | id [pk]             | BIGINT                     |
      | rep_name            | VARCHAR(128)               |
@@ -80,7 +80,9 @@ steps:
      | _sdc_batched_at     | TIMESTAMP WITHOUT TIMEZONE |
      | _sdc_table_version  | BIGINT                     |
      | _sdc_replication_id | VARCHAR(128)               |
-      ```
+      {% endcapture %}
+
+      {% include layout/code-snippet.html code=code language="sql" %}
 
      In this example, we'll perform the following:
 
@@ -101,17 +103,19 @@ steps:
 
      Retrieve the table's Primary Key using the following query:
 
-      ```sql
-      SELECT description
+      {% capture code %}SELECT description
      FROM pg_catalog.pg_description
      WHERE objoid = 'orders'::regclass;
-      ```
+      {% endcapture %}
+
+      {% include layout/code-snippet.html code=code language="sql" %}
 
      The result will look like the following, where `primary_keys` is an array of strings referencing the columns used as the table's Primary Key:
 
-      ```sql
-      {"primary_keys":["id"]}
-      ```
+      {% capture code %}{"primary_keys":["id"]}
+      {% endcapture %}
+
+      {% include layout/code-snippet.html code=code language="sql" %}
 
      {% include important.html first-line="**Primary Key comments**" content="Redshift doesn’t enforce the use of Primary Keys, but Stitch requires them to replicate data. In the following example, you'll see `COMMENT` being used to note the table's Primary Key. **Make sure you include the Primary Key comment in the next step, as missing or incorrectly defined Primary Key comments will cause issues with data replication.**" %}
 
@@ -128,8 +132,7 @@ steps:
 
      For the `rep_sales.orders` example table, this is the transaction that will perform the actions listed above:
 
-      ```sql
-      SET search_path to rep_sales;
+      {% capture code %}SET search_path to rep_sales;
      BEGIN;
      ALTER TABLE orders RENAME TO old_orders;
      CREATE TABLE new_orders (
@@ -154,7 +157,9 @@ steps:
      ALTER TABLE orders OWNER TO <stitch_username>; /* Grants table ownership to Stitch */
      DROP TABLE old_orders; /* Drops the "old" table */
      END;
-      ```
+      {% endcapture %}
+
+      {% include layout/code-snippet.html code=code language="sql" %}
 
  - title: "Verify the table owner"
    anchor: "verify-table-owner"
    content: |
@@ -163,34 +168,36 @@ steps:
 
      To verify the table's owner, run the following query and replace `rep_sales` and `orders` with the names of the schema and table, respectively:
 
-      ```sql
-      SELECT schemaname,
+      {% capture code %}SELECT schemaname,
      tablename,
      tableowner
      FROM pg_catalog.pg_tables
      WHERE schemaname = 'rep_sales'
      AND tablename = 'orders';
-      ```
+      {% endcapture %}
+
+      {% include layout/code-snippet.html code=code language="sql" %}
 
      If Stitch is not the owner of the table, run the following command:
 
-      ```sql
-      ALTER TABLE <schema_name>.<table_name> OWNER TO <stitch_username>;
-      ```
+      {% capture code %}ALTER TABLE <schema_name>.<table_name> OWNER TO <stitch_username>;
+      {% endcapture %}
+
+      {% include layout/code-snippet.html code=code language="sql" %}
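+
+      For example, with this guide's `rep_sales.orders` table and a hypothetical Stitch user named `stitch`, the command would be:
+
+      {% capture code %}ALTER TABLE rep_sales.orders OWNER TO stitch; /* "stitch" is a hypothetical user name */
+      {% endcapture %}
+
+      {% include layout/code-snippet.html code=code language="sql" %}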
 
  - title: "Verify the encoding and key application"
    anchor: "verify-application"
    content: |
-      To verify that the changes were applied correctly, retrieve the table’s schema again using this command, replacing `rep_sales` and `orders` with the names of your schema and table, respectively:
+      To verify that the changes were applied correctly, retrieve the table’s schema again using this command, replacing `rep_sales` and `orders` with the names of your schema and table, respectively:
 
-      ```sql
-      \d+ rep_sales.orders
-      ```
+      {% capture code %}\d+ rep_sales.orders
+      {% endcapture %}
+
+      {% include layout/code-snippet.html code=code language="sql" %}
 
      In this example, if the Keys and encodings were applied correctly, the response would look something like this:
 
-      ```sql
-      | Column              | Data type                  | Encoding | Distkey | Sortkey |
+      {% capture code %}| Column              | Data type                  | Encoding | Distkey | Sortkey |
      |---------------------+----------------------------+----------+---------+---------|
      | id                  | BIGINT                     | none     | true    | true    |
      | rep_name            | VARCHAR(128)               | bytedict | false   | false   |
@@ -202,7 +209,9 @@ steps:
      | _sdc_batched_at     | TIMESTAMP WITHOUT TIMEZONE | none     | false   | false   |
      | _sdc_table_version  | BIGINT                     | none     | false   | false   |
      | _sdc_replication_id | VARCHAR(128)               | none     | false   | false   |
-      ```
+      {% endcapture %}
+
+      {% include layout/code-snippet.html code=code language="sql" %}
 
      For the `id` column, the `Distkey` and `Sortkey` are set to `true`, meaning that the keys were properly applied.
diff --git a/_includes/integrations/databases/setup/binlog/mongodb-oplog.html b/_includes/integrations/databases/setup/binlog/mongodb-oplog.html
index 0c462c5e0..e4b8ceb30 100644
--- a/_includes/integrations/databases/setup/binlog/mongodb-oplog.html
+++ b/_includes/integrations/databases/setup/binlog/mongodb-oplog.html
@@ -12,33 +12,52 @@
 
 1. Start the {{ integration.display_name }} instance:
 
-   ```shell
+   {% capture code %}
    mongod --port 27017
+   {% endcapture %}
+
+   {% include layout/code-snippet.html use-code-block=false code=code %}
+
+   ```shell
+{{ code | lstrip | rstrip }}
    ```
 
 2. Connect to the Mongo shell as a `root` user:
 
+   {% capture code %}
+   mongo --port 27017 -u <username> -p <password> --authenticationDatabase admin
+   {% endcapture %}
+
+   {% include layout/code-snippet.html use-code-block=false code=code %}
+
    ```shell
-   mongo --port 27017
+{{ code | lstrip | rstrip }}
    ```
 
 3. Navigate to the `/etc/mongod.conf` file.
 
-4. In `/etc/mongod.conf`, uncomment `replication` and specify a name for the replica set (`replSetName`).
-
-   In this example, we're using `rs0` as the replica set name:
+4. In `/etc/mongod.conf`, uncomment `replication` and define the following configuration options. **Note**: As `/etc/mongod.conf` is a protected file, you may need to assume `sudo` to edit it.
 
-   ```shell
+   {% capture code %}
    replication:
      replSetName: "rs0"
-   ```
+     oplogSizeMB: <size>
+   {% endcapture %}
 
-   Use the `rs.status()` command to return this replica set's name going forward.
+   {% include layout/code-snippet.html use-code-block=false code=code %}
 
-   **Note**: As `/etc/mongod.conf` is a protected file, you may need to assume `sudo` to edit it.
+   ```shell
+{{ code | lstrip | rstrip }}
+   ```
 
-5. Save the changes.
+   - **replSetName**: The name for the replica set. In this example, we used `rs0`. Use the `rs.status()` command to return this replica set's name going forward.
+   - **oplogSizeMB**: The maximum size, in megabytes, for the oplog. If undefined, MongoDB will use the default size - refer to [MongoDB's docs for more info](https://docs.mongodb.com/v4.0/core/replica-set-oplog/#oplog-size){:target="new"}.
+
+     When the oplog reaches this size, MongoDB will automatically remove log entries to maintain the maximum oplog size. If Stitch is unable to replicate all of a table's log entries before they age out, Stitch will re-replicate the table in full to ensure records aren't missing. Refer to the [Log-based Incremental guide]({{ link.replication.log-based-incremental | prepend: site.baseurl | append: "#limitation--log-retention" }}) for more info and examples.
+
+     **Note**: If you're using an existing replica set and want to change its maximum size, use the [replSetResizeOplog](https://docs.mongodb.com/v4.0/reference/command/replSetResizeOplog/#dbcmd.replSetResizeOplog){:target="new"} command.
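+
+     To see the oplog's current size and the window of time its entries cover on an existing replica set, you can run `rs.printReplicationInfo()` from the mongo shell (an optional sanity check, not required by Stitch):
+
+     {% capture code %}
+     rs.printReplicationInfo()
+     {% endcapture %}
+
+     {% include layout/code-snippet.html use-code-block=false code=code %}
+
+     ```shell
+{{ code | lstrip | rstrip }}
+     ```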
+
+5. Save the changes.
 
 
@@ -48,27 +67,51 @@
 
 1. Restart `mongod` with the configuration file:
 
-   ```shell
+   {% capture code %}
    sudo mongod --auth --config /etc/mongod.conf
+   {% endcapture %}
+
+   {% include layout/code-snippet.html use-code-block=false code=code %}
+
+   ```shell
+{{ code | lstrip | rstrip }}
    ```
 
 2. Connect to the Mongo shell as a `root` user, replacing `<username>` and `<password>` with the `root` user's username and password:
 
-   ```shell
+   {% capture code %}
    mongo --port 27017 -u <username> -p <password> --authenticationDatabase admin
+   {% endcapture %}
+
+   {% include layout/code-snippet.html use-code-block=false code=code %}
+
+   ```shell
+{{ code | lstrip | rstrip }}
    ```
 
 3. Initiate the replica set, replacing `<host>` with the IP address or endpoint used by the `mongod` instance:
 
-   ```shell
+   {% capture code %}
    rs.initiate({_id: "rs0", members: [{_id: 0, host: "<host>:27017"}]})
+   {% endcapture %}
+
+   {% include layout/code-snippet.html use-code-block=false code=code %}
+
+   ```shell
+{{ code | lstrip | rstrip }}
    ```
 
-If successful, you'll receive a response similar to the following:
+   If successful, you'll receive a response similar to the following:
+
+   {% capture code %}
+   { "ok" : 1 }
+   {% endcapture %}
 
-```json
-{ "ok" : 1 }
-```
+   {% include layout/code-snippet.html use-code-block=false code=code %}
+
+   ```json
+{{ code | lstrip | rstrip }}
+   ```
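+
+   You can also confirm the replica set's health with `rs.status()`; in a healthy single-member set, the member's `stateStr` should read `PRIMARY` (a quick sanity check, not required by Stitch):
+
+   {% capture code %}
+   rs.status()
+   {% endcapture %}
+
+   {% include layout/code-snippet.html use-code-block=false code=code %}
+
+   ```shell
+{{ code | lstrip | rstrip }}
+   ```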
@@ -80,27 +123,51 @@
 
 2. Reconnect as the Stitch database user you created in [Step 2](#create-a-database-user). Replace `<username>` and `<password>` with the Stitch user's username and password, respectively:
 
-   ```shell
+   {% capture code %}
    mongo --port 27017 -u <username> -p <password> --authenticationDatabase admin
+   {% endcapture %}
+
+   {% include layout/code-snippet.html use-code-block=false code=code %}
+
+   ```shell
+{{ code | lstrip | rstrip }}
    ```
 
 3. Switch to the `local` database:
 
-   ```shell
+   {% capture code %}
    use local
-   ```
+   {% endcapture %}
 
-4. View OpLog rows:
+   {% include layout/code-snippet.html use-code-block=false code=code %}
 
    ```shell
+{{ code | lstrip | rstrip }}
+   ```
+
+4. View oplog rows:
+
+   {% capture code %}
    db.oplog.rs.find()
+   {% endcapture %}
+
+   {% include layout/code-snippet.html use-code-block=false code=code %}
+
+   ```shell
+{{ code | lstrip | rstrip }}
    ```
 
-If successful, records from the OpLog similar to the following will be returned:
+   If successful, records from the oplog similar to the following will be returned:
 
-```json
-{ "ts" : Timestamp(1524038245, 63), "t" : NumberLong(1), "h" : NumberLong("-596019791399272412"), "v" : 2, "op" : "i", "ns" : "stitchTest.customers", "ui"
+   {% capture code %}
+   { "ts" : Timestamp(1524038245, 63), "t" : NumberLong(1), "h" : NumberLong("-596019791399272412"), "v" : 2, "op" : "i", "ns" : "stitchTest.customers", "ui" : UUID("0e623d9c-722c-41d5-a5e6-83947cc2466e"), "wall" : ISODate("2018-04-18T07:57:25.065Z"), "o" : { "_id" : 100, "name" : "Finn" } }
-```
+   {% endcapture %}
+
+   {% include layout/code-snippet.html use-code-block=false code=code %}
+
+   ```json
+{{ code | lstrip | rstrip }}
+   ```
 
 {% endcase %}
\ No newline at end of file
diff --git a/_includes/integrations/databases/setup/db-users/mongo.html b/_includes/integrations/databases/setup/db-users/mongo.html
index 76052251e..e8eab6a66 100755
--- a/_includes/integrations/databases/setup/db-users/mongo.html
+++ b/_includes/integrations/databases/setup/db-users/mongo.html
@@ -21,9 +21,11 @@
 {% capture mongo-create-user-below-v3 %}
 Create the user, using the `addUser` command for {{ integration.display_name }} versions 2.4 through 2.6. Replace `<password>` with a password:
 
-```javascript
+{% capture code %}
 {{ site.data.taps.extraction.database-setup.user-privileges.mongo.create-user-v2 | strip }}
-```
+{% endcapture %}
+
+{% include layout/code-snippet.html code=code language="javascript" %}
 {% endcapture %}
 
 {% include layout/expandable-heading.html title="I'm using a MongoDB version between 2.4 and 2.6." content=mongo-create-user-below-v3 anchor="mongo-create-user-below-v3" %}
@@ -32,9 +34,11 @@
 {% capture mongo-create-user-v3-v3-2 %}
 Create the user, using the `createUser` command for {{ integration.display_name }} versions 3.0 through 3.2. Replace `<password>` with a password:
 
-```javascript
+{% capture code %}
 {{ site.data.taps.extraction.database-setup.user-privileges.mongo.create-user-v3-2 | strip }}
-```
+{% endcapture %}
+
+{% include layout/code-snippet.html code=code language="javascript" %}
 {% endcapture %}
 
 {% include layout/expandable-heading.html title="I'm using a MongoDB version between 3.0 and 3.2." content=mongo-create-user-v3-v3-2 anchor="mongo-create-user-v3-v3-2" %}
@@ -43,9 +47,11 @@
 {% capture mongo-create-user-v3-4 %}
 [For versions 3.4 and above](https://docs.mongodb.com/v3.4/reference/built-in-roles/#readAnyDatabase){:target="new"}, the `readAnyDatabase` role doesn't include the `local` database.
 Create the user, granting the additional `read` role on the `local` database:
 
-```javascript
+{% capture code %}
 {{ site.data.taps.extraction.database-setup.user-privileges.mongo.create-user-v3-4 | strip }}
-```
+{% endcapture %}
+
+{% include layout/code-snippet.html code=code language="javascript" %}
 {% endcapture %}
 
 {% capture 3-4-title %}
diff --git a/_replication/replication-methods/log-based-incremental-replication.md b/_replication/replication-methods/log-based-incremental-replication.md
index af32b0c9c..954adf7ff 100644
--- a/_replication/replication-methods/log-based-incremental-replication.md
+++ b/_replication/replication-methods/log-based-incremental-replication.md
@@ -364,44 +364,79 @@ sections:
 
      {{ section.back-to-list | flatify }}
 
-## MYSQL/ORACLE RETENTION PERIOD
-  - title: "Limitation {{ forloop.index }}: Logs can age out and stop replication (Microsoft SQL Server, MySQL, and Oracle)"
+## LOG AGE OUT
+  - title: "Limitation {{ forloop.index }}: Logs can age out and impact replication (Microsoft SQL Server, MongoDB, MySQL, and Oracle)"
    anchor: "limitation--log-retention"
-    databases: "mssql, mysql, oracle"
+    databases: "mongo, mssql, mysql, oracle"
    content: |
-      {% include note.html type="single-line" content="**Note**: This section is applicable only to **Microsoft SQL Server, MySQL,** and **Oracle**-backed database integrations." %}
+      {% include note.html type="single-line" content="**Note**: This section is applicable only to **Microsoft SQL Server, MongoDB, MySQL,** and **Oracle**-backed database integrations." %}
 
-      Log files, by default, are not stored indefinitely on a database server. The amount of time a log file is stored depends on the database's log retention settings.
+      Log files, by default, aren't stored indefinitely on a database server. The amount of time a log file is stored depends on the database's log retention settings.
 
-      Log retention settings specify the amount of time before a log file is automatically removed from the database server. When a log file is removed from the server before Stitch can read from it, replication will be unable to proceed.
+      Log retention settings specify when a log file is automatically removed from the database server. This can either be a set amount of time or the maximum size of all the database's log files. When a log file is removed from the server before Stitch can read from it, one of two things will happen depending on the database type:
 
-      When this occurs, an extraction error similar to the following will surface in the [Extraction Logs]({{ link.replication.extraction-logs | prepend: site.baseurl }}):
+      {% for sub-subsection in subsection.sub-subsections %}
+      - [{{ sub-subsection.summary }}](#{{ sub-subsection.anchor }})
+      {% endfor %}
 
-      - **For MySQL databases**:
-        ```
-        {{ site.data.errors.extraction.databases.mysql.raw-error.log-retention-purge | strip }}
-        ```
-
-      - **For Oracle databases**:
-        ```
-        {{ site.data.errors.extraction.databases.oracle.raw-error.missing-logfile | strip }}
-        ```
-
-      To resolve the error, you'll need to [reset the integration from the {{ app.page-names.int-settings }} page]({{ link.replication.reset-rep-keys | prepend: site.baseurl }}). **Note**: This is different than resetting an individual table.
-
-      This error can be caused by a few things:
+      This can be caused by a few things:
 
      1. **The log file is purged before historical replication completes**.
 This is because the maximum [log position ID](#log-based-incremental-replication-terminology) is saved at the start of [historical replication jobs](#log-based-incremental-replication-terminology), so Stitch knows where to begin reading from the database logs after historical data is replicated.
 
-      2. **The log retention settings are set to too short of a time period**. Stitch recommends a minimum of **3 days**, but **7 days** is preferred to account for resolving potential issues without losing logs.
+      2. **For log retention settings that define a time period, the time period is too short.** Stitch recommends a minimum of **3 days**, but **7 days** is preferred to account for resolving potential issues without losing logs.
        - **For Microsoft SQL Server databases**, this is the `CHANGE_RETENTION` setting.
        - **For MySQL databases**, these are the `expire_logs_days` or `binlog_expire_logs_seconds` settings. See the example after this list.
        - **For Oracle databases**:
          - **For self-hosted Oracle databases**, this is the [RMAN retention policy setting]({{ site.baseurl }}/integrations/databases/oracle#configure-rman-backups).
          - **For Oracle-RDS databases**, these are the [AWS automated backup]({{ site.baseurl }}/integrations/databases/amazon-oracle-rds#enable-aws-automated-backups) and [`archivelog retention hours`]({{ site.baseurl }}/integrations/databases/amazon-oracle-rds#define-archivelog-retention-hours) settings.
-      3. **Any critical error that prevents Stitch from replicating data**, such as a connection issue that prevents Stitch from connecting to the database or a [schema violation](#limitation-3--structural-changes). If the error persists past the log retention period, the log will be purged before Stitch can read it.
+      3. **For log retention settings that define a maximum size, the size is insufficient.** This is applicable to MongoDB integrations. When creating a replica set, this is defined using the replication `oplogSizeMB` configuration option. It can also be defined for an existing replica set using the [replSetResizeOplog](https://docs.mongodb.com/v4.0/reference/command/replSetResizeOplog/#dbcmd.replSetResizeOplog){:target="new"} command.
+
+      4. **Any critical error that prevents Stitch from replicating data**, such as a connection issue that prevents Stitch from connecting to the database or a [schema violation](#limitation-3--structural-changes). If the error persists past the log retention period, the log will be purged before Stitch can read it.
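+
+      As an illustration for MySQL, you can check the current retention settings and, if needed, lengthen them. The statements below are a sketch: `expire_logs_days` applies to MySQL 5.7 and earlier, `binlog_expire_logs_seconds` to MySQL 8.0 and later, and the 7-day value simply mirrors the recommendation above:
+
+      {% capture code %}-- Check the current binary log retention settings
+      SHOW VARIABLES LIKE 'expire_logs_days';
+      SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';
+
+      -- Retain binary logs for 7 days (604800 seconds) on MySQL 8.0+
+      SET GLOBAL binlog_expire_logs_seconds = 604800;
+      {% endcapture %}
+
+      {% include layout/code-snippet.html code=code language="sql" %}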
+
+    sub-subsections:
+      - title: "MongoDB: Affected collections will be re-replicated in full"
+        anchor: "limitation--log-retention--full-re-replication"
+        summary: "**Affected collections will be re-replicated in full**. This is applicable to MongoDB database integrations."
+        content: |
+          When logs age out for a MongoDB database integration, the affected collections will be re-replicated in full and the following will surface in the [Extraction Logs]({{ link.replication.extraction-logs | prepend: site.baseurl }}):
+
+          {% capture code %}{{ site.data.errors.extraction.databases.mongo.raw-error.oplog-age-out | strip }}
+          {% endcapture %}
+
+          {% include layout/code-snippet.html code=code language="shell" %}
+
+          To prevent collection re-replication, increase the maximum size of the oplog with the [replSetResizeOplog](https://docs.mongodb.com/v4.0/reference/command/replSetResizeOplog/#dbcmd.replSetResizeOplog){:target="new"} command. **Note**: As the maximum size you need depends on your database, it may take some experimentation to identify the best setting. MongoDB doesn't currently provide a recommended oplog size.
+
+      - title: "Microsoft SQL Server, MySQL, and Oracle: Replication will stop"
+        anchor: "limitation--log-retention--stop-replication"
+        summary: "**Replication will stop**. This is applicable to Microsoft SQL Server, MySQL, and Oracle database integrations."
+        content: |
+          When logs age out for Microsoft SQL Server, MySQL, and Oracle database integrations, an extraction error similar to the following will surface in the [Extraction Logs]({{ link.replication.extraction-logs | prepend: site.baseurl }}):
+
+          - **For MySQL databases**:
+            {% capture code %}{{ site.data.errors.extraction.databases.mysql.raw-error.log-retention-purge | strip }}
+            {% endcapture %}
+
+            {% include layout/code-snippet.html use-code-block=false code=code %}
+
+            ```shell
+            {{ code | lstrip | rstrip }}
+            ```
+
+          - **For Oracle databases**:
+            {% capture code %}{{ site.data.errors.extraction.databases.oracle.raw-error.missing-logfile | strip }}
+            {% endcapture %}
+
+            {% include layout/code-snippet.html use-code-block=false code=code %}
+
+            ```shell
+            {{ code | lstrip | rstrip }}
+            ```
+
+          To resolve the error, you'll need to [reset the integration from the {{ app.page-names.int-settings }} page]({{ link.replication.reset-rep-keys | prepend: site.baseurl }}). **Note**: This is different than resetting an individual table.
+
+          {{ section.back-to-list | flatify }}
 
-      {{ section.back-to-list | flatify }}
 
 ## POSTGRES INCREASE DISK SPACE
   - title: "Limitation {{ forloop.index }}: Will increase source disk space usage (PostgreSQL)"