From 780b92ada9afcf1d58085a83a0b9e6bc982203d1 Mon Sep 17 00:00:00 2001
From: Lorry Tar Creator
Date: Tue, 17 Feb 2015 17:25:57 +0000
Subject: Imported from /home/lorry/working-area/delta_berkeleydb/db-6.1.23.tar.gz.

---
 docs/programmer_reference/transapp_hotfail.html | 206 ++++++++++++------------
 1 file changed, 106 insertions(+), 100 deletions(-)

(limited to 'docs/programmer_reference/transapp_hotfail.html')

diff --git a/docs/programmer_reference/transapp_hotfail.html b/docs/programmer_reference/transapp_hotfail.html
index 6d4143dd..512a91a3 100644
--- a/docs/programmer_reference/transapp_hotfail.html
+++ b/docs/programmer_reference/transapp_hotfail.html
@@ -14,7 +14,7 @@

For some applications, it may be useful to periodically snapshot the database environment for use as a hot failover should the primary system fail. The following steps can be taken to keep a backup environment in close synchrony with an active environment. The active environment is entirely unaffected by these procedures, and both read and write operations are allowed during all steps described here.

The procedure described here is not compatible with the concurrent use of the transactional bulk insert optimization (transactions started with the DB_TXN_BULK flag). After the bulk optimization is used, the archive must be created again from scratch starting with step 1.
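For reference, the bulk transactions this caveat refers to are simply transactions begun with the DB_TXN_BULK flag. A minimal sketch is shown below; the wrapper function name is illustrative, and dbenv is assumed to be an already opened transactional environment handle.

    #include <db.h>

    /* Sketch only: begin a transaction with the bulk insert
     * optimization enabled.  If such transactions are used, the hot
     * failover archive must be rebuilt from scratch as noted above. */
    int begin_bulk_txn(DB_ENV *dbenv, DB_TXN **txnp)
    {
        /* DB_TXN_BULK is the flag the caveat above refers to. */
        return dbenv->txn_begin(dbenv, NULL, txnp, DB_TXN_BULK);
    }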

The db_hotbackup utility is the preferred way to automate generating a hot failover system. The first step is to run the db_hotbackup utility without the -u flag. This creates a hot backup copy of the databases in your environment. After that point, periodically running the db_hotbackup utility with the -u flag will copy the new log files and run recovery on the backup copy to bring it current with the primary environment.
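The same full-then-incremental cycle can also be driven from application code with the DB_ENV->backup() method mentioned in the note below. The following is only a sketch: it assumes dbenv is an already opened transactional environment handle, "/backup" is a placeholder for your backup directory, and DB_BACKUP_UPDATE is treated as the incremental counterpart of the -u flag. Running recovery on the backup copy to bring it current is not shown.

    #include <db.h>

    /* Sketch: full backup once, then periodic incremental updates.
     * "/backup" is a placeholder target directory. */
    int hot_backup_cycle(DB_ENV *dbenv, int initial)
    {
        if (initial)
            /* Initial pass: copy the databases and logs to the backup
             * directory (analogous to db_hotbackup without -u). */
            return dbenv->backup(dbenv, "/backup", 0);
        /* Later passes: copy only the new log files (analogous to
         * db_hotbackup -u); recovery must still be run on the backup
         * copy separately to bring it current. */
        return dbenv->backup(dbenv, "/backup", DB_BACKUP_UPDATE);
    }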

Note that you can also create your own hot backup solution using the DB_ENV->backup() or DB_ENV->dbbackup() methods.
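As a sketch of the second of these methods, DB_ENV->dbbackup() copies a single named database file into the backup directory. The database file name and target directory below are placeholders, and a flags value of 0 is assumed to be sufficient here.

    #include <db.h>

    /* Sketch: copy one database file into the backup directory using
     * the environment's hot-backup support.  "mydb.db" and "/backup"
     * are placeholder names. */
    int backup_one_database(DB_ENV *dbenv)
    {
        return dbenv->dbbackup(dbenv, "mydb.db", "/backup", 0);
    }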

To implement your own hot failover system, the steps below can be followed. However, care should be taken on non-UNIX based systems when copying the database files to be sure that they are quiescent, or that the DB_ENV->backup() or db_copy() routine is used to ensure atomic reads of the database pages.

  1. Run the db_archive utility with the -s option in the active environment to identify all of the active environment's database files, and copy them to the backup directory.

     If the database files are stored in a separate directory from the other Berkeley DB files, it will be simpler (and much faster!) to copy the directory itself instead of the individual files (see DB_ENV->add_data_dir() for additional information).

     Note

     If any of the database files did not have an open DB handle during the lifetime of the current log files, the db_archive utility will not list them in its output. This is another reason it may be simpler to use a separate database file directory and copy the entire directory instead of archiving only the files listed by the db_archive utility.

  2. Remove all existing log files from the backup directory.

  3. Run the db_archive utility with the -l option in the active environment to identify all of the active environment's log files, and copy them to the backup directory.

  4. Run the db_recover utility with the -c option in the backup directory to catastrophically recover the copied environment.

Steps 2, 3 and 4 may be repeated as often as you like. If Step 1 (the initial copy of the database files) is repeated, then Steps 2, 3 and 4 must be performed at least once in order to ensure a consistent database environment snapshot.
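The file lists that steps 1 and 3 obtain from the db_archive utility can also be produced from inside the application with the DB_ENV->log_archive() method. This is only a sketch of that alternative; copying the listed files to the backup directory is not shown and would still be done with your own copy code or one of the backup methods mentioned earlier.

    #include <db.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: list the files a hot failover snapshot needs, roughly
     * matching db_archive -s (database files) and db_archive -l
     * (log files).  Copying the files is not shown. */
    int list_snapshot_files(DB_ENV *dbenv)
    {
        char **list, **p;
        int ret;

        /* Database files, as with "db_archive -s". */
        if ((ret = dbenv->log_archive(dbenv, &list, DB_ARCH_DATA)) != 0)
            return ret;
        if (list != NULL) {
            for (p = list; *p != NULL; ++p)
                printf("database file: %s\n", *p);
            free(list);  /* Free the returned list when done. */
        }

        /* All log files, as with "db_archive -l". */
        if ((ret = dbenv->log_archive(dbenv, &list, DB_ARCH_LOG)) != 0)
            return ret;
        if (list != NULL) {
            for (p = list; *p != NULL; ++p)
                printf("log file: %s\n", *p);
            free(list);
        }
        return 0;
    }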

These procedures must be integrated with your other archival procedures, of course. If you are periodically removing log files from your active environment, you must be sure to copy them to the backup directory before removing them from the active directory. Not copying a log file to the backup directory and subsequently running recovery with it present may leave the backup snapshot of the environment corrupted. A simple way to ensure this never happens is to archive the log files in Step 2 as you remove them from the backup directory, and move inactive log files from your active environment into your backup directory (rather than copying them), in Step 3. The following steps describe this procedure in more detail:

  1. Run the db_archive utility with the -s option in the active environment to identify all of the active environment's database files, and copy them to the backup directory.

  2. Archive all existing log files from the backup directory, moving them to a backup device such as CD-ROM, alternate disk, or tape.

  3. Run the db_archive utility (without any option) in the active environment to identify all of the log files in the active environment that are no longer in use, and move them to the backup directory.

  4. Run the db_archive utility with the -l option in the active environment to identify all of the remaining log files in the active environment, and copy the log files to the backup directory.

  5. Run the db_recover utility with the -c option in the backup directory to catastrophically recover the copied environment.

As before, steps 2, 3, 4 and 5 may be repeated as often as you like. If Step 1 (the initial copy of the database files) is repeated, then Steps 2 through 5 must be performed at least once in order to ensure a consistent database environment snapshot.
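Finally, the catastrophic recovery that both procedures run with db_recover -c can also be performed by the application that opens the backup environment, by specifying DB_RECOVER_FATAL at open time. A minimal sketch follows; the backup directory path is a placeholder and most environment configuration is omitted.

    #include <db.h>

    /* Sketch: open the backup environment and run catastrophic
     * recovery on it, the programmatic counterpart of "db_recover -c".
     * "/backup" is a placeholder path. */
    int recover_backup_env(DB_ENV **dbenvp)
    {
        DB_ENV *dbenv;
        int ret;

        if ((ret = db_env_create(&dbenv, 0)) != 0)
            return ret;
        ret = dbenv->open(dbenv, "/backup",
            DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_MPOOL |
            DB_INIT_TXN | DB_RECOVER_FATAL, 0);
        if (ret != 0) {
            (void)dbenv->close(dbenv, 0);
            return ret;
        }
        *dbenvp = dbenv;
        return 0;
    }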