Chapter 7. Administrative Procedures

Table of Contents

Backing Up the Store
Taking a Snapshot
Snapshot Management
Recovering the Store
Using the Load Program
Restoring Directly from a Snapshot
Managing Avro Schema
Adding Schema
Changing Schema
Disabling and Enabling Schema
Showing Schema
Replacing a Failed Storage Node
Replacing a Failed Disk
Repairing a Failed Zone
Addressing Lost Admin Service Quorum
Verifying the Store
Monitoring the Store
Events
Setting Store Parameters
Changing Parameters
Setting Store Wide Policy Parameters
Admin Parameters
Storage Node Parameters
Replication Node Parameters
Security Parameters
Admin Restart
Replication Node Restart
Removing an Oracle NoSQL Database Deployment
Fixing Incorrect Storage Node HA Port Ranges

This chapter contains procedures that may be generally useful to the Oracle NoSQL Database administrator.

Note

Oracle NoSQL Database Storage Nodes and Admins make use of an embedded database (Oracle Berkeley DB, Java Edition). You should never directly manipulate the files maintained by this database. In general, it is a bad idea to move, delete, or modify the files and directories located under KVROOT unless you are asked to do so by Oracle Customer Support. In particular, never move or delete any file ending with a .jdb suffix. These files are all found in an env directory somewhere under KVROOT.

Backing Up the Store

To back up the KVStore, you take snapshots of nodes in the store and copy the resulting snapshots to a safe location. Note that the distributed nature and scale of Oracle NoSQL Database make it unlikely that a single machine can hold the backup for the entire store. These instructions do not address where and how snapshots are stored.
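
The procedures below use the administrative command line interface (CLI). As a reminder, the CLI can be started with the runadmin command; the host name and port shown here are placeholders that you should replace with the values for your own deployment:

java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar runadmin -port 5000 -host node01
kv->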

Taking a Snapshot

A snapshot provides consistency across all records within the same shard, but not across partitions in independent shards. The underlying snapshot operations are performed in parallel to the extent possible in order to minimize any potential inconsistencies.

To take a snapshot from the admin CLI, use the snapshot create command:

kv-> snapshot create -name <snapshot name>

Using the snapshot commands, you can create or remove a named snapshot. (The name of the snapshot is provided using the -name parameter.) You can also remove all snapshots currently stored in the store.

For example, to create and remove a snapshot:

kv-> snapshot create -name thursday
Created snapshot named 110915-153514-thursday on all 3 nodes
kv-> snapshot remove -name 110915-153514-thursday
Removed snapshot 110915-153514-thursday 

You can also remove all snapshots currently stored in the store:

kv-> snapshot create -name thursday
Created snapshot named 110915-153700-thursday on all 3 nodes
kv-> snapshot create -name later
Created snapshot named 110915-153710-later on all 3 nodes
kv-> snapshot remove -all
Removed all snapshots

Note

Snapshots should not be taken while any configuration (topological) changes are being made, because the snapshot might be inconsistent and not usable. At the time of the snapshot, use ping and then save the information that identifies masters for later use during a load or restore. For more information, see Snapshot Management.

Snapshot Management

When you run a snapshot, data is collected from every Replication Node in the system, including both masters and replicas. If the snapshot cannot be completed on at least one of the nodes in each shard, the operation fails.

If you decide to create an off-store copy of the snapshot, you should copy the snapshot data for only one of the nodes in each shard. If possible, copy the snapshot data taken from the node that was serving as the master at the time the snapshot was taken.

At the time of the snapshot, you can identify which nodes are currently running as masters using the ping command. There is a master for each shard in the store, identified by the keyword MASTER. In the following example, replication node rg1-rn1, running on Storage Node sn1, is the current master:

java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar ping -port 5000 -host node01
Pinging components of store mystore based upon topology sequence #107
Time: 2013-12-18 21:07:44 UTC
mystore comprises 300 partitions on 3 Storage Nodes
Storage Node [sn1] on node01:5000  
Zone: [name=Boston, id=zn1 type=PRIMARY]
Status: RUNNING   Ver: 12cR1.3.0.1 2013-12-18 06:35:02 UTC  
Build id: 8e70b50c0b0e
    Rep Node [rg1-rn1] Status: RUNNING,MASTER at sequence number: 31
haPort: 5011
Storage Node [sn2] on node02:5000  
Zone: [name=Boston, id=zn1 type=PRIMARY]
Status: RUNNING   Ver: 12cR1.3.0.1 2013-12-18 06:35:02 UTC  
Build id: 8e70b50c0b0e
    Rep Node [rg1-rn2] Status: RUNNING,REPLICA at sequence number: 31
haPort: 5011
Storage Node [sn3] on node03:5000  
Zone: [name=Boston, id=zn1 type=PRIMARY]
Status: RUNNING   Ver: 12cR1.3.0.1 2013-12-18 06:35:02 UTC
Build id: 8e70b50c0b0e
    Rep Node [rg1-rn3] Status: RUNNING,REPLICA at sequence number: 31
haPort: 5011

You should save the above information and associate it with the respective snapshot, for later use during a load or restore.
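
One simple way to preserve this information is to redirect the ping output to a file whose name includes the snapshot name. The file name below is only an illustration:

java -Xmx256m -Xms256m \
-jar KVHOME/lib/kvstore.jar ping -port 5000 -host node01 \
> snapshot-110915-153828-later.ping.txt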

Note

Snapshots include the admin database. Depending on how the store might need to be restored, the admin database may or may not be useful.

Snapshot data for the local Storage Node is stored in a directory inside the KVROOT directory. For each Storage Node in the store, there is a directory named:

KVROOT/<store>/<SN>/<resource>/snapshots/<snapshot_name>

where:

  • <store> is the name of the store.

  • <SN> is the name of the Storage Node.

  • <resource> is the name of the resource running on the Storage Node. Typically, this is the name of a Replication Node.

  • <snapshot_name> is the name of the snapshot.

Snapshot data consists of a number of files, all of which are important. For example:

 > ls /var/kvroot/mystore/sn1/rg1-rn1/snapshots/110915-153828-later
00000000.jdb 00000002.jdb 00000004.jdb 00000006.jdb
00000001.jdb 00000003.jdb 00000005.jdb 00000007.jdb
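
For instance, to create the off-store copy described above, you might archive the entire snapshot directory for the shard's master node and copy it to backup storage. The backup host and destination path here are assumptions, and any equivalent copy mechanism (such as rsync) works as well:

tar czf 110915-153828-later-rg1-rn1.tar.gz \
    -C /var/kvroot/mystore/sn1/rg1-rn1/snapshots 110915-153828-later
scp 110915-153828-later-rg1-rn1.tar.gz backuphost:/backups/mystore/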