Upgrade from Release 2.0 to Release 3.0

Upgrading a store from release 2 to release 3 can be accomplished one Storage Node at a time because a mix of release 2 and release 3 Storage Nodes is permitted to run simultaneously in the same store. This allows you to upgrade Storage Nodes strategically, in the most efficient manner.

Note

Upgrading a 1.0 store directly to release 3 is not supported. You must upgrade your store from 1.0 to 2.0 before upgrading to release 3. For instructions on how to upgrade your 1.0 store, see Upgrade from NoSQL DB Release 1.0 to NoSQL DB Release 2.0.

Note

If your store contains more than a handful of Storage Nodes, you may want to perform your upgrade using a script. See Using a Script to Upgrade to Release 3 for more information.

To avoid potential problems, new CLI commands are available to identify when nodes can be upgraded at the same time. These commands are described in the following procedure.

To upgrade your store, start by installing the release 3 software on a Storage Node that is running an admin service. The new CLI commands require an updated admin service in order to function.

Do the following:

  1. On a Storage Node running a release 2 admin service:

    1. Place the updated software in a new KVHOME directory on a Storage Node running the admin service. The new KVHOME directory is referred to here as NEW_KVHOME. If nodes share this directory using NFS, this only needs to be done once for each shared directory.
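
      For example, if the release 3 distribution arrived as a gzipped tar file, you might unpack it into the directory that will serve as NEW_KVHOME. The path and file name below are placeholders only; substitute the package you actually downloaded and whatever location you use for installation software:

      cd /opt/oracle
      tar xvzf kv-3.x.y.tar.gz
      export NEW_KVHOME=/opt/oracle/kv-3.x.y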

    2. Stop the Storage Node using the release 2 CLI. Doing this also shuts down the admin service on that Storage Node.

      If you have configured the node to automatically start the Storage Node Agent on reboot using /etc/init.d, Upstart, or some other mechanism, first modify that script to point to NEW_KVHOME.
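
      How you modify the boot script depends entirely on the mechanism you use. As a rough sketch, if an /etc/init.d script sets a KVHOME variable, you might point it at the new directory like this (the script path, variable name, and new directory are assumptions for illustration only):

      # Point the boot script's KVHOME at the new release 3 directory
      sudo sed -i 's|^KVHOME=.*|KVHOME=/opt/oracle/kv-3.x.y|' /etc/init.d/kvstore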

      Once you have modified that script, shut down the Storage Node:

      java -Xmx256m -Xms256m \
      -jar KVHOME/lib/kvstore.jar stop -root <kvroot>
    3. Restart the Storage Node using the release 3 code:

      nohup java -Xmx256m -Xms256m \
      -jar NEW_KVHOME/lib/kvstore.jar start -root <kvroot>& 

      (If the system is configured to automatically restart the Storage Node Agent, this step may not be necessary.)
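
      If you want to confirm that the Storage Node Agent came back up under the release 3 software before proceeding, you can use the ping utility (the host name and port below are from this example deployment; substitute your own):

      java -Xmx256m -Xms256m \
      -jar NEW_KVHOME/lib/kvstore.jar ping -port 5000 -host node1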

    4. Use the CLI to connect to the Storage Node, which is now running the release 3 code:

      java -Xmx256m -Xms256m \
      -jar NEW_KVHOME/lib/kvstore.jar runadmin -port 5000 -host node1
      kv->
    5. Verify that all the Storage Nodes in the store are running the proper software level required to upgrade to release 3. Note that any patch release level of 2.0 or 2.1 meets the minimum software level requirements.

      kv-> verify prerequisite
      Verify: starting verification of mystore based upon topology 
      sequence #315
      300 partitions and 6 storage nodes. Version: 12.1.3.0.1 Time:
      2014-01-07 08:19:15 UTC
      See node1:<KVROOT>/mystore/log/mystore_{0..N}.log for progress 
      messages
      Verify prerequisite: Storage Node [sn3] on node3:5000
      Zone: [name=Boston id=zn1 type=PRIMARY]   Status: RUNNING   
      Ver: 12cR1.2.1.54 2013-11-11 12:09:35 UTC  Build id: 921c25300b5e
      
      ...
      
      Verification complete, no violations.  

      Note that only a partial sample of the verification command's output is shown here. The important part is the last line, which shows no violations.

      The most likely reason for a violation is that you are (accidentally) attempting a release-level downgrade. For example, it is illegal to downgrade from a higher minor release to a lower minor release. This can happen simply because you are running the CLI from a package at a minor release level that is lower than the release level of other nodes in the store.

      Note

      It is legal to downgrade from a higher patch level to a lower patch level. So, for example, downgrading from 2.1.4 to 2.1.3 is legal, while downgrading from 2.1.3 to 2.0.39 is not.

      Also, a violation will occur if you attempt to upgrade 1.0 nodes directly to release 3. When upgrading a 1.0 store, you must first upgrade to 2.0, and then upgrade to release 3. For more information on upgrading a 1.0 store, see Upgrade from NoSQL DB Release 1.0 to NoSQL DB Release 2.0.

      In any case, if the verify prerequisite command shows violations, resolve the situation before you attempt to upgrade the identified nodes.

    6. Obtain an ordered list of the nodes to upgrade.

      kv-> show upgrade-order
      sn3 sn4
      sn2 sn5
      sn6

      The Storage Nodes grouped together on a single line should be upgraded together. Therefore, for this output, you would first upgrade sn3 and sn4, then upgrade sn2 and sn5, and finally upgrade sn6.

      Note that you must completely upgrade a group of nodes before continuing to the next group. That is, upgrade sn3 and sn4 before you proceed to upgrading sn2, sn5, or sn6.

  2. For each of the Storage Nodes in the first group of Storage Nodes to upgrade (sn3 and sn4, in this example):

    1. Place the release 3 software in a new KVHOME directory. The new KVHOME directory is referred to here as NEW_KVHOME. If nodes share this directory using NFS, this only needs to be done once for each shared directory.

    2. Stop the Storage Node using the release 2 utility.

      If you have configured the node to automatically start the Storage Node Agent on reboot using /etc/init.d, Upstart, or some other mechanism, first modify that script to point to NEW_KVHOME.

      Once you have modified that script, shut down the Storage Node using the old code:

      java -Xmx256m -Xms256m \
      -jar KVHOME/lib/kvstore.jar stop -root <kvroot>
    3. Restart the Storage Node using the new code:

      nohup java -Xmx256m -Xms256m \
      -jar NEW_KVHOME/lib/kvstore.jar start -root <kvroot>& 

      (If the system is configured to automatically restart the Storage Node Agent, this step may not be necessary.)
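
      As with the first Storage Node, you can optionally confirm that each node in the group restarted under the new software before verifying the upgrade. For example, assuming sn3 and sn4 are hosted on node3 and node4 and use port 5000 as in this example deployment:

      for host in node3 node4; do
        java -jar NEW_KVHOME/lib/kvstore.jar ping -port 5000 -host $host
      done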

  3. Verify the upgrade before upgrading your next set of nodes. This command shows which nodes have been successfully upgraded, and which nodes still need to be upgraded:

    kv-> verify upgrade
    Verify: starting verification of mystore based upon topology 
    sequence #315
    300 partitions and 6 storage nodes. Version: 12.1.3.0.1 Time:  ....
    See node1:<KVROOT>/mystore/log/mystore_{0..N}.log for progress 
    messages
    Verify upgrade: Storage Node [sn3] on node3:5000    
    Zone: [name=Boston id=zn1 type=PRIMARY]    Status: RUNNING   
    Ver: 12cR1.3.0.1 2013-12-18 06:35:02 UTC  Build id: 8e70b50c0b0e
    
    ...
    
    Verify: sn2: Node needs to be upgraded from 12.1.2.1.54 to 
    version 12.1.3.0.0 or newer
    
    ...
    
    Verification complete, 0 violations, 3 notes found.
    Verification note: [sn2]    Node needs to be upgraded from 
    12.1.2.1.54 to version 12.1.3.0.0 or newer
    Verification note: [sn5]    Node needs to be upgraded from 
    12.1.2.1.54 to version 12.1.3.0.0 or newer
    Verification note: [sn6]    Node needs to be upgraded from 
    12.1.2.1.54 to version 12.1.3.0.0 or newer 

    For brevity, we show only part of the output generated by the verify upgrade command. Those nodes that have been upgraded are identified with a verification message that includes the current software version number:

     Verify upgrade: Storage Node [sn3] on node3:5000    
    Zone: [name=Boston id=zn1 type=PRIMARY]    
    Status: RUNNING   
    Ver: 12cR1.3.0.1 2013-12-18 06:35:02 UTC  Build id: 8e70b50c0b0e

    Those nodes which still need to be upgraded are identified in two different ways. First, the verification message for the node indicates that an upgrade is still necessary:

    Verify: sn2: Node needs to be upgraded from 12.1.2.1.54 to
    version 12.1.3.0.0 or newer 

    Second, the very end of the verification output identifies all the nodes that still need to be upgraded:

    Verification complete, 0 violations, 3 notes found.
    Verification note: [sn2]    Node needs to be upgraded from
    12.1.2.1.54 to version 12.1.3.0.0 or newer
    Verification note: [sn5]    Node needs to be upgraded from
    12.1.2.1.54 to version 12.1.3.0.0 or newer
    Verification note: [sn6]    Node needs to be upgraded from
    12.1.2.1.54 to version 12.1.3.0.0 or newer 

    Note

    If the verification shows nodes that you thought were upgraded as still needing an upgrade, you must resolve that problem before upgrading the other nodes in your store. As a sanity check, you can verify only the nodes you just finished upgrading:

    kv-> verify upgrade -sn sn3 -sn sn4
    Verify: starting verification of mystore based upon topology 
    sequence #315
    ...
    Verification complete, no violations.
    
  4. Continue upgrading groups of Storage Nodes, in the order identified by the show upgrade-order command, following the procedure outlined above: stop each release 2 Storage Node using the release 2 stop command, then restart it using the release 3 start command. Continue doing this until all Storage Nodes have been upgraded.
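
    As a rough illustration of that repetition for a single group, the loop below stops and restarts each node in the group over ssh. The host names, root directory, and software paths are placeholders for your own values; see Using a Script to Upgrade to Release 3 for a more complete, scriptable approach:

    for host in node2 node5; do
      # Stop the SNA using the old release, then restart it with the new one.
      # Replace KVHOME, NEW_KVHOME, and /var/kvroot with the actual paths on each host.
      ssh $host "java -jar KVHOME/lib/kvstore.jar stop -root /var/kvroot"
      ssh $host "nohup java -jar NEW_KVHOME/lib/kvstore.jar start -root /var/kvroot > /dev/null 2>&1 &"
    done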

    If at some point you lose track of which group of nodes should be upgraded next, you can always run the show upgrade-order command again:

    kv-> show upgrade-order
    Calculating upgrade order, target version: 12.1.3.0.1,
    prerequisite: 11.2.2.0.23
    sn2 sn5
    sn6 
  5. When you have finished upgrading all of your Storage Nodes, the verify upgrade command shows no verification notes at the end of its output:

    kv-> verify upgrade
    Verify: starting verification of mystore based upon topology 
    sequence #315
    ...
    Verification complete, no violations.
    kv-> 

Using a Script to Upgrade to Release 3

For any deployment with more than a handful of Storage Nodes, the manual upgrade procedure described above quickly becomes impractical. In that case, you should upgrade your store using a script.

An example script (bash shell script) is available for you to examine in the release 3 distribution. It can be found here:

<KVHOME>/examples/upgrade/onlineUpgrade

This script has the same upgrade restrictions as described earlier in this section: it only upgrades a release 2 installation to release 3, and your store must have a replication factor of at least 3 in order to remain available during the upgrade process.

The provided script is an example only. It must be modified in order for it to properly function for your installation.

Note that the script does not perform any software provisioning for you. This means you are responsible for placing the release 3 package on your host machines in whatever location you are using for your installation software. That said, the script communicates with your host machines using ssh, so you could potentially enhance the script to provision your machines using scp.
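
For example, copying and unpacking the package on one host might look like the following (the host name, package file name, and destination directory are placeholders for your own installation):

scp kv-3.x.y.tar.gz node2:/opt/oracle/
ssh node2 "cd /opt/oracle && tar xvzf kv-3.x.y.tar.gz"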

Because the script uses ssh, you must configure your machines to allow automatic login (that is, login over ssh without a password) in order for the script to function. ssh supports public/private key authentication, so this is generally a secure way to operate.
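
For example, to enable key-based login from the machine where you run the script to each Storage Node host, you would typically generate a key pair and copy the public key to each host (the user and host names below are placeholders):

ssh-keygen -t rsa            # accept the defaults; an empty passphrase allows unattended login
ssh-copy-id youruser@node2   # repeat for each Storage Node host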

For information on how to configure ssh in this way, see http://www.linuxproblem.org/art_9.html. For information on how to install and configure ssh and the ssh server, see your operating system's documentation.