Oracle NoSQL Database Change Log
Release 12cR18.104.22.168 Enterprise Edition
This release of Oracle NoSQL Database adds several new features,
including user/password authentication and network security, secondary
zones, and support for typed data and a tabular data model, which adds
the ability to define secondary indexes on fields in a table. The
table interface adds a new client API to access tables, indexes, and
data types, along with a CLI to manage these new constructs.
Upgrading an existing store to release 3.0 requires that the store be
running with a 2.0 release. If you want to use release 3.0 with a store
created prior to the 2.0 release, be sure to upgrade the store to a
2.0 release before upgrading it to the 3.0 release. Once a store has
been upgraded to release 3.0, it cannot be downgraded to an earlier
release. See the section
Updating an Existing Oracle NoSQL Database Deployment in the Admin
Guide.
Release 3.0 is compatible with Java SE 7 and later, and has
been tested and certified against Oracle JDK 7u51. We encourage you to
upgrade to the latest Java releases to take advantage of the latest bug
fixes and performance improvements.
Attempting to use this release with a version of Java earlier than Java
7 will produce an error message similar to:
Exception in thread "main" java.lang.UnsupportedClassVersionError:
oracle/kv/impl/util/KVStoreMain : Unsupported major.minor version 51.0
Changes in 12cR22.214.171.124 Enterprise Edition
When using the Table API, it is now possible to create indices on
fields in records, maps, and arrays. [#23091]
The integration of Oracle NoSQL Database with Oracle Coherence has been updated to
support Coherence 12c (12.1.2). As of Coherence 12.1.2, cache configuration
parameters are specified within a custom XML namespace and are processed by the
NoSQL Database namespace handler at runtime. Though it's possible to use this
updated module with Coherence version 3.7.1, we highly recommend that you upgrade
Coherence to the latest version. Please see the javadoc for the
oracle.kv.coherence package for information on how to configure a NoSQL Database
backed cache with Coherence 12.1.2, or the earlier Coherence 3.7.1. [#23350]
It is now possible to use Oracle External Tables to access Oracle NoSQL Database tables created with the Table API.
In addition to the usual required properties, users need to specify the
table name in the external table configuration file.
Please see KVHOME/examples/externaltables/cookbook.html for details. [#23605]
Added a new "size" option to the Admin CLI table command, to estimate
the in-memory size of the given table. The results of the size command
can be used as inputs when planning resource requirements for a store.
Oracle Enterprise Manager can now monitor instances of Oracle NoSQL Database.
For more information, please see
Integrating Oracle Enterprise Manager with Oracle NoSQL Database in the Admin Guide.
Added more sanity checking and improved error messages for the
securityconfig add/remove-security commands. [#23311]
Fixed a bug where specifying an invalid value for the Storage Node
parameter "mgmtClass" could cause a crash in the Admin service.
Modified the oracle.kv.RequestLimitConfig constructor to improve
bounds checking and correct problems with integer overflow. [#23244]
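For reference, a minimal sketch of configuring a request limit on a client; the store name and helper host:port below are hypothetical:

import oracle.kv.KVStoreConfig;
import oracle.kv.RequestLimitConfig;

public class RequestLimitExample {
    public static void main(String[] args) {
        KVStoreConfig config = new KVStoreConfig("mystore", "node01:5000");
        // Arguments: maxActiveRequests, requestThresholdPercent, nodeLimitPercent.
        // Out-of-range values now fail fast in the constructor.
        config.setRequestLimit(new RequestLimitConfig(100, 90, 80));
    }
}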
Fixed a bug that sometimes caused Admin parameters not to take effect
until the Admin's hosting SNA was rebooted. [#23429]
Added a new attribute (String replicationState) in the RepNode MBean
presented via JMX, to indicate the state of the RepNode's membership
in its replication group. Typically the value will show "MASTER" or
"REPLICA", but it can also report "DETACHED" or "UNKNOWN". This same
value is reported via SNMP in the repNodeReplicationState object, as
defined in nosql.mib. [#23459]
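As an illustration, the attribute can be read with the standard JMX remote API. The service URL and MBean ObjectName below are assumptions for illustration only; consult nosql.mib and the javadoc for the exact names used by your deployment:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RepNodeStateCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical JMX address of a Storage Node.
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://node01:5000/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Assumed ObjectName and attribute casing; verify against your release.
            ObjectName rn = new ObjectName("Oracle NoSQL Database:type=RepNode");
            Object state = mbs.getAttribute(rn, "ReplicationState");
            System.out.println("Replication state: " + state); // e.g. MASTER, REPLICA
        }
    }
}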
Modified the Durability, RequestLimitConfig, and Version classes to
implement Serializable to permit serializing instances of
oracle.kv.KVStoreConfig, which was already serializable. [#23474]
Fixed a bug where an operation using the Table API might see a
SecondaryIntegrityException if there is a replication node failover
while a secondary index is being populated. [#23520]
Prior to this release, it was not possible to use the Load utility to create
a new store from snapshot files that had been taken against a store
with security enabled, or a store that was using the Table API. This
has been fixed. The -security, -username, -load-admin, and -force
flags were added to the Load utility for use in this case. See the
Administrator's Guide for more information. [#23528]
Fixed a bug where invoking the "history" command in the Admin CLI with
the "-last" option and a value that is greater than the total number
of commands executed in the store could result in an unexpected exception.
Fixed a bug where an Admin service might exit with the following
exception. Before the fix, administrative functionality would
seamlessly fail over to another admin service, but the process exit
was unnecessary and would show up as an alertable event.
Transaction -XXX cannot execute write operations because this node is no longer a master
Fixed a bug in the Table API so that enum fields may have names that begin
with an underscore.
In rare cases, when a store has been deployed with Storage Nodes with
capacity > 1 and there are concurrent delete operations, table iteration operations,
and a transfer of mastership roles in a shard, it could be possible
for the iteration operation to incorrectly skip a value that should
have been returned by the iterator. This has been fixed. [#23608]
Fixed a GC configuration issue that could cause the CMS phase of the Java GC to
run repeatedly, consuming CPU resources on an otherwise idle RepNode. The fix
changed the default JVM CMSInitiatingOccupancyFraction from 77 to
80. Our testing indicates that this is a better configuration under a broad
range of application access patterns. However, if you need to override this new
configuration in some unusual circumstance, you can use the Admin's
change-policy command and, if it's an existing store, the plan
change-parameter command, as below:
change-policy -params "javaMiscParams=-XX:CMSInitiatingOccupancyFraction=77"
plan change-parameters -all-rns -params "javaMiscParams=-XX:CMSInitiatingOccupancyFraction=77"
Fixed the following bugs, which could occur in a store with security enabled.
The following error could be reported after a deploy-topology command
which deploys a new replication node in a security enabled store:
Task 23/DeployNewRN on sn1(slc06tyu:5000) ended in state ERROR
oracle.kv.impl.fault.RNUnavailableException: Security metadata
database is not opened yet.
The following exception could be seen after an elasticity change in a
security enabled store: [#23703]
Insufficient access rights : client host: xx.xxx.xxx.xx:
attempt to call RepNodeAdmin.updateMetadata(MetadataInfo,AuthContext,short)
The following problem might be logged after an elasticity change in a
security enabled store: [#23704]
ProcessMonitor: at oracle.kv.impl.api.rgstate.RepNodeState$ReqHandlerRef.resolve(RepNodeState.java:649)
ProcessMonitor: at oracle.kv.impl.api.rgstate.RepNodeState$ReqHandlerRef.get(RepNodeState.java:709)
Fixed a bug where application requests might unnecessarily time out
for a brief period directly after an elasticity change. [#23705]
The version of the Oracle Coherence library bundled with Oracle NoSQL
Database has been upgraded to the more recent Coherence 12.1.2. This
requires a change in the way cache configuration parameters are
specified for the NoSQL Database backed cache.
New documentation has been added on how to use the Large Object
API. See the index page, and "Oracle NoSQL Database Large Object API".
Changes in 12cR126.96.36.199
Modified the administrative CLI to save its command line history to a
file so that it is available after restart. If you want to disable this
feature, the following Java property should be set while running the CLI:
java -Doracle.kv.shell.jline.disable=true -jar KVHOME/kvstore.jar runadmin -host <hostname> -port <port>
The CLI attempts to save the history in a
file, which is created and opened automatically. By default, 500 lines
of history are saved. If the history file cannot be opened, it will fail
silently and the CLI will run without saved history.
The default history file path can be overridden by setting the
oracle.kv.shell.history.file=<path> Java property.
The default number of lines to save to the file can be modified by
setting a similar Java property.
Modified the admin CLI aggregate command to provide
subcommands for tables and key/value entries. The
table subcommand performs simple data aggregation operations on
numeric fields of a table, while the kv
subcommand performs aggregation operations on keys. [#23258]
Modified the implementation of index iterators to use weak references so
that the garbage collector can remove the resources associated with
unused index iterators. [#23306]
Improved the handling of metadata propagation and other
internal operations. [#23355] [#23368] [#23385]
Modified the external tables integration to distribute the concurrent
processing load more evenly across processes. [#23363]
Fixed a problem where a failure during a partition migration, performed
during a topology redistribution for a store that has indexes, resulted
in SecondaryIntegrityException being thrown when the
migration was restarted. [#23392]
The FieldRange.setEndDate method has been deprecated in favor of the
setEnd method. [#23399]
Modified schema evolution to prevent changes that could resurrect an old
field using a different type. Such a change would cause old data to
become unreadable by the current table. This fix prevents resurrection
of a field name unless its definition exactly matches the previous one.
Fixed a problem with handling network timeouts that could result in a
FaultException, rather than a RequestTimeoutException, being thrown
from KVStore operations when a timeout occurs. Here's a
sample stack trace:
Caused by: oracle.kv.FaultException: Problem during unmarshalling (188.8.131.52.24)
Fault class name: java.rmi.UnmarshalException
... 41 more
Caused by: java.rmi.UnmarshalException: Error unmarshaling return header; nested exception is:
java.net.SocketException: Socket closed
at com.sun.proxy.$Proxy21.execute(Unknown Source)
... 46 more
Caused by: java.net.SocketException: Socket closed
... 52 more
Modified table iteration to optimize performance when all matching
entries fall within a single shard. [#23412]
Fixed an issue where the index scan iterator would fail to return
records from a shard if there was a record that compared equal to a
record from another shard. This situation occurred when there was more
than one shard in a store and there were equivalent index entries for a
given index in both shards. The symptom was index iteration returning
fewer rows than expected. [#23421]
Added several methods to the table package:
- List<String> IndexKey.getFields() returns the fields used to define the index.
- List<String> PrimaryKey.getFields() returns the fields used to define the primary key.
- List<String> RecordValue.getFields() returns the fields used to define the record, in declaration order.
- Map<String, FieldValue> MapValue.getFields() returns an immutable view of the map.
The related javadoc was also updated to indicate that the lists and
maps returned from these, and similar interfaces, are immutable.
The data Command Line Interface (CLI) has a method to input table rows
from a file with a JSON representation. This input method had an
issue where a blank line could cause an infinite loop in the input
path. This has been fixed in a way that silently skips blank lines
as well as comment lines (those whose first non-whitespace
character is "#").
Fixed handling of null values in indexed fields and in IndexKey.
Previously, a null value in an indexed field could cause a server side
exception. During a put, null values in indexed fields will result in
no index entries for indexes in which that field participates. Further,
null values are not allowed in IndexKey instances.
IllegalArgumentException is thrown if an attempt is made to set a null
value in an IndexKey. [#23588]
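A minimal sketch of the new behavior, assuming a hypothetical store "mystore" with a table "users" and an index "lastNameIdx" on the nullable field "lastName":

import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.table.IndexKey;
import oracle.kv.table.Row;
import oracle.kv.table.Table;
import oracle.kv.table.TableAPI;

public class NullIndexFieldExample {
    public static void main(String[] args) {
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("mystore", "node01:5000"));
        TableAPI tableAPI = store.getTableAPI();
        Table table = tableAPI.getTable("users");

        // A null in an indexed field is accepted on put; the row simply gets
        // no entry in indexes that use that field.
        Row row = table.createRow();
        row.put("id", 1);
        row.putNull("lastName");
        tableAPI.put(row, null, null);

        // Setting a null in an IndexKey now throws IllegalArgumentException
        // on the client rather than failing on the server.
        IndexKey ikey = table.getIndex("lastNameIdx").createIndexKey();
        try {
            ikey.putNull("lastName");
        } catch (IllegalArgumentException expected) {
            System.out.println("Nulls are rejected in IndexKey: " + expected);
        }
        store.close();
    }
}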
Modified the Admin Service to listen on all interfaces on a host. This
change permits deployment of KVStore in heterogeneous network
environments, where a hostname may be resolved to different IP
addresses to make the best possible use of the available network
resources.
Changes in 12cR184.108.40.206
A new client interface has been added that includes a set of
datatypes and a tabular data model using those types. The tabular
data model is used to provide support for secondary indexes, which
are defined on fields in a table. The model is discussed in the
Getting Started Guide for Tables.
Tables and indexes are defined using the administrative CLI and
accessed via a programmatic API. The data CLI has been enhanced to
perform operations on tables and indexes as well. The API is
documented in the Oracle NoSQL Database javadoc, and resides
primarily in the oracle.kv.table package.
It is possible to define tables that overlay data created with NoSQL
DB Release 2 if that data was created using a conforming Avro
schema. This overlay is required in order to create secondary
indexes on conforming Release 2 data.
The existing key/value interface remains available.
It is now possible to define secondary indexes for records. See the previous
changelog entry about tables. Index entries for a given record have
transactional consistency with their corresponding primary records.
Index iteration operations are part of the new table API. Index scan
operations allow applications to iterate over raw indexes in 3
ways -- forward order, reverse order, and unordered. It is possible
to define exact match and range scans in indexes. Indexes can be on
single fields or defined as composite indexes on multiple fields in a table.
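As an illustration, an exact-match index scan might look like the following sketch; the store, table, and index names are hypothetical:

import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.table.IndexKey;
import oracle.kv.table.Row;
import oracle.kv.table.Table;
import oracle.kv.table.TableAPI;
import oracle.kv.table.TableIterator;

public class IndexScanExample {
    public static void main(String[] args) {
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("mystore", "node01:5000"));
        TableAPI tableAPI = store.getTableAPI();
        Table table = tableAPI.getTable("users");

        // Exact-match scan: only rows whose indexed field equals "Smith".
        IndexKey ikey = table.getIndex("lastNameIdx").createIndexKey();
        ikey.put("lastName", "Smith");
        TableIterator<Row> iter = tableAPI.tableIterator(ikey, null, null);
        try {
            while (iter.hasNext()) {
                System.out.println(iter.next().toJsonString(true));
            }
        } finally {
            iter.close(); // releases resources held by the iteration
        }
        store.close();
    }
}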
Support for username/password authentication and secure network communications
has been added. Existing applications that do not require this feature are not
impacted, except for a change to makebootconfig, which adds a new required
parameter (-store-security). Users that wish to use the new
capabilities should be aware that several areas of the product have changed.
Users should also familiarize themselves with security property files, which
are required when using a KVStore command-line utility program against a secure
store, and which may also be useful when running an application against a secure
store.
This feature is described in much greater depth in the Oracle NoSQL Database Security Guide, as well as in the Administrator's Guide and product javadoc.
The administrative CLI has been modified to use new terminology to refer
to data centers. Data centers are now called zones. The new
terminology is meant to clarify that these node groupings may not always
coincide with physical data centers. A zone is a collection of nodes
that have good network connectivity with each other and have some level
of physical separation from nodes in other zones. That physical
separation may mean that different zones are located in different
physical data center buildings, but could also represent different
floors, rooms, pods, or racks, depending on the particular deployment.
Commands that contained the word "datacenter" have been deprecated, and
are replaced with commands using the word "zone". The previous commands
will continue to work in this release.
Command flags that specify a zone have been changed to -zn,
for a zone ID, and -znname, for a zone name. The
-dc and -dcname flags have been
deprecated but will continue to work in this release. In addition, zone
IDs can now be specified using the "zn" prefix, with the earlier "dc"
prefix still currently supported.
The administrative GUI has also been modified to use the new Zone terminology.
There are now two types of zones. Primary zones contain
electable nodes, which can serve as masters or replicas, and vote in
master elections. All zones (or data centers) created in earlier
releases are primary zones, and new zones are created as primary zones
by default. Secondary zones contain nodes of the
new secondary node type, which can only serve as replicas and do
not vote in master elections. Secondary zones can be used to make a
copy of the data available at a distant location, or to maintain an
extra copy of the data to increase redundancy or read capacity. [#22483]
The show plan command now provides an estimated migration completion time. For example:
Plan Deploy Topo (12)
Attempt number: 1
Started: 2014-01-14 17:35:09 UTC
Ended: 2014-01-14 17:35:27 UTC
Total tasks: 27
3 partition migrations queued
1 partition migrations running
11 partition migrations succeeded, avg migration time = 550164 ms.
Estimated completion: 2014-01-14 19:57:37 UTC
A new read consistency option has been added for this release.
oracle.kv.Consistency.NONE_REQUIRED_NO_MASTER can now be
used to specify that the desired read operations must always be serviced
by a replica, never the master. For read-heavy applications (e.g. analytics),
it may be desirable to isolate read requests so that they are performed
only on replicas, never a master, reducing the load on the master. The
preferred mechanism for achieving this sort of read isolation is the new
secondary zone feature, which users are encouraged to employ for this
purpose. But for cases where the use of secondary zones is not desired,
this new consistency option can be used to achieve a similar effect,
without the additional resources that secondary zones may require. [#22338]
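A minimal sketch of a replica-only read using the new option; the store name and key are hypothetical:

import oracle.kv.Consistency;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.ValueVersion;

public class ReplicaOnlyRead {
    public static void main(String[] args) {
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("mystore", "node01:5000"));
        Key key = Key.createKey("user", "42");
        // NONE_REQUIRED_NO_MASTER routes the read to a replica, never the
        // master; 0/null selects the default request timeout.
        ValueVersion vv = store.get(
            key, Consistency.NONE_REQUIRED_NO_MASTER, 0, null);
        if (vv != null) {
            System.out.println(new String(vv.getValue().getValue()));
        }
        store.close();
    }
}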
New methods have been added that make it possible to require that read
operations only be performed on nodes located in the specified zones.
The show plans command has been changed so that a range of plan
history can be specified. With no arguments, show plans now
displays only the ten most recently created plans, but new arguments
can be used to select ranges by creation time and by plan id. Issue
the command "show plans -help" to see the complete set of options.
The makebootconfig utility has a new optional
command-line argument, which allows the SNA to force the start of a bootstrap
admin even if the value of -admin is set to 0.
The -port flag of plan deploy-admin within the admin CLI
has been changed to control the start of the admin web service: no
admin web service will be started if -port is set to
0 when deploying.
Users can also change the HTTP port of an admin after deployment, via the
plan change-parameters command of the admin CLI, to change the setting for
whether an admin runs a web server. [#22344]
The plan change-parameters command has been changed to allow changing the
parameters for a single admin service. [#22244]
NoSQL topology information is stored both in the Admin services and on
Storage Nodes, and can become inconsistent if topology changing plans
such as deploy-topology and migrate-sn are canceled before
completion. Inconsistencies can be repaired by redeploying the target
topology. In this release, a "plan repair-topology" command is also provided
as an additional way of repairing topology inconsistencies. The verify
configuration command now generates recommendations for when it may be
beneficial to use repair-topology. [#22753]
The makebootconfig command now prints a message when it declines to
overwrite existing configuration files. [#23012]
The "plan remove-admin" now permits removal of an Admin that is hosted
by Storage Node that is not running. [#23061]
Fixed a bug that sometimes caused a duplication of the admin section
in a Storage Node's config.xml file. As a result, the "plan
change-parameters" command, when applied to an Admin service with this
configuration irregularity, could unexpectedly have no effect. The
bug could be provoked by attempting to deploy an Admin that is already
deployed; but it could also happen when re-executing a failed
"plan migrate-storagenode" command. [#23152]
Fixed a problem that caused storage directory settings to be ignored
when creating new replication nodes. [#23161]
Previously, when there was no activity during a RepNode's
metrics-gathering period (the statsInterval), the previous
period's metric values would be reported via JMX and SNMP. This
behavior has changed so that the metrics are updated at every interval.
NoSQL DB automatically adjusts mastership identity so that master
nodes are distributed across a store for optimal performance.
Fixed a problem that prevented Master Balancing from being performed
across multiple zones. [#22857]
Modified the LOB implementation to repeat calls to
InputStream.skip as needed to position the input stream to
the start location, so long as the calls return non-zero values.
An IllegalArgumentException will be thrown if the calls do
not advance the stream to the required start location.
The administrative and data command line interfaces (CLI) have been
merged into a single program. The usage of the merged CLI is
compatible with most old usage but has additional options that allow
it to work for administrative operations, data operations, or both.
This change requires the use of kvstore.jar for data operations where
in previous releases, the data CLI only required kvcli.jar, which
depended on kvclient.jar.
The CLI has been enhanced with commands necessary to manage tables,
indexes, security information, and zones.
With the introduction of the tabular data model and secondary
indexes, a new
Getting Started with the Table API guide has been added.
With the introduction of the new security features, a new Security
Guide has been added.
The versions of the Avro and Jackson libraries bundled with Oracle
NoSQL Database have been upgraded to the more recent Avro 1.7.6 and
Jackson 1.9.3. These versions are compatible with the previous API versions.
Changes in 12cR220.127.116.11
The new method
KVStore.appendLOB() now permits appending to an
existing LOB (Large Object). As part of this change, the method
PartialLOBException.isPartiallyDeleted() has been deprecated
in favor of the new
PartialLOBException.getPartialState(). Please consult the
javadoc associated with these new methods, as well as the updated doc for
KVLargeObject, for a detailed description of this new functionality.
This release is backwards compatible with LOBs created in previous releases,
with one exception: only LOBs created in this, or a later, release support the
append operation. Attempts to use the append operation on LOBs created in
previous releases will result in the method throwing an exception.
LOBs created in this release cannot be read or deleted by clients using earlier
releases; such operations will typically fail with a ConcurrentModificationException.
Please ensure that all clients are updated to this release before creating new LOBs.
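A minimal sketch of an append, assuming a hypothetical store "mystore" and an existing LOB stored under a key with the default ".lob" suffix:

import java.io.ByteArrayInputStream;
import java.util.concurrent.TimeUnit;
import oracle.kv.Durability;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.Version;

public class LobAppendExample {
    public static void main(String[] args) throws Exception {
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("mystore", "node01:5000"));
        Key lobKey = Key.createKey("audio", "clip01.lob");
        // Append bytes from a stream to the existing LOB; the LOB must have
        // been created with this release or later.
        Version v = store.appendLOB(lobKey,
            new ByteArrayInputStream("more data".getBytes()),
            Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
        System.out.println("Appended, new version: " + v);
        store.close();
    }
}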
GC log files for the Admin and RepNode services are now generated by default and
placed in the
KVROOT/<storename>/log directory (the standard
location for all NoSQL related logging information). This default behavior only
applies when using JDK release 1.7 or a later release, since GC log rotation is
only supported in the more recent JDKs. The logging has minimal resource
overhead. Having these log files readily available conforms to deployment best
practices for production Java applications, making it simpler to diagnose GC
issues should the need arise. [#22858]
The heap requirement of the Admin service, when operating on a store that has
undergone numerous changes, has been reduced. [#21143]
Fixed a bug in the Admin CLI "show plan -id <id>" command, which
resulted in the omission of information about partition migration
tasks from the plan history report. The command now correctly includes
information about partition migrations. [#22611]
Reduced internal timeout values associated with the network connection
between a master and a replica, to permit faster master failover upon
encountering a network hardware failure. [#22861]
An attempt to resume a failed put operation on a LOB larger than 3968K bytes
could result in an incorrect ConcurrentModificationException in some
circumstances. The bug has been fixed in this release. [#22876]
Changed the way plans are represented in the Admin's memory.
Previously, there was no limit on the potential size of the in-memory
representation of currently active and historical plans. With this
fix, only active plans are kept in memory. [#22963]
Eliminated deadlocks in plan management in the Admin. [#22992]
A bug in the argument checking for the
StoreIteratorConfig setter methods has been fixed.
The makebootconfig command now prints a message when it declines to
overwrite existing configuration files. [#23012]
The Replication Node configuration has been tuned to reduce CPU
utilization when the Replication Node's cache is smaller than
required, and cache eviction is taking place. [#23026]
The remove-admin command now permits removal of an Admin that is
hosted by a Storage Node that is not running. [#23061]
The show plans command could sometimes cause a crash in the Admin CLI
because it would consume too much memory. This has been fixed. [#23105]
Changes in 12cR18.104.22.168
Oracle NoSQL Database now offers a client-only package. The Oracle NoSQL
Database Client Software Library is licensed pursuant to the Apache
2.0 License (Apache 2.0). The Apache License and third party notices
for the NoSQL DB Client Software Library may be viewed online
or in the downloaded software.
A new overloading of the
KVStore.storeKeysIterator() method implements Parallel
Scans. The other storeIterator() methods scan all
shards and Replication Nodes in serial order. The new Parallel
Scan methods allow the programmer to specify a number of client-side
threads that are used to scan Replication Nodes in parallel. [#22146]
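As an illustration, a parallel key scan might look like the following sketch; the store name and degree of parallelism are hypothetical:

import oracle.kv.Direction;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.ParallelScanIterator;
import oracle.kv.StoreIteratorConfig;

public class ParallelScanExample {
    public static void main(String[] args) {
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("mystore", "node01:5000"));
        // Up to 4 client-side threads scan Replication Nodes in parallel.
        StoreIteratorConfig sic =
            new StoreIteratorConfig().setMaxConcurrentRequests(4);
        ParallelScanIterator<Key> iter = store.storeKeysIterator(
            Direction.UNORDERED, 0 /* default batch size */,
            null /* parentKey */, null /* subRange */, null /* depth */,
            null /* consistency */, 0, null /* default timeout */, sic);
        try {
            while (iter.hasNext()) {
                System.out.println(iter.next());
            }
        } finally {
            iter.close();
        }
        store.close();
    }
}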
Improved error messages in the Data Command Line Interface
(kvshell). For example, a put command with invalid inputs might have
returned this error message in the past:
kvshell-> put -key /test -value ./emp.insert -file -json Employee
Could not create JSON from input:
Unable to serialize JsonNode
but will now produce this more useful response:
kvshell-> put -key /test -value ./emp.insert -file -json Employee
Exception handling command put -key /test -value ./emp.insert -file -json Employee:
Could not create JSON from input:
Expected Avro type STRING but got JSON value: null in field Address of Employee
Fixed a bug when using the plan deploy-admin command. In some cases,
if an Admin service encountered an error at start up, the process would
become unresponsive. The correct behavior is for the process to shut
down and be restarted by its owning Storage Node. [#22908]
Changes in 12cR22.214.171.124
If a Storage Node Agent process received a master balancing related
remote request while shutting down, it could in rare instances throw an
exception that would disable the master balancing function in the Storage Node
Agent that initiated the request. This problem can be identified via the
following (or similar) output in the log of the Storage Node Agent that
initiated the request:
2014-03-28 12:13:34.544 UTC SEVERE [sn2] MasterRebalanceThread thread exiting due to exception.
null (126.96.36.199.24) java.lang.NullPointerException
at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
at com.sun.proxy.$Proxy1.getMDInfo(Unknown Source)
2014-03-28 12:13:34.546 UTC INFO [sn2] Master balance manager shutdown
2014-03-28 12:13:34.546 UTC INFO [sn2] MasterRebalanceThread thread exited.
Changes in 12cR188.8.131.52
Under certain circumstances, a replication node which was on the verge
of shutting down or in the midst of transitioning from master to
replica state could experience the failure below while cleaning up
outstanding requests. Since the node would automatically restart,
and the operation would be retried, the failure was transparent to the
application, but could cause an unnecessary node failover. This has been fixed.
java.lang.IllegalStateException: Transaction 30 detected open cursors while aborting
In past releases of NoSQL DB, a replication node which transitioned
from master to replica state would have to close and reopen its
database environment as part of the change in status. This transition
has now been streamlined so that in the majority of cases, the
database environment is not perturbed, the transition requires fewer
resources, and the node is more available.
The plan deploy-topology command has additional
safeguards to increase the reliability of the topology rebalance and
redistribute plans. When moving a replication node from one Storage
Node to another, the command will now check that the Storage Nodes
involved in the operation are up and running before any action is taken.
Under certain circumstances it was possible for a replication node to
use out of date master identity information when joining a
shard. This could cause a delay if the targeted node was
unavailable. This has been fixed. [#22851]
Under certain circumstances operations would end prematurely
with oracle.kv.impl.fault.TTLFaultException. This exception is now
handled internally by the server and client library and the operation
is retried. If the fault condition continues, the operation will
eventually fail with an oracle.kv.RequestTimeoutException. [#22860]
Previously, there were cases where a replication node would require
the transfer of a copy of the shard data in order to come up and join
the shard, even though it was unnecessary. This has been fixed.
When new storage nodes are added to an Oracle NoSQL DB deployment and
a new topology is deployed, the store takes that opportunity to
redistribute master roles for optimal performance. In some cases, the
store might not notice the new storage nodes until other events, such
as failovers or mastership changes had occurred, which caused a delay
in master balancing. This has been fixed. [#22888]
The setting of the JE configuration parameter
je.evictor.criticalPercentage used by the store has been corrected: it
used to be set to 105 and has been changed to 20. This new setting
will provide better cache management behavior in cases where the data
set size exceeds the optimal memory settings. [#22899]
A timestamp has been added to the output of the CLI "ping" command. [#22859]
Changes in 12cR184.108.40.206
This release includes a new document, Oracle NoSQL Database
Availability and Failover. It explains the general concepts and
issues surrounding data availability when using Oracle NoSQL Database.
The intended audiences for this document are system architects and
developers. The new information can be found under the "For the
Developer" section in the documentation index page.
Clarified the instructions for adding .avsc files to the classpath for
the example on Avro bindings in <KVHOME>/examples/avro, and
improved the error message when the .avsc files are not properly available.
Increased an internal parameter for lock timeouts from 500ms to 10
seconds. Since NoSQL DB ensures that data access is deadlock free, the
small timeout values were unnecessary and could cause spurious errors
in the face of transient network failures. [#22583]
Changing the store topology through the
deploy-topology command could result in the following
error if there was a transient network failure, or if the movement of
the replication node took longer than a few seconds. Although the store
state was still consistent, and the command could be manually retried,
the command should be more resilient to communication glitches.
... [admin1] Task 2/RelocateRN ended in state ERROR with
java.lang.RuntimeException: Time out while waiting for rg4-rn1 to come
up on sn1 and become consistent with the master of the shard before
deleting the RepNode from its old home on sn4 2/RelocateRN failed.
java.lang.RuntimeException: Time out while waiting for rg4-rn1 to come
up on sn1 and become consistent with the master of the shard before
deleting the RepNode from its old home
The command will now adjust waiting times and retry appropriately to
ascertain whether the movement of a
replication node has finished. [#22596]
Fixed a bug where a replication node would not restart automatically if
the directory containing its data files was removed, or its data files
were corrupted, but were later repaired. [#22626]
Added additional testing to reinforce the existing, correct behavior
that a client directs write requests to the authoritative master in a
segmented network split brain scenario. [#22636]
In some cases, the
java -jar kvstore.jar ping command
could generate spurious messages about components that are no longer
legitimately within the store.
Failed to connect to service commandService
Connection refused to host: 10.32.17.12; nested exception is:
java.net.ConnectException: Connection refused
SNA at hostname:localhost registry port: 6000 has no available
Admins or RNs registered.
In particular, these messages could happen for bootstrap Admins on
Storage Nodes that do not host deployed Admin Services. While the
store was consistent, the error messages were confusing and have been removed.
Fixed a small timing window in Replication Node master transfer that
could incorrectly cause the transfer transaction catch up point to
regress, when a master transfer is occurring under heavy application
load. The result is that shard mastership can take too long a time or
too short a time to transfer. If the transfer time is too short, the
target master may not be optimally caught up, and a third member of
the shard may detect this and throw an exception.
Preemptively shut down and restart the replication node when a node
transitions from master to replica, to reduce GC cost from refreshing
the database environment.
Made changes to the NoSQL client library to adapt to replication node
failures more rapidly, by retrying or forwarding data requests sooner
when it detects that the original target is unavailable.
A NoSQL deployment could see this transient error when
undergoing topology changes. Although the store remained consistent, the
error messages were confusing and could incorrectly cause a
deploy-topology command to fail. This has been corrected.
... INFO [rg1-rn1] Failed pushing entire topology push to rg1-rn3
updating from topo seq#: 0 to 1001 Problem:Update to topology seq#
1001 failed ... oracle.kv.impl.fault.OperationFaultException:
Update to topology seq# 1001 failed
...INFO [rg1-rn3] Topology update skipped. Current seq #: 1001 Update seq #: 1001
Fixed a bug where a replication node which experienced an out-of-memory
error did not restart automatically.
Corrected the default calculation of available Storage Node memory when
the Storage Node has been configured without a value for the bootstrap
memory_mb parameter. In the past, the calculation was done using units
of decimal megabytes, rather than MB, resulting in an overestimation
of the appropriate replication node heap size. This default
calculation is only used if the store has been configured without any
bootstrap value for the memory_mb property, and the memory_mb storage node
parameter has never been set.
Update the Storage Node more quickly about the replica/master status
of the replication nodes it hosts. The fix applies when executing the
plan deploy-topology command on a store that contains Storage Nodes
that have capacity values greater than 1, and can host multiple
Replication Nodes. A delay in notifying the Storage Node of its
replication nodes status can make the distribution of mastership
responsibilities less optimal.
Fixed a bug where the Admin service became unresponsive when executing
the plan deploy-topology command. During this time, the
admin service process appeared idle, only burning a second or two of
CPU time once in a while, and would not respond to new attempts to
connect with the Admin CLI. The problem would likely only occur in
large clusters with hundreds of components. [#22694]
Topology changes invoked by the plan deploy-topology command
which result in the movement of a replication node from one storage
node to another are now more resilient to transient network failures.
There are now more advance checks to ensure that the shard and storage
nodes involved in the movement are available and ready to accept the
change. In the event of a network failure mid-move, the command is
better at handling retries issued by the system administrator.
Fixed a bug where application requests failed to be processed while the
store is executing topology changes that require partition migration
under heavy load.
Adjust the default replication node garbage collection parameters to
be more optimal, reducing CPU utilization in some cases.
Reduce the time taken for a replica Replication Node to become up to
date and available to handle application requests when it has fallen
significantly behind due to downtime or to network communication
failures. Previously, it exited and restarted the process before
starting the catch up stage, but will now skip the restart.
Fixed a bug where an internal queue in the Storage Node could fill up
if its Replication Node repeatedly and unsuccessfully attempts to
restart, as might happen when a resource is unavailable. In that case,
the Storage Node was no longer able to automatically restart
the replication node, and would have to be rebooted.
Fixed a bug where a Replication Node that had been stopped due to
repeated errors, perhaps due to a lack of resources, and then
re-enabled with the "plan start-service" command, still did not restart.
Fixed a bug where the following null pointer exception could happen
for a restarting Replication Node. The problem was transient.
INFO [sn1] rg2-rn2: ProcessMonitor: startProcess
INFO [sn1] rg2-rn2: ProcessMonitor: stopProcess
SEVERE [sn1] rg2-rn2: ProcessMonitor: Unexpected exception in
Improve the client library's interpretation of UnknownHostException
and ConnectIOException so that it more rapidly detects a network
problem and updates its set of unavailable replication nodes.
Changes in 12cR220.127.116.11
This release includes support for upgrading the NoSQL DB software (client or
server) without taking the store offline and without significant impact to
ongoing operations. In addition, upgrades can be made incrementally; that is,
it should not be necessary to update the software on every component at the
same time. This support includes client and server code changes and new
command line interface (CLI) commands. [#22421]
The new CLI commands provide the
administrator with tools to help with the upgrade process. Using these commands, the
general upgrade procedure is:
1. Install the new software on a Storage Node running an admin.
2. Install the new client and connect to the store.
3. Use the verify prerequisite command to verify that the entire store is at
the proper software version to be upgraded (all 2.0 versions of NoSQL DB will
qualify as prerequisites).
4. Use show upgrade-order to get an ordered list of nodes to upgrade.
5. Install the new software on the Storage Nodes (individually or in groups
based on the ordered list).
6. Use verify upgrade to monitor progress and verify that the upgrade was
successful.
(1) In future releases this step will not be necessary.
If the upgrade procedure is interrupted, steps 4-6 can be repeated as necessary
to complete the upgrade.
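For example, the verification commands might be run from the admin CLI as follows (the host name and port are hypothetical):
java -jar KVHOME/kvstore.jar runadmin -host node01 -port 5000
kv-> verify prerequisite
kv-> show upgrade-order
kv-> verify upgrade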
Unless configured specifically by the application, NoSQL DB specifies the
-XX:ParallelGCThreads JVM flag for each Replication Node process to
indicate the number of garbage collector threads that the process
should use. In the past, the algorithm in use generated a minimum
value of 1 thread. After more testing, the minimum value has been
raised to min(4, the number of cores on the node). [#22475]
The admin command line interface (CLI) provides the following new commands:
verify prerequisite [-silent] [-sn snX]*
This command will verify that a set of storage nodes in the store meets the
required prerequisites for upgrading to the current version and display the
components which do not meet prerequisites or cannot be contacted. It will also
check and report an illegal downgrade situation where the installed software is
of a newer minor release than the current version. In this command the current
version is the version of the software running the command line interface. If no
storage nodes are specified, all of the nodes in the store will be checked.
verify upgrade [-silent] [-sn snX]*
This command will verify that a set of storage nodes in the store has been
successfully upgraded to the current version and display the components which
have not yet been upgraded or cannot be contacted. In this command the current
version is the version of the software running the command line interface. If no
storage nodes are specified, all of the nodes in the store will be checked.
show upgrade-order
This command will display the list of storage nodes in the order that they
should be upgraded to maintain the store's availability. This command will
display one or more storage nodes on a line. Multiple storage nodes on a line
are separated by a space. If multiple storage nodes appear on a single line,
then those nodes can be safely upgraded at the same time. When multiple nodes
are upgraded at the same time, the upgrade must be completed on all nodes
before the nodes next on the list can be upgraded.
The verify [-silent] command has been deprecated and is replaced by
verify configuration [-silent]. The deprecated
command will continue to work in this release.
- In this release, the sample code provided by the utility
(located in the
examples directory) now includes methods that
perform write operations for large objects (LOBs). The
new utility methods added in this release will properly retry the associated
LOB operation when a retryable exception
is encountered. Prior to this release, the
utility only provided retry methods for objects that are not large objects.
The number of JE lock tables used by Replication Nodes (controlled via
the je.lock.nLockTables JE configuration parameter) has been increased from
1 to 97. This change helps improve performance of applications characterized by
very high levels of concurrent updates, by reducing lock contention. [#22373]
The Administration CLI now permits the creation of multiple Datacenters.
By choosing Datacenter replication factors so that each Datacenter holds
less than a quorum of replicas, this change makes it possible to create
store layouts where the failure of a single Datacenter does not result
in the loss of write availability for any shards in the store. In the
current release, nodes in any Datacenter can participate in master
elections and contribute to durability acknowledgments. As a
consequence, master failover and durability acknowledgments will take
longer if they involve datacenters that are separated by large
distances. Future releases will provide greater flexibility in this area.
Changes in 11gR18.104.22.168
An integration with Oracle Coherence has been provided that allows
Oracle NoSQL Database to be used as a cache for Oracle Coherence
applications, also allowing applications to directly access cached
data from Oracle NoSQL Database. This integration is a feature of the
Enterprise Edition of the product and is implemented as a new, independent jar
file. It requires installation of the Oracle Coherence product as
well. The feature is described in the product documentation
as well as the javadoc. [#22291]
The Enterprise Edition now has support for semantic technologies.
Specifically, the Resource Description Framework (RDF), SPARQL query
language, and a subset of the Web Ontology Language (OWL) are now
supported. These capabilities are referred to as the RDF Graph
feature of Oracle NoSQL Database. The RDF Graph feature provides a
Java-based interface to store and query semantic data in Oracle NoSQL
Database Enterprise Edition. The feature is described in the
RDF Graph manual.
The preferred approach for setting NoSQL DB memory resources is to
specify the memory_mb parameter for each SN when running the
makebootconfig utility, and to let the system calculate the ideal
Replication Node heap and cache sizes. However, it is possible to override the
standard memory configurations by explicitly setting heap and cache
sizes using the Replication Node javaMiscParams and cacheSize
parameters. In past releases, setting the explicit values worked
correctly when using the plan change-parameters command, but did not
work correctly when using the change-policy command. This has been
fixed, so that if desired, one can use the change-policy command for
the javaMiscParams and cacheSize parameters to override the default
memory allocation heuristics. [#22097]
A NoSQL DB deployment that executes on a node with no network
available, as might happen when running a NoSQL DB demo or tutorial,
would fail with this error:
java.net.InetAddress.getLocalHost() returned loopback address:<hostname> and
no suitable address associated with network interfaces.
This has been fixed. [#22252]
Prior to this release, if a write operation encountered an exception from the underlying persistent store indicating that the write completed on the shard's master but not necessarily on the desired combination of replicas within the specified time interval, that exception would be swallowed and never propagated to the client. Originally, this behavior was considered desirable: not only is that exception rare (because of various preceding checks performed by the implementation), but swallowing it kept the API simple by avoiding the introduction of an additional exception and/or additional communication at the API level. After further thought and discussion, the team concluded that clients should know when a write operation fails to complete because of such an exception. As a result, when such a condition occurs during a write operation, a
RequestTimeoutException will now be propagated to the client, wrapping the original exception from the underlying persistent store as the cause. For additional information, including strategies one might employ when this exception is encountered, refer to the associated javadoc. [#21210]
A new parameter has been added which controls the display of records
in exception and error messages. When
hideUserData is set
to true, as it is by default, error messages which are printed to the
server side logs or are displayed via the show CLI commands replace
any key/values with the string "[hidden]". To see the actual record content
in errors, set the parameter to false. [#22376]
In previous releases, information about errors that occurred during
NoSQL DB component start up as a result of a
deploy-topology command would often be visible only within
the NoSQL DB logs, which made
installation troubleshooting difficult. In this
release, such start up errors can now be seen via the Admin CLI
show plan -id <id> command. [#22101]
The Storage Node Agent exposes MBeans on a non-default MBeanServer
instance. In this release, the non-default MBeanServer now exposes
the standard JVM platform MBeans as well as those relating only to
Oracle NoSQL Database.
In both SNMP and JMX interfaces, the new totalRequests metric is now available. This metric counts the number of multi-operation sequences that occurred during the sampling period.
Prior to this release, the product was compiled and built against the 1.x version of Hadoop (CDH3). Thus, when employing a previous release, if one were to run the
examples.hadoop.CountMinorKeys example against a cluster based on the 2.x version of Hadoop (CDH4), the MapReduce job initiated by that example would fail as a result of an
IncompatibleClassChangeError, which is caused by an incompatibility introduced in
org.apache.hadoop.mapreduce.JobContext between Hadoop 1.x and Hadoop 2.x. This failure occurs whether the example is compiled and built against Hadoop 1.x or Hadoop 2.x. Because the product's customer base almost exclusively uses Hadoop 2.x, this release provides support for Hadoop 2.x instead of 1.x. Future releases may revisit support for both Hadoop version paths, but doing so will involve refactoring the codebase and its associated release artifacts, as well as substantial changes to the product's current build process.
Support of Hadoop 2.x (CDH4) has been provided. [#22157]
The java -jar kvstore.jar makebootconfig -mount flag has been changed
to -storagedir. The "plan change-mountpoints -path <storage
directory>" command is deprecated in favor of "plan
change-storagedir -storagedir <storage directory>". [#21880]
The concept of Storage Node capacity is better explained in the
documentation. The Administrator's Guide has a revamped section on how
to calculate the resources needed for operating a NoSQL DB deployment.
Changes in 11gR22.214.171.124
This release adds the capability to remove an Admin service replica.
If you have deployed more than one Admin, you can remove one of them
using the following command:
plan remove-admin -admin <adminId>
You cannot remove the sole Admin if only one Admin instance is configured.
For availability and durability reasons, it is highly recommended that
you maintain at least three Admin instances at all times. For that
reason, if you try to remove an Admin when the removal would result in
there being fewer than three, the command will fail unless you give
the -force flag.
If you try to remove the Admin that is currently the master,
mastership will transfer to another Admin. The plan will be
interrupted, and subsequently can be re-executed on the new master
Admin. To re-execute the interrupted plan, you would use this command:
plan execute -id <planId>
The Admin CLI verify command has an added check to verify that the Replication
Nodes hosted on a single Storage Node have memory settings that fit
within the Storage Node's memory budget. This guards against mistakes
that may occur if the system administrator overrides defaults and
manually sets Replication Node heap sizes. [#21727]
The Admin CLI verify command now labels any verification issues as
violations or notes. Violations are of greater importance, and the
system administrator should determine how to adjust the system to
address the problem. Notes are warnings, and are of lesser importance.
Several corrections were made to latency statistics. These corrections apply
to the service-side statistics in the Admin console, the CLI,
.perf files and .csv files, as well as the client-side statistics
returned by KVStore.getStats. However, corrections to the 95% and 99% values do
not apply to the client-side statistics, since these values do not appear in
the client-side API.
- The definition of latency has been corrected for the "multi"
operation requests (multiGet, multiDelete, execute, etc). These are
labeled "multi" in the
Op Type column where latency
information is displayed. The previous definition was "latency in
milliseconds per operation" while the new definition is "latency
in milliseconds per request". In other words, for a "multi"
operation request, latency now applies to the entire request rather than
to each operation. For "single" operation requests, the definition of
latency has not changed.
- To go along with the change above, a new column containing the number
of requests in the sample, TotalReq, has been added to all latency
information displays. This is also available for client-side
statistics using a new
method. For "multi" operation requests, the total number of requests is
normally smaller than the total number of operations (the
TotalOps column). For "single" operation requests, the
total number of requests and operations are equal.
- Improved the consistency of the values reported in each sample so
that, for example, the minimum latency is always less than the maximum
latency. However, note that statistics are collected without
synchronization to avoid impacting performance, and for small sample
sizes the values in a sample are not always accurate or self-consistent.
- Fixed a bug that caused the 95% and 99% values to show the maximum
latency recorded (within 1000 ms), rather than the lowest 95% or 99% as
intended. This bug only applied to the "multi" operation requests.
- Fixed a bug that caused the 95% and 99% values to sometimes
mistakenly appear as -1. These values should only appear as -1 when
there were no operations in the sample with a latency below 1000 ms.
Modified the Administration Process to allocate ports from within a port range
if one is specified by the -servicerange argument to
the makebootconfig utility. If the argument is not specified, the
Administration Process will use any available port. Please see
the Admin Guide
for details regarding the configuration of ports used by Oracle NoSQL Database.
Modified the replication node to handle the unlikely case that the locally
stored topology is missing. A missing topology results in a
java.lang.NullPointerException being thrown in the TopologyManager and will
prevent the replication node from starting. [#22015]
Replication Node memory calculations are more robust for Storage Nodes
that host multiple Replication Nodes. In previous releases, using the
plan change-params command to reduce the capacity parameter for a
Storage Node which hosts multiple Replication Nodes could result in an
over aggressive increase in RN heap, which would make the Replication
Nodes fail at start up. The problem would be fixed when a topology was
rebalanced, but until that time, the Replication Nodes were
unavailable. The default memory sizing calculation now factors in the
number of RNs resident on a Storage Node, and adjusts RN heap sizes as
Replication Nodes are relocated by the deploy-topology command.
Fixed a bug that could cause a NullPointerException, such as the one below,
during RN start-up. The exception would appear in the RN log and the RN would
fail to start. The conditions under which this problem occurred include
partition migration between shards along with multiple abnormal RN shutdowns.
If this bug is encountered, it can be corrected by upgrading to the current
release, and no data loss will occur.
Exception in thread "main" com.sleepycat.je.EnvironmentFailureException: (JE
5.0.XX) ... last LSN=.../... LOG_INTEGRITY: Log information is incorrect,
problem is likely persistent. Environment is invalid and must be closed.
Caused by: java.lang.NullPointerException
... 10 more
Fixed a bug that caused excess memory to be used in the storage engine cache on
an RN, which could result in poor performance as a result of cache eviction and
additional I/O. The problem occurred only when one particular API
method was used. [#21973]
The replicas in a shard now dynamically configure the JE property
RepParams.REPLAY_MAX_OPEN_DB_HANDLES which controls the size of the cache
used to hold database handles during replication. The cache size is determined
dynamically based upon the number of partitions currently hosted by the
shard. This improved cache sizing can result in better write performance for
shards hosting large numbers of partitions. [#21967]
The names of the client and server JAR files no longer include release
version numbers. The files are now called kvstore.jar and kvclient.jar.
This change should reduce the amount of work needed to switch to a new
release because the names of JAR files will no longer change between
releases. Note that the name of the installation directory continues to
include the release version number. [#22034]
A SEVERE level message is now logged and an admin alert is fired when the
storage engine's average log cleaner (disk reclamation) backlog increases over
time. An example of the message text is below.
121215 13:48:57:480 SEVERE [...] Average cleaner backlog has grown from 0.0 to
6.4. If the cleaner continues to be unable to make progress, the JE cache size
and/or number of cleaner threads are probably too small. If this is not
corrected, eventually all available disk space will be used.
For more information on setting the cache size appropriately to avoid such
problems, see "Determining the Per-Node Cache Size" in the Administrator's
The storage engine's log cleaner will now delete files in the latter portion of
the log, even when the application is not performing any write operations.
Previously, files were prohibited from being deleted in the portion of the log
after the last application write. When a log cleaner backlog was present (for
example, when the cache had been configured too small, relative to the data set
size and write rate), this could cause the cleaner to operate continuously
without being able to delete files or make forward progress. [#21069]
NoSQL DB 2.0.23 introduced a performance regression over R1.2.23. The
kvstore client library and Replication Node consumed a greater
percentage of system CPU time. This regression has been fixed. [#22096]
Changes in 11gR126.96.36.199
This release provides the ability to add storage nodes to the system
after it has been deployed. The system will rebalance and redistribute
the data onto the new nodes without stopping operations. See Chapter
6 of the Admin Guide, Modifying
your Store's Configuration, for more details.
oracle.kv.lob package provides operations that can
be used to read and write Large Objects (LOBs) such as audio and video
files. As a general rule, any object larger than 1 MB is a good
candidate for representation as a LOB. The LOB API permits access to
large values without having to materialize the value in its entirety
by providing streaming APIs for reading and writing these objects.
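As a brief illustration of the streaming interface, here is a minimal
sketch; the store name, host, file name, and key are illustrative
assumptions, and it relies on the default ".lob" key suffix convention
described in the javadoc:

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Arrays;
import java.util.concurrent.TimeUnit;
import oracle.kv.Consistency;
import oracle.kv.Durability;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.lob.InputStreamVersion;

public class LobSketch {
    public static void main(String[] args) throws Exception {
        // Store name and helper host are illustrative.
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("kvstore", "localhost:5000"));
        // LOB keys end with the configured LOB suffix (".lob" by default).
        Key lobKey = Key.createKey(Arrays.asList("video", "intro.lob"));

        // Write the LOB from a stream; the value is never fully
        // materialized in client memory.
        InputStream in = new FileInputStream("intro.mp4");
        store.putLOB(lobKey, in, Durability.COMMIT_SYNC, 5, TimeUnit.SECONDS);
        in.close();

        // Read it back as a stream.
        InputStreamVersion isv =
            store.getLOB(lobKey, Consistency.NONE_REQUIRED,
                         5, TimeUnit.SECONDS);
        InputStream out = isv.getInputStream();
        // ... consume the stream ...
        out.close();
        store.close();
    }
}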
A C API has been added. The implementation uses Java JNI and requires
a Java virtual machine to run on the client. It is available as a
separate download.
Added a new remove-storagenode plan. This command will
remove a storage node which is not hosting any NoSQL Database components
from the system's topology. Two examples of when this might be useful are:
A storage node was incorrectly configured, and cannot be deployed.
A storage node was once part of a NoSQL Database, but all components have
been migrated from it using the migrate-storagenode command, and the
storage node should be decommissioned.
Added the ability to specify additional physical configuration
information about storage nodes, including:
- Capacity - the number of RepNodes the SN may host
- Number of CPUs
- Amount of memory to use
- Specific directory paths (mount points) to use for RepNodes
This information is used by the system to make more intelligent
choices about resource allocation and consumption. The administration
documentation discusses how these parameters are set and used.
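For illustration, these values are supplied when each storage node's boot
configuration is generated. A minimal sketch in the style of the other
commands in these notes, assuming the makebootconfig option names
-capacity, -num_cpus, -memory_mb, and -storagedir (consult the Admin
Guide for the authoritative list); the host and paths are illustrative:

java -jar lib/kvstore-M.N.P.jar makebootconfig -root KVROOT \
    -host sn1.example.com -port 5000 -admin 5001 -harange 5010,5020 \
    -capacity 2 -num_cpus 8 -memory_mb 16384 \
    -storagedir /disk1/kvdata -storagedir /disk2/kvdata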
- Added Avro support. The value of a kv pair can now be stored in Avro
binary format. An Avro schema is defined for each type of data stored. The
Avro schema is used to efficiently and compactly serialize the data, to
guarantee that the data conforms to the schema, and to perform automatic
evolution of the data as the schema changes over time. Bindings are supplied
that allow representing Avro data as a POJO (Plain Old Java Object), a JSON
object, or a generic Map-like data structure. For more information, see
Chapter 7 - Avro Schemas and
Chapter 8 - Avro Bindings
in the Getting Started Guide. The
oracle.kv.avro package is
described in the Javadoc. The use of the Avro format is strongly
recommended. NoSQL DB will leverage Avro in the future to provide additional
features and capabilities. [#21213]
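As a brief illustration of the generic binding, here is a minimal sketch;
the store connection details, key, and schema are illustrative
assumptions, and the schema is assumed to have already been registered
with the store as described in the Avro Schemas chapter:

import java.util.Arrays;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;
import oracle.kv.Key;
import oracle.kv.ValueVersion;
import oracle.kv.avro.GenericAvroBinding;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

public class AvroSketch {
    public static void main(String[] args) {
        KVStore store = KVStoreFactory.getStore(
            new KVStoreConfig("kvstore", "localhost:5000"));
        // An illustrative schema; it must already be registered
        // with the store.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Member\"," +
            "\"namespace\":\"example\",\"fields\":" +
            "[{\"name\":\"age\",\"type\":\"int\"}]}");
        GenericAvroBinding binding =
            store.getAvroCatalog().getGenericBinding(schema);

        // Serialize a record to Avro binary format and store it.
        GenericRecord rec = new GenericData.Record(schema);
        rec.put("age", 42);
        Key key = Key.createKey(Arrays.asList("member", "m1"));
        store.put(key, binding.toValue(rec));

        // Read it back and deserialize against the schema.
        ValueVersion vv = store.get(key);
        GenericRecord back = binding.toObject(vv.getValue());
        System.out.println(back.get("age"));
        store.close();
    }
}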
- Added Avro support for the Hadoop integration. The
oracle.kv.hadoop.KVAvroInputFormat class returns Avro
IndexedRecords to the caller. When this class is used in
conjunction with Oracle Loader for Hadoop, it is possible to read data
directly from NoSQL Database using OLH without using an interim Map-Reduce
job to store data in HDFS. [#21157]
- Added a feature which allows Oracle Database External Tables to be
used to access Oracle NoSQL Database records. There is more
information in the javadoc for the
oracle.kv.exttab package and a "cookbook" example in the
examples/externaltables directory. [#20981]
New methods have been added to allow clients to configure the socket
timeouts used to make client requests. Please review the javadoc for
details.
R1 installations must ensure that the software on the storage nodes has
been upgraded as described in the upgrade documentation
accompanying this release before using the above APIs on the client.
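As a brief illustration, the sketch below configures both timeouts
through KVStoreConfig; the setter names setSocketOpenTimeout and
setSocketReadTimeout are assumptions here (consult the javadoc for the
authoritative list), and the store name and host are illustrative:

import java.util.concurrent.TimeUnit;
import oracle.kv.KVStore;
import oracle.kv.KVStoreConfig;
import oracle.kv.KVStoreFactory;

public class TimeoutSketch {
    public static void main(String[] args) {
        KVStoreConfig config = new KVStoreConfig("kvstore", "host1:5000");
        // Bound the time allowed to establish a connection ...
        config.setSocketOpenTimeout(3, TimeUnit.SECONDS);
        // ... and the time to wait for data on an established socket.
        config.setSocketReadTimeout(10, TimeUnit.SECONDS);
        KVStore store = KVStoreFactory.getStore(config);
        store.close();
    }
}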
New service parameters have been added to control the backlog
associated with sockets created by NoSQL Database. These are
controllable for the Rep Node and Storage Nodes' Monitor, Admin, and
Registry Handler interfaces. The parameters are
rnMonitorSOBacklog (default 0),
rnAdminSOBacklog (default 0),
snMonitorSOBacklog (default 0), and
snRegistrySOBacklog (default 1024).
Calling Key.isPrefix with an argument containing
a smaller major or minor path than the target Key object caused an
IndexOutOfBoundsException in certain cases. This has been fixed.
The KeyRange constructor now checks that the start
Key is less than the end Key when both are specified; if they are out
of order, an IllegalArgumentException is thrown.
KeyRange also has toString() and fromString() methods for encoding and
decoding KeyRange instances, similar to the same methods in the Key
class.
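A minimal sketch of the encode/decode round trip; the component values
are illustrative, and note that in the published API the constructor
takes String key components:

import oracle.kv.KeyRange;

public class KeyRangeSketch {
    public static void main(String[] args) {
        // The start component must sort before the end component when
        // both are given, else the constructor throws
        // IllegalArgumentException.
        KeyRange range = new KeyRange("alpha", true, "beta", false);
        String encoded = range.toString();               // encode
        KeyRange decoded = KeyRange.fromString(encoded); // decode
        System.out.println(decoded);
    }
}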
Many new commands have been added to the CLI. See
Appendix A - Command Line Interface (CLI) Command Reference
of the Administrator's Guide for details.
The Admin Console is now for monitoring only.
Administration CLI commands have been changed so that component ids
match the ids used in the topology display. Previously Datacenters,
Storage Nodes, Admin instances and Replication Nodes were identified
only by number. For example, the syntax to add Storage Node 17 to a
Storage Node pool, or to show the parameters for a given Replication Node, was:
joinPool myStorageNodePool 17
show repnode-params 5,3
Datacenters can now be expressed as # or dc#
Admin instances can now be expressed as # or admin#
Storage Nodes can now be expressed as # or sn#
Replication Nodes can now be expressed as groupNum,nodeNum, or rgX-rnY
The commands shown above are still valid, but can also be expressed as:
joinPool myStorageNodePool sn17
show repnode-params rg5-rn3
The javadoc for the
Key.createKey methods has been improved to
warn that List instances passed as parameters are owned by the Key object
after calling the method. To avoid unpredictable results, they must not be
modified after the method is called.
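A minimal sketch of the pitfall, with illustrative key components:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import oracle.kv.Key;

public class KeyOwnershipSketch {
    public static void main(String[] args) {
        List<String> major =
            new ArrayList<String>(Arrays.asList("user", "bob"));
        Key key = Key.createKey(major);
        // WRONG: the list now belongs to the Key; mutating it here
        // leads to unpredictable results.
        // major.add("extra");

        // RIGHT: build a fresh list for each Key.
        Key other = Key.createKey(
            new ArrayList<String>(Arrays.asList("user", "sue")));
        System.out.println(key + " " + other);
    }
}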
Changes in 11gR188.8.131.52
Previously, executing a change-repnode-params plan in order to
change Replication Node parameters for a node other than the one
running the Admin service would fail. This operation will now
succeed.
A deploy-storage-node plan which ran into problems when attempting
to deploy a new storage node would leave the problematic SN in the
store. This would require that the user either take manual action to
remove the bad SN, or fix the problem and retry the plan. For
convenience, the deploy-storage-node plan will now clean up if it
runs into errors, and will not leave the failed SN behind. [#20530]
The command line interface's
snapshot create command
has been made significantly faster. Previously, it could take
minutes if executed on a store with a large amount of data. This
should be reduced to seconds. [#20772]
The two scripts for starting kvlite and executing control commands,
bin/run-kvlite.sh and bin/kvctl, have been replaced by the
java -jar lib/kvstore-M.N.P.jar command. This provides portability
to all Java platforms, including Windows. The two scripts are deprecated, but
will be supported for at least one release cycle.
The translation from the old script commands to the new -jar commands is as
follows:
Old script command               New -jar command
bin/run-kvlite.sh args...        java -jar lib/kvstore-M.N.P.jar kvlite args...
bin/kvctl command args...        java -jar lib/kvstore-M.N.P.jar command args...
There are a few differences to be aware of between the old and new commands.
nohup, if desired, must be explicitly specified. In the
bin/kvctl script, nohup was added automatically for the
start and restart commands. To specify the
equivalent command, use:
nohup java -jar lib/kvstore-M.N.P.jar start args... > /dev/null <
/dev/null 2>&1 &
The logging configuration file for kvlite is now specified using
standard Java syntax. Previously, the
examples/logging.properties configuration file was added
automatically when passing
-logging to the
run-kvlite.sh script. The new equivalent is:
java -Djava.util.logging.config.file=examples/logging.properties \
    -jar lib/kvstore-M.N.P.jar kvlite args...
Previously, the -host argument defaulted to
the local machine name (via the
`hostname` command) when running the
kvctl script. Now, for all control commands, no
default hostname is used and the
-host argument must be
specified explicitly. This change was made for two reasons: 1)
consistency, since the port and other arguments have no default value for
control commands, and 2) safety, since specifying an explicit hostname
guards against accidental errors.
Previously, the -host argument defaulted to
localhost when running the kvlite command.
Now, the default is the local machine name rather than (literally)
localhost. Note that the kvlite command, unlike the control
commands, has default values for all arguments. This is because the kvlite
command is designed for ease-of-use during development on a single machine.
kvlite should not be used in production or for performance testing.
Previously, running java -jar lib/kvstore-M.N.P.jar, with or
without arguments, printed the product version. Now, if no arguments are
specified, a usage message is printed. To print the version, use the
java -jar lib/kvstore-M.N.P.jar version command.