NGSIMongoSink

Functionality

com.telefonica.iot.cygnus.sinks.NGSIMongoSink, or simply NGSIMongoSink, is a sink designed to persist NGSI-like context data events within a MongoDB server. Usually, such context data is notified by an Orion Context Broker instance, but it could be any other system speaking the NGSI language.

Independently of the data generator, NGSI context data is always transformed into internal Flume events at Cygnus sources. In the end, the information within these Flume events must be mapped into specific MongoDB data structures at the Cygnus sinks.

The next sections explain this in detail.

Top

Mapping NGSI events to Flume events

Notified NGSI events (containing context data) are transformed into Flume events (such an event is a mix of certain headers and a byte-based body), independently of the NGSI data generator or the final backend where it is persisted.

This is done at the Cygnus Http listeners (in Flume jargon, sources) thanks to NGSIRestHandler. Once translated, the data (now, as a Flume event) is put into the internal channels for future consumption (see next section).

Top

Mapping Flume events to MongoDB data structures

MongoDB organizes the data in databases that contain collections of Json documents. Such organization is exploited by NGSIMongoSink each time a Flume event is going to be persisted.

Top

MongoDB databases naming conventions

A database named after the fiware-service header value within the event is created (if it does not exist yet). A configured prefix is prepended (by default, sth_).

It must be said MongoDB does not accept /, \, ., " and $ in database names. Because of this, a certain encoding is applied, depending on the enable_encoding configuration parameter.

MongoDB namespaces (database + collection) name length is limited to 113 bytes.

Top

MongoDB collections naming conventions

The name of these collections depends on the configured data model and analysis mode (see the Configuration section for more details):

  • Data model by service path (data_model=dm-by-service-path). As the data model name denotes, the notified FIWARE service path (or the configured one as default in NGSIRestHandler) is used as the name of the collection. This way, the data about all the NGSI entities belonging to the same service path is stored in this unique collection. The configured prefix is prepended to the collection name.
  • Data model by entity (data_model=dm-by-entity). For each entity, the notified/default FIWARE service path is concatenated to the notified entity ID and type in order to compose the collection name. If the FIWARE service path is the root one (/) then only the entity ID and type are concatenated. The configured prefix is prepended to the collection name.
  • Data model by attribute (data_model=dm-by-attribute). For each entity's attribute, the notified/default FIWARE service path is concatenated to the notified entity ID and type and to the notified attribute name in order to compose the collection name. If the FIWARE service path is the root one (/) then only the entity ID and type and the attribute name are concatenated. The configured prefix is prepended to the collection name (see the naming sketch below).
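
A minimal Java naming sketch of the composition rules just listed (old encoding, _ as concatenator; the variable names are illustrative, not the actual Cygnus implementation):

String prefix = "sth_";               // configured collection prefix
String servicePath = "/4wheels";      // notified fiware-servicepath
String entity = "car1" + "_" + "car"; // <entityId>_<entityType>
String attrName = "speed";            // notified attribute name

String dmByServicePath = prefix + servicePath;                               // sth_/4wheels
String dmByEntity = prefix + servicePath + "_" + entity;                     // sth_/4wheels_car1_car
String dmByAttribute = prefix + servicePath + "_" + entity + "_" + attrName; // sth_/4wheels_car1_car_speed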

It must be said MongoDB does not accept $ in collection names, so it is replaced by an underscore, _. Because of this, a certain encoding is applied, depending on the enable_encoding configuration parameter.

MongoDB namespaces (database + collection) name length is limited to 113 bytes.

The following table summarizes the collection name composition (assuming the default sth_ prefix and the old encoding):

| FIWARE service path | dm-by-service-path | dm-by-entity | dm-by-attribute |
|---|---|---|---|
| / | sth_/ | sth_/<entityId>_<entityType> | sth_/<entityId>_<entityType>_<attrName> |
| /<svcPath> | sth_/<svcPath> | sth_/<svcPath>_<entityId>_<entityType> | sth_/<svcPath>_<entityId>_<entityType>_<attrName> |

Using the new encoding:

| FIWARE service path | dm-by-service-path | dm-by-entity | dm-by-attribute |
|---|---|---|---|
| / | sth_x002f | sth_x002fxffff<entityId>xffff<entityType> | sth_x002fxffff<entityId>xffff<entityType>xffff<attrName> |
| /<svcPath> | sth_x002f<svcPath> | sth_x002f<svcPath>xffff<entityId>xffff<entityType> | sth_x002f<svcPath>xffff<entityId>xffff<entityType>xffff<attrName> |

Please observe the concatenation of entity ID and type is already given in the notified_entities/grouped_entities header values (depending on whether the grouping rules are used or not; see the Configuration section for more details) within the Flume event.

Top

Row-like storing

Regarding the specific data stored within the above collections, if the attr_persistence parameter is set to row (default storing mode) then the notified data is stored attribute by attribute, composing a Json document for each of them. Each document contains a variable number of fields, depending on the configured data_model:

  • Data model by service path:
    • recvTimeTs: UTC timestamp expressed in milliseconds.
    • recvTime: UTC timestamp in human-readable format (ISO 8601).
    • entityId: Notified entity identifier.
    • entityType: Notified entity type.
    • attrName: Notified attribute name.
    • attrType: Notified attribute type.
    • attrValue: In its simplest form, this value is just a string, but since Orion 0.11.0 it can be a Json object or Json array.
  • Data model by entity:
    • recvTimeTs: UTC timestamp expressed in milliseconds.
    • recvTime: UTC timestamp in human-readable format (ISO 8601).
    • attrName: Notified attribute name.
    • attrType: Notified attribute type.
    • attrValue: In its simplest form, this value is just a string, but since Orion 0.11.0 it can be a Json object or Json array.
  • Data model by attribute:
    • recvTimeTs: UTC timestamp expressed in milliseconds.
    • recvTime: UTC timestamp in human-readable format (ISO 8601).
    • attrType: Notified attribute type.
    • attrValue: In its simplest form, this value is just a string, but since Orion 0.11.0 it can be a Json object or Json array.

Top

Column-like storing

Regarding the specific data stored within the above collections, if the attr_persistence parameter is set to column then a single Json document is composed for the whole notified entity. Each document contains a variable number of fields, depending on the configured data_model:

  • Data model by service path:
    • recvTime: UTC timestamp in human-readable format (ISO 8601).
    • fiwareServicePath: The notified one or the default one.
    • entityId: Notified entity identifier.
    • entityType: Notified entity type.
    • For each notified attribute, a field named after the attribute is considered. This field stores the attribute's values along the time.
    • For each notified attribute, a field named as the concatenation of the attribute name and _md is considered. This field stores the attribute's metadata values along the time.
  • Data model by entity:
    • recvTime: UTC timestamp in human-readable format (ISO 8601).
    • fiwareServicePath: The notified one or the default one.
    • For each notified attribute, a field named after the attribute is considered. This field stores the attribute's values along the time.
    • For each notified attribute, a field named as the concatenation of the attribute name and _md is considered. This field stores the attribute's metadata values along the time.
  • Data model by attribute. This combination makes no sense, so it is not supported.

Top

Example

Flume event

Assuming the following Flume event is created from a notified NGSI context data (the code below is an object representation, not any real data format):

flume-event={
    headers={
         content-type=application/json,
         timestamp=1429535775,
         transactionId=1429535775-308-0000000000,
         ttl=10,
         fiware-service=vehicles,
         fiware-servicepath=/4wheels,
         notified-entities=car1_car,
         notified-servicepaths=/4wheels,
         grouped-entities=car1_car,
         grouped-servicepath=/4wheels
    },
    body={
        entityId=car1,
        entityType=car,
        attributes=[
            {
                attrName=speed,
                attrType=float,
                attrValue=112.9
            },
            {
                attrName=oil_level,
                attrType=float,
                attrValue=74.6
            }
        ]
    }
}

Top

Database and collection names

A MongoDB database named as the concatenation of the prefix and the notified FIWARE service, i.e. sth_vehicles, will be created.

Regarding the collections, their names will be, depending on the configured data model, the following ones (old encoding):

| FIWARE service path | dm-by-service-path | dm-by-entity | dm-by-attribute |
|---|---|---|---|
| / | sth_/ | sth_/car1_car | sth_/car1_car_speed, sth_/car1_car_oil_level |
| /4wheels | sth_/4wheels | sth_/4wheels_car1_car | sth_/4wheels_car1_car_speed, sth_/4wheels_car1_car_oil_level |

Using the new encoding:

| FIWARE service path | dm-by-service-path | dm-by-entity | dm-by-attribute |
|---|---|---|---|
| / | sth_x002f | sth_x002fxffffcar1xffffcar | sth_x002fxffffcar1xffffcarxffffspeed, sth_x002fxffffcar1xffffcarxffffoil_level |
| /4wheels | sth_x002f4wheels | sth_x002f4wheelsxffffcar1xffffcar | sth_x002f4wheelsxffffcar1xffffcarxffffspeed, sth_x002f4wheelsxffffcar1xffffcarxffffoil_level |

Top

Row-like storing

Assuming data_model=dm-by-service-path and attr_persistence=row as configuration parameters, then NGSIMongoSink will persist the data within the body as:

$ mongo -u myuser -p
MongoDB shell version: 2.6.9
connecting to: test
> show databases
admin              (empty)
local              0.031GB
sth_vehicles       0.031GB
test               0.031GB
> use sth_vehicles
switched to db sth_vehicles
> show collections
sth_/4wheels
system.indexes
> db['sth_/4wheels'].find()
{ "_id" : ObjectId("5534d143fa701f0be751db82"), "recvTimeTs": "1402409899391", "recvTime" : "2015-04-20T12:13:22.41.412Z", "entityId" : "car1", "entityType" : "car", "attrName" : "speed", "attrType" : "float", "attrValue" : "112.9" }
{ "_id" : ObjectId("5534d143fa701f0be751db83"), "recvTimeTs": "1402409899391", "recvTime" : "2015-04-20T12:13:22.41.412Z", "entityId" : "car1", "entityType" : "car", "attrName" : "oil_level", "attrType" : "float", "attrValue" : "74.6" }

If data_model=dm-by-entity and attr_persistence=row then NGSIMongoSink will persist the data within the body as:

$ mongo -u myuser -p
MongoDB shell version: 2.6.9
connecting to: test
> show databases
admin              (empty)
local              0.031GB
sth_vehicles       0.031GB
test               0.031GB
> use sth_vehicles
switched to db sth_vehicles
> show collections
sth_/4wheels_car1_car
system.indexes
> db['sth_/4wheels_car1_car'].find()
{ "_id" : ObjectId("5534d143fa701f0be751db82"), "recvTimeTs": "1402409899391", "recvTime" : "2015-04-20T12:13:22.41.412Z", "attrName" : "speed", "attrType" : "float", "attrValue" : "112.9" }
{ "_id" : ObjectId("5534d143fa701f0be751db83"), "recvTimeTs": "1402409899391", "recvTime" : "2015-04-20T12:13:22.41.412Z", "attrName" : "oil_level", "attrType" : "float", "attrValue" : "74.6" }

If data_model=dm-by-attribute and attr_persistence=row then NGSIMongoSink will persist the data as:

$ mongo -u myuser -p
MongoDB shell version: 2.6.9
connecting to: test
> show databases
admin              (empty)
local              0.031GB
sth_vehicles       0.031GB
test               0.031GB
> use sth_vehicles
switched to db sth_vehicles
> show collections
sth_/4wheels_car1_car_speed
sth_/4wheels_car1_car_oil_level
system.indexes
> db['sth_/4wheels_car1_car_speed'].find()
 { "_id" : ObjectId("5534d143fa701f0be751db87"), "recvTimeTs": "1402409899391", "recvTime" : "2015-04-20T12:13:22.41.412Z", "attrType" : "float", "attrValue" : "112.9" }
> db['sth_/4wheels_car1_oil_level'].find()
 { "_id" : ObjectId("5534d143fa701f0be751db87"), "recvTimeTs": "1402409899391", "recvTime" : "2015-04-20T12:13:22.41.412Z", "attrType" : "float", "attrValue" : "74.6" }

Top

Column-like storing

If data_model=dm-by-service-path and attr_persistence=column then NGSIMongoSink will persist the data within the body as:

$ mongo -u myuser -p
MongoDB shell version: 2.6.9
connecting to: test
> show databases
admin              (empty)
local              0.031GB
sth_vehicles       0.031GB
test               0.031GB
> use sth_vehicles
switched to db sth_vehicles
> show collections
sth_/4wheels
system.indexes
> db['sth_/4wheels'].find()
{ "_id" : ObjectId("5534d143fa701f0be751db86"), "recvTimeTs": "1402409899391", "recvTime" : "2015-04-20T12:13:22.41.412Z", "entityId" : "car1", "entityType" : "car", "speed" : "112.9", "oil_level" : "74.6" }

If data_model=dm-by-entity and attr_persistence=column then NGSIMongoSink will persist the data within the body as:

$ mongo -u myuser -p
MongoDB shell version: 2.6.9
connecting to: test
> show databases
admin              (empty)
local              0.031GB
sth_vehicles       0.031GB
test               0.031GB
> use sth_vehicles
switched to db sth_vehicles
> show collections
sth_/4wheels_car1_car
system.indexes
> db['sth_/4wheels_car1_car'].find()
{"_id" : ObjectId("56337ea4c9e77c1614bfdbb7"), "recvTimeTs": "1402409899391", "recvTime" : "2015-04-20T12:13:22.41.412Z", "speed" : "112.9", "oil_level" : "74.6"}

Top

Administration guide

Configuration

NGSIMongoSink is configured through the following parameters:

| Parameter | Mandatory | Default value | Comments |
|---|---|---|---|
| type | yes | N/A | com.telefonica.iot.cygnus.sinks.NGSIMongoSink |
| channel | yes | N/A | |
| enable_encoding | no | false | true or false. true applies the new encoding, false applies the old encoding. |
| enable_grouping | no | false | Always false. |
| enable_name_mappings | no | false | true or false. Check this link for more details. |
| enable_lowercase | no | false | true or false. |
| data_model | no | dm-by-entity | dm-by-service-path, dm-by-entity or dm-by-attribute. dm-by-service is not currently supported. |
| attr_persistence | no | row | row or column. |
| mongo_hosts | no | localhost:27017 | FQDN/IP:port where the MongoDB server runs (standalone case) or comma-separated list of FQDN/IP:port pairs where the MongoDB replica set members run. |
| mongo_username | no | empty | If empty, no authentication is done. |
| mongo_password | no | empty | If empty, no authentication is done. |
| db_prefix | no | sth_ | |
| collection_prefix | no | sth_ | system. is not accepted. |
| batch_size | no | 1 | Number of events accumulated before persistence. |
| batch_timeout | no | 30 | Number of seconds the batch will be building before it is persisted as it is. |
| batch_ttl | no | 10 | Number of retries when a batch cannot be persisted. Use 0 for no retries, -1 for infinite retries. Please, consider an infinite TTL (even a very large one) may consume all the sink's channel capacity very quickly. |
| data_expiration | no | 0 | Collections will be removed if older than the value specified in seconds. The reference of time is the one stored in the recvTime property. Set to 0 if this policy is not wanted. |
| collections_size | no | 0 | The oldest data (according to insertion time) will be removed if the size of the data collection gets bigger than the value specified in bytes. Notice that the size-based truncation policy takes precedence over the time-based one. Set to 0 if this policy is not wanted. Minimum value (if different than 0) is 4096 bytes. |
| max_documents | no | 0 | The oldest data (according to insertion time) will be removed if the number of documents in the data collections goes beyond the specified value. Set to 0 if this policy is not wanted. |
| ignore_white_spaces | no | true | true if exclusively white space-based attribute values must be ignored, false otherwise. |

A configuration example could be:

cygnus-ngsi.sinks = mongo-sink
cygnus-ngsi.channels = mongo-channel
...
cygnus-ngsi.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.NGSIMongoSink
cygnus-ngsi.sinks.mongo-sink.channel = mongo-channel
cygnus-ngsi.sinks.mongo-sink.data_model = dm-by-entity
cygnus-ngsi.sinks.mongo-sink.attr_persistence = column
cygnus-ngsi.sinks.mongo-sink.enable_encoding = false
cygnus-ngsi.sinks.mongo-sink.enable_grouping = false
cygnus-ngsi.sinks.mongo-sink.enable_lowercase = false
cygnus-ngsi.sinks.mongo-sink.enable_name_mappings = false
cygnus-ngsi.sinks.mongo-sink.mongo_hosts = 192.168.80.34:27017
cygnus-ngsi.sinks.mongo-sink.mongo_username = myuser
cygnus-ngsi.sinks.mongo-sink.mongo_password = mypassword
cygnus-ngsi.sinks.mongo-sink.db_prefix = cygnus_
cygnus-ngsi.sinks.mongo-sink.collection_prefix = cygnus_
cygnus-ngsi.sinks.mongo-sink.batch_size = 100
cygnus-ngsi.sinks.mongo-sink.batch_timeout = 30
cygnus-ngsi.sinks.mongo-sink.batch_ttl = 10
cygnus-ngsi.sinks.mongo-sink.data_expiration = 0
cygnus-ngsi.sinks.mongo-sink.collections_size = 0
cygnus-ngsi.sinks.mongo-sink.max_documents = 0
cygnus-ngsi.sinks.mongo-sink.ignore_white_spaces = true
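
Regarding the data_expiration, collections_size and max_documents policies, they map onto standard MongoDB features. As an orientation only (these are not the literal commands issued by Cygnus), a TTL index and a capped collection would achieve similar effects from the mongo shell:

> // rough equivalent of data_expiration=86400 (data expires after one day)
> db['sth_/4wheels'].createIndex({ "recvTime": 1 }, { expireAfterSeconds: 86400 })
> // rough equivalent of collections_size=4096 and max_documents=1000; note capped
> // collections must be created as such from the beginning
> db.createCollection("sth_/4wheels", { capped: true, size: 4096, max: 1000 })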

Top

Use cases

Use NGSIMongoSink if you are looking for a Json-based document storage that does not grow too much in the mid/long term.

Top

Important notes

About batching

As explained in the programmers guide, NGSIMongoSink extends NGSISink, which provides a built-in mechanism for collecting events from the internal Flume channel. Thanks to this mechanism, extending classes only have to deal with the persistence details of such a batch of events in the final backend.

What is important regarding the batch mechanism is that it largely increases the performance of the sink, because the number of writes is dramatically reduced. Let's see an example: assume a batch of 100 Flume events. In the best case, all these events regard the same entity, which means all the data within them will be persisted in the same MongoDB collection. If processing the events one by one, 100 inserts into MongoDB would be needed; nevertheless, in this example only one insert is required. Obviously, not all the events will always regard the same unique entity, and many entities may be involved within a batch. But that's not a problem, since several sub-batches of events are created within a batch, one sub-batch per final destination MongoDB collection. In the worst case, the 100 events will regard 100 different entities (100 different MongoDB collections), but that will not be the usual scenario. Thus, assuming a realistic number of 10-15 sub-batches per batch, we are replacing the 100 inserts of the event-by-event approach with only 10-15 inserts (see the sketch below).
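
The following Java sketch illustrates the sub-batching idea; the Event type and the destinationCollection() and bulkInsert() helpers are hypothetical, not the actual Cygnus code (java.util imports assumed):

// group the batched events by destination MongoDB collection...
Map<String, List<Event>> subBatches = new HashMap<>();

for (Event event : batch) {
    // e.g. "sth_/4wheels_car1_car" for data_model=dm-by-entity
    String collection = destinationCollection(event);
    subBatches.computeIfAbsent(collection, k -> new ArrayList<>()).add(event);
}

// ... then perform a single write per destination instead of one write per event
for (Map.Entry<String, List<Event>> subBatch : subBatches.entrySet()) {
    bulkInsert(subBatch.getKey(), subBatch.getValue());
}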

The batch mechanism adds an accumulation timeout to prevent the sink from staying in an eternal batch-building state when no new data arrives. If such a timeout is reached, the batch is persisted as it is.

By default, NGSIMongoSink has a configured batch size of 1 and a batch accumulation timeout of 30 seconds. Nevertheless, as explained above, it is highly recommended to increase at least the batch size for performance purposes. Which are the optimal values? The batch size is closely related to the transaction capacity of the channel the events are taken from (it makes no sense for the first to be greater than the second), and it depends on the estimated number of sub-batches as well (see the sizing sketch below). The accumulation timeout depends on how often you want to see new data in the final storage. A deeper discussion on the batches of events and their appropriate sizing may be found in the performance document.
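
As an illustration, if the sink is configured with batch_size=100, the channel it reads from should declare a transaction capacity of at least 100. A hedged sizing sketch using a standard Flume memory channel (the values are merely illustrative):

cygnus-ngsi.channels.mongo-channel.type = memory
cygnus-ngsi.channels.mongo-channel.capacity = 1000
cygnus-ngsi.channels.mongo-channel.transactionCapacity = 100
cygnus-ngsi.sinks.mongo-sink.batch_size = 100
cygnus-ngsi.sinks.mongo-sink.batch_timeout = 30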

Top

About recvTime and TimeInstant metadata

By default, NGSIMongoSink stores the notification reception timestamp. Nevertheless, if (and only if) working in row mode and a metadata named TimeInstant is notified, then that metadata value is used instead of the reception timestamp. This is useful when wanting to persist a measure generation time (which is thus notified as a TimeInstant metadata) instead of the reception time, as illustrated below.
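
Following the object representation used in the Example section (not any real data format), a notified attribute carrying such metadata could look like the sketch below; the attrMetadata field is illustrative, since metadata is not shown in the example event above:

attributes=[
    {
        attrName=speed,
        attrType=float,
        attrValue=112.9,
        attrMetadata=[
            {
                name=TimeInstant,
                type=ISO8601,
                value=2015-04-20T12:10:00.000Z
            }
        ]
    }
]

In this case, in row mode, 2015-04-20T12:10:00.000Z would be persisted as the document's time reference instead of the notification reception time.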

Top

About the encoding

NGSIMongoSink follows the MongoDB naming restrictions. In a nutshell:

Until version 1.2.0 (included), Cygnus applied a very simple encoding:

  • Database names will have the characters \, /, ., $ and " encoded as _.
  • Collection names will have the character $ encoded as _.

From version 1.3.0 (included), Cygnus applies this specific encoding tailored to MongoDB data structures:

  • Equals character, =, is encoded as xffff.
  • All the forbidden characters are encoded as a x character followed by the Unicode of the character.
  • User defined strings composed of a x character and a Unicode are encoded as xx followed by the Unicode.
  • xffff is used as the concatenator character.

Although the old encoding will be deprecated in the future, it is possible to switch the encoding type through the enable_encoding parameter, as explained in the configuration section. The sketch below illustrates the new rules.
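
A minimal Java sketch of the new encoding rules listed above (an illustration of the rules, not the actual Cygnus implementation; the xx rule for user defined strings is omitted for brevity):

public static String encodeForMongo(String in) {
    StringBuilder out = new StringBuilder();

    for (char c : in.toCharArray()) {
        if (c == '=') {
            out.append("xffff"); // the equals character is encoded as xffff
        } else if (c == '/' || c == '\\' || c == '.' || c == '$' || c == '"') {
            // forbidden character: 'x' followed by its Unicode code point in hexadecimal
            out.append(String.format("x%04x", (int) c));
        } else {
            out.append(c);
        }
    }

    return out.toString();
}

For instance, encodeForMongo("/4wheels") returns x002f4wheels, matching the tables in the naming conventions sections.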

Top

Programmers guide

NGSIMongoSink class

NGSIMongoSink extends NGSIMongoBaseSink, which, as any other NGSI-like sink, extends the base NGSISink. The extended methods are:

void persistBatch(Batch batch) throws Exception;

A Batch contains a set of CygnusEvent objects, which are the result of parsing the notified context data events. Data within the batch is classified by destination, and in the end, a destination specifies the MongoDB collection where the data is going to be persisted. Thus, each destination is iterated in order to compose a per-destination data string to be persisted thanks to any MongoBackend implementation.

public void start();

An implementation of MongoBackend is created. This must be done at the start() method and not in the constructor, since the invoking sequence is NGSIMongoSink() (constructor), configure() and start(), as sketched below.
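
In other words, Flume invokes the sink methods in this order (a sketch; the context variable stands for the Flume-provided configuration Context):

NGSIMongoSink sink = new NGSIMongoSink(); // constructor: configuration is not available yet
sink.configure(context);                  // configuration parameters are read here
sink.start();                             // the MongoBackend is created here, once configured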

public void configure(Context);

A complete configuration as the described above is read from the given Context instance.

Top

MongoBackend class

This is a convenience backend class for MongoDB that provides methods to persist the context data both in raw and aggregated formats. Relevant methods regarding the raw format are:

public void createDatabase(String dbName) throws Exception;

Creates a database, given its name, if it does not exist.

public void createCollection(String dbName, String collectionName) throws Exception;

Creates a collection, given its name, if it does not exist in the given database.

public void insertContextDataRaw(String dbName, String collectionName, long recvTimeTs, String recvTime, String entityId, String entityType, String attrName, String attrType, String attrValue, String attrMd) throws Exception;

Updates or inserts (depending on whether the document already exists or not) a set of documents in the given collection within the given database. Such a set of documents contains all the information regarding current and past notifications (historic) for a single attribute. A set of documents is managed since historical data is stored using several resolution and range combinations (second-minute, minute-hour, hour-day, day-month and month-year). See STH Comet at Github for more details. A hedged usage sketch follows.
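
A usage sketch of the three methods above, assuming an already created backend instance and reusing the data of the Example section (the argument values are merely illustrative):

backend.createDatabase("sth_vehicles");
backend.createCollection("sth_vehicles", "sth_/4wheels");
backend.insertContextDataRaw(
    "sth_vehicles",             // dbName
    "sth_/4wheels",             // collectionName
    1429535775000L,             // recvTimeTs, in milliseconds
    "2015-04-20T12:13:22.412Z", // recvTime, ISO 8601
    "car1", "car",              // entityId, entityType
    "speed", "float", "112.9",  // attrName, attrType, attrValue
    "[]"                        // attrMd (no metadata in this example)
);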

Nothing special is done with regard to the encoding. Since Cygnus generally works with the UTF-8 character set, this is how the data is written into the collections. It will be the responsibility of the MongoDB client to convert the bytes read into UTF-8.

Top

Authentication and authorization

The current implementation of NGSIMongoSink relies on the username and password credentials created at the MongoDB endpoint.

Top