
Siddhi 5.0 Config Guide

This section covers the following.

Configuring Databases

Applicable only for Local, Docker, and Kubernetes modes.

This section is not applicable for Java and Python modes.

It is recommended to configure RDBMS databases as datasources under the wso2.datasources section of the Siddhi configuration yaml and pass it during startup. This allows database connections to be reused across multiple Siddhi Apps.

By default, Siddhi stores product-specific data in a predefined embedded H2 database located in the <SIDDHI_RUNNER_HOME>/wso2/runner/database directory. This default H2 database is only suitable for development, testing, and production environments that do not store data.

However, for most production environments we recommend using an industry-standard RDBMS such as Oracle, PostgreSQL, MySQL, or MSSQL. In this case, users are expected to add the relevant database drivers to Siddhi's class-path.

Including Database Drivers

The database driver corresponding to the database should be an OSGi bundle, and it needs to be added to the <SIDDHI_RUNNER_HOME>/lib/ directory. If the driver is a plain jar, it should be converted to an OSGi bundle before being added.

Converting non-OSGi drivers

If the database driver is not an OSGi bundle, it should be converted to one. Refer to the Converting Jars to OSGi Bundles documentation for details.

The necessary table schemas are automatically generated by the respective features themselves, except for the tables needed for statistics reporting via databases.

Below are sample datasource configurations for each supported database type (a sketch of how a Siddhi Application references such a datasource follows the list):

  • MySQL

    wso2.datasources:
     dataSources:
       - name: SIDDHI_TEST_DB
         description: The datasource used for test database
         jndiConfig:
           name: jdbc/SIDDHI_TEST_DB
         definition:
           type: RDBMS
           configuration:
             jdbcUrl: jdbc:mysql://hostname:port/testdb
             username: root
             password: root
             driverClassName: com.mysql.jdbc.Driver
             maxPoolSize: 50
             idleTimeout: 60000
             connectionTestQuery: SELECT 1
             validationTimeout: 30000
             isAutoCommit: false 

  • Oracle
    There are two ways to configure Oracle. If you have a System Identifier (SID), such as

    jdbc:oracle:thin:@[HOST][:PORT]:SID
    Use this (older) format:
    wso2.datasources:
     dataSources:
       - name: SIDDHI_TEST_DB
         description: The datasource used for test database
         jndiConfig:
           name: jdbc/SIDDHI_TEST_DB
         definition:
           type: RDBMS
           configuration:
             jdbcUrl: jdbc:oracle:thin:@hostname:port:SID
             username: testdb
             password: root
             driverClassName: oracle.jdbc.driver.OracleDriver
             maxPoolSize: 50
             idleTimeout: 60000
             connectionTestQuery: SELECT 1
             validationTimeout: 30000
             isAutoCommit: false
    If you have an Oracle service name, such as
    jdbc:oracle:thin:@//[HOST][:PORT]/SERVICE
    Use this (newer) format:
    wso2.datasources:
     dataSources:
       - name: SIDDHI_TEST_DB
         description: The datasource used for test database
         jndiConfig:
           name: jdbc/SIDDHI_TEST_DB
         definition:
           type: RDBMS
           configuration:
             jdbcUrl: jdbc:oracle:thin:@hostname:port/SERVICE
             username: testdb
             password: root
             driverClassName: oracle.jdbc.driver.OracleDriver
             maxPoolSize: 50
             idleTimeout: 60000
             connectionTestQuery: SELECT 1
             validationTimeout: 30000
             isAutoCommit: false

  • PostgreSQL

    wso2.datasources:
     dataSources:
       - name: SIDDHI_TEST_DB
         description: The datasource used for test database
         jndiConfig:
           name: jdbc/SIDDHI_TEST_DB
         definition:
           type: RDBMS
           configuration:
             jdbcUrl: jdbc:postgresql://hostname:port/testdb
             username: root
             password: root
             driverClassName: org.postgresql.Driver
             maxPoolSize: 10
             idleTimeout: 60000
             connectionTestQuery: SELECT 1
             validationTimeout: 30000
             isAutoCommit: false

  • MSSQL

    wso2.datasources:
     dataSources:
       - name: SIDDHI_TEST_DB
         description: The datasource used for test database
         jndiConfig:
           name: jdbc/SIDDHI_TEST_DB
         definition:
           type: RDBMS
           configuration:
             jdbcUrl: jdbc:sqlserver://hostname:port;databaseName=testdb
             username: root
             password: root
             driverClassName: com.microsoft.sqlserver.jdbc.SQLServerDriver
             maxPoolSize: 50
             idleTimeout: 60000
             connectionTestQuery: SELECT 1
             validationTimeout: 30000
             isAutoCommit: false
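
The following is a minimal sketch of how a Siddhi Application can reference such a datasource from an RDBMS store table, assuming the siddhi-store-rdbms extension is available; the table name SweetProductionTable is hypothetical.

@Store(type='rdbms', datasource='SIDDHI_TEST_DB')
define table SweetProductionTable (name string, amount double);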

Configuring Periodic State Persistence

Applicable only for Local, Docker, and Kubernetes modes.

This section is not applicable for Java and Python modes.

This section explains how to periodically persist the state of Siddhi either to a database or to a file system, in order to prevent data loss that can result from a system failure.

Persistence on Database

To perform periodic state persistence on a database, the database should be configured as a datasource and the relevant JDBC drivers should be added to Siddhi's class-path. Refer to the Configuring Databases section for more information.

To configure database-based periodic state persistence, add the state.persistence section with the following properties to the Siddhi configuration yaml, and pass it during startup.

| Parameter | Purpose | Required Value |
|-----------|---------|----------------|
| enabled | This enables data persistence. | true |
| intervalInMin | The time interval in minutes at which the state of Siddhi applications should be persisted. | 1 |
| revisionsToKeep | The number of revisions to keep in the system. When a new persistence takes place, older revisions are removed. | 3 |
| persistenceStore | The persistence store. | io.siddhi.distribution.core.persistence.DBPersistenceStore |
| config > datasource | The datasource to be used in persisting the state. The datasource should be defined in the Siddhi configuration yaml. For detailed instructions on how to configure a datasource, see Configuring Databases. | SIDDHI_PERSISTENCE_DB (a datasource defined under wso2.datasources in the Siddhi configuration yaml) |
| config > table | The table that should be created and used for persisting states. | PERSISTENCE_TABLE |

The following is a sample configuration for database-based state persistence.

state.persistence:
  enabled: true
  intervalInMin: 1
  revisionsToKeep: 3
  persistenceStore: io.siddhi.distribution.core.persistence.DBPersistenceStore
  config:
    datasource: <DATASOURCE NAME>   # A datasource with this name should be defined in wso2.datasources namespace
    table: <TABLE NAME>
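
For instance, filling in the example values from the table above, the configuration could look as follows.

state.persistence:
  enabled: true
  intervalInMin: 1
  revisionsToKeep: 3
  persistenceStore: io.siddhi.distribution.core.persistence.DBPersistenceStore
  config:
    datasource: SIDDHI_PERSISTENCE_DB   # defined under wso2.datasources
    table: PERSISTENCE_TABLE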

Persistence on File System

To configure file-system-based periodic state persistence, add the state.persistence section with the following properties to the Siddhi configuration yaml, and pass it during startup.

| Parameter | Purpose | Required Value |
|-----------|---------|----------------|
| enabled | This enables data persistence. | true |
| intervalInMin | The time interval in minutes at which the state of Siddhi applications should be persisted. | 1 |
| revisionsToKeep | The number of revisions to keep in the system. When a new persistence takes place, older revisions are removed. | 3 |
| persistenceStore | The persistence store. | io.siddhi.distribution.core.persistence.FileSystemPersistenceStore |
| config > location | A fully qualified folder location to which the revision files should be persisted. | siddhi-app-persistence |

The following is a sample configuration for file-system-based state persistence.

state.persistence:
  enabled: true
  intervalInMin: 1
  revisionsToKeep: 2
  persistenceStore: io.siddhi.distribution.core.persistence.FileSystemPersistenceStore
  config:
    location: siddhi-app-persistence

Configuring Siddhi Elements

Applicable only for Local, Docker, and Kubernetes modes.

This section is not applicable for Java and Python modes.

You can define some of these environment-specific configurations in the Siddhi Configuration yaml rather than configuring them in-line, so that your Siddhi Application becomes portable between environments.

Configuring Sources, Sinks and Stores

Multiple sources, sinks, and stores can be defined in the Siddhi Configuration yaml as refs, and referenced by several Siddhi Applications as described below.

The following is the syntax for the configuration.

siddhi:
  refs:
    -
      ref:
        name: '<name>'
        type: '<type>'
        properties:
          <property1>: <value1>
          <property2>: <value2>
For each separate ref you want to configure, add a subsection named ref under the refs subsection.

A ref configured in the Siddhi Configuration yaml can be referenced from a Siddhi Application source as follows.

@Source(ref='<name>',
        @map(type='json', @attributes( name='$.name', amount='$.quantity')))
define stream SweetProductionStream (name string, amount double);
Similarly, sinks and store tables can also be configured and referenced from Siddhi Apps; a sink sketch is given after the HTTP source example below.


Example: Configuring an HTTP source using ref

The following configuration defines the receiver URL and the basic.auth details of an HTTP source in the Siddhi Configuration yaml.

siddhi:
  refs:
    -
      ref:
        name: 'http-passthrough'
        type: 'http'
        properties:
          receiver.url: 'http://0.0.0.0:8008/sweet-production'
          basic.auth.enabled: false

This can be referenced in Siddhi Applications as follows.

@Source(ref='http-passthrough',
        @map(type='json', @attributes( name='$.name', amount='$.quantity')))
define stream SweetProductionStream (name string, amount double);
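
Similarly, a sink can be configured as a ref and referenced from a Siddhi Application. The following is a minimal sketch; the ref name http-publisher, the publisher.url value, and the stream definition are illustrative assumptions.

siddhi:
  refs:
    -
      ref:
        name: 'http-publisher'
        type: 'http'
        properties:
          publisher.url: 'http://localhost:8080/sweet-notification'

@Sink(ref='http-publisher',
      @map(type='json'))
define stream SweetNotificationStream (name string, amount double);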

Configuring Extensions

Siddhi extensions provide use-case-specific logic that is not available in Siddhi by default. Some of these extensions have system parameter configurations that define or modify their behavior. These parameters usually have default values, but when needed they can be overridden by configuring them in the Siddhi Configuration yaml and passing it at startup.

The following is the syntax for the configuration.

siddhi:
  extensions:
    -
      extension:
        name: <extension name>
        namespace: <extension namespace>
        properties:
          <key>: <value>
For each separate extension you want to configure, add a sub-section named extension under the extensions subsection.

The following are some examples of overriding default system properties via the Siddhi Configuration yaml.

Example 1: Defining service host and port for the TCP source

siddhi:
  extensions:
    - extension:
        name: tcp
        namespace: source
        properties:
          host: 0.0.0.0
          port: 5511
Example 2: Overriding the default RDBMS extension configuration

siddhi:
  extensions:
    - extension:
        name: rdbms
        namespace: store
        properties:
          mysql.batchEnable: true
          mysql.batchSize: 1000
          mysql.indexCreateQuery: "CREATE INDEX {{TABLE_NAME}}_INDEX ON {{TABLE_NAME}} ({{INDEX_COLUMNS}})"
          mysql.recordDeleteQuery: "DELETE FROM {{TABLE_NAME}} {{CONDITION}}"
          mysql.recordExistsQuery: "SELECT 1 FROM {{TABLE_NAME}} {{CONDITION}} LIMIT 1"

Configuring Authentication

Applicable only for Local, Docker, and Kubernetes modes.

This section is not applicable for Java and Python modes.

By default, Siddhi is configured with the username admin and the password admin. This can be updated by adding the related user management configuration under auth.configs in the Siddhi Configuration yaml and passing it at startup.

A sample auth.configs is as follows.

# Authentication configuration
auth.configs:
  type: 'local'        # Type of the IdP client used
  userManager:
    adminRole: admin   # Admin role which is granted all permissions
    userStore:         # User store
      users:
       -
         user:
           username: admin
           password: YWRtaW4=
           roles: 1
      roles:
       -
         role:
           id: 1
           displayName: admin
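
In the sample above, the password value YWRtaW4= is the base64 encoding of admin. Assuming a standard Unix shell is available, a new password can be encoded, for example, as follows.

echo -n 'newPassword' | base64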

Adding Extensions and Third Party Dependencies

Applicable for all modes.

For certain use cases, Siddhi might require extensions and/or third-party dependencies to provide capabilities that it does not offer by default.

This section provides details on how to add or update the extensions and/or third-party dependencies needed by Siddhi.

Adding to Siddhi Java Program

When running Siddhi as a Java library, the extension jars and/or third-party dependencies needed by Siddhi can simply be added to the Siddhi class-path. When Maven is used as the build tool, add them to the pom.xml file along with the other mandatory jars needed by Siddhi, as given in the Using Siddhi as a Library guide.

A sample of adding the siddhi-io-http extension to the Maven pom.xml is as follows.

<!--HTTP extension-->
<dependency>
  <groupId>org.wso2.extension.siddhi.io.http</groupId>
  <artifactId>siddhi-io-http</artifactId>
  <version>${siddhi.io.http.version}</version>
</dependency>   
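
As a sketch of the mandatory dependencies referred to above, the Siddhi core library itself would be added as follows; the version property name is an assumption for illustration, so check the Using Siddhi as a Library guide for the exact list and versions.

<!--Siddhi core library (version property name is illustrative)-->
<dependency>
  <groupId>io.siddhi</groupId>
  <artifactId>siddhi-core</artifactId>
  <version>${siddhi.version}</version>
</dependency>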

Refer to the Using Siddhi as a Java Library guide for more details.

Adding to Siddhi Local Microservice

The most commonly used Siddhi extensions are packed by default with the Siddhi Local Microservice distribution.

To add or update Siddhi extensions and/or third-party dependencies, add or replace the relevant OSGi bundles in the <SIDDHI_RUNNER_HOME>/lib directory.

Since the Local Microservice is OSGi-based, any libraries or drivers being added need to be checked to confirm that they are OSGi bundles; if not, they should be converted to OSGi bundles before being added to the <SIDDHI_RUNNER_HOME>/lib directory.

Converting Jars to OSGi Bundles

If a library or driver is not an OSGi bundle, it should be converted to one. Refer to the Converting Jars to OSGi Bundles documentation for details.

Refer to the Using Siddhi as a Local Microservice guide for more details.

Adding to Siddhi Docker Microservice

The most commonly used Siddhi extensions are packed by default with the Siddhi Docker Microservice distribution.

To add or update Siddhi extensions and/or third-party dependencies, a new docker image has to be built from either the siddhi-runner-base-ubuntu or the siddhi-runner-base-alpine image. These images contain the Linux OS, the JDK, and the Siddhi distribution.

A sample Dockerfile using siddhi-runner-base-alpine is as follows.

# use siddhi-runner-base
FROM siddhiio/siddhi-runner-base-alpine:5.1.0-m2
MAINTAINER Siddhi IO Docker Maintainers "siddhi-dev@googlegroups.com"

ARG HOST_BUNDLES_DIR=./files/bundles
ARG HOST_JARS_DIR=./files/jars
ARG JARS=${RUNTIME_SERVER_HOME}/jars
ARG BUNDLES=${RUNTIME_SERVER_HOME}/bundles

# copy entrypoint bash script to user home
COPY --chown=siddhi_user:siddhi_io init.sh ${WORKING_DIRECTORY}/

# copy bundles & jars to the siddhi-runner distribution
COPY --chown=siddhi_user:siddhi_io ${HOST_BUNDLES_DIR}/ ${BUNDLES}
COPY --chown=siddhi_user:siddhi_io ${HOST_JARS_DIR}/ ${JARS}

# expose ports
EXPOSE 9090 9443 9712 9612 7711 7611 7070 7443

RUN bash ${RUNTIME_SERVER_HOME}/bin/install-jars.sh

STOPSIGNAL SIGINT

ENTRYPOINT ["/home/siddhi_user/init.sh",  "--"]

Find the necessary artifacts for building the docker image in the docker-siddhi repository.

The OSGi bundles and jars that need to be added to the Siddhi Docker Microservice should be placed in the ${HOST_BUNDLES_DIR} (./files/bundles) and ${HOST_JARS_DIR} (./files/jars) folders, respectively, as defined in the above Dockerfile, so that they are bundled during the docker build phase.
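
Once the Dockerfile and these folders are in place, the custom image can be built, for example, as follows; the image tag is an illustrative assumption.

docker build -t custom-siddhi-runner:latest .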

Converting Jars to OSGi Bundles

If an extension or driver is not an OSGi bundle, it should be converted to one. Refer to the Converting Jars to OSGi Bundles documentation for details.

Refer to the Using Siddhi as a Docker Microservice guide for more details.

Adding to Siddhi Kubernetes Microservice

To add or update Siddhi extensions and/or third-party dependencies, a custom docker image has to be created by following the steps described in the Adding to Siddhi Docker Microservice section, including the necessary extensions and dependencies. The created image can then be referenced in the spec.pod subsection of the SiddhiProcess Kubernetes artifact created to deploy Siddhi on Kubernetes.
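
The following is a minimal, illustrative sketch of such a reference; the apiVersion, field layout, and image name are assumptions and should be verified against the SiddhiProcess CRD documentation for the operator version in use.

apiVersion: siddhi.io/v1alpha1
kind: SiddhiProcess
metadata:
  name: custom-siddhi-app                             # hypothetical name
spec:
  pod:
    image: 'myregistry/custom-siddhi-runner:latest'   # the custom image built above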

For details on creating the Kubernetes artifacts, refer to the Using Siddhi as a Kubernetes Microservice documentation.

Configuring Statistics

Applicable only for Local, Docker, and Kubernetes modes.

This section is not applicable for Java and Python modes.

Siddhi uses the Dropwizard Metrics library to calculate Siddhi and JVM statistics, and it can report the results via JMX MBeans, the console, or a database.

To enable statistics, the relevant configuration should be added to the Siddhi Configuration yaml as described below, and statistics collection should also be enabled in the Siddhi Application that is being monitored. Refer to the Siddhi Application Statistics documentation for enabling Siddhi Application level statistics.

The metrics-related configurations should be added under the wso2.metrics section in the Siddhi Configuration yaml file and passed during startup.

Configuring the Metrics Reporting Level

To modify the statistics reporting, the relevant metric names can be added under the wso2.metrics.levels subsection in the Siddhi Configuration yaml, along with the metrics level (i.e., OFF, INFO, DEBUG, TRACE, or ALL) as given below.

wso2.metrics:
  # Metrics Levels are organized from most specific to least:
  # OFF (most specific, no metrics)
  # INFO
  # DEBUG
  # TRACE (least specific, a lot of data)
  # ALL (least specific, all data)
  levels:
    # The root level configured for Metrics
    rootLevel: INFO
    # Metric Levels
    levels:
      jvm.buffers: 'OFF'
      jvm.class-loading: INFO
      jvm.gc: DEBUG
      jvm.memory: INFO

The available metrics reporting options are as follows.

Reporting via JMX MBeans

JMX MBeans are the default statistics reporting option of Siddhi. To enable statistics with the default configuration, add the metrics-related properties under the wso2.metrics section in the Siddhi Configuration yaml file, and pass it during startup.

A sample configuration is as follows.

wso2.metrics:
  enabled: true
This reports JMX MBeans under the name org.wso2.carbon.metrics. However, with this default configuration the JVM metrics are not measured.

A detailed JMX configuration along with the metrics reporting levels is as follows.

wso2.metrics:
  # Enable Metrics
  enabled: true
  jmx:
    # Register MBean when initializing Metrics
    registerMBean: true
    # MBean Name
    name: org.wso2.carbon:type=Metrics
  # Metrics Levels are organized from most specific to least:
  # OFF (most specific, no metrics)
  # INFO
  # DEBUG
  # TRACE (least specific, a lot of data)
  # ALL (least specific, all data)
  levels:
    # The root level configured for Metrics
    rootLevel: INFO
    # Metric Levels
    levels:
      jvm.buffers: 'OFF'
      jvm.class-loading: INFO
      jvm.gc: DEBUG
      jvm.memory: INFO

Reporting via Console

To enable statistics reporting by periodically printing the metrics to the console, add the following configuration to the Siddhi Configuration yaml file, and pass it during startup.

# This is the main configuration for metrics
wso2.metrics:
  # Enable Metrics
  enabled: false
  reporting:
    console:
      - # The name for the Console Reporter
        name: Console
        # Enable Console Reporter
        enabled: false
        # Polling Period in seconds.
        # This is the period for polling metrics from the metric registry and printing in the console
        pollingPeriod: 5
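
Note that the sample above ships with both enabled flags set to false; to actually turn on console reporting, both need to be set to true, as in this minimal sketch.

wso2.metrics:
  enabled: true
  reporting:
    console:
      - name: Console
        enabled: true
        pollingPeriod: 5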

Reporting via Database

To enable JDBC reporting and to periodically clean up outdated statistics from the database, first create a datasource with the relevant database configurations, and then add the related metrics properties given below to the Siddhi Configuration yaml file and pass it during startup.

The sample below refers to a datasource with the JNDI name jdbc/SiddhiMetricsDB; hence, the datasource configuration in the yaml should have jndiConfig.name set to jdbc/SiddhiMetricsDB. For detailed instructions on configuring a datasource, refer to Configuring Databases.

The scripts to create the required metrics tables are provided in the <SIDDHI_RUNNER_HOME>/wso2/runner/dbscripts directory.

The following is a sample configuration for reporting via database.

wso2.metrics:
  # Enable Metrics
  enabled: true
  jdbc:
  # Data Source Configurations for JDBC Reporters
    dataSource:
      # Default Data Source Configuration
      - &JDBC01
        # JNDI name of the data source to be used by the JDBC Reporter.
        # This data source should be defined under wso2.datasources.
        dataSourceName: java:comp/env/jdbc/SiddhiMetricsDB
        # Schedule regular deletion of metrics data older than a set number of days.
        # It is recommended that you enable this job to ensure your metrics tables do not get extremely large.
        # Deleting data older than seven days should be sufficient.
        scheduledCleanup:
          # Enable scheduled cleanup to delete Metrics data in the database.
          enabled: false
          # The scheduled job will cleanup all data older than the specified days
          daysToKeep: 7
          # This is the period for each cleanup operation in seconds.
          scheduledCleanupPeriod: 86400
  reporting:
    jdbc:
      - # The name for the JDBC Reporter
        name: JDBC
        # Enable JDBC Reporter
        enabled: true
        # Source of Metrics, which will be used to identify each metric in the database
        # Commented to use the hostname by default
        # source: Siddhi
        # Alias referring to the Data Source configuration
        dataSource: *JDBC01
        # Polling Period in seconds.
        # This is the period for polling metrics from the metric registry and updating the database with the values
        pollingPeriod: 60      

Metrics history and reporting interval

If the wso2.metrics.reporting.jdbc subsection is not enabled, the metrics history is not persisted for future reference. Also note that reporting only starts updating the database after the given pollingPeriod has elapsed.

Information about the parameters configured under the jdbc.dataSource subsection in the Siddhi Configuration yaml is as follows.

| Parameter | Default Value | Description |
|-----------|---------------|-------------|
| dataSourceName | java:comp/env/jdbc/SiddhiMetricsDB | java:comp/env/<datasource JNDI name>. The JNDI name of the datasource used to store metric data. |
| scheduledCleanup.enabled | false | If this is set to true, metrics data stored in the database is cleared periodically based on the scheduled time interval. |
| scheduledCleanup.daysToKeep | 3 | If scheduled clean-up of metric data is enabled, all metric data in the database older than the number of days specified in this parameter is deleted. |
| scheduledCleanup.scheduledCleanupPeriod | 86400 | The time interval in seconds at which metric data should be cleaned. |

Converting Jars to OSGi Bundles

To convert jar files to OSGi bundles, first download and save the non-OSGi jar in a preferred directory on your machine. Then, from the CLI, navigate to the <SIDDHI_RUNNER_HOME>/bin directory and issue the following command.

./jartobundle.sh <path to non OSGi jar> ../lib

This converts the jar to an OSGi bundle and places it in the <SIDDHI_RUNNER_HOME>/lib directory.
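
For example, a hypothetical MySQL driver jar could be converted as follows; the jar path and version are illustrative.

./jartobundle.sh ~/Downloads/mysql-connector-java-8.0.17.jar ../lib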
