
Siddhi 5.1 Config Guide

Configuring Databases

Applicable only for Local, Docker, and Kubernetes modes.

This section is not applicable for Java and Python modes.

It is recommended to configure RDBMS databases as datasources under the datasources section of the Siddhi configuration YAML and pass it during startup. This allows database connections to be reused across multiple Siddhi apps.

By default, Siddhi stores product-specific data in a predefined embedded H2 database located in the <SIDDHI_RUNNER_HOME>/wso2/runner/database directory. This default H2 database is only suitable for development, testing, and production environments that do not store data.

However, for most production environments we recommend using an industry-standard RDBMS such as Oracle, PostgreSQL, MySQL, or MSSQL. In this case, users are expected to add the relevant database drivers to Siddhi's class-path.

Including Database Drivers

The database driver corresponding to the database should be an OSGi bundle, and it needs to be added to the <SIDDHI_RUNNER_HOME>/lib/ directory. If the driver is a plain jar, it should be converted to an OSGi bundle before being added.

Converting Non-OSGi Drivers

If the database driver is not an OSGi bundle, it should be converted to one. Refer to the Converting Jars to OSGi Bundles section for details.

The necessary table schemas are generated automatically by the features themselves, except for the tables needed for statistics reporting via databases.

Below are sample datasource configurations for each supported database type:

  • MySQL

    dataSources:
      - name: SIDDHI_TEST_DB
        description: The datasource used for test database
        jndiConfig:
          name: jdbc/SIDDHI_TEST_DB
        definition:
          type: RDBMS
          configuration:
            jdbcUrl: jdbc:mysql://hostname:port/testdb
            username: root
            password: root
            driverClassName: com.mysql.jdbc.Driver
            maxPoolSize: 10
            idleTimeout: 60000
            connectionTestQuery: SELECT 1
            validationTimeout: 30000
            isAutoCommit: false

  • Oracle
    There are two ways to configure Oracle. If you have a System Identifier (SID), use this (older) format:

      jdbc:oracle:thin:@[HOST][:PORT]:SID
      
    dataSources:
    - name: SIDDHI_TEST_DB
      description: The datasource used for test database
      jndiConfig:
        name: jdbc/SIDDHI_TEST_DB
      definition:
        type: RDBMS
        configuration:
          jdbcUrl: jdbc:oracle:thin:@hostname:port:SID
          username: testdb
          password: root
          driverClassName: oracle.jdbc.driver.OracleDriver
          maxPoolSize: 10
          idleTimeout: 60000
          connectionTestQuery: SELECT 1
          validationTimeout: 30000
          isAutoCommit: false
    If you have an Oracle service name, use this (newer) format:
      jdbc:oracle:thin:@//[HOST][:PORT]/SERVICE
      
    dataSources:
    - name: SIDDHI_TEST_DB
      description: The datasource used for test database
      jndiConfig:
        name: jdbc/SIDDHI_TEST_DB
      definition:
        type: RDBMS
        configuration:
          jdbcUrl: jdbc:oracle:thin:@hostname:port/SERVICE
          username: testdb
          password: root
          driverClassName: oracle.jdbc.driver.OracleDriver
          maxPoolSize: 50
          idleTimeout: 60000
          connectionTestQuery: SELECT 1
          validationTimeout: 30000
          isAutoCommit: false

  • PostgreSQL

    dataSources:
    - name: SIDDHI_TEST_DB
      description: The datasource used for test database
      jndiConfig:
        name: jdbc/SIDDHI_TEST_DB
      definition:
        type: RDBMS
        configuration:
          jdbcUrl: jdbc:postgresql://hostname:port/testdb
          username: root
          password: root
          driverClassName: org.postgresql.Driver
          maxPoolSize: 10
          idleTimeout: 60000
          connectionTestQuery: SELECT 1
          validationTimeout: 30000
          isAutoCommit: false

  • MSSQL

     dataSources:
     - name: SIDDHI_TEST_DB
       description: The datasource used for test database
       jndiConfig:
         name: jdbc/SIDDHI_TEST_DB
       definition:
         type: RDBMS
         configuration:
           jdbcUrl: jdbc:sqlserver://hostname:port;databaseName=testdb
           username: root
           password: root
           driverClassName: com.microsoft.sqlserver.jdbc.SQLServerDriver
           maxPoolSize: 10
           idleTimeout: 60000
           connectionTestQuery: SELECT 1
           validationTimeout: 30000
           isAutoCommit: false

Configuring Periodic State Persistence

Applicable only for Local, Docker, and Kubernetes modes.

This section is not applicable for Java and Python modes.

This section explains how to periodically persist the state of Siddhi either to a database or to the file system, in order to prevent data loss that can result from a system failure.

Persistence on Database

To perform periodic state persistence on a database, the database should be configured as a datasource and the relevant JDBC drivers should be added to Siddhi's class-path. Refer to the Configuring Databases section for more information.

To configure database-based periodic state persistence, add a statePersistence section with the following properties to the Siddhi configuration YAML, and pass it during startup.

Parameter | Purpose | Required Value
enabled | This enables state persistence. | true
intervalInMin | The time interval in minutes at which the state of Siddhi applications should be persisted | 1
revisionsToKeep | The number of revisions to keep in the system. When a new persistence takes place, the older revisions are removed. | 3
persistenceStore | The persistence store | io.siddhi.distribution.core.persistence.DBPersistenceStore
config > datasource | The datasource to be used in persisting the state. The datasource should be defined in the Siddhi configuration YAML. For detailed instructions on how to configure a datasource, see Configuring Databases. | SIDDHI_PERSISTENCE_DB (a datasource defined under datasources in the Siddhi configuration YAML)
config > table | The table that should be created and used for persisting states. | PERSISTENCE_TABLE

The following is a sample configuration for database-based state persistence.

statePersistence:
  enabled: true
  intervalInMin: 1
  revisionsToKeep: 3
  persistenceStore: io.siddhi.distribution.core.persistence.DBPersistenceStore
  config:
    datasource: <DATASOURCE NAME>   # A datasource with this name should be defined in datasources namespace
    table: <TABLE NAME>
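
For example, a complete sketch pairing this with a datasource named SIDDHI_PERSISTENCE_DB (an illustrative name, following the datasource format from Configuring Databases; the MySQL URL and credentials are placeholders) could look like the following.

dataSources:
  - name: SIDDHI_PERSISTENCE_DB
    description: The datasource used for state persistence
    jndiConfig:
      name: jdbc/SIDDHI_PERSISTENCE_DB
    definition:
      type: RDBMS
      configuration:
        jdbcUrl: jdbc:mysql://hostname:port/persistencedb   # placeholder URL
        username: root
        password: root
        driverClassName: com.mysql.jdbc.Driver
        maxPoolSize: 10

statePersistence:
  enabled: true
  intervalInMin: 1
  revisionsToKeep: 3
  persistenceStore: io.siddhi.distribution.core.persistence.DBPersistenceStore
  config:
    datasource: SIDDHI_PERSISTENCE_DB   # refers to the datasource defined above by name
    table: PERSISTENCE_TABLE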

Persistence on File System

To configure file-system-based periodic state persistence, add a statePersistence section with the following properties to the Siddhi configuration YAML, and pass it during startup.

Parameter | Purpose | Required Value
enabled | This enables state persistence. | true
intervalInMin | The time interval in minutes at which the state of Siddhi applications should be persisted | 1
revisionsToKeep | The number of revisions to keep in the system. When a new persistence takes place, the older revisions are removed. | 3
persistenceStore | The persistence store | io.siddhi.distribution.core.persistence.FileSystemPersistenceStore
config > location | A fully qualified folder location where the revision files should be persisted. | siddhi-app-persistence

The following is a sample configuration for file-system-based state persistence.

statePersistence:
  enabled: true
  intervalInMin: 1
  revisionsToKeep: 2
  persistenceStore: io.siddhi.distribution.core.persistence.FileSystemPersistenceStore
  config:
    location: siddhi-app-persistence   # a fully qualified folder path can also be given here

Persistence on AWS-S3

To configure AWS-S3-based periodic state persistence, add a statePersistence section with the following properties to the Siddhi configuration YAML, and pass it during startup.

Parameter | Purpose | Required Value
enabled | This enables state persistence. | true
intervalInMin | The time interval in minutes at which the state of Siddhi applications should be persisted | 1
revisionsToKeep | The number of revisions to keep in the system. When a new persistence takes place, the older revisions are removed. | 3
persistenceStore | The persistence store | io.siddhi.distribution.core.persistence.S3PersistenceStore
config > credentialProviderClass | Credential provider class name | software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider
config > accessKey | Access key of the user (only if the credentialProviderClass property is not provided) | *****access-key*****
config > secretKey | Secret key of the user (only if the credentialProviderClass property is not provided) | *****secret-key*****
config > bucketName | Name of the bucket where revision files should be persisted | siddhi-app-persistence
config > region | The region the bucket belongs to | us-west-2

The following are sample configurations for AWS-S3-based state persistence.

  • Sample configuration with credential provider class
    statePersistence:
      enabled: true
      intervalInMin: 1
      revisionsToKeep: 2
      persistenceStore: io.siddhi.distribution.core.persistence.S3PersistenceStore
      config:
        credentialProviderClass: software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider
        region: us-west-2
        bucketName: siddhi-app-persistence
  • Sample configuration with secret-key and access-key
    statePersistence:
      enabled: true
      intervalInMin: 1
      revisionsToKeep: 2
      persistenceStore: io.siddhi.distribution.core.persistence.S3PersistenceStore
      config:
        accessKey: access-key
        secretKey: secret-key
        region: us-west-2
        bucketName: siddhi-app-persistence

Persistence on GCS

To configure GCS-based periodic state persistence, add a statePersistence section with the following properties to the Siddhi configuration YAML, and pass it during startup.

Parameter | Purpose | Required Value
enabled | This enables state persistence. | true
intervalInMin | The time interval in minutes at which the state of Siddhi applications should be persisted | 1
revisionsToKeep | The number of revisions to keep in the system. When a new persistence takes place, the older revisions are removed. | 2
persistenceStore | The persistence store | io.siddhi.distribution.core.persistence.GCSPersistenceStore
config > credentialPath | Path to the file that contains the secret key | ${carbon.home}/resources/key.json
config > bucketName | Name of the bucket where revision files should be persisted | siddhi-app-persistence

The following are sample configurations for GCS-based state persistence.

  • Sample configuration with secret key file

    statePersistence:
      enabled: true
      intervalInMin: 1
      revisionsToKeep: 2
      persistenceStore: io.siddhi.distribution.core.persistence.GCSPersistenceStore
      config:
        credentialPath:  "${carbon.home}/resources/key.json"
        bucketName: siddhi-persistence

  • Sample configuration when the path to the key file is set as an environment variable

    statePersistence:
      enabled: true
      intervalInMin: 1
      revisionsToKeep: 2
      persistenceStore: io.siddhi.distribution.core.persistence.GCSPersistenceStore
      config:
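        # credentialPath is omitted; the key file location is expected to be supplied
        # via an environment variable (commonly GOOGLE_APPLICATION_CREDENTIALS for
        # Google Cloud clients)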
        bucketName: siddhi-persistence

Configuring Siddhi Elements

Applicable only for Local, Docker, and Kubernetes modes.

This section is not applicable for Java and Python modes.

You can move some environment-specific configurations into the Siddhi configuration YAML rather than configuring them in-line, so that your Siddhi application becomes portable between environments.

Configuring Sources, Sinks and Stores References

Multiple sources, sinks, and stores can be defined in the Siddhi configuration YAML as refs, and referred to by several Siddhi applications as described below.

The following is the syntax for the configuration.

refs:
  -
    ref:
      name: '<name>'
      type: '<type>'
      properties:
        <property1>: <value1>
        <property2>: <value2>

For each separate ref you want to configure, add a sub-section named ref under the refs subsection.

A ref configured in the Siddhi configuration YAML can be referred to from a Siddhi application source as follows.

@Source(ref='<name>',
        @map(type='json', @attributes( name='$.name', amount='$.quantity')))
define stream SweetProductionStream (name string, amount double);

Similarly, sinks and store tables can also be configured and referred to from Siddhi apps, as sketched below.
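
For instance, a minimal sink ref sketch is shown below; the ref name log-output and the log sink's prefix property are illustrative.

refs:
  -
    ref:
      name: 'log-output'
      type: 'log'
      properties:
        prefix: 'SweetTotals:'

A Siddhi application could then attach to it with @sink(ref='log-output', ...), analogous to the source example below.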

Example: Configuring http source using ref

The following configuration defines the URL and the basic.auth details in the Siddhi configuration YAML.

refs:
  -
    ref:
      name: 'http-passthrough'
      type: 'http'
      properties:
        receiver.url: 'http://0.0.0.0:8008/sweet-production'
        basic.auth.enabled: false

This can be referred to in Siddhi applications as follows.

@Source(ref='http-passthrough',
        @map(type='json', @attributes( name='$.name', amount='$.quantity')))
define stream SweetProductionStream (name string, amount double);

Configuring Extensions System Parameters

Siddhi extensions cater to use-case-specific logic that is not available by default in Siddhi. Some of these extensions have system parameter configurations to define or modify their behavior. These extensions usually have default values for the parameters, but when needed, they can be overridden by configuring the parameters in the Siddhi configuration YAML and passing it at startup.

The following is the syntax for the configuration.

extensions:
  -
    extension:
      name: <extension name>
      namespace: <extension namespace>
      properties:
        <key>: <value>

For each separate extension you want to configure, add a sub-section named extension under the extensions subsection.

The following are some examples of overriding default system parameters via the Siddhi configuration YAML.

Example 1: Defining service host and port for the TCP source

extensions:
  - extension:
      name: tcp
      namespace: source
      properties:
        host: 0.0.0.0
        port: 5511
Example 2: Overriding the default RDBMS extension configuration

extensions:
  - extension:
      name: rdbms
      namespace: store
      properties:
        mysql.batchEnable: true
        mysql.batchSize: 1000
        mysql.indexCreateQuery: "CREATE INDEX {{TABLE_NAME}}_INDEX ON {{TABLE_NAME}} ({{INDEX_COLUMNS}})"
        mysql.recordDeleteQuery: "DELETE FROM {{TABLE_NAME}} {{CONDITION}}"
        mysql.recordExistsQuery: "SELECT 1 FROM {{TABLE_NAME}} {{CONDITION}} LIMIT 1"

Configuring Siddhi Properties

Siddhi supports the following properties to specify distribution-based behaviors. For instance, all named aggregations in the distribution can be changed to distributed named aggregations with the following Siddhi properties.

System Property | Description | Possible Values | Optional | Default Value
shardId | The ID of the shard on which the distributed aggregation is running. This should be unique to a single shard. | Any string | No | -
partitionById | Enables/disables distributed aggregation for all aggregations running in one Siddhi manager (available from v4.3.3). | true/false | Yes | false

The following is an example of setting up distributed named aggregation.

properties:
  partitionById: true
  shardId: shard1

Configuring Authentication

Applicable only for Local, Docker, and Kubernetes modes.

This section is not applicable for Java and Python modes.

By default, Siddhi is configured with the username admin and password admin. This can be updated by adding the related user management configuration under authentication in the Siddhi configuration YAML, and passing it at startup.

A sample authentication configuration is as follows.

# Authentication configuration
authentication:
  type: 'local'        # Type of the IdP client used
  userManager:
    adminRole: admin   # Admin role which is granted all permissions
    userStore:         # User store
      users:
       -
         user:
           username: admin
           password: YWRtaW4=   # base64-encoded password ("admin")
           roles: 1
      roles:
       -
         role:
           id: 1
           displayName: admin

Adding Extensions and Third Party Dependencies

Applicable for all modes.

For certain use-cases, Siddhi may require extensions and/or third-party dependencies to fulfill functionality that it does not provide by default.

This section provides details on how to add or update the extensions and/or third-party dependencies needed by Siddhi.

Adding to Siddhi Java Program

When running Siddhi as a Java library, the extension jars and/or third-party dependencies needed by Siddhi can simply be added to the Siddhi class-path. When Maven is used as the build tool, add them to the pom.xml file along with the other mandatory jars needed by Siddhi, as given in the Using Siddhi as a Library guide.

A sample of adding the siddhi-io-http extension to the Maven pom.xml is as follows.

<!--HTTP extension-->
<dependency>
  <groupId>org.wso2.extension.siddhi.io.http</groupId>
  <artifactId>siddhi-io-http</artifactId>
  <version>${siddhi.io.http.version}</version>
</dependency>   

Refer to the guide for more details on using Siddhi as a Java library.

Adding to Siddhi Local Microservice

The most commonly used Siddhi extensions are packaged by default with the Siddhi Local Microservice distribution.

To add or update Siddhi extensions and/or third-party dependencies, you can use <SIDDHI_RUNNER_HOME>/jars and <SIDDHI_RUNNER_HOME>/bundles directories.

  1. <SIDDHI_RUNNER_HOME>/jars directory: maintained for jar files that do not have a corresponding OSGi bundle implementation. These jars are converted to OSGi bundles and copied to the Siddhi Runner distribution during server startup.
  2. <SIDDHI_RUNNER_HOME>/bundles directory: maintained for OSGi bundles that need to be copied to the Siddhi Runner distribution during server startup.

Changes to these directories are picked up after a server restart.

Refer to the guide for more details on using Siddhi as a local microservice.

Adding to Siddhi Docker Microservice

The most commonly used Siddhi extensions are packaged by default with the Siddhi Docker Microservice distribution.

To add or update Siddhi extensions and/or third-party dependencies, a new docker image has to be built from either the siddhi-runner-base-ubuntu or the siddhi-runner-base-alpine image. These images contain the Linux OS, a JDK, and the Siddhi distribution.

A sample Dockerfile using siddhi-runner-base-alpine is as follows.

# use siddhi-runner-base
FROM siddhiio/siddhi-runner-base-alpine:5.1.2
MAINTAINER Siddhi IO Docker Maintainers "siddhi-dev@googlegroups.com"

ARG HOST_BUNDLES_DIR=./files/bundles
ARG HOST_JARS_DIR=./files/jars
ARG JARS=${RUNTIME_SERVER_HOME}/jars
ARG BUNDLES=${RUNTIME_SERVER_HOME}/bundles

# copy bundles & jars to the siddhi-runner distribution
COPY --chown=siddhi_user:siddhi_io ${HOST_BUNDLES_DIR}/ ${BUNDLES}
COPY --chown=siddhi_user:siddhi_io ${HOST_JARS_DIR}/ ${JARS}

# expose ports
EXPOSE 9090 9443 9712 9612 7711 7611 7070 7443

RUN bash ${RUNTIME_SERVER_HOME}/bin/install-jars.sh

STOPSIGNAL SIGINT

ENTRYPOINT ["/home/siddhi_user/siddhi-runner/bin/runner.sh",  "--"]

Find the necessary artifacts for building the docker image in the docker-siddhi repository.

<DOCKERFILE_HOME>/siddhi-runner/files contains two directories (bundles and jars) into which you can copy the jars and bundles you need to include in the docker image.

  1. jars directory - Maintained for jar files that do not have a corresponding OSGi bundle implementation. These jars are converted to OSGi bundles and copied to the Siddhi Runner docker image during the docker build phase.
  2. bundles directory - Maintained for OSGi bundles that need to be copied to the Siddhi Runner docker image during the docker build phase.

Refer to the guide for more details on using Siddhi as a Docker microservice.

Adding to Siddhi Kubernetes Microservice

To add or update Siddhi extensions and/or third-party dependencies, a custom docker image including the necessary extensions and dependencies has to be created, following the steps described in the Adding to Siddhi Docker Microservice documentation.

The created image can then be referenced in the spec.pod subsection of the SiddhiProcess Kubernetes artifact created to deploy Siddhi in Kubernetes, as sketched below.
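
A minimal illustrative sketch follows; the image name is hypothetical, and the apiVersion and exact field layout should be verified against the siddhi-operator version in use.

apiVersion: siddhi.io/v1alpha1   # illustrative; check the version supported by your operator
kind: SiddhiProcess
metadata:
  name: sample-siddhi-app
spec:
  pod:
    image: registry.example.com/custom-siddhi-runner:1.0.0   # hypothetical custom image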

For details on creating the Kubernetes artifacts, refer to the Using Siddhi as Kubernetes Microservice documentation.

Configuring Statistics

Applicable only for Local, Docker, and Kubernetes modes.

This section is not applicable for Java and Python modes.

Siddhi uses the Dropwizard Metrics library to calculate Siddhi and JVM statistics, and it can report the results via JMX MBeans, the console, or a database.

To enable statistics, the relevant configuration should be added under the metrics section of the Siddhi configuration YAML as follows, and statistics collection should also be enabled in the Siddhi application being monitored. Refer to the Siddhi Application Statistics documentation for enabling Siddhi-application-level statistics.

Configuring the Metrics Reporting Level

To modify the statistics reporting, the relevant metric names can be added under the metrics.levels subsection of the Siddhi configuration YAML, along with the metrics level (i.e., OFF, INFO, DEBUG, TRACE, or ALL), as given below.

metrics:
  # Metrics Levels are organized from most specific to least:
  # OFF (most specific, no metrics)
  # INFO
  # DEBUG
  # TRACE (least specific, a lot of data)
  # ALL (least specific, all data)
  levels:
    # The root level configured for Metrics
    rootLevel: INFO
    # Metric Levels
    levels:
      jvm.buffers: 'OFF'
      jvm.class-loading: INFO
      jvm.gc: DEBUG
      jvm.memory: INFO

The available metrics reporting options are as follows.

Reporting via JMX MBeans

JMX MBeans are the default statistics reporting option of Siddhi. To enable statistics with the default configuration, add the metrics-related properties under the metrics section of the Siddhi configuration YAML file, and pass it during startup.

A sample configuration is as follows.

metrics:
  enabled: true

This reports JMX MBeans under the name org.wso2.carbon.metrics. However, with this default configuration, JVM metrics are not measured.

A detailed JMX configuration along with the metrics reporting levels is as follows.

metrics:
  # Enable Metrics
  enabled: true
  jmx:
    # Register MBean when initializing Metrics
    registerMBean: true
    # MBean Name
    name: org.wso2.carbon:type=Metrics
  # Metrics Levels are organized from most specific to least:
  # OFF (most specific, no metrics)
  # INFO
  # DEBUG
  # TRACE (least specific, a lot of data)
  # ALL (least specific, all data)
  levels:
    # The root level configured for Metrics
    rootLevel: INFO
    # Metric Levels
    levels:
      jvm.buffers: 'OFF'
      jvm.class-loading: INFO
      jvm.gc: DEBUG
      jvm.memory: INFO

Reporting via Console

To enable statistics by periodically printing the metrics to the console, add the following configuration to the Siddhi configuration YAML file, and pass it during startup.

# This is the main configuration for metrics
metrics:
  # Enable Metrics
  enabled: true
  reporting:
    console:
      - # The name for the Console Reporter
        name: Console
        # Enable Console Reporter
        enabled: true
        # Polling Period in seconds.
        # This is the period for polling metrics from the metric registry and printing in the console
        pollingPeriod: 5

Reporting via Database

To enable JDBC reporting and periodically clean up outdated statistics from the database, first create a datasource with the relevant database configuration, then add the metrics properties given below to the Siddhi configuration YAML file, and pass it during startup.

The sample below refers to the datasource with the JNDI name jdbc/SiddhiMetricsDB; hence, the datasource configuration in the YAML should set jndiConfig.name to jdbc/SiddhiMetricsDB. For detailed instructions on configuring a datasource, refer to Configuring Databases.
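
For reference, a minimal matching datasource sketch is shown below (the MySQL URL, credentials, and driver are placeholders; any supported RDBMS from Configuring Databases can be used).

dataSources:
  - name: SIDDHI_METRICS_DB
    description: The datasource used for metrics reporting
    jndiConfig:
      name: jdbc/SiddhiMetricsDB   # must match the JNDI name referenced by the metrics reporter
    definition:
      type: RDBMS
      configuration:
        jdbcUrl: jdbc:mysql://hostname:port/metricsdb   # placeholder URL
        username: root
        password: root
        driverClassName: com.mysql.jdbc.Driver
        maxPoolSize: 10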

The tables needed for statistics reporting are not created automatically; the scripts to create them are provided in the <SIDDHI_RUNNER_HOME>/wso2/runner/dbscripts directory.

A sample configuration for reporting via database is as follows.

metrics:
  enabled: true
  jdbc:
  # Data Source Configurations for JDBC Reporters
    dataSource:
      - &JDBC01
        dataSourceName: java:comp/env/jdbc/SiddhiMetricsDB
        scheduledCleanup:
          enabled: true
          daysToKeep: 7
          scheduledCleanupPeriod: 86400
  reporting:
    jdbc:
      - # The name for the JDBC Reporter
        name: JDBC
        enabled: true
        dataSource: *JDBC01
        pollingPeriod: 60      

Metrics history and reporting interval

If the metrics.reporting.jdbc subsection is not enabled, metrics history is not persisted for future reference. Also note that reporting only starts updating the database after the given pollingPeriod has elapsed.

Information about the parameters configured under the jdbc.dataSource subsection in the Siddhi Configuration yaml is as follows.

Parameter | Default Value | Description
dataSourceName | java:comp/env/jdbc/SiddhiMetricsDB | The JNDI name of the datasource used to store metric data, in the form java:comp/env/<datasource JNDI name>
scheduledCleanup.enabled | false | If set to true, metrics data stored in the database is cleared periodically based on the scheduled time interval
scheduledCleanup.daysToKeep | 3 | If scheduled clean-up of metric data is enabled, all metric data in the database older than the number of days specified in this parameter is deleted
scheduledCleanup.scheduledCleanupPeriod | 86400 | The time interval in seconds at which metric data should be cleaned

Converting Jars to OSGi Bundles

To convert jar files to OSGi bundles, first download and save the non-OSGi jar in a preferred directory on your machine. Then, from the CLI, navigate to the <SIDDHI_RUNNER_HOME>/bin directory and issue the following command.

./jartobundle.sh <path to non OSGi jar> ../lib

This converts the jar to an OSGi bundle and places it in the <SIDDHI_RUNNER_HOME>/lib directory.

Encrypt sensitive deployment configurations

The Cipher tool is used to encrypt sensitive data in deployment configurations. This tool works in conjunction with Secure Vault to replace sensitive plain-text data with an alias. The actual value is encrypted and securely stored in the Secure Vault, and at runtime it is retrieved via the alias. For more information, see Secure Vault.

Below is the default configuration for Secure Vault.

# Secure Vault Configuration
securevault:
  secretRepository:
    type: org.wso2.carbon.secvault.repository.DefaultSecretRepository
    parameters:
      privateKeyAlias: wso2carbon
      keystoreLocation: ${SIDDHI_RUNNER_HOME}/resources/security/securevault.jks
      secretPropertiesFile: ${SIDDHI_RUNNER_HOME}/conf/runner/secrets.properties
  masterKeyReader:
    type: org.wso2.carbon.secvault.reader.DefaultMasterKeyReader
    parameters:
      masterKeyReaderFile: ${SIDDHI_RUNNER_HOME}/conf/runner/master-keys.yaml

Information about the parameters configured under the securevault subsection in the Siddhi configuration YAML is as follows.

Parameter | Default Value | Description
secretRepository > type | org.wso2.carbon.secvault.repository.DefaultSecretRepository | The default Secret Repository implementation, based on the passwords and aliases given in the secrets.properties file and the JKS configured in the Secure Vault configuration
secretPropertiesFile | ${SIDDHI_RUNNER_HOME}/conf/runner/secrets.properties | Location of the secrets.properties file, which maps aliases to encrypted data
keystoreLocation | ${SIDDHI_RUNNER_HOME}/resources/security/securevault.jks | Keystore containing the certificate used to encrypt sensitive data
privateKeyAlias | wso2carbon | Alias of the certificate in the keystore used for encryption
masterKeyReader > type | org.wso2.carbon.secvault.reader.DefaultMasterKeyReader | The default MasterKeyReader implementation gets the list of required passwords from the Secret Repository and provides their values by reading system properties, environment variables, and the master-keys.yaml file
masterKeyReaderFile | ${SIDDHI_RUNNER_HOME}/conf/runner/master-keys.yaml | Location of the master-keys.yaml file, which contains the password used to access the keystore and decrypt the encrypted values at runtime
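
As an illustrative sketch, assuming the ${sec:<alias>} placeholder syntax of WSO2 Secure Vault, an encrypted value can then replace a plain-text one in the configuration, for example a datasource password (the alias name is hypothetical):

dataSources:
  - name: SIDDHI_TEST_DB
    definition:
      type: RDBMS
      configuration:
        jdbcUrl: jdbc:mysql://hostname:port/testdb
        username: root
        password: ${sec:siddhi.test.db.password}   # alias resolved from secrets.properties at runtime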

Configuring server properties

The Siddhi runner and tooling distributions are based on the WSO2 Carbon 5 kernel platform. The properties for the server can be configured under the wso2.carbon namespace.

A sample configuration is as follows.

wso2.carbon:
  id: siddhi-runner
  name: Siddhi Runner Distribution

Configure port offset

The port offset defines the number by which all ports defined in the runtime, such as the HTTP/S ports, are offset. For example, if the default HTTP port is 9090 and the ports >> offset is 1, the effective HTTP port becomes 9091. This configuration allows ports to be changed in a uniform manner across the transports.

Below is a sample configuration for the port offset.

wso2.carbon:
  id: siddhi-runner
  name: Siddhi Runner Distribution
  ports:
    offset: 1

Disabling host name verification

Hostname verification for the Admin APIs can be disabled on the server side with the hostnameVerificationEnabled property.

Below is a sample configuration.

wso2.carbon:
  id: siddhi-runner
  name: Siddhi Runner Distribution
  hostnameVerificationEnabled: false

Configuring Admin REST APIs

The Admin APIs can be configured under the transports >> http namespace.

A sample configuration and the parameters are as follows.

transports:
  http:
    listenerConfigurations:
      - 
        id: "default"
        host: "0.0.0.0"
        port: 9090
      - 
        id: "msf4j-https"
        host: "0.0.0.0"
        port: 9443
        scheme: https
        sslConfig:
          keyStore: "${carbon.home}/resources/security/wso2carbon.jks"
          keyStorePassword: wso2carbon
    transportProperties:
      - name: "server.bootstrap.socket.timeout"
        value: 60
      - name: "latency.metrics.enabled"
        value: false

Parameter | Default Value | Description
id | default | ID of the server
host | 0.0.0.0 | Hostname of the server
port | 8080 | Port of the APIs
scheme | http | Scheme of the APIs; can be either http or https
httpTraceLogEnabled | false | Enables HTTP trace logs
httpAccessLogEnabled | false | Enables HTTP access logs
socketIdleTimeout | 0 | Timeout for the socket on which requests are received. Not set by default.

SSL configurations (listenerConfigurations >> sslConfig)

Parameter | Default Value | Description
keyStore | ${carbon.home}/resources/security/wso2carbon.jks | The keystore file containing the server's private key
keyStorePass | wso2carbon | Password of the private key if it is encrypted
enableProtocols | All | SSL/TLS protocols to be enabled (e.g., TLSv1,TLSv1.1,TLSv1.2)
cipherSuites | All | List of ciphers to be used (e.g., TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA)
enableSessionCreation | - | Enables/disables new SSL session creation
sessionTimeOut | 0 | SSL session timeout. Not set by default.
handshakeTimeOut | 0 | SSL handshake timeout. Not set by default.

Transport Properties (transportProperties)

Parameter | Default Value | Description
server.bootstrap.connect.timeout | 15000 | Timeout in milliseconds to establish a connection
server.bootstrap.socket.timeout | 60 | Socket connection timeout in seconds
latency.metrics.enabled | false | Enables/disables latency metrics collected by the Carbon Metrics component

Configuring Databridge Transport

Siddhi uses the Databridge transport to send and receive events over the Thrift and Binary protocols. This can be used through the siddhi-io-wso2event extension.

A sample configuration is as follows.

transports:      
  databridge:
  # Configuration used for the databridge communication
    listenerConfigurations:
      workerThreads: 10
      .
      .
      .
    senderConfigurations:
    # Configuration of the Data Agents - to publish events through databridge
      agents:
          agentConfiguration:
            name: Thrift
            dataEndpointClass: org.wso2.carbon.databridge.agent.endpoint.thrift.ThriftDataEndpoint
            .
            .
            .

Here, transports >> databridge includes listenerConfigurations, to configure the databridge receiver used by the WSO2Event source, and senderConfigurations, to configure the agents used to publish events over databridge by the WSO2Event sink.

Configuring databridge listener

A sample configuration for the databridge listener and its properties are as follows.

transports:
  databridge:
    listenerConfigurations:
      workerThreads: 10
      maxEventBufferCapacity: 10
      eventBufferSize: 2000
      keyStoreLocation: ${sys:carbon.home}/resources/security/wso2carbon.jks
      keyStorePassword: wso2carbon
      clientTimeoutMin: 30
      # Data receiver configurations
      dataReceivers:
        -
          dataReceiver:
            type: Thrift
            properties:
              tcpPort: '7611'
              sslPort: '7711'
        - 
          dataReceiver:
            type: Binary
            properties:
              tcpPort: '9611'
              sslPort: '9711'
              tcpReceiverThreadPoolSize: '100'
              sslReceiverThreadPoolSize: '100'
              hostName: 0.0.0.0

Parameter | Default Value | Description
workerThreads | 10 | Number of worker threads consuming events
maxEventBufferCapacity | 10 | Maximum number of messages that can be queued internally in the message buffer
eventBufferSize | 2000 | Maximum number of events that can be stored in the queue
clientTimeoutMin | 30 | Session timeout value in minutes
keyStoreLocation | ${SIDDHI_RUNNER_HOME}/resources/security/wso2carbon.jks | Keystore file path
keyStorePassword | wso2carbon | Keystore password
dataReceivers | - | Generalized configuration for the different types of data receivers
dataReceivers >> dataReceiver >> type | - | Type of the data receiver (Thrift or Binary)

The parameters for the Thrift data receiver are as follows.

Parameter | Default Value | Description
tcpPort | 7611 | TCP port for the Thrift data receiver
sslPort | 7711 | SSL port for the Thrift data receiver

The parameters for the Binary data receiver are as follows.

Parameter | Default Value | Description
tcpPort | 7611 | TCP port for the Binary data receiver
sslPort | 7711 | SSL port for the Binary data receiver
tcpReceiverThreadPoolSize | 100 | Receiver pool size for the Binary TCP protocol
sslReceiverThreadPoolSize | 100 | Receiver pool size for the Binary SSL protocol
hostName | 0.0.0.0 | Hostname for the Binary receiver

Configuring databridge publisher

Note

By default, both the Thrift and Binary agents are started.

A sample configuration for the databridge agent (publisher) and its properties are as follows.

transports:
  databridge:
    senderConfigurations:
      agents:
        -
          agentConfiguration:
            name: Thrift
            dataEndpointClass: org.wso2.carbon.databridge.agent.endpoint.thrift.ThriftDataEndpoint
            publishingStrategy: async
            trustStorePath: '${sys:carbon.home}/resources/security/client-truststore.jks'
            trustStorePassword: 'wso2carbon'
            queueSize: 32768
            batchSize: 200
            corePoolSize: 1
            socketTimeoutMS: 30000
            maxPoolSize: 1
            keepAliveTimeInPool: 20
            reconnectionInterval: 30
            maxTransportPoolSize: 250
            maxIdleConnections: 250
            evictionTimePeriod: 5500
            minIdleTimeInPool: 5000
            secureMaxTransportPoolSize: 250
            secureMaxIdleConnections: 250
            secureEvictionTimePeriod: 5500
            secureMinIdleTimeInPool: 5000
            sslEnabledProtocols: TLSv1.1,TLSv1.2
            ciphers: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
        -
          agentConfiguration:
            name: Binary
            dataEndpointClass: org.wso2.carbon.databridge.agent.endpoint.binary.BinaryDataEndpoint
            publishingStrategy: async
            trustStorePath: '${sys:carbon.home}/resources/security/client-truststore.jks'
            trustStorePassword: 'wso2carbon'
            queueSize: 32768
            batchSize: 200
            corePoolSize: 1
            socketTimeoutMS: 30000
            maxPoolSize: 1
            keepAliveTimeInPool: 20
            reconnectionInterval: 30
            maxTransportPoolSize: 250
            maxIdleConnections: 250
            evictionTimePeriod: 5500
            minIdleTimeInPool: 5000
            secureMaxTransportPoolSize: 250
            secureMaxIdleConnections: 250
            secureEvictionTimePeriod: 5500
            secureMinIdleTimeInPool: 5000
            sslEnabledProtocols: TLSv1.1,TLSv1.2
            ciphers: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_128_GCM_SHA256

Parameter | Default Value | Description
name | Thrift / Binary | Name of the databridge agent
dataEndpointClass | org.wso2.carbon.databridge.agent.endpoint.thrift.ThriftDataEndpoint / org.wso2.carbon.databridge.agent.endpoint.binary.BinaryDataEndpoint | Class of the databridge agent initialised
publishingStrategy | async | Strategy used for publishing; can be either sync or async
trustStorePath | ${sys:carbon.home}/resources/security/client-truststore.jks | Truststore file path
trustStorePassword | wso2carbon | Truststore password
queueSize | 32768 | Queue size used to hold events before publishing
batchSize | 200 | Size of a published batch of events
corePoolSize | 1 | Core pool size of the threads used to buffer before publishing
maxPoolSize | 1 | Maximum pool size of the threads used to buffer before publishing
socketTimeoutMS | 30000 | Time in milliseconds for the socket to time out
keepAliveTimeInPool | 20 | Time for which threads are kept alive in the pool
reconnectionInterval | 30 | Reconnection interval in case of lost transmission
maxTransportPoolSize | 250 | Number of transport threads used for publishing
maxIdleConnections | 250 | Maximum idle connections maintained in the databridge
evictionTimePeriod | 5500 | Eviction time interval
minIdleTimeInPool | 5000 | Minimum idle time in the pool
secureMaxTransportPoolSize | 250 | Maximum transport pool size for SSL publishing
secureMaxIdleConnections | 250 | Maximum idle connections for SSL publishing
secureEvictionTimePeriod | 5500 | Eviction time period for SSL publishing
secureMinIdleTimeInPool | 5000 | Minimum idle time in the pool for SSL publishing
sslEnabledProtocols | TLSv1.1,TLSv1.2 | SSL-enabled protocols
ciphers | TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 | Ciphers used in transmission