
Event Processor

The Event Processor provides a configurable mechanism for delivering events from GreenArrow to a database or HTTP server.

GreenArrow only delivers events that it has been configured to log. If you haven’t configured event logging yet, then we recommend using the Event Notification System page as your starting point. It will direct you back to this page after the prerequisites are met.

Configuration

Definition

The event processor is configured in the JSON file found at /var/hvmail/control/event_processor.json.

Below is the definition for this configuration JSON document. The root of the document should be an object that defines the top-level keys like event_destinations.

You must define a place for events to be delivered: either (a) at least one destination must be defined in event_destinations, or (b) a logfile for writing events must be defined in logfile.filename.

If the event_destinations list contains at least one destination, the last one must match all events (matches must be set to { "all": true }).

event_destinations

array of hashes



An array of event destination hashes. The first event destination to match the incoming event has the event delivered to it. Subsequent event destinations are not used.

Each entry in the event_destinations array should have the following keys:

matches

hash



The matches hash defines what events this event destination will receive. All of the specified filters must match for this event.

all

boolean

When this is true, all events are delivered to this event destination. If present, this must be the only filter in the matches hash. This filter is best used as the final entry in your configuration.

mail_class

string or array of strings

Events of one of the specified Mail Classes match when using this filter. If combined with event_type, both filters must match.

This filter is case-insensitive.

event_type

string or array of strings

Events of the specified event types match when using this filter. If combined with mail_class, both filters must match.

This filter is case-insensitive.
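For example, a matches hash combining both filters might look like the following (the mail class name here is a placeholder; studio_open is an event type used elsewhere on this page):

```json
{
  "matches": {
    "mail_class": "my-mail-class",
    "event_type": [ "studio_open" ]
  }
}
```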

destination

hash



The destination hash defines where events matching this event destination are delivered. All destinations must define the type key; the remaining required keys depend on that type value.

type

string

The method of communication to use for this event destination. For HTTP POST, this value must be http_post. For database connections, this value must be custom_sql. To leave the event in the queue, this value must be leave_in_queue. To drop the event from the queue without delivering it, this value must be drop_from_queue.

If type is http_post, the event is delivered to the destination using an HTTP POST request. The following key must be defined for these destinations.

url

string

The URL that will receive the POSTed data.

If type is custom_sql, the event is delivered to the destination using a database connection. The following keys must be defined for these destinations.

db_dsn

string

The DSN of the target database. See the following examples.

For a MySQL database: DBI:mysql:database=event_db;host=127.0.0.1

For a PostgreSQL database: dbi:Pg:dbname=greenarrow;host=127.0.0.1

For an MS SQL database: dbi:ODBC:DRIVER={ms-sql};Server=1.1.1.1;port=1433;database=dbname

db_username

string / optional

The username that is used to login to the database.

db_password

string / optional

The password that is used to login to the database.

sql

string

The SQL statement to execute.

Use ? placeholders for bind variables. Each placeholder is filled, in order, with the field named at the same position in your bind_list array.

For example, with bind_list set to [ "event_time", "email", "event_type" ]:

INSERT INTO my_events ( time, recipient_email, event_type ) VALUES ( ?, ?, ? )
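This positional binding works like parameterized queries in most database APIs. As a rough analogy only (using Python's sqlite3 module here, not anything GreenArrow runs internally), with a hypothetical event record:

```python
import sqlite3

# Hypothetical event record; real field names come from the
# Types of Events documentation.
event = {
    "event_time": 1563195600,
    "email": "user@example.com",
    "event_type": "studio_open",
}
bind_list = ["event_time", "email", "event_type"]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE my_events (time INTEGER, recipient_email TEXT, event_type TEXT)"
)

# Each ? placeholder receives the event field named at the same
# position in bind_list.
conn.execute(
    "INSERT INTO my_events (time, recipient_email, event_type) VALUES (?, ?, ?)",
    [event[field] for field in bind_list],
)

print(conn.execute("SELECT * FROM my_events").fetchone())
# (1563195600, 'user@example.com', 'studio_open')
```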

bind_list

array of strings

The list of event fields to bind to your query. See the Types of Events documentation for a list of the fields that can be bound.

replace_non_ascii

string

If any of the data contains non-ASCII bytes, those bytes are replaced with the given string.

If type is leave_in_queue, the event will remain in the event queue indefinitely. This should only be used in two situations:

  1. As the only rule.
  2. As a portion of the rule set for a limited period of time. Keeping this as one of a set of rules long-term can drastically hinder GreenArrow’s performance. The performance penalty is proportional to the number of rows that accumulate.

If type is drop_from_queue, the event is dropped from the event queue without being delivered.

logfile

hash / optional



Set the keys in this hash to enable the optional File Delivery Method for events.

filename

string / optional

The filename to log events to. There are three requirements:

  1. The filesystem path must be absolute. For example, /var/log/events.log is acceptable, but events.log is not.

  2. The file’s parent directory must exist.

  3. The file must be writable or creatable by the root user.

No events are logged to a file unless a filename is specified.

filename_append_date

boolean / optional

When this key is true, GreenArrow appends the date to the configured filename in YYYY-MM-DD format. For example, if the configured filename is /var/log/events.log, and the date is July 15, 2019, events are written to /var/log/events.log.2019-07-15.

The appended date is the date that data is written to the file, in the server’s configured timezone. As a result, some of the events that get written to each file could have occurred on a day that precedes the date that is appended to the filename. For example, if an event is logged a few seconds before midnight on one day, it could get written to the next day’s file.

filename_append_date is false by default.

http_keep_alive

integer / optional

Enable HTTP Keep-Alive for HTTP endpoints. Set this value to the maximum number of connections. This will cause the event processor to re-use the same connection for multiple events, increasing throughput. Set to null to disable HTTP Keep-Alive.

configuration_mode

string / optional

Set this key to perl to run the event processor from the /var/hvmail/control/event_processor.conf legacy configuration file. If this value is not present, or is set to json, this JSON configuration file is used.

concurrency

integer / optional

Set this to the number of concurrent event processors that should execute simultaneously. Use this to increase the throughput of the event processor. By default, concurrency is set to 1 for a single event processor. If this value is set to 0, no event processors will run.
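Both http_keep_alive and concurrency are top-level keys, set alongside event_destinations. For example (the values 4 and 2 are illustrative; tune them to your workload):

```json
{
  "configuration_mode": "json",
  "http_keep_alive": 4,
  "concurrency": 2,

  "event_destinations": [
    {
      "matches": { "all": true },
      "destination": {
        "type": "http_post",
        "url": "http://example.com/event_receiver?source=ga"
      }
    }
  ]
}
```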

query_limit

integer / optional

Set the maximum number of events that can be delivered by a single execution of an event processor. By default, there is no limit on the number of events. If this value is set to 0, no limit is used.

Unless you have been instructed by GreenArrow technical support to use this option, please do not use it. (This may be needed in situations where a large backlog of events has accumulated. Attempts to process them all in a single run can degrade performance, or cause significant pressure on available memory.)

db_conn_cache_size

integer / optional

The maximum number of database connections that can be active simultaneously. This setting is only used for event destinations of type custom_sql. The default value is 10.

Separate connections are opened only for distinct db_dsn values; for the same DSN/username/password combination, the same connection is reused. This maximum applies per process (see the concurrency option), so the overall number of database connections originating from the event processor can reach db_conn_cache_size * concurrency.

If a new database connection is required and the maximum has already been reached, the least recently used connection is closed.

db_conn_cache_max_idle

integer / optional

The maximum length of time, in seconds, that a database connection is allowed to be idle. After this length of time, if no further events have been delivered to it, it is closed. The default value is 10.

Verification

To verify your configuration file, run the following command.

# hvmail_event_processor --check-syntax
No errors found in configuration.

This verifies that the configuration is syntactically valid. It does not attempt any connections, so it cannot confirm that your connection settings are correct.

Here’s an example of a syntax check on a configuration file that did not declare its event_destinations array.

# hvmail_event_processor --check-syntax
There was a problem with the configuration file /var/hvmail/control/event_processor.json:
configuration must define an 'event_destinations' array

You may also run the event processor in a mode that processes only the events for a single email address. This is a good way to test your configuration without starting the event processor, while leaving all other events in the queue.

hvmail_event_processor --process-by-email "user@example.com"

Reloading

The configuration file is automatically reloaded every 10 seconds. If an error is found in the configuration during a reload, events are not delivered.

Configuration Examples

HTTP Post Example

Here’s an example configuration that posts all events to an HTTP URL:

{
  "configuration_mode": "json",

  "event_destinations": [
    {
      "matches": { "all": true },
      "destination": {
        "type": "http_post",
        "url": "http://example.com/event_receiver?source=ga"
      }
    }
  ]
}
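To experiment with an http_post destination, you can point url at a throwaway receiver. The sketch below (Python standard library only) accepts any POST, prints the raw body, and returns 200; it makes no assumptions about the payload fields, which are covered in the Types of Events documentation:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class EventReceiver(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the raw POST body; the field layout GreenArrow posts is
        # documented separately, so just record whatever arrives.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print(f"received {length} bytes: {body[:200]!r}")
        # Reply 200 so the delivery is treated as successful.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

def serve(port=8080):
    """Serve until interrupted; point the http_post url at this host/port."""
    HTTPServer(("0.0.0.0", port), EventReceiver).serve_forever()
```

Call serve() on the machine named in your url value, then watch the bodies print as events are delivered.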

MySQL Example

Here’s an example configuration that sends all events, including all columns that are present by default as of 2017-08-24, to a MySQL database:

{
  "configuration_mode": "json",

  "event_destinations": [
    {
      "matches": { "all": true },
      "destination": {
        "type": "custom_sql",
        "db_dsn": "DBI:mysql:database=greenarrow;host=127.0.0.1",
        "db_username": "greenarrow",
        "db_password": "secretpassword",
        "sql": "INSERT IGNORE INTO events ( id, event_type, event_time, email, listid, list_name, list_label, sendid, bounce_type, bounce_code, bounce_text, click_url, click_tracking_id, studio_rl_seq, studio_rl_recipid, studio_campaign_id, studio_autoresponder_id, studio_is_unique, studio_mailing_list_id, studio_subscriber_id, studio_ip, studio_rl_seq_id, studio_rl_distinct_id, engine_ip, user_agent, json_before, json_after, timestamp, channel, status, is_retry, msguid, sender, mtaid, injected_time, message, outmtaid, sendsliceid, throttleid ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? )",
        "bind_list": [ "id", "event_type", "event_time", "email", "listid", "list_name", "list_label", "sendid", "bounce_type", "bounce_code", "bounce_text", "click_url", "click_tracking_id", "studio_rl_seq", "studio_rl_recipid", "studio_campaign_id", "studio_autoresponder_id", "studio_is_unique", "studio_mailing_list_id", "studio_subscriber_id", "studio_ip", "studio_rl_seq_id", "studio_rl_distinct_id", "engine_ip", "user_agent", "json_before", "json_after", "timestamp", "channel", "status", "is_retry", "msguid", "sender", "mtaid", "injected_time", "message", "outmtaid", "sendsliceid", "throttleid"]
      }
    }
  ]
}

PostgreSQL Example

Here’s an example configuration that sends all events, including all columns that are present by default as of 2017-08-24, to a PostgreSQL database:

{
  "configuration_mode": "json",

  "event_destinations": [
    {
      "matches": { "all": true },
      "destination": {
        "type": "custom_sql",
        "db_dsn": "dbi:Pg:dbname=greenarrow;host=127.0.0.1",
        "db_username": "greenarrow",
        "db_password": "secretpassword",
        "sql": "INSERT INTO events ( id, event_type, event_time, email, listid, list_name, list_label, sendid, bounce_type, bounce_code, bounce_text, click_url, click_tracking_id, studio_rl_seq, studio_rl_recipid, studio_campaign_id, studio_autoresponder_id, studio_is_unique, studio_mailing_list_id, studio_subscriber_id, studio_ip, studio_rl_seq_id, studio_rl_distinct_id, engine_ip, user_agent, json_before, json_after, timestamp, channel, status, is_retry, msguid, sender, mtaid, injected_time, message, outmtaid, sendsliceid, throttleid ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? ) ON CONFLICT DO NOTHING",
        "bind_list": [ "id", "event_type", "event_time", "email", "listid", "list_name", "list_label", "sendid", "bounce_type", "bounce_code", "bounce_text", "click_url", "click_tracking_id", "studio_rl_seq", "studio_rl_recipid", "studio_campaign_id", "studio_autoresponder_id", "studio_is_unique", "studio_mailing_list_id", "studio_subscriber_id", "studio_ip", "studio_rl_seq_id", "studio_rl_distinct_id", "engine_ip", "user_agent", "json_before", "json_after", "timestamp", "channel", "status", "is_retry", "msguid", "sender", "mtaid", "injected_time", "message", "outmtaid", "sendsliceid", "throttleid"]
      }
    }
  ]
}

Microsoft SQL Server Example

Here’s an example configuration that sends all events, including all columns that are present by default as of 2017-08-24, to a Microsoft SQL Server database:

{
  "configuration_mode": "json",

  "event_destinations": [
    {
      "matches": { "all": true },
      "destination": {
        "type": "custom_sql",
        "db_dsn": "dbi:ODBC:DRIVER={ms-sql};Server=10.0.0.1;port=1433;database=greenarrow",
        "db_username": "greenarrow",
        "db_password": "secretpassword",
        "sql": "INSERT INTO events ( id, event_type, event_time, email, listid, list_name, list_label, sendid, bounce_type, bounce_code, bounce_text, click_url, click_tracking_id, studio_rl_seq, studio_rl_recipid, studio_campaign_id, studio_autoresponder_id, studio_is_unique, studio_mailing_list_id, studio_subscriber_id, studio_ip, studio_rl_seq_id, studio_rl_distinct_id, engine_ip, user_agent, json_before, json_after, timestamp, channel, status, is_retry, msguid, sender, mtaid, injected_time, message, outmtaid, sendsliceid, throttleid ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? )",
        "bind_list": [ "id", "event_type", "event_time", "email", "listid", "list_name", "list_label", "sendid", "bounce_type", "bounce_code", "bounce_text", "click_url", "click_tracking_id", "studio_rl_seq", "studio_rl_recipid", "studio_campaign_id", "studio_autoresponder_id", "studio_is_unique", "studio_mailing_list_id", "studio_subscriber_id", "studio_ip", "studio_rl_seq_id", "studio_rl_distinct_id", "engine_ip", "user_agent", "json_before", "json_after", "timestamp", "channel", "status", "is_retry", "msguid", "sender", "mtaid", "injected_time", "message", "outmtaid", "sendsliceid", "throttleid"]
      }
    }
  ]
}

HTTP Post and PostgreSQL Example

Here’s an example configuration that sends the id, event_type and event_time values for studio_open events to a database table, and everything else to an HTTP URL:

{
  "configuration_mode": "json",

  "event_destinations": [
    {
      "matches": { "event_type": [ "studio_open" ] },
      "destination": {
        "type": "custom_sql",
        "db_dsn": "dbi:Pg:dbname=greenarrow;host=127.0.0.1",
        "db_username": "greenarrow",
        "db_password": "secretpassword",
        "sql": "INSERT INTO events ( id, event_type, time_int ) VALUES ( ?, ?, ? )",
        "bind_list": [ "id", "event_type", "event_time" ]
      }
    },
    {
      "matches": { "all": true },
      "destination": {
        "type": "http_post",
        "url": "http://example.com/event_receiver?source=ga"
      }
    }
  ]
}

Logfile Example

Here’s an example configuration that writes events to the /var/log/greenarrow-events.log logfile:

{
  "logfile": {
    "filename": "/var/log/greenarrow-events.log",
    "filename_append_date": false
  }
}

Do Nothing Example

This is the default configuration, which leaves all events in queue:

{
  "event_destinations": [
    {
      "matches": {
        "all": true
      },
      "destination": {
        "type": "leave_in_queue"
      }
    }
  ]
}

Event Processor Logs

The event processor logs are kept in /var/hvmail/log/event-processor. Use these commands to diagnose why an event is not being delivered.

For a streaming view of the log as it happens:

tail -F /var/hvmail/log/event-processor/current | tai64nlocal

To see a particular time range of events:

logdir_select_time --start "2015-11-24 19:00" --end "2015-11-25 00:00" --dir /var/hvmail/log/event-processor | tai64nlocal

Starting and Stopping the Event Processor

To check the running state of the event processor:

hvmail_init status | grep hvmail-event-processor

To start the event processor:

svc -u /service/hvmail-event-processor

To stop the event processor:

svc -d /service/hvmail-event-processor

Legacy Event Processor

Prior to the JSON configuration, the event processor was configured using a Perl file, /var/hvmail/control/event_processor.conf, that was loaded directly into the running code.

If the JSON file exists and its configuration_mode is not set to perl, the legacy configuration file is not loaded. To continue using the legacy configuration file, edit /var/hvmail/control/event_processor.json and set configuration_mode to perl.
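For example, a minimal JSON file that hands control back to the legacy configuration might contain only the following (assuming no other keys are required when configuration_mode is perl):

```json
{
  "configuration_mode": "perl"
}
```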

To convert an existing legacy configuration file to the new syntax, run:

hvmail_event_processor --convert-legacy

This does the following:

  1. Generates a new configuration file /var/hvmail/control/event_processor.json based on the data contained in the legacy file /var/hvmail/control/event_processor.conf.
  2. Renames the legacy file /var/hvmail/control/event_processor.conf to /var/hvmail/control/event_processor.conf.old.