Feed Handlers

From 3forge Documentation

Revision as of 18:57, 10 January 2023

KDB Feed Handler

Prerequisites

Be sure that KDB is running before adding it as a data source. Fully tested versions include 3.x.

Configuring KX live streaming inside AMI Relays

KX live streaming integrates with kdb+tick via the AMI Feed handler mechanism. AMI can be configured to connect and subscribe to KX ticker plants. This enables AMI to receive all real-time updates from the ticker plant. Optionally, AMI can also be configured to recover from ticker plant log files before consuming real time updates.

AMI Relay Property Settings for the KX Feed Handler

The following properties should be set in the AMI Relay's config/local.properties file.

As with all feed handlers, add one uniquely named entry for each KX feed handler to the ami.relay.fh.active property. Be sure to include the default ssocket, cfg, and cmd feed handlers. For example, if you have only one KX feed handler:

  • ami.relay.fh.active=ssocket,cfg,cmd,kx1

Then, for each KX feed handler, include the following properties. NOTE: Be sure to include the proper feed handler name in the property name:

  • ami.relay.fh.kx1.start=true #must be set to true, otherwise it will be disabled
  • ami.relay.fh.kx1.class=com.f1.ami.relay.fh.AmiKxFH #required, must be exact
  • ami.relay.fh.kx1.props.kxUrl=hostname:port #location of the ticker plant
  • ami.relay.fh.kx1.props.kxUsername=username #optional
  • ami.relay.fh.kx1.props.kxPassword=password #optional
  • ami.relay.fh.kx1.props.amiId=KX_APP_ID #indicates what the application id of messages coming from this ticker plant will be mapped to (See AMI Backend API Manual for explanation on application ids)
  • ami.relay.fh.kx1.props.replayUrl=hostname:port #optional, only if recovery is required. See KDB Ticker plant Recovery steps below on how to configure and start kdb replay process.
  • ami.relay.fh.kx1.props.tableKeyMap=table1=col1,col2,coln|table2=col1,col2 #Format is a pipe-delimited list of tables, each with a comma-delimited list of columns. Only the data included in this property will be consumed by AMI.
  • ami.relay.fh.kx1.props.subscribeQuery=subscription_kx_query #optional
    • Default is: .u.sub[`;`]; (.u `i`L;.u.t!{0!meta x} each .u.t)
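To illustrate the tableKeyMap format described above, here is a short parser sketch (hypothetical helper, not AMI's actual implementation):

```python
def parse_table_key_map(value):
    # Pipe-delimited list of tables; each entry is "table=col1,col2,..."
    # with a comma-delimited list of key columns.
    mapping = {}
    for entry in value.split("|"):
        table, cols = entry.split("=", 1)
        mapping[table.strip()] = [c.strip() for c in cols.split(",")]
    return mapping

print(parse_table_key_map("table1=col1,col2,coln|table2=col1,col2"))
# {'table1': ['col1', 'col2', 'coln'], 'table2': ['col1', 'col2']}
```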

Example Config:

ami.relay.fh.active=ssocket,kx1
ami.relay.fh.kx1.start=true
ami.relay.fh.kx1.class=com.f1.ami.relay.fh.AmiKxFH
ami.relay.fh.kx1.props.kxUrl=localhost:1235
ami.relay.fh.kx1.props.kxUsername=demo
ami.relay.fh.kx1.props.kxPassword=demo123
ami.relay.fh.kx1.props.amiId=KX_APP_ID
ami.relay.fh.kx1.props.replayUrl=localhost:1234
ami.relay.fh.kx1.props.tableKeyMap=table1=col1,col2,coln|table2=col1,col2

KDB Ticker Plant Recovery Steps

To support replay, you must start a KDB replay process, which the AMI Relay feed handler uses to recover data from the ticker plant log file before resuming processing of real-time events from the ticker plant.

IMPORTANT: This KDB process must have read access to the ticker plant log file.

Setup process: Create a script called replay.q with the following contents:

upd:{[t;x] (neg first .z.w)(`upd;t;x)}   / push each replayed update asynchronously back to the subscribing relay

replay:{if[null first x;:0];-11!x}       / return 0 if no log file is given, otherwise stream the log with -11!

Startup process (where server_port matches the port in ami.relay.fh.kx1.props.replayUrl):

q replay.q -p server_port

Configuring AMI to stream JSON messages over Kafka

 
ami.relay.fh.active=ssocket,kafka

ami.relay.fh.kafka.class=com.f1.AmiKafkaFH

# insert the hostname of your kafka server here
ami.relay.fh.kafka.props.bootstrap.servers=<HOSTNAME>:9092

# insert a consumer group id string (if other processes are also consuming from the same topics, use that group id), e.g. test-group
ami.relay.fh.kafka.props.group.id=<GROUP ID>

ami.relay.fh.kafka.props.enable.auto.commit=true

# deserializer configuration to handle json messages
ami.relay.fh.kafka.props.value.deserializer=io.confluent.kafka.serializers.KafkaJsonDeserializer

ami.relay.fh.kafka.props.key.deserializer=org.apache.kafka.common.serialization.StringDeserializer

# insert the hostname of your kafka server running schema registry here
ami.relay.fh.kafka.props.schema.registry=http://<HOSTNAME>:8081

# insert the comma-delimited topic names being used for Kafka, e.g. topic-1,topic-2,topic-3
ami.relay.fh.kafka.props.topics=<TOPIC_NAME(S)>
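With KafkaJsonDeserializer configured as above, each message value on the wire is a UTF-8 encoded JSON object. A minimal sketch of producing such a payload (the field names here are hypothetical; any valid JSON object works):

```python
import json

# Hypothetical record; AMI receives it as a JSON message from the topic.
record = {"symbol": "ABC", "price": 101.5, "qty": 200}

# Serialize to the UTF-8 bytes a Kafka producer would send;
# KafkaJsonDeserializer reverses this on the consumer side.
payload = json.dumps(record).encode("utf-8")
print(payload.decode("utf-8"))
```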