# Flink sink for ClickHouse
A Flink sink for the ClickHouse database, powered by the Async Http Client.
A high-performance library for loading data into ClickHouse.
It has two triggers for loading data: by timeout and by buffer size.
| flink | flink-clickhouse-sink |
|:-----:|:---------------------:|
| 1.3.* | 1.0.0                 |
| 1.9.0 | 1.1.0                 |
| 1.9.0 | 1.2.0                 |
Maven coordinates: `ru.ivi.opensource:flink-clickhouse-sink:1.2.0`. As a dependency:
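```xml
<dependency>
    <groupId>ru.ivi.opensource</groupId>
    <artifactId>flink-clickhouse-sink</artifactId>
    <version>1.2.0</version>
</dependency>
```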
The flink-clickhouse-sink uses two groups of configuration properties: a common group and one group per sink in your operator chain.
The common part (set globally):

- `clickhouse.sink.num-writers` - number of writers, which build and send requests,
- `clickhouse.sink.queue-max-capacity` - max capacity, in batches, of the blanks queue (prepared batches),
- `clickhouse.sink.timeout-sec` - timeout for loading data,
- `clickhouse.sink.retries` - max number of retries,
- `clickhouse.sink.failed-records-path` - path for failed records,
- `clickhouse.sink.ignoring-clickhouse-sending-exception-enabled` - required boolean parameter that controls whether a ClickHouse sending exception is raised (`false`) or ignored (`true`) in the main thread. If it is `true`, exceptions while sending to ClickHouse are ignored and the failed data automatically goes to disk. If it is `false`, the sending exception is thrown in the "main" thread (the thread that called `ClickHouseSink::invoke`) and the data also goes to disk.
The sink part (set per sink in the chain; example values for both parts follow this list):

- `clickhouse.sink.target-table` - target table in ClickHouse,
- `clickhouse.sink.max-buffer-size` - buffer size, in records.
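For illustration, the two parts might be filled in like this. Every value below is an example for a small local setup, not a default shipped with the library:

```java
// common part - shared by all ClickHouse sinks in the job (example values)
Map<String, String> globalParameters = new HashMap<>();
globalParameters.put("clickhouse.sink.num-writers", "2");           // two writer threads
globalParameters.put("clickhouse.sink.queue-max-capacity", "10");   // at most 10 batches queued
globalParameters.put("clickhouse.sink.timeout-sec", "1");           // flush at least once per second
globalParameters.put("clickhouse.sink.retries", "3");
globalParameters.put("clickhouse.sink.failed-records-path", "/tmp/ch-failed-records");
globalParameters.put("clickhouse.sink.ignoring-clickhouse-sending-exception-enabled", "true");

// sink part - one Properties object per sink in the operator chain (example values)
Properties sinkProperties = new Properties();
sinkProperties.put("clickhouse.sink.target-table", "your_table");
sinkProperties.put("clickhouse.sink.max-buffer-size", "10000");     // records per batch
```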
The main thing: the sink works with events as strings in ClickHouse insert format (CSV-like). You have to convert each event to such a row, as in a usual database insert.
For example, you have an event POJO:

```java
class A {
    public final String str;
    public final int integer;

    public A(String str, int i) {
        this.str = str;
        this.integer = i;
    }
}
```

You have to convert this POJO like this:

```java
public static String convertToCsv(A a) {
    StringBuilder builder = new StringBuilder();
    builder.append("(");

    // add a.str
    builder.append("'");
    builder.append(a.str);
    builder.append("', ");

    // add a.integer
    builder.append(String.valueOf(a.integer));
    builder.append(" )");
    return builder.toString();
}
```

And then add the record to the sink.
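For instance, with the converter above (the values are chosen just for illustration):

```java
A a = new A("foo", 42);
String row = convertToCsv(a); // yields "('foo', 42 )"
```

Each such row is one tuple of a ClickHouse `INSERT ... VALUES` statement; the sink buffers rows and sends them in batches.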
You have to add global parameters for the Flink environment:

```java
StreamExecutionEnvironment environment = StreamExecutionEnvironment.createLocalEnvironment();
Map<String, String> globalParameters = new HashMap<>();

// ClickHouse cluster properties
globalParameters.put(ClickHouseClusterSettings.CLICKHOUSE_HOSTS, ...);
globalParameters.put(ClickHouseClusterSettings.CLICKHOUSE_USER, ...);
globalParameters.put(ClickHouseClusterSettings.CLICKHOUSE_PASSWORD, ...);

// sink common
globalParameters.put(ClickHouseSinkConsts.TIMEOUT_SEC, ...);
globalParameters.put(ClickHouseSinkConsts.FAILED_RECORDS_PATH, ...);
globalParameters.put(ClickHouseSinkConsts.NUM_WRITERS, ...);
globalParameters.put(ClickHouseSinkConsts.NUM_RETRIES, ...);
globalParameters.put(ClickHouseSinkConsts.QUEUE_MAX_CAPACITY, ...);
globalParameters.put(ClickHouseSinkConsts.IGNORING_CLICKHOUSE_SENDING_EXCEPTION_ENABLED, ...);

// set global parameters
ParameterTool parameters = ParameterTool.fromMap(globalParameters);
environment.getConfig().setGlobalJobParameters(parameters);
```
And add your sink like this:

```java
// create converter
public class YourEventConverter {
    public static String toClickHouseInsertFormat(YourEvent yourEvent) {
        String chFormat = ...;
        ....
        return chFormat;
    }
}

// create props for the sink
Properties props = new Properties();
props.put(ClickHouseSinkConsts.TARGET_TABLE_NAME, "your_table");
props.put(ClickHouseSinkConsts.MAX_BUFFER_SIZE, "10000");

// build the chain
DataStream<YourEvent> dataStream = ...;
dataStream.map(YourEventConverter::toClickHouseInsertFormat)
        .name("convert YourEvent to ClickHouse table format")
        .addSink(new ClickHouseSink(props))
        .name("your_table ClickHouse sink");
```
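Finally, start the pipeline with the usual Flink job submission (the job name here is an arbitrary example):

```java
environment.execute("your_table ClickHouse sink job");
```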