Submit Apache Flink Job

$ cd /path/to/flink-1.10.0
$ ./bin/flink run /path/to/target/flinkapp-1.0.jar \
     /path/to/avro-ingest-dev.json \
     /path/to/google/service-account.json
// avro-ingest-dev.json
{
  "bigquery_project_id": "test-project-id",
  "bigquery_dataset_id": "test-dataset-id",
  "bigquery_table_id": "test-table-id",
  "kafka_brokers": "localhost:9092",
  "kafka_group_id": "flink-stream",
  "kafka_topic": "avro-etl-dev",
  "kafka_schema_registry_url": "http://localhost:8081",
  "kafka_start_partition": "0",
  "kafka_start_offset": "0",
  "kafka_start_timestamp": "1583389959359",
  "kafka_fetch_mode": "earliest",
  "format": "avro"
}
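The job reads this JSON file as its first command-line argument. As a rough illustration of what the application presumably does at startup, the sketch below (in Python, purely for illustration; the actual Flink job is not shown in this page) parses the config and fails fast if a key is missing. The list of required keys and the function name `load_job_config` are assumptions, not taken from the real application.

```python
import json

# Assumed required keys, mirroring avro-ingest-dev.json above; the real
# application may treat some of these as optional.
REQUIRED_KEYS = {
    "bigquery_project_id", "bigquery_dataset_id", "bigquery_table_id",
    "kafka_brokers", "kafka_group_id", "kafka_topic",
    "kafka_schema_registry_url", "kafka_fetch_mode", "format",
}

def load_job_config(path):
    """Parse the job-config JSON and fail fast on missing keys."""
    with open(path) as f:
        config = json.load(f)
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"config is missing keys: {sorted(missing)}")
    return config
```

Validating up front like this surfaces a misconfigured file before the job is submitted to the cluster, rather than failing mid-stream.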
$ ./bin/flink run /path/to/target/flinkapp-1.0.jar \
     /path/to/postgresql-ingest-dev.json \
     /path/to/google/service-account.json

kafka_fetch_mode values:

earliest - Reads the stream from the beginning.
latest - Reads the stream from the latest record.
timestamp - Reads the stream from a known timestamp. See kafka_start_timestamp.
custom - Reads the stream from a known partition and offset. See kafka_start_partition and kafka_start_offset.
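The four fetch modes can be thought of as a dispatch on kafka_fetch_mode, where timestamp and custom additionally consume the kafka_start_* keys. The sketch below (Python for illustration; the function name `resolve_start_position` is hypothetical) shows that mapping; how the real job wires the result into the Flink Kafka consumer is an assumption.

```python
def resolve_start_position(config):
    """Return a (mode, detail) pair describing where consumption starts.

    The mode names mirror the documented kafka_fetch_mode values;
    timestamp and custom pull their details from the kafka_start_* keys.
    """
    mode = config["kafka_fetch_mode"]
    if mode == "earliest":
        return ("earliest", None)   # start from the beginning of the topic
    if mode == "latest":
        return ("latest", None)     # start from the newest records
    if mode == "timestamp":
        # epoch milliseconds, as in kafka_start_timestamp above
        return ("timestamp", int(config["kafka_start_timestamp"]))
    if mode == "custom":
        return ("custom", (int(config["kafka_start_partition"]),
                           int(config["kafka_start_offset"])))
    raise ValueError(f"unknown kafka_fetch_mode: {mode!r}")
```

Note that the example config sets all three kafka_start_* keys even though kafka_fetch_mode is "earliest"; under this reading, the unused keys are simply ignored.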
