author    doufenghu <[email protected]>  2024-02-22 18:00:51 +0800
committer doufenghu <[email protected]>  2024-02-22 18:00:51 +0800
commit    8473d8e895066eacbb09fa7a0d8c3dde72d236bc (patch)
tree      42d674fce663221a93a11cd041e6b2f900eef851
parent    0e93f635739f7143ab729d2ed1d4b71b47140edc (diff)
[Doc] Correct the description of log.failures.only.
-rw-r--r--  docs/connector/sink/kafka.md                                                       | 16
-rw-r--r--  groot-examples/end-to-end-example/src/main/resources/examples/kafka_to_print.yaml  |  2
2 files changed, 9 insertions, 9 deletions
diff --git a/docs/connector/sink/kafka.md b/docs/connector/sink/kafka.md
index 56331bc..6793b21 100644
--- a/docs/connector/sink/kafka.md
+++ b/docs/connector/sink/kafka.md
@@ -11,14 +11,14 @@ In order to use the Kafka connector, the following dependencies are required. Th
Kafka sink custom properties. If a property belongs to the Kafka Producer Config, you can set it with the `kafka.` prefix.
-| Name | Type | Required | Default | Description |
-|-------------------------|---------|----------|---------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| topic | String | Yes | - | Topic name is required. It used to write data to kafka. |
-| kafka.bootstrap.servers | String | Yes | - | A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. This list should be in the form `host1:port1,host2:port2,...`. |
-| log.failures.only | Boolean | No | true | Whether the producer should fail on errors, or only log them; If this is set to true, then exceptions will be only logged, if set to false, exceptions will be eventually thrown. |
-| format | String | No | json | Data format. The default value is `json`. The Optional values are `json`, `protobuf`. |
-| [format].config | | No | - | Data format properties. Please refer to [Format Options](../formats) for details. |
-| kafka.config | | No | - | Kafka producer properties. Please refer to [Kafka Producer Config](https://kafka.apache.org/documentation/#producerconfigs) for details. |
+| Name                    | Type    | Required | Default | Description |
+|-------------------------|---------|----------|---------|-------------|
+| topic                   | String  | Yes      | -       | Topic name is required. It is used to write data to Kafka. |
+| kafka.bootstrap.servers | String  | Yes      | -       | A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. This list should be in the form `host1:port1,host2:port2,...`. |
+| log.failures.only       | Boolean | No       | true    | Defines whether the producer should fail on errors or only log them. If set to true, exceptions are only logged; if set to false, exceptions are eventually thrown and cause the streaming program to fail (and enter recovery). |
+| format                  | String  | No       | json    | Data format. The default value is `json`. The optional values are `json` and `protobuf`. |
+| [format].config         |         | No       | -       | Data format properties. Please refer to [Format Options](../formats) for details. |
+| kafka.config            |         | No       | -       | Kafka producer properties. Please refer to [Kafka Producer Config](https://kafka.apache.org/documentation/#producerconfigs) for details. |
## Example
This example reads data from the inline test source and writes it to the Kafka topic `SESSION-RECORD-TEST`. A minimal sketch of the sink block follows.
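The properties documented above map onto the project's YAML job definitions. Below is a minimal sketch of such a sink block, assuming a layout analogous to the `sources` block in `kafka_to_print.yaml`; the `sinks`, `type`, and `name` keys and the broker addresses are assumptions for illustration, while the property names themselves come from the table above.

```yaml
sinks:
  - type: kafka                                     # assumed block layout, mirroring the sources block
    name: kafka_sink_example                        # hypothetical name
    topic: SESSION-RECORD-TEST
    kafka.bootstrap.servers: host1:9092,host2:9092  # placeholder brokers
    # Only log producer errors (the default); set to false to fail the job instead.
    log.failures.only: true
    format: json
    # Any Kafka Producer Config key can be passed with the `kafka.` prefix, e.g.:
    kafka.acks: all
```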
diff --git a/groot-examples/end-to-end-example/src/main/resources/examples/kafka_to_print.yaml b/groot-examples/end-to-end-example/src/main/resources/examples/kafka_to_print.yaml
index 523d529..d3c46b7 100644
--- a/groot-examples/end-to-end-example/src/main/resources/examples/kafka_to_print.yaml
+++ b/groot-examples/end-to-end-example/src/main/resources/examples/kafka_to_print.yaml
@@ -12,7 +12,7 @@ sources:
kafka.session.timeout.ms: 60000
kafka.max.poll.records: 3000
kafka.max.partition.fetch.bytes: 31457280
- kafka.group.id: GROOT-STREAM-example-KAFKA-TO-PRINT
+ kafka.group.id: GROOT-STREAM-EXAMPLE-KAFKA-TO-PRINT
kafka.auto.offset.reset: latest
format: json