Replies: 4 comments
-
Thanks for kicking off this discussion, @derek1ee. This is really a great start. First, one question about this goal, as stated above:
If a developer is working on and testing a consumer application, and the topic already has messages, can't the developer simply rerun their consumer starting from an earlier offset? Sure, the developer may want to manually produce atypical messages to exercise the consumer application's code paths, but that seems more in line with the first goal. Plus, goal 1 seems to be the driving objective (at least initially).

Second, as you point out in the "Proposal" section, there are multiple places where the ability to manually produce one (or more) messages can be surfaced. I might argue that producing messages via a command is the underlying common mechanism that can be exposed in several ways, as you suggest: from the command palette, on a particular topic, as an action in the message viewer, as a copy action on a message (or set of messages) followed by a paste action or "Send messages to..." action on a topic, etc. One challenge is that those different calls to action will each have quite different context about the topic, schema, or even sample messages to start from.

Third, your mention of the value of a file-based approach and the REST Client extension is interesting. If we can define a file format that allows specifying the message contents (key, value, plus optionally schema refs, headers, timestamps, and partition numbers), then we could start simple and expand. For example, if our first command were "Send messages to topic..." and the input were a file of messages, then in a first release users could hand-craft these files (possibly checking them into their git repo) and easily produce messages. Of course, hand-crafting is not ideal and doesn't quite solve goal 3, but it would achieve goal 1 and maybe goal 4. Plus, subsequent releases could make progress on goal 3 by adding other options, such as copying messages from the message viewer and pasting them into a messages file. Or using a form to add more messages. Or generating "template" messages in a file based upon a schema. Or ....

The existing Kafka REST API supports producing records with an already-defined JSON structure for the produce request payloads; see our docs and the API reference. That API supports multiple records per request via the "Transfer-Encoding: chunked" header, which means we could have multiple records in each file. The structure already supports all of the optional attributes like partition number, timestamp, and headers, and it already accepts the JSON representations of Avro- and Protobuf-formatted records. Having JSON files with one or more records would be useful, but we might also consider combining this pure JSON representation with the REST Client file's approach of including the HTTP method and URL. Could we just reuse the REST file format (which is used by multiple tools), but have our extension interpret the URL appropriately using the connection information?
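To make that concrete, here is a rough sketch of what such a file might look like if we reused the REST Client format as-is, with each payload following the Kafka REST API's JSON record structure. The host, cluster ID, topic, and field values below are placeholders, and the exact payload shape would need to be checked against the API reference:

```http
### Produce one JSON record to the "orders" topic
POST http://localhost:8082/v3/clusters/my-cluster-id/topics/orders/records
Content-Type: application/json

{
  "partition_id": 0,
  "headers": [
    { "name": "source", "value": "bWFudWFsLXRlc3Q=" }
  ],
  "key": { "type": "JSON", "data": "order-123" },
  "value": { "type": "JSON", "data": { "status": "created", "amount": 42 } }
}

### REST Client separates requests with "###", which could map to one record each
POST http://localhost:8082/v3/clusters/my-cluster-id/topics/orders/records
Content-Type: application/json

{
  "key": { "type": "JSON", "data": "order-124" },
  "value": { "type": "JSON", "data": { "status": "cancelled" } }
}
```

The extension could then interpret the URL (or just the cluster and topic segments) against the selected connection, rather than requiring a literally reachable host.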
One big advantage is that other tooling could use the same already-in-use file formats, such as for testing. Anyway, great start to the discussion!
-
I agree, I think Goal #2 is nice-to-have and lower priority than Goal #1. As we look at the design, I can also imagine Goal #2 being accomplished via something like the following, working in conjunction with the file-based approach:
-
Notes from offline conversation:
-
For file-based produce, another extension, Tools for Apache Kafka, takes a very similar approach to what's proposed here. I think it'd be worth taking a look to see if we want to be compatible with it.
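For reference, that extension drives producing from `.kafka` files. As I recall from its documentation, a producer block looks roughly like the following (the topic, key, and record content here are invented for illustration), with `###` separating blocks:

```
PRODUCER manual-test-message
topic: orders
key: order-123
{ "status": "created", "amount": 42 }

###
```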
-
Goals
Non-Goals
Background
When authoring a Kafka consumer client, it's common for the developer to first make sure the client is properly configured and can receive messages from the Kafka topic. For a topic that doesn't yet have any producers, this is done by manually producing a message.
As more logic is added to the consumer client, the developer will want to test that logic. If the code didn't work, the developer would fix the code, then rerun the test with the same message ("re-produce" the message).
As the code matures, the developer may build up a collection of messages the code should be tested against, e.g., messages with different payloads for different code paths, malformed messages for error handling, etc.
Manually producing messages to a Kafka topic can be done today in a few different ways:
Proposal
This proposal adds one or more additional ways to manually produce messages to a Kafka topic:
- A file-based approach, similar to the REST Client extension: the file is of type HTTP, and the extension is the file handler for such type and can overlay controls such as "send request", or "produce" in our case.
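To illustrate the overlay-controls idea, here is a minimal TypeScript sketch using VS Code's CodeLens API (which is how REST Client renders its "send request" link). The `confluent.produceFromFile` command id, the `http` language selector, and the `POST`-line heuristic are assumptions for illustration only:

```typescript
import * as vscode from "vscode";

// Sketch: overlay a "produce" CodeLens above each request in a messages file,
// similar to how the REST Client extension overlays "send request".
class ProduceCodeLensProvider implements vscode.CodeLensProvider {
  provideCodeLenses(document: vscode.TextDocument): vscode.CodeLens[] {
    const lenses: vscode.CodeLens[] = [];
    for (let line = 0; line < document.lineCount; line++) {
      // Heuristic: treat each "POST ..." line as the start of a produce request.
      if (document.lineAt(line).text.startsWith("POST ")) {
        lenses.push(
          new vscode.CodeLens(new vscode.Range(line, 0, line, 0), {
            title: "produce",
            command: "confluent.produceFromFile", // hypothetical command id
            arguments: [document.uri, line],
          })
        );
      }
    }
    return lenses;
  }
}

export function activate(context: vscode.ExtensionContext) {
  context.subscriptions.push(
    // Assumes produce files are associated with the "http" language id.
    vscode.languages.registerCodeLensProvider(
      { language: "http" },
      new ProduceCodeLensProvider()
    ),
    vscode.commands.registerCommand(
      "confluent.produceFromFile",
      (uri: vscode.Uri, line: number) => {
        // A real implementation would parse the request at `line`, resolve the
        // URL against the selected connection, and call the produce REST API.
        vscode.window.showInformationMessage(
          `Producing from ${uri.fsPath}, request at line ${line + 1}`
        );
      }
    )
  );
}
```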