bug: JDBC sink lost update #12026
Comments
Another case shows that even updating non-pk columns (from the MySQL sink table's perspective) can cause data loss:

create table jdbc_table (id int, v int);
create materialized view v as select distinct on(id, v) id, v from jdbc_table;
CREATE SINK s_sink FROM v WITH (
    connector='jdbc',
    jdbc.url='jdbc:mysql://xxx',
    table.name='jdbc_table',
    primary_key='id',
    type='upsert'
);
insert into jdbc_table select i, i from generate_series(1, 10000) i;
-- data is lost after this update
update jdbc_table set v = v + 1;
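Why this loses data: the stream key of v is (id, v), so the UPDATE turns each row (i, i) into a Delete of (i, i) under one stream key and an Insert of (i, i+1) under another, with no ordering guarantee between the two. The sink, which is keyed only on id, can therefore apply the insert before the delete. A minimal sketch of that race against an id-keyed table (hypothetical types, not RisingWave's actual executor code):

```rust
use std::collections::HashMap;

// Hypothetical change-log op as seen by the sink, keyed on `id` only.
enum Op {
    Insert { id: i32, v: i32 },
    Delete { id: i32 },
}

fn main() {
    // Simulated MySQL table state, keyed by `id` (the sink's primary key).
    let mut table: HashMap<i32, i32> = HashMap::from([(1, 1)]);

    // `update jdbc_table set v = v + 1` on row (1, 1) emits Delete(1, 1)
    // and Insert(1, 2) under two *different* stream keys, so the sink may
    // receive the insert first. Assume exactly that order here.
    let ops = [Op::Insert { id: 1, v: 2 }, Op::Delete { id: 1 }];

    for op in ops {
        match op {
            // Upsert semantics (e.g. INSERT ... ON DUPLICATE KEY UPDATE on MySQL).
            Op::Insert { id, v } => { table.insert(id, v); }
            // The delete matches on the sink pk alone: DELETE ... WHERE id = ?
            Op::Delete { id } => { table.remove(&id); }
        }
    }

    // The row is gone even though the update never touched the pk column.
    assert!(table.is_empty());
}
```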
Just FYI, we banned updates on the primary key column in #8569 as a precaution. However, since we allow users to specify the primary key columns for sinks themselves, that ban cannot cover the issue here.
This happens because the stream key is different from the user-defined primary key columns of the sink. A solution is to do additional compaction in the sink executor per barrier, as sketched below.
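A minimal sketch of such per-barrier compaction, assuming delete ops in the change log carry the full old row (all types and names here are hypothetical, not the actual sink executor API): buffer every op between barriers in a map keyed by the sink's primary key, net out matching insert/delete pairs by row value, and flush at most one statement per key on the barrier.

```rust
use std::collections::HashMap;

/// Per-barrier buffer for one sink, keyed by the user-defined sink pk.
/// For each pk we keep a signed count per row value: an insert of `v`
/// adds one, a delete of `v` subtracts one, so matching pairs cancel
/// out regardless of the order in which they arrived.
#[derive(Default)]
struct SinkBuffer {
    pending: HashMap<i32, HashMap<i32, i64>>, // pk -> (v -> net count)
}

impl SinkBuffer {
    fn insert(&mut self, id: i32, v: i32) {
        *self.pending.entry(id).or_default().entry(v).or_insert(0) += 1;
    }
    /// Deletes must carry the old row value, not just the pk.
    fn delete(&mut self, id: i32, v: i32) {
        *self.pending.entry(id).or_default().entry(v).or_insert(0) -= 1;
    }

    /// On a barrier, emit at most one statement per pk, so the
    /// delete/insert race within the epoch can no longer lose rows.
    fn flush_on_barrier(&mut self, table: &mut HashMap<i32, i32>) {
        for (id, counts) in self.pending.drain() {
            // The row value with a net positive count is the live row.
            match counts.into_iter().find(|&(_, n)| n > 0) {
                Some((v, _)) => { table.insert(id, v); } // UPSERT id -> v
                None => { table.remove(&id); }           // DELETE by id
            }
        }
    }
}

fn main() {
    let mut table: HashMap<i32, i32> = HashMap::from([(1, 1)]);
    let mut buf = SinkBuffer::default();

    // Same racy order as before: the insert is buffered before the delete.
    buf.insert(1, 2);
    buf.delete(1, 1);

    buf.flush_on_barrier(&mut table);
    assert_eq!(table.get(&1), Some(&2)); // the row survives with the new value
}
```

Netting by row value rather than by arrival order matters here: within an epoch the Delete(1, 1) may be serialized after the Insert(1, 2), and a naive last-op-wins compaction would wrongly drop the key.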
Describe the bug
In RisingWave
Table definition in MySQL:
Even if we set streaming_parallelism = 1, we would still hit this issue, because the sink executor issues a delete + insert pair for each row individually, instead of applying all the deletes of before-rows first and then all the inserts of after-rows. A sketch of that reordering follows.
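A sketch of that delete-all-then-insert-all ordering, again with hypothetical types rather than the real executor code: partition each batch's ops and flush every delete before any insert, so the delete of an old row version can no longer clobber the freshly written new version of the same sink key. Note this only helps among ops that end up in the same reordered batch.

```rust
use std::collections::HashMap;

enum Op {
    Insert { id: i32, v: i32 },
    Delete { id: i32 },
}

/// Apply one batch of ops with all deletes first, then all inserts,
/// instead of an interleaved delete + insert per row. Within the batch,
/// the delete for an old row version can then never land after the
/// insert of the new version that shares its sink pk.
fn write_batch(table: &mut HashMap<i32, i32>, ops: Vec<Op>) {
    let (deletes, inserts): (Vec<_>, Vec<_>) =
        ops.into_iter().partition(|op| matches!(op, Op::Delete { .. }));

    for op in deletes {
        if let Op::Delete { id } = op {
            table.remove(&id); // DELETE FROM ... WHERE id = ?
        }
    }
    for op in inserts {
        if let Op::Insert { id, v } = op {
            table.insert(id, v); // upsert the new row version
        }
    }
}

fn main() {
    let mut table = HashMap::from([(1, 1)]);
    // Racy order again: the insert precedes the delete in the batch.
    write_batch(&mut table, vec![
        Op::Insert { id: 1, v: 2 },
        Op::Delete { id: 1 },
    ]);
    assert_eq!(table.get(&1), Some(&2)); // the new row survives the reorder
}
```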
Error message/log
No response
To Reproduce
No response
Expected behavior
No response
How did you deploy RisingWave?
No response
The version of RisingWave
No response
Additional context
No response