fix(dao) apply cache_key for target uniqueness detection #8179
Conversation
Hi @locao, I would like to discuss the points below. I have deleted the functions you wrote in kong/api/routes/upstreams.lua. Before: After: Please let me know your opinion. If we want to keep the previous behavior (see Before), then I should not modify kong/api/routes/upstreams.lua.
Related to #6483. Please see comments there for context.
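For reference, the core of the change is declaring a composite cache_key on the targets schema, so uniqueness is enforced by the DAO instead of the handler-level checks deleted here. Roughly like this (an abbreviated sketch only, not copied from the actual patch; the exact field list and constraints may differ):

  -- sketch of a Kong entity schema with a composite cache_key,
  -- so duplicate (upstream, target) pairs are rejected at insert time
  local typedefs = require "kong.db.schema.typedefs"

  return {
    name = "targets",
    primary_key = { "id" },
    -- uniqueness is enforced on this composite key by the DAO,
    -- surfacing duplicates as a conflict (HTTP 409)
    cache_key = { "upstream", "target" },
    fields = {
      { id = typedefs.uuid },
      { created_at = typedefs.auto_timestamp_ms },
      { upstream = { type = "foreign", reference = "upstreams", required = true } },
      { target = { type = "string", required = true } },
      { weight = { type = "integer", default = 100 } },
    },
  }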
    return nil, err
  end

  for _, row in ipairs(rows) do
Question: Are there any safety/reentrancy/atomicity concerns with performing these updates within the SELECT loop? Would it be any safer (or less safe?) to build a full list of rows (exhausting the SELECT iterator) and perform the updates in a separate loop, or is the difference negligible?
Thanks for your suggestion!
I think it should not make much of a difference, as the flow is the same either way: SELECT from targets, pass the resulting list to rows, then execute UPDATE targets in a loop over rows.
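For reference, the two shapes under discussion look roughly like this (an untested sketch assumed to live inside the migration's up function; format_cache_key is a hypothetical helper standing in for however the cache_key is computed):

  -- Shape 1: run the UPDATE while the SELECT iterator is still open.
  for rows, err in coordinator:iterate("SELECT id, upstream_id, target FROM targets") do
    if err then
      return nil, err
    end
    for _, row in ipairs(rows) do
      local _, uerr = coordinator:execute(
        "UPDATE targets SET cache_key = ? WHERE id = ?",
        { cassandra.text(format_cache_key(row)), cassandra.uuid(row.id) })
      if uerr then
        return nil, uerr
      end
    end
  end

  -- Shape 2: exhaust the SELECT first, then UPDATE in a separate loop.
  local buffered = {}
  for rows, err in coordinator:iterate("SELECT id, upstream_id, target FROM targets") do
    if err then
      return nil, err
    end
    for _, row in ipairs(rows) do
      buffered[#buffered + 1] = row
    end
  end

  for _, row in ipairs(buffered) do
    local _, uerr = coordinator:execute(
      "UPDATE targets SET cache_key = ? WHERE id = ?",
      { cassandra.text(format_cache_key(row)), cassandra.uuid(row.id) })
    if uerr then
      return nil, uerr
    end
  end

Both shapes issue the same statements; the second just trades memory (buffering every row) for not interleaving writes with the open iterator.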
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Thanks for bringing this up @yankun-li-kong! That's something we've been waiting for 3.0 to do.
The code seems OK to me, just a few suggestions on the Cassandra migration. My main concern is that we should not merge this change to master at this moment, as we still have another release before 3.0.
    return endpoints.handle_error(err_t)
  end
  if entity then
    return kong.response.exit(200, entity, { ["Deprecation"] = "true" })
It makes sense to change the POST behaviour, but shouldn't we wait until 3.0 to merge this?
Got it. Please let me know when we should merge it so I can rename the 012_213_to_220.lua file.
@@ -0,0 +1,122 @@
-- remove repeated targets, the older ones are not useful anymore. targets with
-- weight 0 will be kept, as we cannot tell which were deleted and which were
-- explicitly set as 0.
Since the target history removal in 2.2, we don't keep deleted targets.
Thanks! I saw Kong ran c_remove_unused_targets in 012_213_to_220.lua to delete duplicate targets in 2.2, but duplicate targets may still exist after that.
Regarding https://github.com/Kong/kong/blob/master/kong/db/dao/targets.lua#L50-L59: Kong performs uniqueness detection and then inserts the target. That is not an atomic operation, so a duplicate target may be added between the check and the insert.
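To make that window concrete, the pattern is roughly as follows (a sketch only; lookup_target and insert_target are hypothetical stand-ins, not the actual DAO functions):

  -- sketch of the check-then-insert race: workers A and B both
  -- create the same (upstream, target) pair
  local existing = lookup_target(upstream, target)  -- hypothetical lookup
  if not existing then
    -- window: worker B can insert the same (upstream, target) here,
    -- after A's check but before A's insert, leaving a duplicate row
    insert_target(upstream, target)                 -- hypothetical insert
  end
  -- with a unique cache_key on (upstream, target), the datastore itself
  -- rejects the second insert, which the API can surface as a 409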
      upstream_targets[key] = { n = 0 }
    end

    upstream_targets[key].n = upstream_targets[key].n + 1
    upstream_targets[key][upstream_targets[key].n] = { row.id, row.created_at }
  end
end

local sort = function(a, b)
  return a[2] > b[2]
end

for _, targets in pairs(upstream_targets) do
  if targets.n > 1 then
    table.sort(targets, sort)

    for i = 2, targets.n do
      local _, err = coordinator:execute("DELETE FROM targets WHERE id = ?", {
        cassandra.uuid(targets[i][1])
      })
(Somehow this comment was not added to my review.)
I haven't tested this suggestion, but it seems to me that it's possible to delete the duplicated targets in a single loop.
Suggested change:

-      upstream_targets[key] = { n = 0 }
-    end
-
-    upstream_targets[key].n = upstream_targets[key].n + 1
-    upstream_targets[key][upstream_targets[key].n] = { row.id, row.created_at }
-  end
-end
-
-local sort = function(a, b)
-  return a[2] > b[2]
-end
-
-for _, targets in pairs(upstream_targets) do
-  if targets.n > 1 then
-    table.sort(targets, sort)
-
-    for i = 2, targets.n do
-      local _, err = coordinator:execute("DELETE FROM targets WHERE id = ?", {
-        cassandra.uuid(targets[i][1])
-      })
+      upstream_targets[key] = {
+        id = row.id,
+        created_at = row.created_at,
+      }
+    else
+      local to_remove
+      if row.created_at > upstream_targets[key].created_at then
+        to_remove = upstream_targets[key].id
+        upstream_targets[key] = {
+          id = row.id,
+          created_at = row.created_at,
+        }
+      else
+        to_remove = row.id
+      end
+      local _, err = coordinator:execute("DELETE FROM targets WHERE id = ?", {
+        cassandra.uuid(to_remove)
+      })
Fixed and tested locally.
It looks much simpler than the previous one!
Hello @yankun-li-kong, could you rebase this onto the latest …
Add new cache_key(upstream, target) in targets table for uniqueness detection. Delete useless targets uniqueness detection functions. Targets API returns 409 when creating/updating duplicate targets. Add migration functions to add cache_key column, delete duplicate targets, and add cache_key for existing targets.
Fix spec test, delete unused variable id.
Fix spec test, do not redefine res variable.
Refactor c_remove_unused_targets method to delete duplicate targets in a single loop. Modify spec test title.
Fix migration file name
Force-pushed from 6433ab2 to d3e375b
Reverted by #8705; the changelog entry was missed.
This reverts commit 4ee96292de57a6ce1eb6b5f55a6502426f47d98d.
Summary
Apply cache_key for target uniqueness detection
Improvement for #6483
Full changelog
Issues resolved
CT-97