Postgres backend doesn't handle large ids in the domain table properly #5375
The problem is that the generic SQL backend uses uint32s everywhere to refer to domain IDs. It would be nice if this could be customized to use larger numbers - a perfect world would let us ramp it up to 128-bit sizes, which would mean GUIDs could be used (which would be great when you're doing multi-master replication).
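For illustration, a minimal sketch of what the requested wider keys could look like at the schema level; this is not something the gpgsql backend supports today, and gen_random_uuid() is assumed to be available (built into PostgreSQL 13+, or via the pgcrypto extension before that):
-- Illustration of the request, not a supported configuration: a 128-bit
-- (UUID) primary key on domains and a matching foreign key on records.
CREATE TABLE domains (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  name VARCHAR(255) NOT NULL
  -- remaining columns as in the stock gpgsql schema
);
CREATE TABLE records (
  id BIGSERIAL PRIMARY KEY,
  domain_id UUID REFERENCES domains(id) ON DELETE CASCADE
  -- remaining columns as in the stock gpgsql schema
);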
Thank you for a wonderfully detailed ticket. I believe the useless stou error
Another user reports that -1 as an ID also triggers the useless stou error.
Running on CockroachDB works if the id SERIAL is replaced with a generated sequence. There might be other ways to do this, such as autogenerating sequences; I have not tried that yet. This is the schema I used for CockroachDB v19.1.4 and pdns_server 4.2.0-rc2.780.master; it's based on the 4.2.0 schema from the docs. I assume one would be able to adapt the schema for 4.1.x. I had to remove this command, since text_pattern_ops does not seem to work in CockroachDB. This is the error from test-schema, running in a Docker container:
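As for the sequence swap itself, a minimal sketch, assuming the stock gpgsql schema as the starting point (the full CockroachDB DDL appears later in this thread); the dropped command is assumed to be the stock recordorder index:
-- Replace the implicit SERIAL sequence with an explicitly created one.
CREATE SEQUENCE domain_id_seq;
CREATE TABLE domains (
  id INT DEFAULT nextval('domain_id_seq') PRIMARY KEY,
  name VARCHAR(255) NOT NULL
  -- remaining columns as in the stock gpgsql schema
);
-- The command that had to be removed is presumably this one, because
-- CockroachDB does not accept the text_pattern_ops operator class:
-- CREATE INDEX recordorder ON records (domain_id, ordername text_pattern_ops);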
Does anything happen with this issue at all? Or will it be moved forward forever?
Changing the domain_id type inside PowerDNS from uint32_t to something else is unlikely to happen before 5.0, and perhaps unlikely to happen ever. For your 'sequence' solution, perhaps you can PR that in some useful form, like a doc update, or some comments in the PG schema we ship? Your ordering issue is separate - but with very few users on CockroachDB, nothing is likely to happen unless we receive a PR for that as well. You should feel free to open a separate ticket if you want to track it, though.
@Habbie Thank you. I will try to make a PR with my answer, and also do some more testing.
Looking forward to it!
Also running into the test-zone issue now with the same error:
How likely is one to run into this issue in actual production? Is there a query I can override to work around this? Or is this a non-issue if you have no records starting with underscores?
Answered my own question. I've overridden the DNSSEC ordering queries with the ones from the MySQL backend, changed to use Postgres syntax.
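A hedged sketch of one such override; the query text below is adapted from the generic MySQL backend with Postgres-style $1/$2 placeholders and would be supplied through the gpgsql-get-order-after-query setting in pdns.conf (setting names and the stock query text may differ between releases):
-- Assumed shape of the overridden get-order-after query: plain comparison
-- operators instead of the ~>~ / USING ~<~ text_pattern_ops operators.
SELECT ordername
  FROM records
 WHERE disabled = false
   AND ordername > $1
   AND domain_id = $2
   AND ordername IS NOT NULL
 ORDER BY 1 ASC
 LIMIT 1;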
Most setups will run into it immediately, because there are a lot of -queries- with underscores out there, for things like TLSA. Breaking those queries could break your incoming mail.
I've been running PowerDNS 4.2.2 on CockroachDB 20.1 for about 4 days now. All seems to be fine. Note that this is a very small setup with only ~10 authoritative zones and low traffic. Regardless, it seems to work. The requirements are therefore twofold: a modified SQL DDL file and modified queries for the generic pgsql backend, because CockroachDB does not support the fancy Postgres operators. The SQL DDL:
CREATE SEQUENCE domain_id MAXVALUE 2147483648;
CREATE SEQUENCE record_id MAXVALUE 2147483648;
CREATE SEQUENCE comment_id MAXVALUE 2147483648;
CREATE SEQUENCE meta_id MAXVALUE 2147483648;
CREATE SEQUENCE key_id MAXVALUE 2147483648;
CREATE SEQUENCE tsig_id MAXVALUE 2147483648;
CREATE TABLE domains (
id INT DEFAULT nextval('domain_id') PRIMARY KEY,
name VARCHAR(255) NOT NULL,
master VARCHAR(128) DEFAULT NULL,
last_check INT DEFAULT NULL,
type VARCHAR(6) NOT NULL,
notified_serial BIGINT DEFAULT NULL,
account VARCHAR(40) DEFAULT NULL,
CONSTRAINT c_lowercase_name CHECK (((name)::TEXT = LOWER((name)::TEXT)))
);
CREATE UNIQUE INDEX name_index ON domains(name);
CREATE TABLE records (
id INT DEFAULT nextval('record_id') PRIMARY KEY,
domain_id INT DEFAULT NULL,
name VARCHAR(255) DEFAULT NULL,
type VARCHAR(10) DEFAULT NULL,
content VARCHAR(65535) DEFAULT NULL,
ttl INT DEFAULT NULL,
prio INT DEFAULT NULL,
disabled BOOL DEFAULT 'f',
ordername VARCHAR(255),
auth BOOL DEFAULT 't',
CONSTRAINT domain_exists
FOREIGN KEY(domain_id) REFERENCES domains(id)
ON DELETE CASCADE,
CONSTRAINT c_lowercase_name CHECK (((name)::TEXT = LOWER((name)::TEXT)))
);
CREATE INDEX rec_name_index ON records(name);
CREATE INDEX nametype_index ON records(name,type);
CREATE INDEX domain_id ON records(domain_id);
CREATE TABLE supermasters (
ip INET NOT NULL,
nameserver VARCHAR(255) NOT NULL,
account VARCHAR(40) NOT NULL,
PRIMARY KEY(ip, nameserver)
);
CREATE TABLE comments (
id INT DEFAULT nextval('comment_id') PRIMARY KEY,
domain_id INT NOT NULL,
name VARCHAR(255) NOT NULL,
type VARCHAR(10) NOT NULL,
modified_at INT NOT NULL,
account VARCHAR(40) DEFAULT NULL,
comment VARCHAR(65535) NOT NULL,
CONSTRAINT domain_exists
FOREIGN KEY(domain_id) REFERENCES domains(id)
ON DELETE CASCADE,
CONSTRAINT c_lowercase_name CHECK (((name)::TEXT = LOWER((name)::TEXT)))
);
CREATE INDEX comments_domain_id_idx ON comments (domain_id);
CREATE INDEX comments_name_type_idx ON comments (name, type);
CREATE INDEX comments_order_idx ON comments (domain_id, modified_at);
CREATE TABLE domainmetadata (
id INT DEFAULT nextval('meta_id') PRIMARY KEY,
domain_id INT REFERENCES domains(id) ON DELETE CASCADE,
kind VARCHAR(32),
content TEXT
);
CREATE INDEX domainidmetaindex ON domainmetadata(domain_id);
CREATE TABLE cryptokeys (
id INT DEFAULT nextval('key_id') PRIMARY KEY,
domain_id INT REFERENCES domains(id) ON DELETE CASCADE,
flags INT NOT NULL,
active BOOL,
published BOOL DEFAULT TRUE,
content TEXT
);
CREATE INDEX domainidindex ON cryptokeys(domain_id);
CREATE TABLE tsigkeys (
id INT DEFAULT nextval('tsig_id') PRIMARY KEY,
name VARCHAR(255),
algorithm VARCHAR(50),
secret VARCHAR(255),
CONSTRAINT c_lowercase_name CHECK (((name)::TEXT = LOWER((name)::TEXT)))
);
CREATE UNIQUE INDEX namealgoindex ON tsigkeys(name, algorithm);
The modified queries (examples taken from the generic MySQL backend and modified for syntax):
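A hedged sketch of the four affected DNSSEC ordering queries, adapted from the generic MySQL backend with $n placeholders; column lists and parameter order are assumptions and should be checked against the shipped defaults:
-- get-order-first-query (assumed parameter: domain_id)
SELECT ordername FROM records
 WHERE disabled = false AND domain_id = $1 AND ordername IS NOT NULL
 ORDER BY 1 ASC LIMIT 1;
-- get-order-before-query (assumed parameters: ordername, domain_id)
SELECT ordername, name FROM records
 WHERE disabled = false AND ordername <= $1 AND domain_id = $2 AND ordername IS NOT NULL
 ORDER BY 1 DESC LIMIT 1;
-- get-order-after-query (assumed parameters: ordername, domain_id)
SELECT ordername FROM records
 WHERE disabled = false AND ordername > $1 AND domain_id = $2 AND ordername IS NOT NULL
 ORDER BY 1 ASC LIMIT 1;
-- get-order-last-query (assumed parameter: domain_id)
SELECT ordername, name FROM records
 WHERE disabled = false AND ordername != '' AND domain_id = $1 AND ordername IS NOT NULL
 ORDER BY 1 DESC LIMIT 1;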
I was then able to run the tests successfully. Is this something we want to add to the docs?
Yes! I think it would be best to document the -differences- in the DDL, so we don't have to maintain yet another full schema. And given that you only had to change four queries, I suggest we document that as well. I think a section in https://github.com/PowerDNS/pdns/blob/master/docs/backends/generic-postgresql.rst would make sense.
Untested, but CockroachDB v21.2.x supports
Short description
When using the gpgsql backend, PowerDNS exhibits issues when domain_id values exceed the int boundaries of Postgres. While this seems trivial on its face - the backend expects the column type specified in the schema - consider a scenario where this may not be the case. For instance, CockroachDB does not use incrementing SERIALs, but rather generates a large integer that is likely to be unique. After initial testing, PowerDNS and CockroachDB appear to be compatible (besides some small quibbles, like the INET type in the schema, which is actively being worked on, and ON DELETE CASCADE, which is actively being worked on as well). I would suggest making use of bigint or bigserial types, which should allow for compatibility with CockroachDB, as well as provide additional headroom for (very) large deployments of PowerDNS.
Environment
Steps to reproduce
1. Run pdnsutil create-zone example.com and observe that a record is created in the domains table with an id of 1.
2. Run pdnsutil list-all-zones and observe that example.com is displayed.
3. Alter the id column to the type bigint.
4. Add a row to the domains table with id 12345678901 (larger than the int type).
5. Run pdnsutil list-all-zones (as in step 2), but observe the following error: Error: stou
Expected behaviour
The zone should be listed and should successfully resolve when queried.
Actual behaviour
The following is returned:
Error: stou
and queries are not successful.
Other information
I am happy to provide additional debugging or log information if needed. Thank you!
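To make the bigint suggestion above concrete, a hedged migration sketch against the 4.x gpgsql schema (table and column names as in the schema shown earlier in this thread; adapt to your deployment):
-- Assumed migration: widen every id and domain_id column to bigint.
ALTER TABLE domains        ALTER COLUMN id        TYPE bigint;
ALTER TABLE records        ALTER COLUMN id        TYPE bigint;
ALTER TABLE records        ALTER COLUMN domain_id TYPE bigint;
ALTER TABLE comments       ALTER COLUMN id        TYPE bigint;
ALTER TABLE comments       ALTER COLUMN domain_id TYPE bigint;
ALTER TABLE domainmetadata ALTER COLUMN id        TYPE bigint;
ALTER TABLE domainmetadata ALTER COLUMN domain_id TYPE bigint;
ALTER TABLE cryptokeys     ALTER COLUMN id        TYPE bigint;
ALTER TABLE cryptokeys     ALTER COLUMN domain_id TYPE bigint;
ALTER TABLE tsigkeys       ALTER COLUMN id        TYPE bigint;
-- Note: this only changes the database side; the server still parses ids
-- into a uint32, so values outside that range keep producing Error: stou.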