
[2/2] Cherry-Pick: Fix gpcheckcat false alarms for pg_default_acl etc. #814

Merged (13 commits) on Dec 26, 2024
2 changes: 1 addition & 1 deletion gpMgmt/bin/gppylib/commands/pg.py
@@ -222,7 +222,7 @@ def __init__(self, target_datadir, source_host, source_port, create_slot=False,

cmd_tokens.extend(self._xlog_arguments(replication_slot_name))

# GPDB_12_MERGE_FIXME: avoid checking checksum for heap tables
# GPDB_12_MERGE_FEATURE_NOT_SUPPORTED: avoid checking checksum for heap tables
# till we code logic to skip/verify checksum for
# appendoptimized tables. Enabling this results in basebackup
# failures with appendoptimized tables.
1 change: 1 addition & 0 deletions gpMgmt/bin/gppylib/gpcatalog.py
@@ -78,6 +78,7 @@ class GPCatalogException(Exception):
'pg_compression',
'pg_conversion',
'pg_database',
'pg_default_acl',
'pg_enum',
'pg_namespace',
'pg_resgroup',
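The gpcatalog.py hunk above adds `pg_default_acl` to a list of catalog tables whose rows legitimately carry no pg_depend entries, so the dependency cross-check should skip them. A minimal sketch of how such an exclusion filter might be applied; the function and variable names here are illustrative, not the actual gpcheckcat implementation:

```python
# Catalogs whose rows are expected to have no pg_depend entries; the PR
# extends this set with pg_default_acl, since ALTER DEFAULT PRIVILEGES
# creates rows there without any dependency record.
DEPENDENCY_CHECK_EXCLUSIONS = {
    'pg_compression',
    'pg_conversion',
    'pg_database',
    'pg_default_acl',
    'pg_enum',
    'pg_namespace',
    'pg_resgroup',
}

def tables_to_check(all_catalog_tables):
    """Return only the catalog tables whose oids should be cross-checked
    against pg_depend/pg_shdepend."""
    return [t for t in all_catalog_tables
            if t not in DEPENDENCY_CHECK_EXCLUSIONS]
```

With the exclusion in place, a `pg_default_acl` row created by `ALTER DEFAULT PRIVILEGES` no longer triggers the "has a dependency issue on oid" false alarm exercised by the behave scenario below.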
5 changes: 5 additions & 0 deletions gpMgmt/bin/lib/gp_bash_functions.sh
@@ -262,6 +262,9 @@ LOG_MSG () {
# Limitation: If the token used for separating command output from banner appears at the beginning
# of a line in the command output/banner output, only partial command output will be returned
REMOTE_EXECUTE_AND_GET_OUTPUT () {
INITIAL_DEBUG_LEVEL=$DEBUG_LEVEL
DEBUG_LEVEL=0

LOG_MSG "[INFO]:-Start Function $FUNCNAME"
HOST="$1"
CMD="echo 'GP_DELIMITER_FOR_IGNORING_BASH_BANNER';$2"
@@ -274,6 +277,8 @@ REMOTE_EXECUTE_AND_GET_OUTPUT () {
LOG_MSG "[INFO]:-Completed $TRUSTED_SHELL $HOST $CMD"
fi
LOG_MSG "[INFO]:-End Function $FUNCNAME"

DEBUG_LEVEL=$INITIAL_DEBUG_LEVEL
#Return output
echo "$OUTPUT"
}
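`REMOTE_EXECUTE_AND_GET_OUTPUT` prints a sentinel token before the real command so that any banner text the remote shell emits can be discarded. The same split-on-delimiter idea can be sketched in Python; the token name comes from the diff, the function name is hypothetical:

```python
DELIMITER = 'GP_DELIMITER_FOR_IGNORING_BASH_BANNER'

def strip_banner(raw_output: str) -> str:
    """Discard anything the remote shell printed before the sentinel token.

    This mirrors the limitation noted in the diff: if the token itself
    appears at the start of a line of real output, everything up to that
    occurrence is dropped as well.
    """
    marker = raw_output.find(DELIMITER)
    if marker == -1:
        return raw_output  # no sentinel seen; return output unchanged
    after = raw_output[marker + len(DELIMITER):]
    return after.lstrip('\n')
```

The bash function achieves the same effect by echoing the token and then taking only the text after it.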
13 changes: 13 additions & 0 deletions gpMgmt/test/behave/mgmt_utils/gpcheckcat.feature
@@ -561,6 +561,19 @@ Feature: gpcheckcat tests
Then gpcheckcat should return a return code of 3
And the user runs "dropdb mis_attr_db"

Scenario: gpcheckcat should not report dependency error from pg_default_acl
Given database "check_dependency_error" is dropped and recreated
And the user runs "psql -d check_dependency_error -c "CREATE ROLE foo; ALTER DEFAULT PRIVILEGES FOR ROLE foo REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;""
Then psql should return a return code of 0
When the user runs "gpcheckcat check_dependency_error"
Then gpcheckcat should return a return code of 0
And gpcheckcat should not print "SUMMARY REPORT: FAILED" to stdout
And gpcheckcat should not print "has a dependency issue on oid" to stdout
And gpcheckcat should print "Found no catalog issue" to stdout
And the user runs "dropdb check_dependency_error"
And the user runs "psql -c "DROP ROLE foo""


########################### @concourse_cluster tests ###########################
# The @concourse_cluster tag denotes the scenario that requires a remote cluster

13 changes: 13 additions & 0 deletions gpMgmt/test/behave/mgmt_utils/gpinitsystem.feature
@@ -304,3 +304,16 @@ Feature: gpinitsystem tests
And a demo cluster is created using gpinitsystem args " "
And gpinitsystem should return a return code of 0

Scenario: gpinitsystem should create consistent port entry on segments postgresql.conf file
Given the database is not running
When a demo cluster is created using gpinitsystem args " "
And gpinitsystem should return a return code of 0
Then gpstate should return a return code of 0
And check segment conf: postgresql.conf

Scenario: gpinitsystem creates a cluster successfully when run with -D option
Given create demo cluster config
When the user runs command "gpinitsystem -a -c ../gpAux/gpdemo/clusterConfigFile -D"
Then gpinitsystem should return a return code of 0
And gpinitsystem should not print "Start Function REMOTE_EXECUTE_AND_GET_OUTPUT" to stdout
And gpinitsystem should not print "End Function REMOTE_EXECUTE_AND_GET_OUTPUT" to stdout
9 changes: 0 additions & 9 deletions gpcontrib/gpcloud/Makefile
@@ -12,15 +12,6 @@ ifeq ($(DEBUG_S3_SYMBOL),y)
PG_CPPFLAGS += -g
endif

# GPDB_12_MERGE_FIXME: I had to add these to make "make install" work
# with --with--lvm. I don't understand what's going on, but I found this
# cure at https://github.com/rdkit/rdkit/issues/2192. Seems to be the
# same issue, whatever it is.
COMPILE.cxx.bc = $(CLANG) -xc++ -Wno-ignored-attributes $(BITCODE_CXXFLAGS) $(CPPFLAGS) -emit-llvm -c
%.bc : %.cpp
$(COMPILE.cxx.bc) -o $@ $<
$(LLVM_BINPATH)/opt -module-summary -f $@ -o $@

# Targets
MODULE_big = gpcloud
OBJS = src/gpcloud.o lib/http_parser.o lib/ini.o $(addprefix src/,$(COMMON_OBJS))
129 changes: 129 additions & 0 deletions src/backend/access/transam/xlogfuncs_gp.c
@@ -163,3 +163,132 @@ gp_create_restore_point(PG_FUNCTION_ARGS)

SRF_RETURN_DONE(funcctx);
}

/*
* gp_switch_wal: switch WAL on all segments and return meaningful info
*/
Datum
gp_switch_wal(PG_FUNCTION_ARGS)
{
typedef struct Context
{
CdbPgResults cdb_pgresults;
Datum qd_switch_lsn;
Datum qd_switch_walfilename;
int index;
} Context;

FuncCallContext *funcctx;
Context *context;

if (SRF_IS_FIRSTCALL())
{
MemoryContext oldcontext;
TupleDesc tupdesc;
char *switch_command;

/* create a function context for cross-call persistence */
funcctx = SRF_FIRSTCALL_INIT();

/* switch to memory context for appropriate multiple function call */
oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);

/* create tupdesc for result */
tupdesc = CreateTemplateTupleDesc(3);
TupleDescInitEntry(tupdesc, (AttrNumber) 1, "segment_id",
INT2OID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber) 2, "switch_lsn",
LSNOID, -1, 0);
TupleDescInitEntry(tupdesc, (AttrNumber) 3, "switch_walfilename",
TEXTOID, -1, 0);

funcctx->tuple_desc = BlessTupleDesc(tupdesc);

context = (Context *) palloc(sizeof(Context));
context->cdb_pgresults.pg_results = NULL;
context->cdb_pgresults.numResults = 0;
context->index = 0;
funcctx->user_fctx = (void *) context;

if (!IS_QUERY_DISPATCHER() || Gp_role != GP_ROLE_DISPATCH)
elog(ERROR,
"cannot use gp_switch_wal() when not in QD mode");

switch_command = psprintf("SELECT switch_lsn, pg_walfile_name(switch_lsn) FROM pg_catalog.pg_switch_wal() switch_lsn");
CdbDispatchCommand(switch_command,
DF_NEED_TWO_PHASE | DF_CANCEL_ON_ERROR,
&context->cdb_pgresults);
context->qd_switch_lsn = DatumGetLSN(DirectFunctionCall1(pg_switch_wal, PointerGetDatum(NULL)));
context->qd_switch_walfilename = DirectFunctionCall1(pg_walfile_name, context->qd_switch_lsn);

pfree(switch_command);

funcctx->user_fctx = (void *) context;
MemoryContextSwitchTo(oldcontext);
}

/*
* Using SRF to return all the segment LSN information of the form
* {segment_id, switch_lsn, switch_walfilename}
*/
funcctx = SRF_PERCALL_SETUP();
context = (Context *) funcctx->user_fctx;

while (context->index <= context->cdb_pgresults.numResults)
{
Datum values[3];
bool nulls[3];
HeapTuple tuple;
Datum result;
Datum switch_lsn;
Datum switch_walfilename;
int seg_index;

if (context->index == 0)
{
/* Setting fields representing QD's switch WAL */
seg_index = GpIdentity.segindex;
switch_lsn = context->qd_switch_lsn;
switch_walfilename = context->qd_switch_walfilename;
}
else
{
struct pg_result *pgresult;
ExecStatusType resultStatus;
uint32 hi, lo;

/* Setting fields representing QE's switch WAL */
seg_index = context->index - 1;
pgresult = context->cdb_pgresults.pg_results[seg_index];
resultStatus = PQresultStatus(pgresult);

if (resultStatus != PGRES_COMMAND_OK && resultStatus != PGRES_TUPLES_OK)
ereport(ERROR,
(errcode(ERRCODE_GP_INTERCONNECTION_ERROR),
(errmsg("could not switch wal from segment"),
errdetail("%s", PQresultErrorMessage(pgresult)))));
Assert(PQntuples(pgresult) == 1);

sscanf(PQgetvalue(pgresult, 0, 0), "%X/%X", &hi, &lo);
switch_lsn = LSNGetDatum(((uint64) hi) << 32 | lo);
switch_walfilename = CStringGetTextDatum(PQgetvalue(pgresult, 0, 1));
}

/*
* Form tuple with appropriate data.
*/
MemSet(values, 0, sizeof(values));
MemSet(nulls, false, sizeof(nulls));

values[0] = Int16GetDatum(seg_index);
values[1] = switch_lsn;
values[2] = switch_walfilename;
tuple = heap_form_tuple(funcctx->tuple_desc, values, nulls);
result = HeapTupleGetDatum(tuple);

context->index++;
SRF_RETURN_NEXT(funcctx, result);
}

SRF_RETURN_DONE(funcctx);
}
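In the QE branch above, `gp_switch_wal` parses each segment's textual LSN with `sscanf(..., "%X/%X", &hi, &lo)` and recombines the halves into a 64-bit value; the coordinator side gets the WAL file name from `pg_walfile_name`. A sketch of both computations in Python, assuming the default 16 MB `wal_segment_size` (the file-name layout follows PostgreSQL's `XLogFileName` convention):

```python
WAL_SEGMENT_SIZE = 16 * 1024 * 1024  # assumes default 16 MB segments
SEGMENTS_PER_XLOG_ID = 0x100000000 // WAL_SEGMENT_SIZE

def parse_lsn(text: str) -> int:
    """Parse an LSN of the form 'hi/lo' (both hex), as the sscanf in the
    diff does, into a single 64-bit integer: (hi << 32) | lo."""
    hi, lo = (int(part, 16) for part in text.split('/'))
    return (hi << 32) | lo

def walfile_name(tli: int, lsn: int) -> str:
    """Derive the WAL segment file name for an LSN: 8 hex digits of
    timeline, then the segment number split across the remaining two
    8-hex-digit fields."""
    segno = lsn // WAL_SEGMENT_SIZE
    return '%08X%08X%08X' % (tli,
                             segno // SEGMENTS_PER_XLOG_ID,
                             segno % SEGMENTS_PER_XLOG_ID)
```

For example, LSN `0/3000000` on timeline 1 maps to segment file `000000010000000000000003`, which is what `pg_walfile_name('0/3000000')` returns.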
31 changes: 0 additions & 31 deletions src/backend/catalog/cdb_schema.sql
@@ -73,35 +73,4 @@ AS
SELECT pg_catalog.gp_execution_segment() as gp_segment_id, * FROM pg_catalog.pg_tablespace_location($1)'
LANGUAGE SQL EXECUTE ON COORDINATOR;

-- pg_switch_wal wrapper functions to switch WAL segment files on Greenplum cluster-wide
CREATE FUNCTION gp_switch_wal_on_all_segments (OUT gp_segment_id int, OUT pg_switch_wal pg_lsn)
RETURNS SETOF RECORD AS
$$
DECLARE
seg_id int;
BEGIN
EXECUTE 'select pg_catalog.gp_execution_segment()' INTO seg_id;
-- check if execute in entrydb QE to prevent giving wrong results
IF seg_id = -1 THEN
RAISE EXCEPTION 'Cannot execute in entrydb, this query is not currently supported by GPDB.';
END IF;
RETURN QUERY SELECT pg_catalog.gp_execution_segment() as gp_segment_id, * FROM pg_catalog.pg_switch_wal();
END;
$$ LANGUAGE plpgsql EXECUTE ON ALL SEGMENTS;

CREATE FUNCTION gp_switch_wal (OUT gp_segment_id int, OUT pg_switch_wal pg_lsn)
RETURNS SETOF RECORD
AS
'SELECT * FROM pg_catalog.gp_switch_wal_on_all_segments()
UNION ALL
SELECT pg_catalog.gp_execution_segment() as gp_segment_id, * FROM pg_catalog.pg_switch_wal()'
LANGUAGE SQL EXECUTE ON COORDINATOR;

COMMENT ON FUNCTION pg_catalog.gp_switch_wal_on_all_segments() IS 'Switch WAL segment files on all primary segments';
COMMENT ON FUNCTION pg_catalog.gp_switch_wal() IS 'Switch WAL segment files on all segments';

REVOKE EXECUTE ON FUNCTION gp_switch_wal_on_all_segments() FROM public;
REVOKE EXECUTE ON FUNCTION gp_switch_wal() FROM public;


RESET log_min_messages;
24 changes: 6 additions & 18 deletions src/backend/commands/tablecmds.c
@@ -8668,17 +8668,12 @@ ATExecAddColumn(List **wqueue, AlteredTableInfo *tab, Relation rel,
* We have acquired lockmode on the root and first-level partitions
* already. This leaves the deeper subpartitions unlocked, but no
* operations can drop (or alter) those relations without locking
* through the root. Note that find_all_inheritors() also includes
* the root partition in the returned list.
*
* GPDB_12_MERGE_FIXME: we used to have NoLock here, but that caused
* assertion failures in the regression tests:
*
* FATAL: Unexpected internal error (relation.c:74)
* DETAIL: FailedAssertion("!(lockmode != 0 || (Mode == BootstrapProcessing) || CheckRelationLockedByMe(r, 1, 1))", File: "relation.c", Line: 74)
*
* so use AccessShareLock instead. Was it important that we used
* NoLock here?
* through the root. But we still lock them to meet the upstream
* expectation in relation_open that all callers should have acquired
* a lock on the table except in bootstrap mode.
*
* Note that find_all_inheritors() also includes the root partition
* in the returned list.
*/
List *all_inheritors = find_all_inheritors(tab->relid, AccessShareLock, NULL);
ListCell *lc;
@@ -10376,13 +10371,6 @@ ATExecAddIndex(AlteredTableInfo *tab, Relation rel,
/* The IndexStmt has already been through transformIndexStmt */
Assert(stmt->transformed);

/* The index should already be built if we are a QE */
/* GPDB_12_MERGE_FIXME: it doesn't seem to work that way anymore. */
#if 0
if (Gp_role == GP_ROLE_EXECUTE)
return InvalidObjectAddress;
#endif

/* suppress schema rights check when rebuilding existing index */
check_rights = !is_rebuild;
/* skip index build if phase 3 will do it or we're reusing an old one */
25 changes: 9 additions & 16 deletions src/backend/executor/nodeModifyTable.c
@@ -671,11 +671,11 @@ ExecInsert(ModifyTableState *mtstate,
* violations before firing these triggers, because they can change the
* values to insert. Also, they can run arbitrary user-defined code with
* side-effects that we can't cancel by just not inserting the tuple.
*/
/*
* GPDB_12_MERGE_FIXME: PostgreSQL *does* fire INSERT and DELETE
* triggers on an UPDATE that moves tuples from one partition to another.
* Should we follow that example with cross-segment UPDATEs too?
*
* Considering that the original command is an UPDATE for a SplitUpdate,
* firing INSERT triggers may cause the wrong action to be enforced. Also,
* triggers in GPDB may require cross-segment data changes, so disallow
* INSERT triggers on a SplitUpdate.
*/
if (resultRelInfo->ri_TrigDesc &&
resultRelInfo->ri_TrigDesc->trig_insert_before_row &&
@@ -1059,11 +1059,8 @@ ExecInsert(ModifyTableState *mtstate,

/* AFTER ROW INSERT Triggers */
/*
* GPDB: Don't fire DELETE triggers on a split UPDATE.
*
* GPDB_12_MERGE_FIXME: PostgreSQL *does* fire INSERT and DELETE
* triggers on an UPDATE that moves tuples from one partition to another.
* Should we follow that example with cross-segment UPDATEs too?
* GPDB: Disallow INSERT triggers on a split UPDATE. See comments at the
* BEFORE ROW INSERT triggers above.
*/
if (!splitUpdate)
ExecARInsertTriggers(estate, resultRelInfo, slot, recheckIndexes,
@@ -1217,9 +1214,7 @@ ExecDelete(ModifyTableState *mtstate,

/* BEFORE ROW DELETE Triggers */
/*
* GPDB_12_MERGE_FIXME: PostgreSQL *does* fire INSERT and DELETE
* triggers on an UPDATE that moves tuples from one partition to another.
* Should we follow that example with cross-segment UPDATEs too?
* Disallow DELETE triggers on a split UPDATE. See comments in ExecInsert().
*/
if (resultRelInfo->ri_TrigDesc &&
resultRelInfo->ri_TrigDesc->trig_delete_before_row &&
@@ -1511,9 +1506,7 @@ ldelete:;

/* AFTER ROW DELETE Triggers */
/*
* GPDB_12_MERGE_FIXME: PostgreSQL *does* fire INSERT and DELETE
* triggers on an UPDATE that moves tuples from one partition to another.
* Should we follow that example with cross-segment UPDATEs too?
* Disallow DELETE triggers on a split UPDATE. See comments in ExecInsert().
*/
if (!RelationIsNonblockRelation(resultRelationDesc) && !splitUpdate)
{
6 changes: 6 additions & 0 deletions src/backend/executor/nodeRecursiveunion.c
@@ -128,6 +128,11 @@ ExecRecursiveUnion(PlanState *pstate)

/* intermediate table becomes working table */
node->working_table = node->intermediate_table;
for (int k = 1; k < node->refcount; k++)
{
/* The working table hasn't been scanned yet, so it must be at its start; no rescan is needed here */
tuplestore_alloc_read_pointer(node->working_table, EXEC_FLAG_REWIND);
}

/* create new empty intermediate table */
node->intermediate_table = tuplestore_begin_heap(false, false,
@@ -199,6 +204,7 @@ ExecInitRecursiveUnion(RecursiveUnion *node, EState *estate, int eflags)
rustate->intermediate_empty = true;
rustate->working_table = tuplestore_begin_heap(false, false, work_mem);
rustate->intermediate_table = tuplestore_begin_heap(false, false, work_mem);
rustate->refcount = 0;

/*
* If hashing, we need a per-tuple memory context for comparisons, and a
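The nodeRecursiveunion.c hunk allocates one extra read pointer on the working table per additional consumer (`refcount`), so that several worktable scans can read the same tuplestore without advancing each other's position. A toy Python model of that shared-store/per-reader-cursor idea (the class and method names are illustrative, not the tuplestore API):

```python
class SharedStore:
    """Toy model of a tuplestore with multiple independent read pointers:
    each reader tracks its own position over one shared list of tuples."""

    def __init__(self):
        self.tuples = []
        self.read_positions = []  # one cursor per reader

    def alloc_read_pointer(self) -> int:
        """Register a new reader; like the diff's new pointers, it starts
        at the beginning of the store."""
        self.read_positions.append(0)
        return len(self.read_positions) - 1

    def put(self, tup):
        self.tuples.append(tup)

    def get(self, reader: int):
        """Return the next tuple for this reader, or None when exhausted.
        Advancing one reader's cursor does not affect the others."""
        pos = self.read_positions[reader]
        if pos >= len(self.tuples):
            return None
        self.read_positions[reader] = pos + 1
        return self.tuples[pos]
```

Two readers each see the full contents in order, which is the property the rescan-free pointer allocation in the diff relies on.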