
Add linux ci #2

Merged: 20 commits merged into master on Oct 16, 2023

Conversation

cooljeanius (Owner)

Add GitHub CI

talregev and others added 6 commits July 9, 2023 19:58
The following patch lifts further restrictions which limited _BitInt to at
most 16319 bits up to 65535.
The problem was mainly in INTEGER_CST representation, which had 3
unsigned char members to describe lengths in number of 64-bit limbs, which
it wanted to fit into 32 bits.  This patch removes the third one which was
just a cache to save a few compile time cycles for wi::to_offset and
enlarges the other two members to unsigned short.
Furthermore, the same problem has been in some uses of trailing_wide_int*
(in value-range-storage*) and value-range-storage* itself, while other
uses of trailing_wide_int* have been fine (e.g. CONST_POLY_INT, where no
constants will be larger than 3/5/9/11 limbs depending on target, so 255
limit is plenty).  The patch turns all those length representations to be
unsigned short for consistency, so value-range-storage* can handle even
16320-65535 bit BITINT_TYPE ranges.  The cc1plus growth is about 16K,
so not really significant for a 38M .text section.

Note, the reason for the new limit is the
  unsigned int precision : 16;
TYPE_PRECISION limit; if we wanted to overcome that, TYPE_PRECISION would
need to use some other member for BITINT_TYPE than all the other types do,
and we could reach a 4194239 limit that way (65535 * 64 - 1, again implied
by INTEGER_CST and value-range-storage*).  Dunno if that is
worth it or if it is something we want to do for GCC 14 though.
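
As a concrete illustration (a hedged sketch, not from the patch; assumes a
GCC with this patch applied and C23 _BitInt support):

/* Before this patch, widths above 16319 bits were rejected; with it,
   anything up to 65535 (the 16-bit TYPE_PRECISION limit) is accepted.  */
_BitInt(16320) formerly_rejected;   /* newly accepted */
_BitInt(65535) now_the_maximum;     /* the new ceiling */
/* _BitInt(65536) would still be rejected.  */

int main (void) { return 0; }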

2023-10-14  Jakub Jelinek  <jakub@redhat.com>

	PR c/102989
gcc/
	* tree-core.h (struct tree_base): Remove int_length.offset
	member, change type of int_length.unextended and int_length.extended
	from unsigned char to unsigned short.
	* tree.h (TREE_INT_CST_OFFSET_NUNITS): Remove.
	(wi::extended_tree <N>::get_len): Don't use TREE_INT_CST_OFFSET_NUNITS,
	instead compute it at runtime from TREE_INT_CST_EXT_NUNITS and
	TREE_INT_CST_NUNITS.
	* tree.cc (wide_int_to_tree_1): Don't assert
	TREE_INT_CST_OFFSET_NUNITS value.
	(make_int_cst): Don't initialize TREE_INT_CST_OFFSET_NUNITS.
	* wide-int.h (WIDE_INT_MAX_ELTS): Change from 255 to 1024.
	(WIDEST_INT_MAX_ELTS): Change from 510 to 2048, adjust comment.
	(trailing_wide_int_storage): Change m_len type from unsigned char *
	to unsigned short *.
	(trailing_wide_int_storage::trailing_wide_int_storage): Change second
	argument from unsigned char * to unsigned short *.
	(trailing_wide_ints): Change m_max_len type from unsigned char to
	unsigned short.  Change m_len element type from
	struct{unsigned char len;} to unsigned short.
	(trailing_wide_ints <N>::operator []): Remove .len from m_len
	accesses.
	* value-range-storage.h (irange_storage::lengths_address): Change
	return type from const unsigned char * to const unsigned short *.
	(irange_storage::write_lengths_address): Change return type from
	unsigned char * to unsigned short *.
	* value-range-storage.cc (irange_storage::write_lengths_address):
	Likewise.
	(irange_storage::lengths_address): Change return type from
	const unsigned char * to const unsigned short *.
	(write_wide_int): Change len argument type from unsigned char *&
	to unsigned short *&.
	(irange_storage::set_irange): Change len variable type from
	unsigned char * to unsigned short *.
	(read_wide_int): Change len argument type from unsigned char to
	unsigned short.  Use trailing_wide_int_storage <unsigned short>
	instead of trailing_wide_int_storage and
	trailing_wide_int <unsigned short> instead of trailing_wide_int.
	(irange_storage::get_irange): Change len variable type from
	unsigned char * to unsigned short *.
	(irange_storage::size): Multiply n by sizeof (unsigned short)
	in len_size variable initialization.
	(irange_storage::dump): Change len variable type from
	unsigned char * to unsigned short *.
gcc/cp/
	* module.cc (trees_out::start, trees_in::start): Remove
	TREE_INT_CST_OFFSET_NUNITS handling.
gcc/testsuite/
	* gcc.dg/bitint-38.c: Change into a dg-do run test; in addition
	to checking the addition, division and right shift results at compile
	time, check them also at runtime.
	* gcc.dg/bitint-39.c: New test.
gcc/fortran/ChangeLog:

	* gfortran.h (ext_attr_t): Add omp_allocate flag.
	* match.cc (gfc_free_omp_namelist): Avoid deleting the same
	u2.allocator multiple times now that a sequence can use
	the same one.
	* openmp.cc (gfc_match_omp_clauses, gfc_match_omp_allocate): Use
	same allocator expr multiple times.
	(is_predefined_allocator): Make static.
	(gfc_resolve_omp_allocate): Update/extend restriction checks;
	remove sorry message.
	(resolve_omp_clauses): Reject corarrays in allocate/allocators
	directive.
	* parse.cc (check_omp_allocate_stmt): Permit procedure pointers
	here (rejected later) for a less misleading diagnostic.
	* trans-array.cc (gfc_trans_auto_array_allocation): Propagate
	size for GOMP_alloc and the location to which it should be added.
	* trans-decl.cc (gfc_trans_deferred_vars): Handle 'omp allocate'
	for stack variables; sorry for static variables/common blocks.
	* trans-openmp.cc (gfc_trans_omp_clauses): Evaluate 'allocate'
	clause's allocator only once; fix adding expressions to the
	block.
	(gfc_trans_omp_single): Pass a block to gfc_trans_omp_clauses.

gcc/ChangeLog:

	* gimplify.cc (gimplify_bind_expr): Handle Fortran's
	'omp allocate' for stack variables.

libgomp/ChangeLog:

	* libgomp.texi (OpenMP Impl. Status): Mention that Fortran now
	supports the allocate directive for stack variables.
	* testsuite/libgomp.fortran/allocate-5.f90: New test.
	* testsuite/libgomp.fortran/allocate-6.f90: New test.
	* testsuite/libgomp.fortran/allocate-7.f90: New test.
	* testsuite/libgomp.fortran/allocate-8.f90: New test.

gcc/testsuite/ChangeLog:

	* c-c++-common/gomp/allocate-14.c: Fix directive name.
	* c-c++-common/gomp/allocate-15.c: Likewise.
	* c-c++-common/gomp/allocate-9.c: Fix comment typo.
	* gfortran.dg/gomp/allocate-4.f90: Remove sorry dg-error.
	* gfortran.dg/gomp/allocate-7.f90: Likewise.
	* gfortran.dg/gomp/allocate-10.f90: New test.
	* gfortran.dg/gomp/allocate-11.f90: New test.
	* gfortran.dg/gomp/allocate-12.f90: New test.
	* gfortran.dg/gomp/allocate-13.f90: New test.
	* gfortran.dg/gomp/allocate-14.f90: New test.
	* gfortran.dg/gomp/allocate-15.f90: New test.
	* gfortran.dg/gomp/allocate-8.f90: New test.
	* gfortran.dg/gomp/allocate-9.f90: New test.
… routines

This commit completes the documentation of the OpenMP memory-management
routines, except for the unimplemented TR11 additions.  It also makes clear
in the 'Memory allocation' section of the 'OpenMP-Implementation Specifics'
chapter under which condition OpenMP managed memory/allocators are used.

libgomp/ChangeLog:

	* libgomp.texi: Fix some typos.
	(Memory Management Routines): Document remaining 5.x routines.
	(Memory allocation): Make clear when the section applies.
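
For context, here is a minimal usage example of the kind of 5.x
memory-management routines being documented; this uses the standard OpenMP
API (omp_alloc/omp_free with the predefined omp_default_mem_alloc
allocator), not code from the patch:

#include <omp.h>
#include <stdio.h>

int main (void)
{
  /* Allocate through an OpenMP allocator instead of plain malloc.  */
  double *p = omp_alloc (16 * sizeof (double), omp_default_mem_alloc);
  if (p)
    {
      p[0] = 42.0;
      printf ("%g\n", p[0]);
    }
  omp_free (p, omp_default_mem_alloc);
  return 0;
}
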
@cooljeanius (Owner Author)

Can't even see what the error was due to "This step has been truncated due to its large size. Download the full logs from the menu once the workflow run has completed," and tbh if it's going to be like that every time, I don't really want to bother...

@cooljeanius (Owner Author)

actually wait, I've got an idea

cooljeanius reopened this Oct 15, 2023
cooljeanius and others added 13 commits October 15, 2023 01:08
try using bootstrap-lean instead of bootstrap
hopefully this goes quicker...
all right, never mind about tags...
trim down build a bit
I don't get why it needs to compile before building the docs...
docbuilding is annoying
more reordering
latest error was:
"/usr/bin/x86_64-linux-gnu-ld: cannot find crti.o: No such file or directory"
last build timed out; let's see if we can fix that...
add mailutils dependency to see if that will let the mailing of the testresults work properly
`Mail` might be missing...
cooljeanius merged commit 57f89bb into master Oct 16, 2023
2 checks passed
@talregev

You are welcome.
Can you create a PR in the original repo?

@cooljeanius (Owner Author)

> You are welcome. Can you create a PR in the original repo?

Nope, GCC has its own way of doing things. Although it'd be cool if we could get something like git's "gitgitgadget" set up: https://github.com/gitgitgadget/gitgitgadget

@talregev

I already sent my patch to the official mailing list. If you open a PR in the original repo, I can send it too for you. Or I can send you my email and you can change the small details to fit your needs.
Anyway, the PR is not for the official repo; it is for other people who can use your changes.

@cooljeanius (Owner Author)

> I already sent my patch to the official mailing list. If you open a PR in the original repo, I can send it too for you. Or I can send you my email and you can change the small details to fit your needs. Anyway, the PR is not for the official repo; it is for other people who can use your changes.

The upstream repo doesn't work by PR though. I mean I guess I can send my updated version of the file to the mailing list as well, but I doubt it will be accepted; upstream has a pretty consistent "no GitHub" stance...

@talregev

When you leave a PR, it is not for upstream; it is for everyone else. For example, I left my PR, and you were able to continue it because of that.

@talregev

You can also send a patch to the mailing list. You can do both! :)

@cooljeanius (Owner Author)

talregev#3 ?

@talregev

I meant a PR for upstream:
https://github.com/gcc-mirror/gcc

@cooljeanius (Owner Author)

I meant a PR for upstream: gcc-mirror/gcc

If you merge it into the branch you used to create gcc-mirror#88, the changes will show up there as well

cooljeanius pushed a commit that referenced this pull request Nov 10, 2023
Improve stack protector patterns and peephole2s even more:

a. Use unrelated register clears with integer mode size <= word
   mode size to clear stack protector scratch register.

b. Use unrelated register initializations in front of stack
   protector sequence to clear stack protector scratch register.

c. Use unrelated register initializations using LEA instructions
   to clear stack protector scratch register.

These stack protector improvements reuse 6914 unrelated register
initializations to substitute for the clear of the stack protector scratch
register in 12034 instances of the stack protector sequence in a recent
Linux defconfig build.

gcc/ChangeLog:

	* config/i386/i386.md (@stack_protect_set_1_<PTR:mode>_<W:mode>):
	Use W mode iterator instead of SWI48.  Output MOV instead of XOR
	for TARGET_USE_MOV0.
	(stack_protect_set_1 peephole2): Use integer modes with
	mode size <= word mode size for operand 3.
	(stack_protect_set_1 peephole2 #2): New peephole2 pattern to
	substitute stack protector scratch register clear with unrelated
	register initialization, originally in front of stack
	protector sequence.
	(*stack_protect_set_3_<PTR:mode>_<SWI48:mode>): New insn pattern.
	(stack_protect_set_1 peephole2): New peephole2 pattern to
	substitute stack protector scratch register clear with unrelated
	register initialization involving LEA instruction.
cooljeanius pushed a commit that referenced this pull request Nov 10, 2023
Use unrelated register initializations using zero/sign-extend instructions
to clear stack protector scratch register.

Handle only SI -> DImode extensions for 64-bit targets, as this is the
only extension that triggers the peephole in a non-negligible number of cases.

Also use explicit check for word_mode instead of mode iterator in peephole2
patterns to avoid pattern explosion.

gcc/ChangeLog:

	* config/i386/i386.md (stack_protect_set_1 peephole2):
	Explicitly check operand 2 for word_mode.
	(stack_protect_set_1 peephole2 #2): Ditto.
	(stack_protect_set_2 peephole2): Ditto.
	(stack_protect_set_3 peephole2): Ditto.
	(*stack_protect_set_4z_<mode>_di): New insn pattern.
	(*stack_protect_set_4s_<mode>_di): Ditto.
	(stack_protect_set_4 peephole2): New peephole2 pattern to
	substitute stack protector scratch register clear with unrelated
	register initialization involving zero/sign-extend instruction.
cooljeanius pushed a commit that referenced this pull request Dec 18, 2023
Since the last import from upstream libsanitizer, the output has changed
and now looks more like this:

READ of size 6 at 0x7ff7beb2a144 thread T0
    #0 0x101cf7796 in MemcmpInterceptorCommon(void*, int (*)(void const*, void const*, unsigned long), void const*, void const*, unsigned long) sanitizer_common_interceptors.inc:813
    #1 0x101cf7b99 in memcmp sanitizer_common_interceptors.inc:840
    #2 0x108a0c39f in __stack_chk_guard+0xf (dyld:x86_64+0x8039f)

so let's adjust the pattern accordingly.

gcc/testsuite/ChangeLog:

	* c-c++-common/asan/memcmp-1.c: Adjust pattern on darwin.
cooljeanius pushed a commit that referenced this pull request Dec 18, 2023
…-int (PR target/112413)

On m68k the compiler assumes that the PC-relative jump-via-jump-table
instruction and the jump table are adjacent with no padding in between.

When -mlong-jump-table-offsets is combined with -malign-int, a 2-byte
nop may be inserted before the jump table, causing the jump to add the
fetched offset to the wrong PC base and thus jump to the wrong address.

Fixed by referencing the jump table via its label. On the test case
in the PR the object code change is (the moveal at 16 is the nop):

    a:  6536            bcss 42 <f+0x42>
    c:  e588            lsll #2,%d0
    e:  203b 0808       movel %pc@(18 <f+0x18>,%d0:l),%d0
-  12:  4efb 0802       jmp %pc@(16 <f+0x16>,%d0:l)
+  12:  4efb 0804       jmp %pc@(18 <f+0x18>,%d0:l)
   16:  284c            moveal %a4,%a4
   18:  0000 0020       orib #32,%d0
   1c:  0000 002c       orib #44,%d0

Bootstrapped and tested on m68k-linux-gnu, no regressions.

Note: I don't have commit rights, so I would need assistance applying this.

	PR target/112413
gcc/

	* config/m68k/linux.h (ASM_RETURN_CASE_JUMP): For
	TARGET_LONG_JUMP_TABLE_OFFSETS, reference the jump table
	via its label.
	* config/m68k/m68kelf.h (ASM_RETURN_CASE_JUMP): Likewise.
	* config/m68k/netbsd-elf.h (ASM_RETURN_CASE_JUMP): Likewise.
cooljeanius pushed a commit that referenced this pull request Dec 23, 2023
During partial ordering, we want to look through dependent alias
template specializations within template arguments and otherwise
treat them as opaque in other contexts (see e.g. r7-7116-g0c942f3edab108
and r11-7011-g6e0a231a4aa240).  To that end template_args_equal was
given a partial_order flag that controls this behavior.  This flag
does the right thing when a dependent alias template specialization
appears as template argument of the partial specialization, e.g. in

  template<class T, class...> using first_t = T;
  template<class T> struct traits;
  template<class T> struct traits<first_t<T, T&>> { }; // #1
  template<class T> struct traits<first_t<const T, T&>> { }; // #2

we correctly consider #2 to be more specialized than #1.  But if the
alias specialization appears as a nested template argument of another
class template specialization, e.g. in

  template<class T> struct traits<A<first_t<T, T&>>> { }; // #1
  template<class T> struct traits<A<first_t<const T, T&>>> { }; // #2

then we incorrectly consider #1 and #2 to be unordered.  This is because

  1. we don't propagate the flag to recursive template_args_equal calls
  2. we don't use structural equality for class template specializations
     written in terms of dependent alias template specializations

This patch fixes the first issue by turning the partial_order flag into
a global.  This patch fixes the second issue by making us propagate
structural equality appropriately when building a class template
specialization.  In passing this patch also improves hashing of
specializations that use structural equality.

	PR c++/90679

gcc/cp/ChangeLog:

	* cp-tree.h (comp_template_args): Remove partial_order parameter.
	(template_args_equal): Likewise.
	* pt.cc (comparing_for_partial_ordering): New global flag.
	(iterative_hash_template_arg) <case tcc_type>: Hash the template
	and arguments for specializations that use structural equality.
	(template_args_equal): Remove partial order parameter and
	use comparing_for_partial_ordering instead.
	(comp_template_args): Likewise.
	(comp_template_args_porder): Set comparing_for_partial_ordering
	instead.  Make static.
	(any_template_arguments_need_structural_equality_p): Return true
	for an argument that's a dependent alias template specialization
	or a class template specialization that itself needs structural
	equality.
	* tree.cc (cp_tree_equal) <case TREE_VEC>: Adjust call to
	comp_template_args.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp0x/alias-decl-75a.C: New test.
	* g++.dg/cpp0x/alias-decl-75b.C: New test.
cooljeanius pushed a commit that referenced this pull request Feb 5, 2024
This patch adjusts the costs so that we treat REG and SUBREG expressions the
same for costing.

This was motivated by bt_skip_func and bt_find_func in xz; it results in
nearly a 5% improvement in the dynamic instruction count for input #2, and
smaller but definitely visible improvements pretty much across the board.
Exceptions would be perlbench input #1 and exchange2, which showed very
small regressions.

In the bt_find_func and bt_skip_func cases we have something like this:

> (insn 10 7 11 2 (set (reg/v:DI 136 [ x ])
>         (zero_extend:DI (subreg/s/u:SI (reg/v:DI 137 [ a ]) 0))) "zz.c":6:21 387 {*zero_extendsidi2_bitmanip}
>      (nil))
> (insn 11 10 12 2 (set (reg:DI 142 [ _1 ])
>         (plus:DI (reg/v:DI 136 [ x ])
>             (reg/v:DI 139 [ b ]))) "zz.c":7:23 5 {adddi3}
>      (nil))

[ ... ]

> (insn 13 12 14 2 (set (reg:DI 143 [ _2 ])
>         (plus:DI (reg/v:DI 136 [ x ])
>             (reg/v:DI 141 [ c ]))) "zz.c":8:23 5 {adddi3}
>      (nil))

Note the two uses of (reg 136). The best way to handle that in combine might be
a 3->2 split.  But there's a much better approach if we look at fwprop...

(set (reg:DI 142 [ _1 ])
    (plus:DI (zero_extend:DI (subreg/s/u:SI (reg/v:DI 137 [ a ]) 0))
        (reg/v:DI 139 [ b ])))
change not profitable (cost 4 -> cost 8)

So that should be the same cost as a regular DImode addition when the ZBA
extension is enabled.  But it ends up costing more because the clause to cost
this variant isn't prepared to handle a SUBREG.  That results in the RTL above
having too high a cost and fwprop gives up.

One approach would be to replace REG_P with REG_P || SUBREG_P in the
costing code.  I ultimately decided against that and instead check whether
the operand in question passes register_operand.

By far the most important case to handle is the DImode PLUS.  But for the sake
of consistency, I changed the other instances in riscv_rtx_costs as well.  For
those other cases we're talking about improvements in the .000001% range.

While we are into stage4, this just hits cost modeling which we've generally
agreed is still appropriate (though we were mostly talking about vector).  So
I'm going to extend that general agreement ever so slightly and include scalar
cost modeling :-)

gcc/
	* config/riscv/riscv.cc (riscv_rtx_costs): Handle SUBREG and REG
	similarly.

gcc/testsuite/

	* gcc.target/riscv/reg_subreg_costs.c: New test.

	Co-authored-by: Jivan Hakobyan <jivanhakobyan9@gmail.com>
cooljeanius pushed a commit that referenced this pull request Apr 7, 2024
We evaluate constexpr functions on the original, pre-genericization bodies.
That means that the function body we're evaluating will not have gone
through cp_genericize_r's "Map block scope extern declarations to visible
declarations with the same name and type in outer scopes if any".  Here:

  constexpr bool bar() { return true; } // #1
  constexpr bool foo() {
    constexpr bool bar(void); // #2
    return bar();
  }

it means that we:
1) register_constexpr_fundef (#1)
2) cp_genericize (#1)
   nothing interesting happens
3) register_constexpr_fundef (foo)
   does copy_fn, so we have two copies of the BIND_EXPR
4) cp_genericize (foo)
   this remaps #2 to #1, but only on one copy of the BIND_EXPR
5) retrieve_constexpr_fundef (foo)
   we find it, no problem
6) retrieve_constexpr_fundef (#2)
   and here #2 isn't found in constexpr_fundef_table, because
   we're working on the BIND_EXPR copy where #2 wasn't mapped to #1
   so we fail.  We've only registered #1.

It should work to use DECL_LOCAL_DECL_ALIAS (which used to be
extern_decl_map).  We evaluate constexpr functions on pre-cp_fold
bodies to avoid diagnostic problems, but the remapping I'm proposing
should not interfere with diagnostics.

This is not a problem for a global scope redeclaration; there we go
through duplicate_decls which keeps the DECL_UID:
  DECL_UID (olddecl) = olddecl_uid;
and DECL_UID is what constexpr_fundef_hasher::hash uses.

	PR c++/111132

gcc/cp/ChangeLog:

	* constexpr.cc (get_function_named_in_call): Use
	cp_get_fndecl_from_callee.
	* cvt.cc (cp_get_fndecl_from_callee): If there's a
	DECL_LOCAL_DECL_ALIAS, use it.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp0x/constexpr-redeclaration3.C: New test.
	* g++.dg/cpp0x/constexpr-redeclaration4.C: New test.
cooljeanius pushed a commit that referenced this pull request Jul 7, 2024
Here during overload resolution we have two strictly viable ambiguous
candidates #1 and #2, and two non-strictly viable candidates #3 and #4
which we hold on to ever since r14-6522.  These latter candidates have
an empty second arg conversion since the first arg conversion was deemed
bad, and this trips up joust when called on #3 and #4 which assumes all
arg conversions are there.

We can fix this by making joust robust to empty arg conversions, but in
this situation we shouldn't need to compare #3 and #4 at all given that
we have a strictly viable candidate.  To that end, this patch makes
tourney shortcut considering non-strictly viable candidates upon
encountering ambiguity between two strictly viable candidates (taking
advantage of the fact that the candidates list is sorted according to
viability via splice_viable).
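
A minimal sketch of the shape of the problem (hypothetical code, not the
actual error7.C testcase): #1 and #2 are strictly viable but ambiguous,
while #3 and #4 are only non-strictly viable because the first argument
conversion is bad.

struct A {};
void f(A, int);      // #1: strictly viable
void f(A, double);   // #2: strictly viable
void f(int, int);    // #3: A -> int is a bad conversion
void f(int, double); // #4: likewise

int main ()
{
  f (A{}, 0L); // 0L -> int vs 0L -> double: #1 and #2 are ambiguous.
               // With the patch, tourney reports the ambiguity without
               // ever comparing #3 and #4, whose second arg conversions
               // are empty.
  return 0;
}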

	PR c++/115239

gcc/cp/ChangeLog:

	* call.cc (tourney): Don't consider a non-strictly viable
	candidate as the champ if there was ambiguity between two
	strictly viable candidates.

gcc/testsuite/ChangeLog:

	* g++.dg/overload/error7.C: New test.

Reviewed-by: Jason Merrill <jason@redhat.com>
cooljeanius pushed a commit that referenced this pull request Sep 20, 2024
…o_debug_section [PR116614]

cat abc.C
  #define A(n) struct T##n {} t##n;
  #define B(n) A(n##0) A(n##1) A(n##2) A(n##3) A(n##4) A(n##5) A(n##6) A(n##7) A(n##8) A(n##9)
  #define C(n) B(n##0) B(n##1) B(n##2) B(n##3) B(n##4) B(n##5) B(n##6) B(n##7) B(n##8) B(n##9)
  #define D(n) C(n##0) C(n##1) C(n##2) C(n##3) C(n##4) C(n##5) C(n##6) C(n##7) C(n##8) C(n##9)
  #define E(n) D(n##0) D(n##1) D(n##2) D(n##3) D(n##4) D(n##5) D(n##6) D(n##7) D(n##8) D(n##9)
  E(1) E(2) E(3)
  int main () { return 0; }
./xg++ -B ./ -o abc{.o,.C} -flto -flto-partition=1to1 -O2 -g -fdebug-types-section -c
./xgcc -B ./ -o abc{,.o} -flto -flto-partition=1to1 -O2
(not included in testsuite as it takes a while to compile) FAILs with
lto-wrapper: fatal error: Too many copied sections: Operation not supported
compilation terminated.
/usr/bin/ld: error: lto-wrapper failed
collect2: error: ld returned 1 exit status

The following patch fixes that.  Most of the 64K+ section support for
reading and writing was already there years ago (and especially reading used
quite often already) and a further bug fixed in it in the PR104617 fix.

Yet, the fix isn't solely about removing the
  if (new_i - 1 >= SHN_LORESERVE)
    {
      *err = ENOTSUP;
      return "Too many copied sections";
    }
5 lines; the missing part was that the function only handled reading of
the .symtab_shndx section but not copying/updating it.
If the result has fewer than 64K-epsilon sections, that actually wasn't
needed, but e.g. with -fdebug-types-section one can exceed that pretty
easily (reported to us on a WebKitGtk build on ppc64le).
Updating the section is slightly more complicated, because it basically
needs to be done in lock step with updating the .symtab section: if one
doesn't need to use SHN_XINDEX for an entry, the section should (or should
be updated to) contain the SHN_UNDEF entry; otherwise it needs to hold
whatever would otherwise be stored but couldn't fit.  But repeating all the
symtab decisions about what to discard and how to rewrite it would be ugly.

So, the patch instead emits the .symtab_shndx section (or sections) last
and prepares the content during the .symtab processing and in a second
pass when going just through .symtab_shndx sections just uses the saved
content.
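
To make the lock-step invariant concrete, here is an illustrative sketch
(using Linux <elf.h>; this is my paraphrase, not the libiberty code):

#include <elf.h>   /* SHN_LORESERVE, SHN_XINDEX, SHN_UNDEF */

/* Write one symbol's section index, spilling into the parallel
   .symtab_shndx entry only when it doesn't fit in st_shndx.  */
static void
write_symbol_shndx (Elf64_Sym *sym, Elf64_Word *shndx_entry,
                    unsigned int secnum)
{
  if (secnum < SHN_LORESERVE)
    {
      sym->st_shndx = secnum;
      *shndx_entry = SHN_UNDEF;   /* entry stays zero when the index fits */
    }
  else
    {
      sym->st_shndx = SHN_XINDEX; /* escape marker in the symbol itself */
      *shndx_entry = secnum;      /* real index lives in .symtab_shndx */
    }
}

int main (void)
{
  Elf64_Sym sym;
  Elf64_Word entry;
  write_symbol_shndx (&sym, &entry, 70000);  /* 70000 >= SHN_LORESERVE */
  return sym.st_shndx == SHN_XINDEX ? 0 : 1;
}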

2024-09-07  Jakub Jelinek  <jakub@redhat.com>

	PR lto/116614
	* simple-object-elf.c (SHN_COMMON): Align comment with neighbouring
	comments.
	(SHN_HIRESERVE): Use uppercase hex digits instead of lowercase for
	consistency.
	(simple_object_elf_find_sections): Formatting fixes.
	(simple_object_elf_fetch_attributes): Likewise.
	(simple_object_elf_attributes_merge): Likewise.
	(simple_object_elf_start_write): Likewise.
	(simple_object_elf_write_ehdr): Likewise.
	(simple_object_elf_write_shdr): Likewise.
	(simple_object_elf_write_to_file): Likewise.
	(simple_object_elf_copy_lto_debug_section): Likewise.  Don't fail for
	new_i - 1 >= SHN_LORESERVE, instead arrange in that case to copy
	over .symtab_shndx sections, though emit those last and compute their
	section content when processing associated .symtab sections.  Handle
	simple_object_internal_read failure even in the .symtab_shndx reading
	case.
cooljeanius pushed a commit that referenced this pull request Oct 4, 2024
This is another case of load hoisting breaking UID order in the
preheader, this time between two hoistings.  The easiest way out is
to do what we do for the main stmt - copy instead of move.

	PR tree-optimization/116902
	PR tree-optimization/116842
	* tree-vect-stmts.cc (sort_after_uid): Remove again.
	(hoist_defs_of_uses): Copy defs instead of hoisting them so
	we can zero their UID.
	(vectorizable_load): Separate analysis and transform call,
	do transform on the stmt copy.

	* g++.dg/torture/pr116902.C: New testcase.
cooljeanius pushed a commit that referenced this pull request Oct 14, 2024
Whenever C1 and C2 are integer constants, X is of a wrapping type, and
cmp is a relational operator, the expression X +- C1 cmp C2 can be
simplified in the following cases:

(a) If cmp is <= and C2 -+ C1 == +INF(1), we can transform the initial
comparison in the following way:
   X +- C1 <= C2
   -INF <= X +- C1 <= C2 (add left hand side which holds for any X, C1)
   -INF -+ C1 <= X <= C2 -+ C1 (add -+C1 to all 3 expressions)
   -INF -+ C1 <= X <= +INF (due to (1))
   -INF -+ C1 <= X (eliminate the right hand side since it holds for any X)

(b) By analogy, if cmp if >= and C2 -+ C1 == -INF(1), use the following
sequence of transformations:

   X +- C1 >= C2
   +INF >= X +- C1 >= C2 (add left hand side which holds for any X, C1)
   +INF -+ C1 >= X >= C2 -+ C1 (add -+C1 to all 3 expressions)
   +INF -+ C1 >= X >= -INF (due to (1))
   +INF -+ C1 >= X (eliminate the right hand side since it holds for any X)

(c) The > and < cases are negations of (a) and (b), respectively.

This transformation occasionally allows us to save add / sub instructions;
for instance, the expression

3 + (uint32_t)f() < 2

compiles to

cmn     w0, #4
cset    w0, ls

instead of

add     w0, w0, 3
cmp     w0, 2
cset    w0, ls

on aarch64.
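
Case (a) can be spot-checked with a self-contained program (a sketch added
for illustration; c1 and c2 are chosen so that c2 - c1 wraps to UINT32_MAX):

#include <assert.h>
#include <stdint.h>

int main (void)
{
  const uint32_t c1 = 3, c2 = c1 - 1;  /* c2 - c1 == UINT32_MAX (mod 2^32) */
  /* Case (a) claims  x + c1 <= c2  is equivalent to  x >= 0 - c1  under
     wrapping arithmetic; sample the whole range.  */
  for (uint64_t i = 0; i <= UINT32_MAX; i += 65537)
    {
      uint32_t x = (uint32_t) i;
      assert (((uint32_t) (x + c1) <= c2) == (x >= (uint32_t) 0 - c1));
    }
  return 0;
}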

Testcases that go together with this patch have been split into two
separate files, one containing testcases for unsigned variables and the
other for wrapping signed ones (and thus compiled with -fwrapv).
Additionally, one aarch64 test has been adjusted since the patch has
caused the generated code to change from

cmn     w0, #2
csinc   w0, w1, wzr, cc   (x < -2)

to

cmn     w0, #3
csinc   w0, w1, wzr, cs   (x <= -3)

This patch has been bootstrapped and regtested on aarch64, x86_64, and
i386, and additionally regtested on riscv32.

gcc/ChangeLog:

	PR tree-optimization/116024
	* match.pd: New transformation around integer comparison.

gcc/testsuite/ChangeLog:

	* gcc.dg/tree-ssa/pr116024-2.c: New test.
	* gcc.dg/tree-ssa/pr116024-2-fwrapv.c: Ditto.
	* gcc.target/aarch64/gtu_to_ltu_cmp_1.c: Adjust.
cooljeanius pushed a commit that referenced this pull request Oct 24, 2024
PR jit/117275 reports various jit test failures seen on
powerpc64le-unknown-linux-gnu due to hitting this assertion
in varasm.cc on the 2nd compilation in a process:

#2  0x00007ffff63e67d0 in assemble_external_libcall (fun=0x7ffff2a4b1d8)
    at ../../src/gcc/varasm.cc:2650
2650          gcc_assert (!pending_assemble_externals_processed);
(gdb) p pending_assemble_externals_processed
$1 = true

We're not properly resetting state in varasm.cc after a compile
for libgccjit.

Fixed thusly.

gcc/ChangeLog:
	PR jit/117275
	* toplev.cc (toplev::finalize): Call varasm_cc_finalize.
	* varasm.cc (varasm_cc_finalize): New.
	* varasm.h (varasm_cc_finalize): New decl.
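
A rough sketch of the shape of the fix (my paraphrase; the real
varasm_cc_finalize resets more file-scope state than shown here):

#include <stdbool.h>

/* Stand-in for varasm.cc's file-scope flag seen in the gdb session above. */
static bool pending_assemble_externals_processed;

/* Reset varasm state between in-process compiles so the second one
   doesn't trip the gcc_assert in assemble_external_libcall.  */
static void
varasm_cc_finalize (void)
{
  pending_assemble_externals_processed = false;
}

int main (void)
{
  pending_assemble_externals_processed = true;  /* after compile #1 */
  varasm_cc_finalize ();                        /* toplev::finalize */
  return pending_assemble_externals_processed;  /* 0: compile #2 is safe */
}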

Signed-off-by: David Malcolm <dmalcolm@redhat.com>
cooljeanius pushed a commit that referenced this pull request Dec 22, 2024
…R117557]

The testcase

#include <stdint.h>
#include <string.h>

#define N 8
#define L 8

void f(const uint8_t * restrict seq1,
       const uint8_t *idx, uint8_t *seq_out) {
  for (int i = 0; i < L; ++i) {
    uint8_t h = idx[i];
    memcpy((void *)&seq_out[i * N], (const void *)&seq1[h * N / 2], N / 2);
  }
}

compiled at -O3 -mcpu=neoverse-n1+sve

miscompiles to:

    ld1w    z31.s, p3/z, [x23, z29.s, sxtw]
    ld1w    z29.s, p7/z, [x23, z30.s, sxtw]
    st1w    z29.s, p7, [x24, z12.s, sxtw]
    st1w    z31.s, p7, [x24, z12.s, sxtw]

rather than

    ld1w    z31.s, p3/z, [x23, z29.s, sxtw]
    ld1w    z29.s, p7/z, [x23, z30.s, sxtw]
    st1w    z29.s, p7, [x24, z12.s, sxtw]
    addvl   x3, x24, #2
    st1w    z31.s, p3, [x3, z12.s, sxtw]

Two things go wrong here: the wrong mask is used, and the address pointers
for the stores are wrong.

This issue is happening because the codegen loop in vectorizable_store is a
nested loop where in the outer loop we iterate over ncopies and in the inner
loop we loop over vec_num.

For SLP, ncopies == 1 and vec_num == SLP_NUM_STMS, but the loop mask is
determined by only the outer loop index, and the pointer address is only
updated in the outer loop.

As such, for SLP we always use the same predicate and the same memory
location.  This patch flattens the two loops, instead iterating over
ncopies * vec_num, and simplifies the indexing.

This does not fully fix the gcc_r miscompile in SPECCPU 2017, as the error
moves somewhere else.  I will look at that next, but this fixes some other
libraries that had also started failing.
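
An illustrative model of the flattening (not the actual vectorizer code):
derive the old (outer, inner) pair from a single flattened index, so the
mask and address are recomputed for every vector rather than once per copy.

#include <stdio.h>

int main (void)
{
  const int ncopies = 1, vec_num = 4;  /* the SLP case from the text */
  for (int k = 0; k < ncopies * vec_num; k++)
    {
      int j = k / vec_num, i = k % vec_num;
      /* Before the fix, the loop mask and store address were keyed on the
         outer index j only (always 0 for SLP), so every vector reused the
         same predicate and location; keying them on k fixes that.  */
      printf ("copy %d, vector %d -> mask/address slot %d\n", j, i, k);
    }
  return 0;
}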

gcc/ChangeLog:

	PR tree-optimization/117557
	* tree-vect-stmts.cc (vectorizable_store): Flatten the ncopies and
	vec_num loops.

gcc/testsuite/ChangeLog:

	PR tree-optimization/117557
	* gcc.target/aarch64/pr117557.c: New test.
cooljeanius pushed a commit that referenced this pull request Dec 22, 2024
This PR reports a missed optimization.  When we have:

  Str str{"Test"};
  callback(str);

as in the test, we're able to evaluate the Str::Str() call at compile
time.  But when we have:

  callback(Str{"Test"});

we are not.  With this patch (in fact, it's Patrick's patch with a little
tweak), we turn

  callback (TARGET_EXPR <D.2890, <<< Unknown tree: aggr_init_expr
    5
    __ct_comp
    D.2890
    (struct Str *) <<< Unknown tree: void_cst >>>
    (const char *) "Test" >>>>)

into

  callback (TARGET_EXPR <D.2890, {.str=(const char *) "Test", .length=4}>)

I explored the idea of calling maybe_constant_value for the whole
TARGET_EXPR in cp_fold.  That has three problems:
- we can't always elide a TARGET_EXPR, so we'd have to make sure the
  result is also a TARGET_EXPR;
- the resulting TARGET_EXPR must have the same flags, otherwise Bad
  Things happen;
- getting a new slot is also problematic.  I've seen a test where we
  had "TARGET_EXPR<D.2680, ...>, D.2680", and folding the whole TARGET_EXPR
  would get us "TARGET_EXPR<D.2681, ...>", but since we don't see the outer
  D.2680, we can't replace it with D.2681, and things break.

With this patch, two tree-ssa tests regressed: pr78687.C and pr90883.C.

FAIL: g++.dg/tree-ssa/pr90883.C   scan-tree-dump dse1 "Deleted redundant store: .*.a = {}"
is easy.  Previously, we would call C::C, so .gimple has:

  D.2590 = {};
  C::C (&D.2590);
  D.2597 = D.2590;
  return D.2597;

Then .einline inlines the C::C call:

  D.2590 = {};
  D.2590.a = {}; // #1
  D.2590.b = 0;  // #2
  D.2597 = D.2590;
  D.2590 ={v} {CLOBBER(eos)};
  return D.2597;

then #2 is removed in .fre1, and #1 is removed in .dse1.  So the test
passes.  But with the patch, .gimple won't have that C::C call, so the
IL is of course going to look different.  The .optimized dump looks the
same though so there's no problem.

pr78687.C is XFAILed because the test passes with r15-5746 but no longer
with r15-5747.  I opened <https://gcc.gnu.org/PR117971>.

	PR c++/116416

gcc/cp/ChangeLog:

	* cp-gimplify.cc (cp_fold_r) <case TARGET_EXPR>: Try to fold
	TARGET_EXPR_INITIAL and replace it with the folded result if
	it's TREE_CONSTANT.

gcc/testsuite/ChangeLog:

	* g++.dg/analyzer/pr97116.C: Adjust dg-message.
	* g++.dg/tree-ssa/pr78687.C: Add XFAIL.
	* g++.dg/tree-ssa/pr90883.C: Adjust dg-final.
	* g++.dg/cpp0x/constexpr-prvalue1.C: New test.
	* g++.dg/cpp1y/constexpr-prvalue1.C: New test.

Co-authored-by: Patrick Palka <ppalka@redhat.com>
Reviewed-by: Jason Merrill <jason@redhat.com>
cooljeanius pushed a commit that referenced this pull request Dec 22, 2024
The ASRD instruction on SVE performs an arithmetic shift right by an immediate
for divide.

This patch enables the use of ASRD with Neon modes.

For example:

int in[N], out[N];

void
foo (void)
{
  for (int i = 0; i < N; i++)
    out[i] = in[i] / 4;
}

compiles to:

	ldr	q31, [x1, x0]
	cmlt	v30.16b, v31.16b, #0
	and	z30.b, z30.b, 3
	add	v30.16b, v30.16b, v31.16b
	sshr	v30.16b, v30.16b, 2
	str	q30, [x0, x2]
	add	x0, x0, 16
	cmp	x0, 1024

but can just be:

	ldp	q30, q31, [x0], 32
	asrd	z31.b, p7/m, z31.b, #2
	asrd	z30.b, p7/m, z30.b, #2
	stp	q30, q31, [x1], 32
	cmp	x0, x2

This patch also adds the following overload:
	aarch64_ptrue_reg (machine_mode pred_mode, machine_mode data_mode)
Depending on the data mode, the function returns a predicate with the
appropriate bits set.

The patch was bootstrapped and regtested on aarch64-linux-gnu, no regression.

gcc/ChangeLog:

	* config/aarch64/aarch64.cc (aarch64_ptrue_reg): New overload.
	* config/aarch64/aarch64-protos.h (aarch64_ptrue_reg): Likewise.
	* config/aarch64/aarch64-sve.md: Extended sdiv_pow2<mode>3
	and *sdiv_pow2<mode>3 to support Neon modes.

gcc/testsuite/ChangeLog:

	* gcc.target/aarch64/sve/sve-asrd.c: New test.

Co-authored-by: Richard Sandiford <richard.sandiford@arm.com>
Signed-off-by: Soumya AR <soumyaa@nvidia.com>
cooljeanius pushed a commit that referenced this pull request Dec 22, 2024
This crash started with my r12-7803 but I believe the problem lies
elsewhere.

build_vec_init has cleanup_flags whose purpose is -- if I grok this
correctly -- to avoid destructing an object multiple times.  Let's
say we are initializing an array of A.  Then we might end up in
a scenario similar to initlist-eh1.C:

  try
    {
      call A::A in a loop
      // #0
      try
        {
	  call a fn using the array
	}
      finally
	{
	  // #1
	  call A::~A in a loop
	}
    }
  catch
    {
      // #2
      call A::~A in a loop
    }

cleanup_flags makes us emit a statement like

  D.3048 = 2;

at #0 to disable performing the cleanup at #2, since #1 will take
care of the destruction of the array.

But if we are not emitting the loop because we can use a constant
initializer (and use a single { a, b, ...}), we shouldn't generate
the statement resetting the iterator to its initial value.  Otherwise
we crash in gimplify_var_or_parm_decl because it gets the stray decl
D.3048.

	PR c++/117985

gcc/cp/ChangeLog:

	* init.cc (build_vec_init): Pop CLEANUP_FLAGS if we're not
	generating the loop.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp0x/initlist-array23.C: New test.
	* g++.dg/cpp0x/initlist-array24.C: New test.