From: "Sam James" <sam@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/gcc-patches:master commit in: 15.0.0/gentoo/
Date: Tue, 14 Jan 2025 08:43:52 +0000 (UTC)
Message-ID: <1736844210.45c3db3dbbdcb7c6a692987a05a40cfb2bdaa034.sam@gentoo>

commit:     45c3db3dbbdcb7c6a692987a05a40cfb2bdaa034
Author:     Sam James <sam <AT> gentoo <DOT> org>
AuthorDate: Tue Jan 14 08:43:30 2025 +0000
Commit:     Sam James <sam <AT> gentoo <DOT> org>
CommitDate: Tue Jan 14 08:43:30 2025 +0000
URL:        https://gitweb.gentoo.org/proj/gcc-patches.git/commit/?id=45c3db3d

15.0.0: add two more ifcombine patches

Bug: https://gcc.gnu.org/PR118456
Signed-off-by: Sam James <sam <AT> gentoo.org>

 ...xtend-constants-to-compare-with-bitfields.patch | 214 +++++++++++++
 ...PR118456-robustify-decode_field_reference.patch | 354 +++++++++++++++++++++
 15.0.0/gentoo/README.history                       |   2 +
 3 files changed, 570 insertions(+)

diff --git a/15.0.0/gentoo/84_all_PR118456-check-and-extend-constants-to-compare-with-bitfields.patch b/15.0.0/gentoo/84_all_PR118456-check-and-extend-constants-to-compare-with-bitfields.patch
new file mode 100644
index 0000000..e005c02
--- /dev/null
+++ b/15.0.0/gentoo/84_all_PR118456-check-and-extend-constants-to-compare-with-bitfields.patch
@@ -0,0 +1,214 @@
+https://inbox.sourceware.org/gcc-patches/ora5bugmmi.fsf@lxoliva.fsfla.org/
+
+From 4e794a3a5de8e8fa0fcaf98e5ea298d4a3c71192 Mon Sep 17 00:00:00 2001
+Message-ID: <4e794a3a5de8e8fa0fcaf98e5ea298d4a3c71192.1736844127.git.sam@gentoo.org>
+From: Alexandre Oliva <oliva@adacore.com>
+Date: Mon, 13 Jan 2025 23:22:45 -0300
+Subject: [PATCH 1/2] check and extend constants to compare with bitfields
+ [PR118456]
+
+Add logic to check and extend constants compared with bitfields, so
+that fields are only compared with constants they could actually
+equal.  This involves making sure the signedness doesn't change
+between loads and conversions before shifts: we'd need to carry a lot
+more data to deal with all the possibilities.
+
+Regstrapped on x86_64-linux-gnu.  Ok to install?
+
+for  gcc/ChangeLog
+
+	PR tree-optimization/118456
+	* gimple-fold.cc (decode_field_reference): Punt if shifting
+	after changing signedness.
+	(fold_truth_andor_for_ifcombine): Check extension bits in
+	constants before clipping.
+
+for  gcc/testsuite/ChangeLog
+
+	PR tree-optimization/118456
+	* gcc.dg/field-merge-21.c: New.
+	* gcc.dg/field-merge-22.c: New.
+---
+ gcc/gimple-fold.cc                    | 40 +++++++++++++++++++-
+ gcc/testsuite/gcc.dg/field-merge-21.c | 53 +++++++++++++++++++++++++++
+ gcc/testsuite/gcc.dg/field-merge-22.c | 31 ++++++++++++++++
+ 3 files changed, 122 insertions(+), 2 deletions(-)
+ create mode 100644 gcc/testsuite/gcc.dg/field-merge-21.c
+ create mode 100644 gcc/testsuite/gcc.dg/field-merge-22.c
+
+diff --git a/gcc/gimple-fold.cc b/gcc/gimple-fold.cc
+index 93ed8b3abb05..5b1fbe6db1df 100644
+--- a/gcc/gimple-fold.cc
++++ b/gcc/gimple-fold.cc
+@@ -7712,6 +7712,18 @@ decode_field_reference (tree *pexp, HOST_WIDE_INT *pbitsize,
+ 
+   if (shiftrt)
+     {
++      /* Punt if we're shifting by more than the loaded bitfield (after
++	 adjustment), or if there's a shift after a change of signedness.
++	 When comparing this field with a constant, we'll check that the
++	 constant is a proper sign- or zero-extension (depending on signedness)
++	 of a value that would fit in the selected portion of the bitfield.  A
++	 shift after a change of signedness would make the extension
++	 non-uniform, and we can't deal with that (yet ???).  See
++	 gcc.dg/field-merge-22.c for a test that would go wrong.  */
++      if (*pbitsize <= shiftrt
++	  || (convert_before_shift
++	      && outer_type && unsignedp != TYPE_UNSIGNED (outer_type)))
++	return NULL_TREE;
+       if (!*preversep ? !BYTES_BIG_ENDIAN : BYTES_BIG_ENDIAN)
+ 	*pbitpos += shiftrt;
+       *pbitsize -= shiftrt;
+@@ -8512,13 +8524,25 @@ fold_truth_andor_for_ifcombine (enum tree_code code, tree truth_type,
+      and bit position.  */
+   if (l_const.get_precision ())
+     {
++      /* Before clipping upper bits of the right-hand operand of the compare,
++	 check that they're sign or zero extensions, depending on how the
++	 left-hand operand would be extended.  */
++      bool l_non_ext_bits = false;
++      if (ll_bitsize < lr_bitsize)
++	{
++	  wide_int zext = wi::zext (l_const, ll_bitsize);
++	  if ((ll_unsignedp ? zext : wi::sext (l_const, ll_bitsize)) == l_const)
++	    l_const = zext;
++	  else
++	    l_non_ext_bits = true;
++	}
+       /* We're doing bitwise equality tests, so don't bother with sign
+ 	 extensions.  */
+       l_const = wide_int::from (l_const, lnprec, UNSIGNED);
+       if (ll_and_mask.get_precision ())
+ 	l_const &= wide_int::from (ll_and_mask, lnprec, UNSIGNED);
+       l_const <<= xll_bitpos;
+-      if ((l_const & ~ll_mask) != 0)
++      if (l_non_ext_bits || (l_const & ~ll_mask) != 0)
+ 	{
+ 	  warning_at (lloc, OPT_Wtautological_compare,
+ 		      "comparison is always %d", wanted_code == NE_EXPR);
+@@ -8530,11 +8554,23 @@ fold_truth_andor_for_ifcombine (enum tree_code code, tree truth_type,
+ 	 again.  */
+       gcc_checking_assert (r_const.get_precision ());
+ 
++      /* Before clipping upper bits of the right-hand operand of the compare,
++	 check that they're sign or zero extensions, depending on how the
++	 left-hand operand would be extended.  */
++      bool r_non_ext_bits = false;
++      if (rl_bitsize < rr_bitsize)
++	{
++	  wide_int zext = wi::zext (r_const, rl_bitsize);
++	  if ((rl_unsignedp ? zext : wi::sext (r_const, rl_bitsize)) == r_const)
++	    r_const = zext;
++	  else
++	    r_non_ext_bits = true;
++	}
+       r_const = wide_int::from (r_const, lnprec, UNSIGNED);
+       if (rl_and_mask.get_precision ())
+ 	r_const &= wide_int::from (rl_and_mask, lnprec, UNSIGNED);
+       r_const <<= xrl_bitpos;
+-      if ((r_const & ~rl_mask) != 0)
++      if (r_non_ext_bits || (r_const & ~rl_mask) != 0)
+ 	{
+ 	  warning_at (rloc, OPT_Wtautological_compare,
+ 		      "comparison is always %d", wanted_code == NE_EXPR);
+diff --git a/gcc/testsuite/gcc.dg/field-merge-21.c b/gcc/testsuite/gcc.dg/field-merge-21.c
+new file mode 100644
+index 000000000000..042b2123eb63
+--- /dev/null
++++ b/gcc/testsuite/gcc.dg/field-merge-21.c
+@@ -0,0 +1,53 @@
++/* { dg-do run } */
++/* { dg-options "-O2" } */
++
++/* PR tree-optimization/118456 */
++/* Check that shifted fields compared with constants compare correctly even
++   if the constant contains sign-extension bits not present in the bit
++   range.  */
++
++struct S { unsigned long long o; unsigned short a, b; } s;
++
++__attribute__((noipa)) int
++foo (void)
++{
++  return ((unsigned char) s.a) >> 3 == 17 && ((signed char) s.b) >> 2 == -27;
++}
++
++__attribute__((noipa)) int
++bar (void)
++{
++  return ((unsigned char) s.a) >> 3 == 17 && ((signed char) s.b) >> 2 == -91;
++}
++
++__attribute__((noipa)) int
++bars (void)
++{
++  return ((unsigned char) s.a) >> 3 == 17 && ((signed char) s.b) >> 2 == 37;
++}
++
++__attribute__((noipa)) int
++baz (void)
++{
++  return ((unsigned char) s.a) >> 3 == 49 && ((signed char) s.b) >> 2 == -27;
++}
++
++__attribute__((noipa)) int
++bazs (void)
++{
++  return ((unsigned char) s.a) >> 3 == (unsigned char) -15 && ((signed char) s.b) >> 2 == -27;
++}
++
++int
++main ()
++{
++  s.a = 17 << 3;
++  s.b = (unsigned short)(-27u << 2);
++  if (foo () != 1
++      || bar () != 0
++      || bars () != 0
++      || baz () != 0
++      || bazs () != 0)
++    __builtin_abort ();
++  return 0;
++}
+diff --git a/gcc/testsuite/gcc.dg/field-merge-22.c b/gcc/testsuite/gcc.dg/field-merge-22.c
+new file mode 100644
+index 000000000000..45b29c0bccaf
+--- /dev/null
++++ b/gcc/testsuite/gcc.dg/field-merge-22.c
+@@ -0,0 +1,31 @@
++/* { dg-do run } */
++/* { dg-options "-O2" } */
++
++/* PR tree-optimization/118456 */
++/* Check that compares with constants take into account sign/zero extension of
++   both the bitfield and of the shifting type.  */
++
++#define shift (__CHAR_BIT__ - 4)
++
++struct S {
++  signed char a : shift + 2;
++  signed char b : shift + 2;
++  short ignore[0];
++} s;
++
++__attribute__((noipa)) int
++foo (void)
++{
++  return ((unsigned char) s.a) >> shift == 15
++    && ((unsigned char) s.b) >> shift == 0;
++}
++
++int
++main ()
++{
++  s.a = -1;
++  s.b = 1;
++  if (foo () != 1)
++    __builtin_abort ();
++  return 0;
++}
+
+base-commit: 31c3c1a83fd885b4687c9f6f7acd68af76d758d3
+-- 
+2.48.0
+
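
The behaviour the patch above guards against can be illustrated outside the
testsuite with a small standalone C program modeled on gcc.dg/field-merge-21.c
(the struct and values below are invented for illustration and are not part of
the commit): after narrowing to unsigned char and shifting right by 3, the
value fits in 5 bits (0..31), so comparing it against a wider constant can
never be true, which is what the new l_non_ext_bits/r_non_ext_bits checks
detect before the constant's upper bits are clipped.

#include <stdio.h>

struct S { unsigned short a; } s;

int
main (void)
{
  s.a = 17 << 3;                                    /* the field encodes 17 */
  int matches  = ((unsigned char) s.a) >> 3 == 17;  /* 1: constant fits in 5 bits */
  int too_wide = ((unsigned char) s.a) >> 3 == 49;  /* 0: 49 > 31, can never match */
  printf ("%d %d\n", matches, too_wide);
  return 0;
}

With the patch, fold_truth_andor_for_ifcombine verifies that such a constant
is a proper zero- or sign-extension of a value representable in the field
before clipping it, and otherwise folds the compare to a constant result
(with a -Wtautological-compare warning), as the hunks above show.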

diff --git a/15.0.0/gentoo/85_all_PR118456-robustify-decode_field_reference.patch b/15.0.0/gentoo/85_all_PR118456-robustify-decode_field_reference.patch
new file mode 100644
index 0000000..065c958
--- /dev/null
+++ b/15.0.0/gentoo/85_all_PR118456-robustify-decode_field_reference.patch
@@ -0,0 +1,354 @@
+https://inbox.sourceware.org/gcc-patches/or1px6gf6r.fsf@lxoliva.fsfla.org/
+
+From e3a5a707fd88522a73d05841970fa2465e991eaa Mon Sep 17 00:00:00 2001
+Message-ID: <e3a5a707fd88522a73d05841970fa2465e991eaa.1736844127.git.sam@gentoo.org>
+In-Reply-To: <4e794a3a5de8e8fa0fcaf98e5ea298d4a3c71192.1736844127.git.sam@gentoo.org>
+References: <4e794a3a5de8e8fa0fcaf98e5ea298d4a3c71192.1736844127.git.sam@gentoo.org>
+From: Alexandre Oliva <oliva@adacore.com>
+Date: Tue, 14 Jan 2025 02:03:24 -0300
+Subject: [PATCH 2/2] robustify decode_field_reference
+
+Arrange for decode_field_reference to use local variables throughout,
+to modify the out parms only when we're about to return non-NULL, and
+to drop the unused case of NULL pand_mask, that had a latent failure
+to detect signbit masking.
+
+Regstrapped on x86_64-linux-gnu along with the PR118456 patch.
+Ok to install?
+
+for  gcc/ChangeLog
+
+	* gimple-fold.cc (decode_field_reference): Robustify to set
+	out parms only when returning non-NULL.
+	(fold_truth_andor_for_ifcombine): Bail if
+	decode_field_reference returns NULL.  Add complementary assert
+	on r_const's not being set when l_const isn't.
+---
+ gcc/gimple-fold.cc | 155 +++++++++++++++++++++++----------------------
+ 1 file changed, 80 insertions(+), 75 deletions(-)
+
+diff --git a/gcc/gimple-fold.cc b/gcc/gimple-fold.cc
+index 5b1fbe6db1df..3c971a29ef04 100644
+--- a/gcc/gimple-fold.cc
++++ b/gcc/gimple-fold.cc
+@@ -7510,18 +7510,17 @@ gimple_binop_def_p (enum tree_code code, tree t, tree op[2])
+    *PREVERSEP is set to the storage order of the field.
+ 
+    *PAND_MASK is set to the mask found in a BIT_AND_EXPR, if any.  If
+-   PAND_MASK *is NULL, BIT_AND_EXPR is not recognized.  If *PAND_MASK
+-   is initially set to a mask with nonzero precision, that mask is
++   *PAND_MASK is initially set to a mask with nonzero precision, that mask is
+    combined with the found mask, or adjusted in precision to match.
+ 
+    *PSIGNBIT is set to TRUE if, before clipping to *PBITSIZE, the mask
+    encompassed bits that corresponded to extensions of the sign bit.
+ 
+-   *XOR_P is to be FALSE if EXP might be a XOR used in a compare, in which
+-   case, if XOR_CMP_OP is a zero constant, it will be overridden with *PEXP,
+-   *XOR_P will be set to TRUE, *XOR_PAND_MASK will be copied from *PAND_MASK,
+-   and the left-hand operand of the XOR will be decoded.  If *XOR_P is TRUE,
+-   XOR_CMP_OP and XOR_PAND_MASK are supposed to be NULL, and then the
++   *PXORP is to be FALSE if EXP might be a XOR used in a compare, in which
++   case, if PXOR_CMP_OP is a zero constant, it will be overridden with *PEXP,
++   *PXORP will be set to TRUE, *PXOR_AND_MASK will be copied from *PAND_MASK,
++   and the left-hand operand of the XOR will be decoded.  If *PXORP is TRUE,
++   PXOR_CMP_OP and PXOR_AND_MASK are supposed to be NULL, and then the
+    right-hand operand of the XOR will be decoded.
+ 
+    *LOAD is set to the load stmt of the innermost reference, if any,
+@@ -7538,8 +7537,8 @@ decode_field_reference (tree *pexp, HOST_WIDE_INT *pbitsize,
+ 			HOST_WIDE_INT *pbitpos,
+ 			bool *punsignedp, bool *preversep, bool *pvolatilep,
+ 			wide_int *pand_mask, bool *psignbit,
+-			bool *xor_p, tree *xor_cmp_op, wide_int *xor_pand_mask,
+-			gimple **load, location_t loc[4])
++			bool *pxorp, tree *pxor_cmp_op, wide_int *pxor_and_mask,
++			gimple **pload, location_t loc[4])
+ {
+   tree exp = *pexp;
+   tree outer_type = 0;
+@@ -7549,9 +7548,11 @@ decode_field_reference (tree *pexp, HOST_WIDE_INT *pbitsize,
+   tree res_ops[2];
+   machine_mode mode;
+   bool convert_before_shift = false;
+-
+-  *load = NULL;
+-  *psignbit = false;
++  bool signbit = false;
++  bool xorp = false;
++  tree xor_cmp_op;
++  wide_int xor_and_mask;
++  gimple *load = NULL;
+ 
+   /* All the optimizations using this function assume integer fields.
+      There are problems with FP fields since the type_for_size call
+@@ -7576,7 +7577,7 @@ decode_field_reference (tree *pexp, HOST_WIDE_INT *pbitsize,
+ 
+   /* Recognize and save a masking operation.  Combine it with an
+      incoming mask.  */
+-  if (pand_mask && gimple_binop_def_p (BIT_AND_EXPR, exp, res_ops)
++  if (gimple_binop_def_p (BIT_AND_EXPR, exp, res_ops)
+       && TREE_CODE (res_ops[1]) == INTEGER_CST)
+     {
+       loc[1] = gimple_location (SSA_NAME_DEF_STMT (exp));
+@@ -7596,29 +7597,29 @@ decode_field_reference (tree *pexp, HOST_WIDE_INT *pbitsize,
+ 	    and_mask &= wide_int::from (*pand_mask, prec_op, UNSIGNED);
+ 	}
+     }
+-  else if (pand_mask)
++  else
+     and_mask = *pand_mask;
+ 
+   /* Turn (a ^ b) [!]= 0 into a [!]= b.  */
+-  if (xor_p && gimple_binop_def_p (BIT_XOR_EXPR, exp, res_ops))
++  if (pxorp && gimple_binop_def_p (BIT_XOR_EXPR, exp, res_ops))
+     {
+       /* No location recorded for this one, it's entirely subsumed by the
+ 	 compare.  */
+-      if (*xor_p)
++      if (*pxorp)
+ 	{
+ 	  exp = res_ops[1];
+-	  gcc_checking_assert (!xor_cmp_op && !xor_pand_mask);
++	  gcc_checking_assert (!pxor_cmp_op && !pxor_and_mask);
+ 	}
+-      else if (!xor_cmp_op)
++      else if (!pxor_cmp_op)
+ 	/* Not much we can do when xor appears in the right-hand compare
+ 	   operand.  */
+ 	return NULL_TREE;
+-      else if (integer_zerop (*xor_cmp_op))
++      else if (integer_zerop (*pxor_cmp_op))
+ 	{
+-	  *xor_p = true;
++	  xorp = true;
+ 	  exp = res_ops[0];
+-	  *xor_cmp_op = *pexp;
+-	  *xor_pand_mask = *pand_mask;
++	  xor_cmp_op = *pexp;
++	  xor_and_mask = *pand_mask;
+ 	}
+     }
+ 
+@@ -7646,12 +7647,12 @@ decode_field_reference (tree *pexp, HOST_WIDE_INT *pbitsize,
+   /* Yet another chance to drop conversions.  This one is allowed to
+      match a converting load, subsuming the load identification block
+      below.  */
+-  if (!outer_type && gimple_convert_def_p (exp, res_ops, load))
++  if (!outer_type && gimple_convert_def_p (exp, res_ops, &load))
+     {
+       outer_type = TREE_TYPE (exp);
+       loc[0] = gimple_location (SSA_NAME_DEF_STMT (exp));
+-      if (*load)
+-	loc[3] = gimple_location (*load);
++      if (load)
++	loc[3] = gimple_location (load);
+       exp = res_ops[0];
+       /* This looks backwards, but we're going back the def chain, so if we
+ 	 find the conversion here, after finding a shift, that's because the
+@@ -7662,14 +7663,13 @@ decode_field_reference (tree *pexp, HOST_WIDE_INT *pbitsize,
+     }
+ 
+   /* Identify the load, if there is one.  */
+-  if (!(*load) && TREE_CODE (exp) == SSA_NAME
+-      && !SSA_NAME_IS_DEFAULT_DEF (exp))
++  if (!load && TREE_CODE (exp) == SSA_NAME && !SSA_NAME_IS_DEFAULT_DEF (exp))
+     {
+       gimple *def = SSA_NAME_DEF_STMT (exp);
+       if (gimple_assign_load_p (def))
+ 	{
+ 	  loc[3] = gimple_location (def);
+-	  *load = def;
++	  load = def;
+ 	  exp = gimple_assign_rhs1 (def);
+ 	}
+     }
+@@ -7694,20 +7694,14 @@ decode_field_reference (tree *pexp, HOST_WIDE_INT *pbitsize,
+ 	  && !type_has_mode_precision_p (TREE_TYPE (inner))))
+     return NULL_TREE;
+ 
+-  *pbitsize = bs;
+-  *pbitpos = bp;
+-  *punsignedp = unsignedp;
+-  *preversep = reversep;
+-  *pvolatilep = volatilep;
+-
+   /* Adjust shifts...  */
+   if (convert_before_shift
+-      && outer_type && *pbitsize > TYPE_PRECISION (outer_type))
++      && outer_type && bs > TYPE_PRECISION (outer_type))
+     {
+-      HOST_WIDE_INT excess = *pbitsize - TYPE_PRECISION (outer_type);
+-      if (*preversep ? !BYTES_BIG_ENDIAN : BYTES_BIG_ENDIAN)
+-	*pbitpos += excess;
+-      *pbitsize -= excess;
++      HOST_WIDE_INT excess = bs - TYPE_PRECISION (outer_type);
++      if (reversep ? !BYTES_BIG_ENDIAN : BYTES_BIG_ENDIAN)
++	bp += excess;
++      bs -= excess;
+     }
+ 
+   if (shiftrt)
+@@ -7720,49 +7714,57 @@ decode_field_reference (tree *pexp, HOST_WIDE_INT *pbitsize,
+ 	 shift after a change of signedness would make the extension
+ 	 non-uniform, and we can't deal with that (yet ???).  See
+ 	 gcc.dg/field-merge-22.c for a test that would go wrong.  */
+-      if (*pbitsize <= shiftrt
++      if (bs <= shiftrt
+ 	  || (convert_before_shift
+ 	      && outer_type && unsignedp != TYPE_UNSIGNED (outer_type)))
+ 	return NULL_TREE;
+-      if (!*preversep ? !BYTES_BIG_ENDIAN : BYTES_BIG_ENDIAN)
+-	*pbitpos += shiftrt;
+-      *pbitsize -= shiftrt;
++      if (!reversep ? !BYTES_BIG_ENDIAN : BYTES_BIG_ENDIAN)
++	bp += shiftrt;
++      bs -= shiftrt;
+     }
+ 
+   /* ... and bit position.  */
+   if (!convert_before_shift
+-      && outer_type && *pbitsize > TYPE_PRECISION (outer_type))
++      && outer_type && bs > TYPE_PRECISION (outer_type))
+     {
+-      HOST_WIDE_INT excess = *pbitsize - TYPE_PRECISION (outer_type);
+-      if (*preversep ? !BYTES_BIG_ENDIAN : BYTES_BIG_ENDIAN)
+-	*pbitpos += excess;
+-      *pbitsize -= excess;
++      HOST_WIDE_INT excess = bs - TYPE_PRECISION (outer_type);
++      if (reversep ? !BYTES_BIG_ENDIAN : BYTES_BIG_ENDIAN)
++	bp += excess;
++      bs -= excess;
+     }
+ 
+-  *pexp = exp;
+-
+   /* If the number of bits in the reference is the same as the bitsize of
+      the outer type, then the outer type gives the signedness. Otherwise
+      (in case of a small bitfield) the signedness is unchanged.  */
+-  if (outer_type && *pbitsize == TYPE_PRECISION (outer_type))
+-    *punsignedp = TYPE_UNSIGNED (outer_type);
++  if (outer_type && bs == TYPE_PRECISION (outer_type))
++    unsignedp = TYPE_UNSIGNED (outer_type);
+ 
+-  if (pand_mask)
++  /* Make the mask the expected width.  */
++  if (and_mask.get_precision () != 0)
+     {
+-      /* Make the mask the expected width.  */
+-      if (and_mask.get_precision () != 0)
+-	{
+-	  /* If the AND_MASK encompasses bits that would be extensions of
+-	     the sign bit, set *PSIGNBIT.  */
+-	  if (!unsignedp
+-	      && and_mask.get_precision () > *pbitsize
+-	      && (and_mask
+-		  & wi::mask (*pbitsize, true, and_mask.get_precision ())) != 0)
+-	    *psignbit = true;
+-	  and_mask = wide_int::from (and_mask, *pbitsize, UNSIGNED);
+-	}
++      /* If the AND_MASK encompasses bits that would be extensions of
++	 the sign bit, set SIGNBIT.  */
++      if (!unsignedp
++	  && and_mask.get_precision () > bs
++	  && (and_mask & wi::mask (bs, true, and_mask.get_precision ())) != 0)
++	signbit = true;
++      and_mask = wide_int::from (and_mask, bs, UNSIGNED);
++    }
+ 
+-      *pand_mask = and_mask;
++  *pexp = exp;
++  *pload = load;
++  *pbitsize = bs;
++  *pbitpos = bp;
++  *punsignedp = unsignedp;
++  *preversep = reversep;
++  *pvolatilep = volatilep;
++  *psignbit = signbit;
++  *pand_mask = and_mask;
++  if (xorp)
++    {
++      *pxorp = xorp;
++      *pxor_cmp_op = xor_cmp_op;
++      *pxor_and_mask = xor_and_mask;
+     }
+ 
+   return inner;
+@@ -8168,19 +8170,27 @@ fold_truth_andor_for_ifcombine (enum tree_code code, tree truth_type,
+ 				     &ll_and_mask, &ll_signbit,
+ 				     &l_xor, &lr_arg, &lr_and_mask,
+ 				     &ll_load, ll_loc);
++  if (!ll_inner)
++    return 0;
+   lr_inner = decode_field_reference (&lr_arg, &lr_bitsize, &lr_bitpos,
+ 				     &lr_unsignedp, &lr_reversep, &volatilep,
+ 				     &lr_and_mask, &lr_signbit, &l_xor, 0, 0,
+ 				     &lr_load, lr_loc);
++  if (!lr_inner)
++    return 0;
+   rl_inner = decode_field_reference (&rl_arg, &rl_bitsize, &rl_bitpos,
+ 				     &rl_unsignedp, &rl_reversep, &volatilep,
+ 				     &rl_and_mask, &rl_signbit,
+ 				     &r_xor, &rr_arg, &rr_and_mask,
+ 				     &rl_load, rl_loc);
++  if (!rl_inner)
++    return 0;
+   rr_inner = decode_field_reference (&rr_arg, &rr_bitsize, &rr_bitpos,
+ 				     &rr_unsignedp, &rr_reversep, &volatilep,
+ 				     &rr_and_mask, &rr_signbit, &r_xor, 0, 0,
+ 				     &rr_load, rr_loc);
++  if (!rr_inner)
++    return 0;
+ 
+   /* It must be true that the inner operation on the lhs of each
+      comparison must be the same if we are to be able to do anything.
+@@ -8188,16 +8198,13 @@ fold_truth_andor_for_ifcombine (enum tree_code code, tree truth_type,
+      the rhs's.  If one is a load and the other isn't, we have to be
+      conservative and avoid the optimization, otherwise we could get
+      SRAed fields wrong.  */
+-  if (volatilep
+-      || ll_reversep != rl_reversep
+-      || ll_inner == 0 || rl_inner == 0)
++  if (volatilep || ll_reversep != rl_reversep)
+     return 0;
+ 
+   if (! operand_equal_p (ll_inner, rl_inner, 0))
+     {
+       /* Try swapping the operands.  */
+       if (ll_reversep != rr_reversep
+-	  || !rr_inner
+ 	  || !operand_equal_p (ll_inner, rr_inner, 0))
+ 	return 0;
+ 
+@@ -8266,7 +8273,6 @@ fold_truth_andor_for_ifcombine (enum tree_code code, tree truth_type,
+       lr_reversep = ll_reversep;
+     }
+   else if (lr_reversep != rr_reversep
+-	   || lr_inner == 0 || rr_inner == 0
+ 	   || ! operand_equal_p (lr_inner, rr_inner, 0)
+ 	   || ((lr_load && rr_load)
+ 	       ? gimple_vuse (lr_load) != gimple_vuse (rr_load)
+@@ -8520,6 +8526,9 @@ fold_truth_andor_for_ifcombine (enum tree_code code, tree truth_type,
+   else
+     rl_mask = wi::shifted_mask (xrl_bitpos, rl_bitsize, false, lnprec);
+ 
++  /* When we set l_const, we also set r_const.  */
++  gcc_checking_assert (!l_const.get_precision () == !r_const.get_precision ());
++
+   /* Adjust right-hand constants in both original comparisons to match width
+      and bit position.  */
+   if (l_const.get_precision ())
+@@ -8550,10 +8559,6 @@ fold_truth_andor_for_ifcombine (enum tree_code code, tree truth_type,
+ 	  return constant_boolean_node (wanted_code == NE_EXPR, truth_type);
+ 	}
+ 
+-      /* When we set l_const, we also set r_const, so we need not test it
+-	 again.  */
+-      gcc_checking_assert (r_const.get_precision ());
+-
+       /* Before clipping upper bits of the right-hand operand of the compare,
+ 	 check that they're sign or zero extensions, depending on how the
+ 	 left-hand operand would be extended.  */
+-- 
+2.48.0
+
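
The restructuring described above follows a straightforward pattern: compute
everything into local variables and write the out parameters only on the
success path, so a NULL return leaves the caller's variables untouched.  A
minimal sketch of that pattern (invented names, not GCC code):

#include <stddef.h>

/* Decode INPUT into *PBITS and *PPOS.  Work in locals and publish the
   results only when returning non-NULL.  */
static const char *
decode_example (unsigned input, unsigned *pbits, unsigned *ppos)
{
  unsigned bits = input & 0xff;
  unsigned pos = (input >> 8) & 0xff;

  if (bits == 0)        /* bail out before touching any out parameter */
    return NULL;

  *pbits = bits;        /* commit the results only on success */
  *ppos = pos;
  return "decoded";
}

int
main (void)
{
  unsigned bits, pos;
  if (!decode_example (0x0305, &bits, &pos))   /* caller just bails on NULL */
    return 1;
  return !(bits == 5 && pos == 3);
}

Callers can then simply bail out on a NULL return, as the early-return checks
added to fold_truth_andor_for_ifcombine do, without worrying about partially
updated outputs.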

diff --git a/15.0.0/gentoo/README.history b/15.0.0/gentoo/README.history
index 5580afb..490735a 100644
--- a/15.0.0/gentoo/README.history
+++ b/15.0.0/gentoo/README.history
@@ -2,6 +2,8 @@
 
 	U 80_all_PR81358-Enable-automatic-linking-of-libatomic.patch
 	- 82_all_PR118409-ifcombine.patch
+	+ 84_all_PR118456-check-and-extend-constants-to-compare-with-bitfields.patch
+	+ 85_all_PR118456-robustify-decode_field_reference.patch
 
 38	13 January 2023
 

