
update op lower #49

Merged · 119 commits · Mar 11, 2024

Commits
381b0b0
[PIR] support wrap_type_interface. (#62422)
winter-wang Mar 5, 2024
ca0a285
[PIR] [DyShape] Fix cinn_reshape with case shape including 0 (#62415)
zhangbopd Mar 5, 2024
5a7828b
llama group: add llama group (#62325)
cyber-pioneer Mar 5, 2024
fa07d31
fix JetPack_bug (#62426)
risemeup1 Mar 6, 2024
6bb3ae5
support pd silce op 0D to 1D (#62442)
phlrain Mar 6, 2024
0d98d15
[SOT] Always generate `false_fn` when `POP_JUMP_*` breakgraph (#62424)
SigureMo Mar 6, 2024
f4b6eea
add cinn mode check (#62418)
cyber-pioneer Mar 6, 2024
68bfa86
[PIR+CINN]Add Llama2 subgraph for backend test (#62313)
Aurelius84 Mar 6, 2024
2e1899e
sharding supports reduce_avg communication (#62147)
zhangting2020 Mar 6, 2024
c3229dd
fix some bug of while test (#62440)
zyfncg Mar 6, 2024
4bf4895
[PIR][DynamicShape] Fix bug in cinn_op.slice (#62320)
lanxianghit Mar 6, 2024
19a5ae5
fix use nvidia cuda libraries bug (#62425)
risemeup1 Mar 6, 2024
90529ac
[Paddle-TRT]add inference api:exp_disable_tensorrt_dynamic_shape_ops …
lizexu123 Mar 6, 2024
c3ca9a9
[PIR+CINN]Fix cinn_op.concat infer shape bug for dynamic shape (#62421)
Aurelius84 Mar 6, 2024
c7b3acf
fix group copy (#62409)
BiynXu Mar 6, 2024
376aba5
[PIR] Add op_callstack to Pir (#62139)
xingmingyyj Mar 6, 2024
c870186
[Auto Parallel] Add gather spmd rule (#62097)
pkuzyc Mar 6, 2024
2a05a38
fix ShapeOrData == error (#62437)
JiaWenxuan Mar 6, 2024
316fdfb
[PIR] [DyShape] Add fix increment infer mannul op (#62438)
zhangbopd Mar 6, 2024
ce649b1
[AutoParallel] unify llama model && fix vpp unittest hang problem (#6…
deepllz Mar 6, 2024
af00bec
[Prim] Optimize composite OP silu_double_grad (#62112)
HydrogenSulfate Mar 6, 2024
dcf2de5
[CINN]support spatial dynamic (#62444)
phlrain Mar 6, 2024
de777d8
[HACKATHON 6th][CMake Optimization] use new cmake policy CMP0135 for …
silverling Mar 6, 2024
3de4a22
support dist tensor in reshape api (#62420)
LiYuRio Mar 6, 2024
948a1b0
fix bugs (#62428)
haohongxiang Mar 6, 2024
eb639c6
Fix check_depency check_dependency, etc (#62458)
co63oc Mar 6, 2024
7bfde24
Fix GetFusableConsumerGroupLists GetFusibleConsumerGroupLists, etc (…
co63oc Mar 6, 2024
2ca34a7
[PIR] Support wrap_type_interface for AlloctedDenseTensorType Allocat…
zhangbo9674 Mar 6, 2024
ed3486b
Support n-order differential testing (#62074)
GGBond8488 Mar 6, 2024
0c43da7
[DistDialect] Add PIR Pybind Utils for Auto-Parallel (#62297)
JZ-LIANG Mar 6, 2024
1208cd3
[PIR] Filter out attribute `op_callstack` when print program (#62469)
SigureMo Mar 6, 2024
b684e1a
[HACKATHON 6th][CMake Optimization] use CMAKE_CXX_COMPILER_ID instead…
silverling Mar 7, 2024
56a024d
prohibit the use of IR_ENFORCE (#62445)
risemeup1 Mar 7, 2024
600bdd5
[SOT][3.12] Fix that `frame` in eval custom code was not released in …
gouzil Mar 7, 2024
13c0bd3
[PIR+CINN]Add SimplifyDimExpr for +-*/ min max broadcast (#62449)
Aurelius84 Mar 7, 2024
03bf7c4
disable cuda malloc async when CUDA < 11.2 (#62264)
eee4017 Mar 7, 2024
2c34d76
Adjust the search path for libnccl.so (#62492)
risemeup1 Mar 7, 2024
c448d28
[PIR][DynamicShape] Add nullary_infer_sym and binary nullary_infer_sy…
zhangbopd Mar 7, 2024
cc1be3e
Enhance several unit tests (#62477)
zlsh80826 Mar 7, 2024
1128c78
[PIR] refine onednn add_n (#62471)
wanghuancoder Mar 7, 2024
be55c7b
Fix axies -> axes (#62481)
co63oc Mar 7, 2024
928c35a
Update alterlayout.cc (#62465)
co63oc Mar 7, 2024
2304692
Update broadcast.cc (#62462)
co63oc Mar 7, 2024
2b7c7ff
Fix fellowing following, etc (#62453)
co63oc Mar 7, 2024
1813177
Fix uitls -> utils (#62496)
co63oc Mar 7, 2024
21f4074
Fix CWE 502 (#62345)
omri-alon24 Mar 7, 2024
88c79f1
[clang-tidy] NO.12 modernize-loop-convert (#61725)
enkilee Mar 7, 2024
3cb3f4d
[PIR] Remove duplicate error message in executor log warning (#62479)
SigureMo Mar 7, 2024
b90de4d
[PIR] pir onednn support conv2d_transpose (#61165)
wanghuancoder Mar 7, 2024
68cb8d7
[CustomDevice] replace phi::ccl::CCLDataType with phi::DataType (#62…
ronny1996 Mar 7, 2024
046d70a
fix grid dim error when launching kernel (#62483)
BiynXu Mar 7, 2024
7964315
[AutoParallel] Change switch name to gradient_sync_after_accumulate (…
AndSonder Mar 7, 2024
93f29aa
fix bug (#62501)
ming1753 Mar 7, 2024
6a9d40b
【PIR Dist Op Reg No.16】 reg c_split (#62416)
enkilee Mar 7, 2024
a726f82
[PIR] move pir::DenseTensorType registration from OperatorDialect to …
huangjiyi Mar 7, 2024
b8c4936
[CustomDevice] fix anomalous memory usage on custom devices (#62377)
SylarTiaNII Mar 7, 2024
660276a
fix reduce avg bug (#62502)
zhangting2020 Mar 7, 2024
7129945
Fix ShapeOrDataDimExpr simplify unwork (#62376)
JiaWenxuan Mar 7, 2024
b726a90
fix adamw loop out int32 bound (#62461)
FeixLiu Mar 7, 2024
d95713f
[Fix bug](Fix compilation bug in flags.cc) (#62056)
Galaxy1458 Mar 7, 2024
8e8eb40
Fix yiled yield, etc (#62457)
co63oc Mar 7, 2024
9cc505e
Fix semi static split with section op (#62516)
liuzhenhai93 Mar 7, 2024
24777d4
delete IR_ENFORCE (#62515)
wanghuancoder Mar 8, 2024
7b1540a
group cluster support control flow (#62523)
phlrain Mar 8, 2024
3646da6
[AutoParallel] Fix problem of expand_as. (#62460)
GhostScreaming Mar 8, 2024
70cd811
[Auto Parallel] Add spmd rule for scatter_grad and gather_grad (#62099)
pkuzyc Mar 8, 2024
a96ef33
[PIR] [DyShape] Fix unit test -- test_unary_op_infer_sym_shape (#62530)
zhangbopd Mar 8, 2024
7fd1722
Fix MemEvenRecorder MemEventRecorder (#62537)
co63oc Mar 8, 2024
536a85e
Fix DECLEAR_ DECLARE_ (#62514)
co63oc Mar 8, 2024
f2d1f4d
[PIR][DynamicShape] Fix bug in InferSymbolicShape ElementWiseBinary (…
lanxianghit Mar 8, 2024
06f1abf
[CINN] Fix some bug of cinn (#62540)
zyfncg Mar 8, 2024
1257059
[AutoTuner] support refined recompute in autotuner (#62430)
Caozhou1995 Mar 8, 2024
03344d8
[PHI]Support set need_prepare_phi_data by env (#62519)
NeroLoh Mar 8, 2024
8a523ee
skip prepare_op_amp_options in build_program when pir is used (#62528)
zhiqiu Mar 8, 2024
93d1e85
[Distributed]Earse p2p cache for every step (#62277) (#62400)
ForFishes Mar 8, 2024
04c96fa
[Distributed] fix sharding on custom devices (#62535)
SylarTiaNII Mar 8, 2024
12666ce
disable isl init in dynamic shape mode (#62521)
BiynXu Mar 8, 2024
3ed3761
fix replace reshape op (#62552)
BiynXu Mar 8, 2024
2c7d189
Add sub graph of stable diffusion-4 (#62510)
yulangz Mar 8, 2024
9d2d05d
Add sub graph of stable diffusion-3 (#62511)
yulangz Mar 8, 2024
008d0ac
Add sub graph of stable diffusion-2 (#62512)
yulangz Mar 8, 2024
1e3e19f
Add sub graph of stable diffusion-1 (#62513)
yulangz Mar 8, 2024
c8cd35d
cinn(dynamic): fix reshape op when accessing shape dialect across fus…
6clc Mar 8, 2024
98aa58f
[DistDialect] add ShardTensor op (#62433)
hitywt Mar 8, 2024
6255e8b
[CustomDevice] fix ToCDataType (#62562)
ronny1996 Mar 8, 2024
b11f7f5
[PIR] support infer spmd auto gen. (#62547)
winter-wang Mar 8, 2024
bb86d51
Support empty reduce axis (#62542)
phlrain Mar 9, 2024
bc56513
dist.to_static support pir program (#62560)
zhiqiu Mar 9, 2024
4117a52
fix group cluster shape dialect bug (#62545)
phlrain Mar 10, 2024
8de49de
[CINN] EliminateCommonGlobalVar pass, optimize performance (#62517)
jiahy0825 Mar 10, 2024
72c4f15
fix dyshape buffer resize (#62490)
BiynXu Mar 10, 2024
6c2378f
cinn(op): add fill constant symblic compute (#62478)
6clc Mar 10, 2024
d27c2ea
cinn(op): add broadcast compute (#62488)
6clc Mar 10, 2024
00266ae
[Dynamic Shape]Fix SubstituteDimExprBasedOnConstraintsPass invalid bu…
jiahy0825 Mar 10, 2024
2417813
ReversedInferShardableAxes support sinks
jiahy0825 Mar 10, 2024
b8e7939
update op lower
feifei-111 Mar 10, 2024
e22f81d
support multiple sinks in group_pattern_util.InferShardableAxes
jiahy0825 Mar 10, 2024
da2b472
Merge branch 'cinn-trivalop-fuse' of https://github.com/2742195759/Pa…
jiahy0825 Mar 10, 2024
04f5f59
[PIR+CINN]Fix cinn_op.GroupOp insert bug for WriteAfterRead (#62529)
Aurelius84 Mar 10, 2024
c84c50c
update
feifei-111 Mar 10, 2024
2f0c384
update
feifei-111 Mar 10, 2024
cf96b67
fix bug of fuse shape ops to generate_shape (#62587)
zyfncg Mar 11, 2024
d45efa2
cinn(op): fix broadcast op (#62594)
6clc Mar 11, 2024
01f01c3
add inference api:exp_specify_tensorrt_subgraph_precision (#62402)
lizexu123 Mar 11, 2024
2c924ed
add matmul shape constrain (#62567)
phlrain Mar 11, 2024
0a97ad9
merge origin
jiahy0825 Mar 11, 2024
a6cfd99
Merge pull request #50 from tc20042008/xk-cinn-trivalop-fuse
tc20042008 Mar 11, 2024
0ad3c13
fix conf
feifei-111 Mar 11, 2024
e819334
Symbolic shape inference support for pd_op.split and builtin.split (#…
fty1777 Mar 11, 2024
e365fcd
[PIR] add paddle fatal mechanism. (#62571)
winter-wang Mar 11, 2024
0417a59
Fix DEFIN_NOT definite_not (#62548)
co63oc Mar 11, 2024
c00cd0c
[PIR]Fix Bugs and adapt Custom op unittest (#62506)
YuanRisheng Mar 11, 2024
f8fbbb5
Fix precedding_nodes preceding_nodes (#62544)
co63oc Mar 11, 2024
ce5a3a8
support sharding stage 2 (#62486)
LiYuRio Mar 11, 2024
0942bbc
fix small reduce in tile first schedule (#62593)
BiynXu Mar 11, 2024
280045c
fix loop reorder alignment tactic bug (#62581)
phlrain Mar 11, 2024
a5f7615
[PIR]Split test_zeros_dim_tensor.py to 10 unittest files (#62527)
YuanRisheng Mar 11, 2024
5875b9e
update
feifei-111 Mar 11, 2024
274086f
update
feifei-111 Mar 11, 2024
10 changes: 8 additions & 2 deletions CMakeLists.txt
@@ -65,7 +65,8 @@ option(WITH_SETUP_INSTALL "Compile PaddlePaddle with setup.py" OFF)
 option(WITH_SHARED_PHI "Compile PaddlePaddle with SHARED LIB of PHI" ON)
 option(CINN_ONLY "Compile CINN only in Paddle" OFF)
 option(CINN_WITH_CUDNN "Compile CINN with CUDNN support" ON)
-
+option(WITH_PIP_CUDA_LIBRARIES
+       "Paddle uses the CUDA library provided by NVIDIA" OFF)
 find_package(Git REQUIRED)
 
 # config GIT_URL with github mirrors to speed up dependent repos clone
@@ -97,11 +98,16 @@ endif()
 
 if(WITH_GPU AND NOT APPLE)
   #(Note risemeup1): The cudart dynamic library libcudart.so is used by set CUDA_USE_STATIC_CUDA_RUNTIME and CMAKE_CUDA_FLAGS
-  if(LINUX)
+  if(CMAKE_SYSTEM_NAME STREQUAL "Linux" AND CMAKE_SYSTEM_PROCESSOR STREQUAL
+                                            "x86_64")
     set(CUDA_USE_STATIC_CUDA_RUNTIME
         OFF
         CACHE BOOL "" FORCE)
     set(CMAKE_CUDA_FLAGS "--cudart shared")
+    if(WITH_PIP_CUDA_LIBRARIES)
+      #(Note risemeup1): Flag 'WITH_PIP_CUDA_LIBRARIES' will be used in dynamic_loader.cc to search for CUDA-related .so files through the Python libraries provided by NVIDIA.
+      add_definitions(-DWITH_PIP_CUDA_LIBRARIES)
+    endif()
   endif()
   enable_language(CUDA)
   message(STATUS "CUDA compiler: ${CMAKE_CUDA_COMPILER}, version: "
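For readers outside the Paddle tree, a minimal standalone sketch of the pattern this hunk introduces; the project name and option wording below are illustrative, not Paddle's. The shared CUDA runtime is forced before CUDA is enabled, and the option is forwarded to C++ as a preprocessor definition that code such as dynamic_loader.cc can test with #ifdef.

```cmake
# Minimal sketch (hypothetical standalone project) of the hunk above:
# force the shared cudart and surface WITH_PIP_CUDA_LIBRARIES to C++.
cmake_minimum_required(VERSION 3.18)
project(shared_cudart_demo LANGUAGES CXX)

option(WITH_PIP_CUDA_LIBRARIES
       "Use the CUDA libraries provided by NVIDIA pip packages" OFF)

if(CMAKE_SYSTEM_NAME STREQUAL "Linux" AND CMAKE_SYSTEM_PROCESSOR STREQUAL
                                          "x86_64")
  # Link libcudart.so dynamically instead of the static CUDA runtime;
  # as in the hunk above, this is set before enable_language(CUDA).
  set(CUDA_USE_STATIC_CUDA_RUNTIME
      OFF
      CACHE BOOL "" FORCE)
  set(CMAKE_CUDA_FLAGS "--cudart shared")
  if(WITH_PIP_CUDA_LIBRARIES)
    # C++ sources can then branch on #ifdef WITH_PIP_CUDA_LIBRARIES.
    add_definitions(-DWITH_PIP_CUDA_LIBRARIES)
  endif()
  enable_language(CUDA)
endif()
```

A user would opt in at configure time with -DWITH_PIP_CUDA_LIBRARIES=ON.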
20 changes: 6 additions & 14 deletions cmake/external/eigen.cmake
@@ -39,27 +39,19 @@ elseif(LINUX)
   endif()
 endif()
 
-if(CMAKE_COMPILER_IS_GNUCC)
+if(CMAKE_CXX_COMPILER_ID MATCHES "GNU|Clang")
   file(TO_NATIVE_PATH ${PADDLE_SOURCE_DIR}/patches/eigen/TensorRandom.h.patch
        tensor_random_header)
   # See: [Why calling some `git` commands before `patch`?]
   set(EIGEN_PATCH_COMMAND
       git checkout -- . && git checkout ${EIGEN_TAG} && patch -Nd
       ${SOURCE_DIR}/unsupported/Eigen/CXX11/src/Tensor <
       ${tensor_random_header})
-  execute_process(COMMAND ${CMAKE_C_COMPILER} -dumpfullversion -dumpversion
-                  OUTPUT_VARIABLE GCC_VERSION)
-  string(REGEX MATCHALL "[0-9]+" GCC_VERSION_COMPONENTS ${GCC_VERSION})
-  list(GET GCC_VERSION_COMPONENTS 0 GCC_MAJOR)
-  list(GET GCC_VERSION_COMPONENTS 1 GCC_MINOR)
-  set(GCC_VERSION "${GCC_MAJOR}.${GCC_MINOR}")
-  if(GCC_VERSION GREATER_EQUAL 12.0)
-    file(TO_NATIVE_PATH ${PADDLE_SOURCE_DIR}/patches/eigen/Complex.h.patch
-         complex_header)
-    set(EIGEN_PATCH_COMMAND
-        ${EIGEN_PATCH_COMMAND} && patch -Nd
-        ${SOURCE_DIR}/Eigen/src/Core/arch/SSE/ < ${complex_header})
-  endif()
+  file(TO_NATIVE_PATH ${PADDLE_SOURCE_DIR}/patches/eigen/Complex.h.patch
+       complex_header)
+  set(EIGEN_PATCH_COMMAND
+      ${EIGEN_PATCH_COMMAND} && patch -Nd
+      ${SOURCE_DIR}/Eigen/src/Core/arch/SSE/ < ${complex_header})
 endif()
 
 set(EIGEN_INCLUDE_DIR ${SOURCE_DIR})
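This file, gloo.cmake, and simd.cmake below all replace legacy compiler variables with one check. A small sketch of the distinction: CMAKE_COMPILER_IS_GNUCC is true only for GCC, whereas CMAKE_CXX_COMPILER_ID is a vendor string ("GNU", "Clang", "AppleClang", "MSVC", ...), and MATCHES performs a regex search, so a single "GNU|Clang" test covers GCC and both Clang variants.

```cmake
# Sketch: one regex on the compiler ID replaces the legacy GCC-only
# CMAKE_COMPILER_IS_GNUCC/CMAKE_COMPILER_IS_GNUCXX variables. MATCHES is a
# regex search, so "GNU|Clang" also accepts "AppleClang".
cmake_minimum_required(VERSION 3.18)
project(compiler_id_demo LANGUAGES CXX)

if(CMAKE_CXX_COMPILER_ID MATCHES "GNU|Clang")
  message(STATUS "GCC-compatible compiler: ${CMAKE_CXX_COMPILER_ID} "
                 "${CMAKE_CXX_COMPILER_VERSION}")
else()
  message(STATUS "Other compiler: ${CMAKE_CXX_COMPILER_ID}")
endif()
```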
28 changes: 10 additions & 18 deletions cmake/external/gloo.cmake
@@ -42,24 +42,16 @@ if(WITH_GPU)
   endif()
 endif()
 
-if(CMAKE_COMPILER_IS_GNUCC)
-  execute_process(COMMAND ${CMAKE_C_COMPILER} -dumpfullversion -dumpversion
-                  OUTPUT_VARIABLE GCC_VERSION)
-  string(REGEX MATCHALL "[0-9]+" GCC_VERSION_COMPONENTS ${GCC_VERSION})
-  list(GET GCC_VERSION_COMPONENTS 0 GCC_MAJOR)
-  list(GET GCC_VERSION_COMPONENTS 1 GCC_MINOR)
-  set(GCC_VERSION "${GCC_MAJOR}.${GCC_MINOR}")
-  if(GCC_VERSION GREATER_EQUAL "12.0")
-    file(TO_NATIVE_PATH ${PADDLE_SOURCE_DIR}/patches/gloo/device.cc.patch
-         native_dst)
-    file(TO_NATIVE_PATH ${PADDLE_SOURCE_DIR}/patches/gloo/types.h.patch
-         types_header)
-    # See: [Why calling some `git` commands before `patch`?]
-    set(GLOO_PATCH_COMMAND
-        git checkout -- . && git checkout ${GLOO_TAG} && patch -Nd
-        ${SOURCE_DIR}/gloo/transport/tcp < ${native_dst} && patch -Nd
-        ${SOURCE_DIR}/gloo/ < ${types_header})
-  endif()
+if(CMAKE_CXX_COMPILER_ID MATCHES "GNU|Clang")
+  file(TO_NATIVE_PATH ${PADDLE_SOURCE_DIR}/patches/gloo/device.cc.patch
+       native_dst)
+  file(TO_NATIVE_PATH ${PADDLE_SOURCE_DIR}/patches/gloo/types.h.patch
+       types_header)
+  # See: [Why calling some `git` commands before `patch`?]
+  set(GLOO_PATCH_COMMAND
+      git checkout -- . && git checkout ${GLOO_TAG} && patch -Nd
+      ${SOURCE_DIR}/gloo/transport/tcp < ${native_dst} && patch -Nd
+      ${SOURCE_DIR}/gloo/ < ${types_header})
 endif()
 
 file(TO_NATIVE_PATH ${PADDLE_SOURCE_DIR}/patches/gloo/linux.cc.patch
4 changes: 1 addition & 3 deletions cmake/simd.cmake
@@ -4,9 +4,7 @@
 include(CheckCXXSourceRuns)
 include(CheckCXXSourceCompiles)
 
-if(CMAKE_COMPILER_IS_GNUCC
-   OR CMAKE_COMPILER_IS_GNUCXX
-   OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
+if(CMAKE_CXX_COMPILER_ID MATCHES "GNU|Clang")
   set(MMX_FLAG "-mmmx")
   set(SSE2_FLAG "-msse2")
   set(SSE3_FLAG "-msse3")
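The *_FLAG variables set above feed compile-and-run probes elsewhere in simd.cmake. A sketch of how such a probe works with the CheckCXXSourceRuns module included at the top of the file; the probe source and result variable here are assumptions for illustration, not Paddle's exact ones.

```cmake
# Sketch: try to compile and run an SSE2 snippet with the candidate flag;
# the result is cached in the named variable (1 on success, empty on
# failure). Probe body and result name are illustrative.
cmake_minimum_required(VERSION 3.18)
project(simd_probe_demo LANGUAGES CXX)

include(CheckCXXSourceRuns)

set(SSE2_FLAG "-msse2")
set(CMAKE_REQUIRED_FLAGS ${SSE2_FLAG})
check_cxx_source_runs(
  "
#include <emmintrin.h>
int main() {
  __m128d a = _mm_set1_pd(1.0);
  (void)a;
  return 0;
}"
  SSE2_PROBE_OK)
message(STATUS "SSE2 runs: ${SSE2_PROBE_OK}")
```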
5 changes: 5 additions & 0 deletions cmake/third_party.cmake
@@ -15,6 +15,11 @@
 include(ExternalProject)
 # Create a target named "third_party", which can compile external dependencies on all platform(windows/linux/mac)
 
+# Avoid warning about DOWNLOAD_EXTRACT_TIMESTAMP in CMake 3.24
+if(CMAKE_VERSION VERSION_GREATER_EQUAL "3.24.0")
+  cmake_policy(SET CMP0135 NEW)
+endif()
+
 set(THIRD_PARTY_PATH
     "${CMAKE_BINARY_DIR}/third_party"
     CACHE STRING
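For background: since CMake 3.24, ExternalProject and FetchContent URL downloads warn unless the DOWNLOAD_EXTRACT_TIMESTAMP behavior is pinned. With CMP0135 set to NEW, files extracted from a downloaded archive are stamped with the extraction time rather than the archive's stored timestamps, so build steps that depend on them are not spuriously considered up to date. A sketch with a hypothetical archive URL:

```cmake
# Sketch (hypothetical URL): with CMP0135 NEW, the extracted sources get
# fresh timestamps, silencing the 3.24 warning and keeping dependency
# checks on the extracted files correct.
cmake_minimum_required(VERSION 3.18)
project(cmp0135_demo)

if(CMAKE_VERSION VERSION_GREATER_EQUAL "3.24.0")
  cmake_policy(SET CMP0135 NEW)
endif()

include(ExternalProject)
ExternalProject_Add(
  demo_archive
  URL https://example.com/demo-1.0.tar.gz
  CONFIGURE_COMMAND ""
  BUILD_COMMAND ""
  INSTALL_COMMAND "")
```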
2 changes: 2 additions & 0 deletions paddle/cinn/backends/codegen_cuda_dev.cc
@@ -21,6 +21,7 @@
 #include <set>
 #include <unordered_set>
 
+#include "paddle/cinn/common/cas.h"
 #include "paddle/cinn/common/ir_util.h"
 #include "paddle/cinn/ir/op/ir_operators.h"
 #include "paddle/cinn/ir/utils/ir_verify.h"
@@ -124,6 +125,7 @@ std::vector<Expr> FilterDeallocTempBuffers(const std::vector<Expr> &frees) {
   bool has_symbolic_constant = false;
   const ir::_Buffer_ *buffer = op->destination.As<ir::_Buffer_>();
   for (Expr shape : buffer->shape) {
+    shape = common::AutoSimplify(shape);
     ir::ir_utils::CollectIRNodes(shape, [&](const Expr *x) {
       if (x->as_var()) {
         CHECK(x->as_var()->is_symbolic_constant)
44 changes: 24 additions & 20 deletions paddle/cinn/common/integer_set.cc
@@ -44,6 +44,9 @@ cas_intervals_t CollectVarIntervalsOfExprs(const std::vector<ir::Expr>& exprs,
if (var->upper_bound.defined()) {
upper_bound = var->upper_bound;
}
if (var->is_symbolic_constant) {
lower_bound = ir::Expr(1);
}
var_intervals.insert(
{var->name, CasInterval(lower_bound, upper_bound)});
}
@@ -118,32 +121,32 @@ std::optional<bool> SymbolicExprAnalyzer::ProveGE(const ir::Expr& lhs,
if (lhs == rhs) {
return true;
}
if (lhs == SymbolicExprLimit::positive_inf ||
rhs == SymbolicExprLimit::negative_inf) {
return true;
}
if (rhs == SymbolicExprLimit::positive_inf ||
lhs == SymbolicExprLimit::negative_inf) {
return false;
}
ir::Expr diff = AutoSimplify(ir::Sub::Make(lhs, rhs), var_intervals_);
VLOG(6) << "diff of " << ir::Sub::Make(lhs, rhs) << " = " << diff;
if (diff.is_constant() && diff.get_constant() >= 0) {
if (lhs == SymbolicExprLimit::positive_inf ||
rhs == SymbolicExprLimit::negative_inf) {
return true;
}
ir::Expr diff = AutoSimplify(ir::Sub::Make(lhs, rhs), var_intervals_);
VLOG(6) << "diff of " << ir::Sub::Make(lhs, rhs) << " = " << diff;
if (diff.is_constant() && diff.get_constant() < 0) {
return false;
}
ir::Expr diff_lower_bound = LowerBound(diff);
VLOG(6) << "lower bound of " << diff << " = " << diff_lower_bound;
if (diff_lower_bound.is_constant() && diff_lower_bound.get_constant() >= 0) {
if (diff.is_constant() && diff.get_constant() >= 0) {
return true;
}
ir::Expr diff_upper_bound = UpperBound(diff);
VLOG(6) << "upper bound of " << diff << " = " << diff_upper_bound;
if (diff_upper_bound.is_constant() && diff_upper_bound.get_constant() < 0) {
return false;
}
ir::Expr diff_lower_bound = LowerBound(diff);
VLOG(6) << "lower bound of " << diff << " = " << diff_lower_bound;
if (diff_lower_bound.is_constant() && diff_lower_bound.get_constant() >= 0) {
return true;
}
return std::nullopt;
}

@@ -157,32 +160,33 @@ std::optional<bool> SymbolicExprAnalyzer::ProveGT(const ir::Expr& lhs,
if (lhs == rhs) {
return false;
}
if (lhs == SymbolicExprLimit::positive_inf ||
rhs == SymbolicExprLimit::negative_inf) {
return true;
}
if (rhs == SymbolicExprLimit::positive_inf ||
lhs == SymbolicExprLimit::negative_inf) {
return false;
}
ir::Expr diff = AutoSimplify(ir::Sub::Make(lhs, rhs), var_intervals_);
VLOG(6) << "diff of " << ir::Sub::Make(lhs, rhs) << " = " << diff;
if (diff.is_constant() && diff.get_constant() > 0) {
if (lhs == SymbolicExprLimit::positive_inf ||
rhs == SymbolicExprLimit::negative_inf) {
return true;
}
ir::Expr diff = AutoSimplify(ir::Sub::Make(lhs, rhs), var_intervals_);
VLOG(6) << "diff of " << ir::Sub::Make(lhs, rhs) << " = " << diff;
if (diff.is_constant() && diff.get_constant() <= 0) {
return false;
}
ir::Expr diff_lower_bound = LowerBound(diff);
VLOG(6) << "lower bound of " << diff << " = " << diff_lower_bound;
if (diff_lower_bound.is_constant() && diff_lower_bound.get_constant() > 0) {
if (diff.is_constant() && diff.get_constant() > 0) {
return true;
}
ir::Expr diff_upper_bound = UpperBound(diff);
VLOG(6) << "upper bound of " << diff << " = " << diff_upper_bound;
if (diff_upper_bound.is_constant() && diff_upper_bound.get_constant() <= 0) {
return false;
}
ir::Expr diff_lower_bound = LowerBound(diff);
VLOG(6) << "lower bound of " << diff << " = " << diff_lower_bound;
if (diff_lower_bound.is_constant() && diff_lower_bound.get_constant() > 0) {
return true;
}

return std::nullopt;
}
