Changes from all commits
1046 commits
e539f51
[XLA:CPU][nanort] Enable nanort client to compile an HLO module witho…
basioli-k Nov 11, 2025
72a2b88
[XLA:CPU][XTile] Prepare fusion compiler to rely more on one-shot-buf…
WillFroom Nov 11, 2025
485a5ec
NFC: Add bounds checks in Triton fusion emitter.
chsigg Nov 11, 2025
d2a0a49
Automated Code Change
tensorflower-gardener Nov 11, 2025
be86484
Reverts 9cbe7bd184c7bf0558c5b908da4f92446cc1b2a0
tensorflower-gardener Nov 11, 2025
b2d0fdb
Automated Code Change
tensorflower-gardener Nov 11, 2025
2fb4baa
[XLA:GPU] Add infinity and zero counts to GPU buffer float checks.
loislo Nov 11, 2025
516b904
Migrate Triton GEMM tests to use nested fusion structure.
chsigg Nov 11, 2025
578a4f8
[stablehlo] Add ALG_DOT_BF16_BF16_F32_X9 to ConvertDotAlgorithm
basioli-k Nov 11, 2025
876ba60
[xla:gpu] Reorder autotuner backends for autotuning fusion pass
vwbaker Nov 11, 2025
ea8af7f
[XLA][codegen] Remove se::DeviceDescription argument from the shared …
basioli-k Nov 11, 2025
ac2bbd5
Use `lookupOrDefault` for value mapping in TritonReduce lowering.
vwbaker Nov 11, 2025
630956b
[XLA:GPU] Remove cusolver dependency from XLA:GPU.
pifon2a Nov 11, 2025
92f53bf
[XLA:GPU] Refactor BufferDebugLog to be a template class.
loislo Nov 11, 2025
63047d9
[XLA:CPU][XTile] Register StableHLO to Linalg conversion passes in `f…
WillFroom Nov 11, 2025
20b7e54
Reverts 42eb0dbe341e4b16f829da6f123b57ee460906a9
AusarYao28 Nov 11, 2025
9da0422
add CustomCall to append Tensor to file
ermilovmaxim Nov 11, 2025
1fdc835
raise nccl channel limit and add blackwell nvlink bandwidth
ermilovmaxim Nov 11, 2025
29ae5d0
(1/N) Add support for `NamedSharding` in existing `HloSharding` metho…
KanishAnand Nov 11, 2025
bdca87b
Clear frontend attributes for get-tuple-elements of GlobalToLocal and…
ZixuanJiang Nov 11, 2025
d85da0b
Allow op cost analysis to be customizable for TFRT session
tensorflower-gardener Nov 12, 2025
c3ecebb
Introduce methods to query host, chip, and device IDs from PjRtTopolo…
hhb Nov 12, 2025
fad2b04
Remove redundant constructors for `TileAssignment`.
ZixuanJiang Nov 12, 2025
fa7dd80
[XLA] Run latency hiding scheduler on computations with no async comp…
Nov 12, 2025
de237d1
Move dlpack_support to xla and separate out stride functions from types.
hanrach9 Nov 12, 2025
a5bcf5a
Update XLA ml-build image used by benchmark to the latest version
quoctruong Nov 12, 2025
b15ba07
Automated Code Change
tensorflower-gardener Nov 12, 2025
b9f5ea7
Fix test breakage when address/memory/thread sanitizer was enabled al…
ggawryal Nov 12, 2025
dec2b02
PR #31348: [ROCm] added rocm7 support to EnablePeerAccess
amd-songpiao Nov 12, 2025
6b2ce81
PR #33794: [GPU] Support int4 in cuDNN GEMM fusions.
sergachev Nov 12, 2025
559558c
Update GraphDef version to 2409.
tensorflower-gardener Nov 12, 2025
daf4d83
compat: Update forward compatibility horizon to 2025-11-12
tensorflower-gardener Nov 12, 2025
2209da2
[XTile] Move ConvertElementwise0DTensorToScalarPass from triton_xla t…
WillFroom Nov 12, 2025
b6aea89
[XLA] Add div, min & max to ImplicitArithOpBuilder.
WillFroom Nov 12, 2025
ca0631c
[SymbolicMap] Add basic simplifications for Add and Mul when we creat…
tensorflower-gardener Nov 12, 2025
bea62b9
[XLA] Use absl::AnyInvocable instead of std::function since it can't …
tensorflower-gardener Nov 12, 2025
3cb3c49
Integrate Triton up to [ff05bd2f](https://github.com/openai/triton/co…
chsigg Nov 12, 2025
6351a80
[XLA:GPU] Add support for detecting infinite values in XLA GPU.
loislo Nov 12, 2025
5155227
[XLA:GPU] Add collective emitter to triton/ codegen.
sohaibiftikhar Nov 12, 2025
e25f69f
Use KernelArgumentsPackingSpec in KernelSpec
beckerhe Nov 12, 2025
4e2f5ad
Fixed the linter issue
ILCSFNO Nov 12, 2025
18fe05b
Fixed the change in `base_api` instead of `python_api`
ILCSFNO Nov 12, 2025
66507bb
Fix the change in `base_api`
ILCSFNO Nov 12, 2025
d66234c
Automated Code Change
tensorflower-gardener Nov 12, 2025
7093187
[XLA:GPU]: Add a kTritonCollectiveFusion kind.
sohaibiftikhar Nov 12, 2025
1a5c0ce
[XLA:GPU]: Move out pieces of fusion_emitter required for collective …
sohaibiftikhar Nov 12, 2025
fa71348
[XLA:GPU] Add a comment about numerics to the cuDNN GEMM FLAG.
thomasjoerg Nov 12, 2025
bfb4057
[XLA:GPU] Enhance float check debug logging with thunk profile annota…
loislo Nov 12, 2025
e57bfcf
[XLA][codegen] Emit stablehlo dot and addition and then lower it to t…
basioli-k Nov 12, 2025
f16a07f
[XLA:CPU][XTile] Read directly from the input tensor rather than via …
WillFroom Nov 12, 2025
a75ee56
[xla:gpu] Do not create clique keys with trivial groups
ezhulenev Nov 12, 2025
7214fc9
Increase memory limits for eager tests.
tensorflower-gardener Nov 12, 2025
9c1a0c8
#tf-data reenable captured_function on macos.
ethanluoyc Nov 12, 2025
ca5d3c3
[XLA][Numerics][Hlo Value Tracking] Add a test case for recovering tu…
jcai19 Nov 12, 2025
6ee9872
Add `kDotDependent` to `DimensionInfo`, to indicate a DOT operation c…
bixia1 Nov 12, 2025
27a2fe5
[XLA] Enhance `set_xla_metadata` to further support tagging gradient …
tensorflower-gardener Nov 12, 2025
3839764
Add kTpuHbmMemorySpaceKind for tpu hbm
hhb Nov 12, 2025
1d029e3
handle nullptr case in AsyncGpuBuffer
jparkerh Nov 12, 2025
269aa1d
[XLA:GPU] Use dot operands instead of the computation parameters to e…
mooskagh Nov 12, 2025
f068f7e
PR #33201: Docs: Error 0102
mtsokol Nov 12, 2025
aea00a2
Add constant folder for tanh
sirakiin Nov 12, 2025
800e5a9
[XLA:GPU] Relax tile size restriction for the last operand of concat.
pifon2a Nov 12, 2025
285775b
Fix StableHLO Patch file
GleasonK Nov 12, 2025
a062ff3
Fix flatbuffer import for large models
chunnienc Nov 12, 2025
df9f909
[ReplicaGroupV3][Conversion] Add V3->{V2,V1} conversion functions.
Varcho Nov 13, 2025
dede764
Remove stride functions from xla/types
hanrach9 Nov 13, 2025
55e7043
Add support for offloading suitable reductions to YNNPACK.
alexander-shaposhnikov Nov 13, 2025
6f611c8
[XLA:GPU]: Emit all-reduce using fusion emitter.
sohaibiftikhar Nov 13, 2025
f4b2b4c
Use `absl::Span` and `absl::InlinedVector` for `allow_spmd_sharding_p…
ZixuanJiang Nov 13, 2025
6bf1359
Automated Code Change
tensorflower-gardener Nov 13, 2025
cc79f72
Update XNNPACK and KleidiAI
tensorflower-gardener Nov 13, 2025
a28ae3b
Automated Code Change
tensorflower-gardener Nov 13, 2025
0a47faf
Support serialization of InprocessSymbolSpecs
beckerhe Nov 13, 2025
ce31fc7
Add de/serialization for `Host{Send|Recv}[Done]Thunk(s)`
khasanovaa Nov 13, 2025
772f98f
compat: Update forward compatibility horizon to 2025-11-13
tensorflower-gardener Nov 13, 2025
26681fe
Update GraphDef version to 2410.
tensorflower-gardener Nov 13, 2025
6bf52ef
[XLA:CPU][XTile] Emit vectorized reduce as a single loop.
WillFroom Nov 13, 2025
704b84a
Fix bug in KernelArgsPackedVector::number_of_arguments
beckerhe Nov 13, 2025
d8292f2
PR #33854: [ROCM[ Remove padding for gemms
pemeliya Nov 13, 2025
27d2ad1
PR #33754: [NVIDIA GPU] Add pred, int8 and uint8 as supported nvshmem…
Tixxx Nov 13, 2025
20c1ff6
PR #33860: [XLA:GPU][oneAPI][Build-fix] Fix ptx custom kernel build i…
mdfaijul Nov 13, 2025
5f530ae
[XLA][codegen] Add xtile scaled dot op, emit it from the fusion emitt…
basioli-k Nov 13, 2025
e6d6b2d
[XLA:CPU][XTile] Use linalg DPS for elementwise ops.
WillFroom Nov 13, 2025
696ec92
Make KernelArgumentPackingSpec work on 32bit platforms
beckerhe Nov 13, 2025
6be20e4
Add SetEventMetadataId method to the XEventBuilder
Nov 13, 2025
5eb9019
[XLA:GPU] Comment why we don't need to check the last concat operand'…
pifon2a Nov 13, 2025
8e1f96b
Enable XLA GPU experimental fusion autotuner by default.
vwbaker Nov 13, 2025
a577696
[XLA:GPU] Do not normalize start_index_map in gather simplifier
mooskagh Nov 13, 2025
d1a4ab1
PR #33413: [ROCm] Unify definition of the local libs rpath for hermet…
alekstheod Nov 13, 2025
5a3c23b
[xla:gpu] Disable the legacy emitter path.
chsigg Nov 13, 2025
4266748
PR #33534: [ROCm] rm gcc bazelrc and unify rocm_ci and rocm (#412)
i-chaochen Nov 13, 2025
0051320
Fix getSuccessorRegions implementation
jeffbparker Nov 13, 2025
42b1074
This code can race with the new Reset() code.
pschuh Nov 13, 2025
8c798ff
Add threads & process ids to compilation metrics.
tensorflower-gardener Nov 13, 2025
650d745
#HLODiff Broaden --ignore_shape to also apply to diff reporting.
tensorflower-gardener Nov 13, 2025
df7448b
Normalize xla test names to [test_name]_test.
toli-y Nov 13, 2025
7bbde49
Map CudaGraph Node in traceviewer could show per-node framework names…
tensorflower-gardener Nov 13, 2025
6346abe
[XLA:CPU][pjrt] Factor out shared HLO module creation code
basioli-k Nov 13, 2025
4a4edbe
[xla:cpu] Don't hoist small loops with runtime calls
ezhulenev Nov 14, 2025
5f7a21e
Normalize xla test names to [test_name]_test.
toli-y Nov 14, 2025
9a65af1
[xla:ffi] Remove deprecated deleter field from XLA_FFI_State_Set_Args
ezhulenev Nov 14, 2025
d717dd7
Add shared ownership to `tstring`'s VIEW type.
LarryLansing Nov 14, 2025
ff993fc
Improve the performance in `StreamExecutorGpuTopologyDescription::Log…
hhb Nov 14, 2025
f314034
Automated Code Change
tensorflower-gardener Nov 14, 2025
6890888
Automated Code Change
tensorflower-gardener Nov 14, 2025
34579a5
Automated Code Change
tensorflower-gardener Nov 14, 2025
ad370fc
Automated Code Change
tensorflower-gardener Nov 14, 2025
069753c
Automated Code Change
tensorflower-gardener Nov 14, 2025
5c2fa19
Automated Code Change
tensorflower-gardener Nov 14, 2025
81d8d53
Automated Code Change
tensorflower-gardener Nov 14, 2025
11a271e
Automated Code Change
tensorflower-gardener Nov 14, 2025
0bdde8f
Automated Code Change
tensorflower-gardener Nov 14, 2025
474a8b0
Automated Code Change
tensorflower-gardener Nov 14, 2025
688bdf3
Merge pull request #102350 from ILCSFNO:patch-10
tensorflower-gardener Nov 14, 2025
bda197b
Merge pull request #103921 from ILCSFNO:patch-12
tensorflower-gardener Nov 14, 2025
b28b018
Automated Code Change
tensorflower-gardener Nov 14, 2025
3a02032
Automated Code Change
tensorflower-gardener Nov 14, 2025
9422891
Merge pull request #102344 from ILCSFNO:patch-9
tensorflower-gardener Nov 14, 2025
88bf663
Go: Update generated wrapper functions for TensorFlow ops.
tensorflower-gardener Nov 14, 2025
2e407ab
[XLA:CPU] Rewrite polynomial approximations of vectorized llvm.exp.
WillFroom Nov 14, 2025
05d2b10
Merge the `CompilationResultProto` and `GpuExecutableProto` protos
EusebioDM Nov 14, 2025
ff58a0a
Update GraphDef version to 2411.
tensorflower-gardener Nov 14, 2025
45d66c8
compat: Update forward compatibility horizon to 2025-11-14
tensorflower-gardener Nov 14, 2025
1f6f833
Automated Code Change
tensorflower-gardener Nov 14, 2025
0ecc2f5
Reverts 5a3c23b1d7ee994d19e054096469f69b34e5674c
chsigg Nov 14, 2025
4512a92
[XLA:GPU] use new triton support checks in gemm fusion pass
metaflow Nov 14, 2025
74bbe7a
[XLA:GPU] Do not check for tensor_float_32_execution_enabled when dec…
mooskagh Nov 14, 2025
49690a7
[XTile] Add MaskOp.
WillFroom Nov 14, 2025
a7b7055
[XLA:CPU] Reduce logging level of dumping before/after ir in ir_compi…
WillFroom Nov 14, 2025
9b21aa6
Automated Code Change
tensorflower-gardener Nov 14, 2025
f895a53
Remove platform IDs declarations from platform headers
beckerhe Nov 14, 2025
016dc1a
Automated Code Change
tensorflower-gardener Nov 14, 2025
d009793
Make TopK custom kernels serializable
beckerhe Nov 14, 2025
fb0bb4b
[stablehlo optim] handle bounded dynamic values in optimization pass
GleasonK Nov 14, 2025
5dc2dd0
Add float4_e2m1fn to TensorFlow.
tensorflower-gardener Nov 14, 2025
84378aa
Disable YNNPACK for tfcompile due to missing serialization support
tensorflower-gardener Nov 14, 2025
4c92c13
[XLA:GPU] Add tests for GPU float check logging and reduce log spam.
loislo Nov 14, 2025
5a5d403
[xla:cpu] Extract sort_lib out of sort_thunk
ezhulenev Nov 14, 2025
daf9250
Refactor PRNG bit generation and float conversion.
majnemer Nov 14, 2025
8911b5b
Integrate LLVM at llvm/llvm-project@9625cf6cc0e3
tensorflower-gardener Nov 14, 2025
19cc258
Extend the list of skipped opcodes to avoid "unfavorable" materializa…
alexander-shaposhnikov Nov 14, 2025
edb9a9b
Google internal change.
LarryLansing Nov 14, 2025
09e6628
For each topology(inter-partition or intra-partition),. categorize co…
felixwqp Nov 14, 2025
1f6f105
Internal build change.
Artem-B Nov 14, 2025
6a93fd3
[xla:cpu] Use sort_lib in tfcompile AOT compilation
ezhulenev Nov 14, 2025
a089b32
[xla:cpu] Remove unused runtime_key_value_sort
ezhulenev Nov 14, 2025
27cbbfa
Integrate LLVM at llvm/llvm-project@741ba8209c1f
tensorflower-gardener Nov 14, 2025
96f4c1a
Reverts 4512a9251b234a4d1d59a3ee1ba5eeb3f17c4b9c
hawkinsp Nov 14, 2025
75f87ab
Remove `xla::AlignedAlloc` and migrate callers to `tsl::port::Aligned…
majnemer Nov 14, 2025
703e5b8
Update `rules_ml_toolchain` version.
ybaturina Nov 14, 2025
67fc77e
Add `buildMaxAndArgmaxBody` helper for StableHLO.
tensorflower-gardener Nov 15, 2025
2acf2fd
Add optional shape to BufferUse
ermilovmaxim Nov 15, 2025
f227383
[xla:cpu] Rename dot_lib to dot_dims
ezhulenev Nov 15, 2025
5ef77d5
convert Thunk::TransformAllNestedThunks to return absl::Status
ermilovmaxim Nov 15, 2025
5c31259
[xla:cpu] Rename convolution_lib to convolution_dims
ezhulenev Nov 15, 2025
abdd8ce
Migrate memory_space_assignment_test_base to PjRt.
tensorflower-gardener Nov 15, 2025
9ac746f
Automated Code Change
ckennelly Nov 15, 2025
6b45d1b
Automated Code Change
tensorflower-gardener Nov 15, 2025
7a329ab
Reverts abdd8ce08277c9b90909d87e8a494af2e564f587
tensorflower-gardener Nov 15, 2025
3425535
Automated Code Change
tensorflower-gardener Nov 15, 2025
2ed2bff
Automated Code Change
tensorflower-gardener Nov 15, 2025
ea1deb0
Automated Code Change
tensorflower-gardener Nov 15, 2025
94ba45a
Automated Code Change
tensorflower-gardener Nov 15, 2025
a56c671
Automated Code Change
tensorflower-gardener Nov 15, 2025
f95aa78
Automated Code Change
tensorflower-gardener Nov 15, 2025
d3478df
Automated Code Change
tensorflower-gardener Nov 15, 2025
620c9a0
Automated Code Change
tensorflower-gardener Nov 15, 2025
30d4c19
compat: Update forward compatibility horizon to 2025-11-15
tensorflower-gardener Nov 15, 2025
c799802
Update GraphDef version to 2412.
tensorflower-gardener Nov 15, 2025
07c6ae6
Automated Code Change
tensorflower-gardener Nov 15, 2025
eb63e7d
Automated Code Change
tensorflower-gardener Nov 15, 2025
c6956cc
Automated Code Change
tensorflower-gardener Nov 15, 2025
a502c30
Automated Code Change
tensorflower-gardener Nov 15, 2025
56ed4fd
Automated Code Change
tensorflower-gardener Nov 15, 2025
8dcbf0d
Automated Code Change
tensorflower-gardener Nov 15, 2025
257f386
Automated Code Change
tensorflower-gardener Nov 15, 2025
1db6c7d
Automated Code Change
tensorflower-gardener Nov 15, 2025
4eef375
Automated Code Change
tensorflower-gardener Nov 15, 2025
297dd1b
Automated Code Change
tensorflower-gardener Nov 15, 2025
35790b0
Automated Code Change
tensorflower-gardener Nov 15, 2025
4617e0d
Automated Code Change
tensorflower-gardener Nov 15, 2025
c8ae25a
Automated Code Change
tensorflower-gardener Nov 15, 2025
e382078
Automated Code Change
tensorflower-gardener Nov 15, 2025
331a80e
Automated Code Change
tensorflower-gardener Nov 15, 2025
89fab6c
Automated Code Change
tensorflower-gardener Nov 15, 2025
06123e5
Automated Code Change
tensorflower-gardener Nov 15, 2025
76c6dba
Automated Code Change
tensorflower-gardener Nov 15, 2025
3279cee
Automated Code Change
tensorflower-gardener Nov 15, 2025
8ee1115
Automated Code Change
tensorflower-gardener Nov 15, 2025
7071c38
Automated Code Change
tensorflower-gardener Nov 15, 2025
3e43c82
[xla:cpu] Refactor convolution_lib into reusable functions
ezhulenev Nov 15, 2025
8ddb1ce
Automated Code Change
tensorflower-gardener Nov 15, 2025
53f42dc
Refactor: Use `std::align_val_t` for aligned allocation functions.
majnemer Nov 16, 2025
d4a686f
[xla:cpu] Extract dot implementation into dot_lib
ezhulenev Nov 16, 2025
5505aee
Automated Code Change
tensorflower-gardener Nov 16, 2025
60b5bdc
Use `[[maybe_unused]]` instead of `XLA_FFI_ATTRIBUTE_UNUSED`.
majnemer Nov 16, 2025
1f74885
Automated Code Change
tensorflower-gardener Nov 16, 2025
b117d31
Automated Code Change
tensorflower-gardener Nov 16, 2025
d90e319
Automated Code Change
tensorflower-gardener Nov 16, 2025
3bcec27
Automated Code Change
tensorflower-gardener Nov 16, 2025
d6df6c4
Automated Code Change
tensorflower-gardener Nov 16, 2025
4c269ed
Automated Code Change
tensorflower-gardener Nov 16, 2025
007b8ba
Update GraphDef version to 2413.
tensorflower-gardener Nov 16, 2025
e134554
compat: Update forward compatibility horizon to 2025-11-16
tensorflower-gardener Nov 16, 2025
d5d074c
[XLA:GPU] do not fuse dynamic slice
metaflow Nov 16, 2025
60c09b2
Use `absl::StrAppend` for string concatenation.
zvikinoza Nov 16, 2025
1684fbe
Reverts 0ecc2f598d999bb4df3937e92caf8ddb490a6f43
metaflow Nov 16, 2025
3895124
[xla:cpu] Remove runtime_topk target
ezhulenev Nov 16, 2025
7507dad
Properly tiebreak ForceDelay
tensorflower-gardener Nov 16, 2025
4f38037
Automated Code Change
tensorflower-gardener Nov 16, 2025
953dc4d
Automated Code Change
tensorflower-gardener Nov 17, 2025
1ba0a98
Update XNNPACK in XLA
tensorflower-gardener Nov 17, 2025
050cc1c
Automated Code Change
tensorflower-gardener Nov 17, 2025
ef82df8
Automated Code Change
tensorflower-gardener Nov 17, 2025
6740e37
Automated Code Change
tensorflower-gardener Nov 17, 2025
4f746be
Move CustomKernelThunk into its own file
beckerhe Nov 17, 2025
f6c174a
compat: Update forward compatibility horizon to 2025-11-17
tensorflower-gardener Nov 17, 2025
467c372
Update GraphDef version to 2414.
tensorflower-gardener Nov 17, 2025
8447cc1
KernelSpecTest improvements and cleanups
beckerhe Nov 17, 2025
7fd6f3e
PR #32738: [XLA:GPU] Allow cuDNN scaled dot fusions in the gemm autot…
sergey-kozub Nov 17, 2025
48afbe1
[XLA:GPU] Add more informative error messages to CHECKs in GpuPerform…
thomasjoerg Nov 17, 2025
caddf57
Remove dependency on HloInstruction from CustomKernelThunk
beckerhe Nov 17, 2025
bca597c
[XLA:CPU/GPU] Check that values are scalars before casting in ExpandF…
WillFroom Nov 17, 2025
8ba0728
PR #34042: Fix comments for xla/hlo/ir/hlo_input_output_alias_config.h
shawnwang18 Nov 17, 2025
426e568
[XLA:GPU] Double the maximum unroll factor on Blackwell.
akuegel Nov 17, 2025
be02f0d
PR #33835: make cudnn fusion config conv kind optional for cudnn gemm…
Cjkkkk Nov 17, 2025
2c999f5
PR #33956: [ROCm] Add missing run_under to tsan/asan configs, add mis…
alekstheod Nov 17, 2025
1657515
[XLA:CPU][XTile] Update arith/math conversion patterns work correctly…
WillFroom Nov 17, 2025
bbb25ba
[XLA:GPU] Make flop_per_ns_per_fpu a double in CalculateEffectiveFlop…
nputikhin Nov 17, 2025
581d1a6
[XLA:GPU] Consistently check which bitcasts we can fuse.
olegshyshkov Nov 17, 2025
8a1e75e
Update autotuner to filter out "Cublas_fission" backends.
tensorflower-gardener Nov 17, 2025
f575b8d
[XLA:GPU] Register BF16 kernels for Cub sort and Cub prefix sum
akuegel Nov 17, 2025
7a2bbf2
Refactor SymbolicExpr creation to use free functions with MLIRContext
tensorflower-gardener Nov 17, 2025
dc0aec4
[XLA:GPU] Enable maximum unroll factor 8 heuristic by default.
akuegel Nov 17, 2025
e14921b
PR #33906: [ROCm] upgrade bitcode library to fcc50fb091b7c75d8f6c9a65…
draganmladjenovic Nov 17, 2025
0bf42af
[XLA:CPU][XTile] Add simple bufferization for xtile extract/insert.
WillFroom Nov 17, 2025
8cfd63f
[XLA:GPU] Update test autotune DB
Nov 17, 2025
2628eb7
Use `mlir::MLIRContext` directly in `SymbolicMap` and related functions.
tensorflower-gardener Nov 17, 2025
37ad2d2
Handle fission in legacy cache.
tensorflower-gardener Nov 17, 2025
768adef
[Autotuner] Make buffer checking best effort, rather than forcing it.
tensorflower-gardener Nov 17, 2025
4d8491f
Merge remote-tracking branch 'upstream/master' into develop-upstream-…
hsharsha Nov 17, 2025
b3ad88b
Resolve merge conflict
hsharsha Nov 17, 2025
a1daec9
Enable TF_NEED_ROCM in cpu tests script
hsharsha Nov 18, 2025
6210425
Fix lldMain access to the LLVM command line
draganmladjenovic Nov 18, 2025
067d23a
Add -no-canonical-prefixes to net_zstd
hsharsha Nov 20, 2025
d5a10eb
Snub logging from rocm_tracer
hsharsha Nov 20, 2025
43382aa
Fix xla unit tests
hsharsha Nov 21, 2025
b5f3d12
Enable more gpu_pycpp tests
hsharsha Nov 21, 2025
1231900
Differentiate container name based on CI Job for parallel CI processing
hsharsha Nov 21, 2025
6e5266c
Unset TF_NEED_ROCM for cpu tests and conditonal include in dso_loader
hsharsha Nov 21, 2025
The diff you're trying to view is too large. We only load the first 3000 changed files.
12 changes: 5 additions & 7 deletions .bazelrc
Original file line number Diff line number Diff line change
@@ -299,9 +299,11 @@ common:cuda --@local_config_cuda//:enable_cuda
common:cuda --config=cuda_version
# This flag is needed to include CUDA libraries.
common:cuda --@local_config_cuda//cuda:include_cuda_libs=true
common:cuda --@cuda_driver//:include_cuda_umd_libs=true

# This configuration is used for building the wheels.
common:cuda_wheel --@local_config_cuda//cuda:include_cuda_libs=false
common:cuda_wheel --@cuda_driver//:include_cuda_umd_libs=false

# CUDA: This config refers to building CUDA op kernels with clang.
common:cuda_clang --config=cuda
@@ -612,7 +614,6 @@ common:use_tar_archive_files --repo_env=USE_LLVM_TAR_ARCHIVE_FILES=1
common:use_tar_archive_files --repo_env=USE_MIRRORED_TAR_ARCHIVE_FILES=1

# Make Bazel not try to probe the host system for a C++ toolchain.
common:rbe_base --config=use_tar_archive_files
common:rbe_base --config=resultstore
common:rbe_base --repo_env=BAZEL_DO_NOT_DETECT_CPP_TOOLCHAIN=1
common:rbe_base --define=EXECUTOR=remote
@@ -655,8 +656,8 @@ common:rbe_linux_cpu --remote_instance_name=projects/tensorflow-testing/instance
# Download CUDA/CUDNN redistributions to preserve the repositories cache between
# CPU and GPU builds.
# TODO(ybaturina): Uncomment when RBE is ready to support this.
commonld:rbe_linux_cpu --repo_env USE_CUDA_REDISTRIBUTIONS=1
commonld:rbe_linux_cpu --config=cuda_version
common:rbe_linux_cpu --repo_env USE_CUDA_REDISTRIBUTIONS=1
common:rbe_linux_cpu --config=cuda_version

# Deprecated RBE config with non-hermetic toolchains.
common:rbe_linux_cpu_clang_local --config=rbe_linux_cpu
Expand All @@ -682,9 +683,6 @@ common:rbe_linux_cuda --config=cuda_clang_official
common:rbe_linux_cuda --config=rbe_linux_cpu
# For Remote build execution -- GPU configuration
common:rbe_linux_cuda --repo_env=REMOTE_GPU_TESTING=1
# Enable forward compatibility for CUDA builds because RBE docker image doesn't
# have latest CUDA drivers installed.
common:rbe_linux_cuda --@cuda_driver//:enable_forward_compatibility=true

common:rbe_linux_cuda_nvcc --config=rbe_linux_cuda
common:rbe_linux_cuda_nvcc --config=cuda_nvcc
@@ -877,7 +875,7 @@ test:linux_cpu_wheel_test --@local_xla//third_party/py:wheel_dependency=true --c
test:linux_cuda_wheel_test_filters --test_tag_filters=gpu,requires-gpu,-no_gpu,-no_oss,-tf_tosa,-oss_excluded,-oss_serial,-benchmark-test,-no_cuda11,-no_oss_py38,-no_oss_py39,-no_oss_py310,-no_oss_py313
test:linux_cuda_wheel_test_filters --build_tag_filters=gpu,requires-gpu,-no_gpu,-no_oss,-tf_tosa,-oss_excluded,-oss_serial,-benchmark-test,-no_cuda11,-no_oss_py38,-no_oss_py39,-no_oss_py310,-no_oss_py313
test:linux_cuda_wheel_test_filters --test_lang_filters=py --test_size_filters=small,medium
test:linux_cuda_wheel_test --@local_xla//third_party/py:wheel_dependency=true --config=linux_cuda_wheel_test_filters -- //tensorflow/... //tensorflow/tools/pip_package:prebuilt_wheel_import_api_packages_test_gpu -//tensorflow/compiler/tf2tensorrt/... -//tensorflow/core/tpu/... -//tensorflow/lite/... -//tensorflow/tools/toolchains/...
test:linux_cuda_wheel_test --repo_env=HERMETIC_CUDA_UMD_VERSION=12.8.1 --@local_xla//third_party/py:wheel_dependency=true --config=linux_cuda_wheel_test_filters -- //tensorflow/... //tensorflow/tools/pip_package:prebuilt_wheel_import_api_packages_test_gpu -//tensorflow/compiler/tf2tensorrt/... -//tensorflow/core/tpu/... -//tensorflow/lite/... -//tensorflow/tools/toolchains/...
# ARM64 WHEEL
test:linux_arm64_wheel_test_filters --test_tag_filters=-no_oss,-tf_tosa,-no_aarch64,-oss_excluded,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only,-no_oss_py38,-no_oss_py39,-no_oss_py310,-no_oss_py313
test:linux_arm64_wheel_test_filters --build_tag_filters=-no_oss,-tf_tosa,-no_aarch64,-oss_excluded,-oss_serial,-gpu,-tpu,-benchmark-test,-v1only,-no_oss_py38,-no_oss_py39,-no_oss_py310,-no_oss_py313
2 changes: 1 addition & 1 deletion .bazelversion
@@ -1,2 +1,2 @@
7.4.1
7.7.0
# NOTE: Update Bazel version in tensorflow/tools/ci_build/release/common.sh.oss
2 changes: 1 addition & 1 deletion .github/workflows/osv-scanner-scheduled.yml
@@ -28,7 +28,7 @@ permissions:
jobs:
scan-scheduled:
if: github.repository == 'tensorflow/tensorflow'
uses: "google/osv-scanner-action/.github/workflows/osv-scanner-reusable.yml@v2.2.3"
uses: "google/osv-scanner-action/.github/workflows/osv-scanner-reusable.yml@v2.2.4"
with:
scan-args: |-
--lockfile=requirements.txt:./requirements_lock_3_9.txt
4 changes: 2 additions & 2 deletions .github/workflows/scorecards-analysis.yml
@@ -55,7 +55,7 @@ jobs:
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: SARIF file
path: results.sarif
Expand All @@ -64,6 +64,6 @@ jobs:
# Upload the results to GitHub's code scanning dashboard (optional).
# Commenting out will disable upload of results to your repo's Code Scanning dashboard
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@3599b3baa15b485a2e49ef411a7a4bb2452e7f93 # v3.29.5
uses: github/codeql-action/upload-sarif@0499de31b99561a6d14a36a5f662c2a54f91beee # v3.29.5
with:
sarif_file: results.sarif
4 changes: 2 additions & 2 deletions .github/workflows/stale-issues.yml
@@ -31,7 +31,7 @@ jobs:
pull-requests: write
steps:
- name: Awaiting response issues
uses: actions/stale@3a9db7e6a41a89f618792c92c0e97cc736e1b13f # v10.0.0
uses: actions/stale@5f858e3efba33a5ca4407a664cc011ad407f2008 # v10.1.0
with:
#Comma separated list of labels that can be assigned to issues to exclude them from being marked as stale
exempt-issue-labels: 'override-stale'
@@ -59,7 +59,7 @@
close-pr-message: "This PR was closed because it has been inactive for 14 days since being marked as stale. Please reopen if you'd like to work on this further."
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: Contribution issues
uses: actions/stale@3a9db7e6a41a89f618792c92c0e97cc736e1b13f # v10.0.0
uses: actions/stale@5f858e3efba33a5ca4407a664cc011ad407f2008 # v10.1.0
with:
#Comma separated list of labels that can be assigned to issues to exclude them from being marked as stale
exempt-issue-labels: 'override-stale'
4 changes: 3 additions & 1 deletion RELEASE.md
@@ -23,7 +23,9 @@
* Adds int8 and int16x8 support for SQRT operator.
* Adds int16x8 support for EQUAL and NOT_EQUAL operators.
* Adds support for int2 type.
* Adds support for int2/int4 in tfl.cast.
* Adds support for int2/int4 in tfl.cast .
* Adds support for SRQ int2 in tfl.fully_connected.
* Adds support for int4 in tfl.slice.

### Bug Fixes and Other Changes

11 changes: 0 additions & 11 deletions ci/official/containers/ml_build/Dockerfile
@@ -12,14 +12,6 @@ COPY builder.packages.txt /builder.packages.txt

RUN /setup.sources.sh && /setup.packages.sh /builder.packages.txt

# Install devtoolset-9 in /dt9 with glibc 2.17 and libstdc++ 4.8, for building
# manylinux2014-compatible packages.
COPY builder.devtoolset/fixlinks.sh /fixlinks.sh
COPY builder.devtoolset/rpm-patch.sh /rpm-patch.sh
COPY builder.devtoolset/build_devtoolset.sh /build_devtoolset.sh
COPY builder.devtoolset/glibc2.17-inline.patch /glibc2.17-inline.patch
RUN /build_devtoolset.sh devtoolset-9 /dt9

# Setup Python
COPY setup.python.sh /setup.python.sh
COPY builder.requirements.txt /builder.requirements.txt
@@ -56,9 +48,6 @@ RUN ln -sf /usr/bin/python3.12 /usr/bin/python3
RUN ln -sf /usr/bin/python3.12 /usr/bin/python
RUN ln -sf /usr/lib/python3.12 /usr/lib/tf_python

# Make sure clang is on the path
RUN ln -s /usr/lib/llvm-18/bin/clang /usr/bin/clang

# Link the compat driver to the location if available.
RUN if [ -e "/usr/local/cuda/compat/libcuda.so.1" ]; then ln -s /usr/local/cuda/compat/libcuda.so.1 /usr/lib/x86_64-linux-gnu/libcuda.so.1; fi

21 changes: 2 additions & 19 deletions ci/official/containers/ml_build/builder.packages.txt
@@ -1,28 +1,9 @@
# Packages to be installed for the new Docker image.

# Packages needed to build devtoolset
file
flex
g++
make
patch
rpm2cpio
unar
wget
xz-utils
cpio

# Other build-related tools
apt-transport-https
autoconf
automake
build-essential
ca-certificates
llvm-18
clang-18
clang-tidy-18
lld-18
clang-format-12
curl
git
parallel
Expand All @@ -32,4 +13,6 @@ unzip
zip
openjdk-21-jdk
vim
wget
jq
file
3 changes: 3 additions & 0 deletions ci/official/containers/ml_build/builder.requirements.txt
@@ -5,6 +5,9 @@ id
urllib3
requests

# For XLA
pyyaml

# For JAX
build ~= 1.2.2
# uv is faster than pip for installing Python packages.
10 changes: 0 additions & 10 deletions ci/official/containers/ml_build/setup.python.sh
@@ -45,16 +45,6 @@ fi

/setup.packages.sh pythons.txt

# Re-link pyconfig.h from x86_64-linux-gnu into the devtoolset directory
# for any Python version present
pushd /usr/include/x86_64-linux-gnu
for f in $(ls | grep python); do
# set up symlink for devtoolset-9
rm -f /dt9/usr/include/x86_64-linux-gnu/$f
ln -s /usr/include/x86_64-linux-gnu/$f /dt9/usr/include/x86_64-linux-gnu/$f
done
popd

# Python 3.10 include headers fix:
# sysconfig.get_path('include') incorrectly points to /usr/local/include/python
# map /usr/include/python3.10 to /usr/local/include/python3.10
2 changes: 1 addition & 1 deletion ci/official/envs/windows_x86_2022
@@ -15,7 +15,7 @@
TFCI_DOCKER_ENABLE=1
TFCI_DOCKER_PULL_ENABLE=1
TFCI_DOCKER_IMAGE="gcr.io/tensorflow-testing/tf-win2022@sha256:915cb093630432c38b028f56bd31116a5559ebbc688d427b6092d86828ae03bc"
TFCI_BAZEL_BAZELRC_ARGS="--output_user_root=C:/t"
TFCI_BAZEL_BAZELRC_ARGS="--output_user_root=C:/x"
TFCI_BAZEL_COMMON_ARGS="--repo_env=HERMETIC_PYTHON_VERSION=$TFCI_PYTHON_VERSION --repo_env=USE_PYWRAP_RULES=True --config=windows_x86_cpu_2022"
TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX=windows_x86_cpu_2022
TFCI_BUILD_PIP_PACKAGE_WHEEL_NAME_ARG="--repo_env=WHEEL_NAME=tensorflow"
3 changes: 2 additions & 1 deletion ci/official/utilities/cleanup_docker.sh
Original file line number Diff line number Diff line change
@@ -26,4 +26,5 @@ $ docker exec -it tf bash
EOF

docker ps
-docker rm -f tf-${TFCI_PYTHON_VERSION}
+echo "Removing container tf-$TFCI_PYTHON_VERSION-$TFCI_DOCKER_CONTAINER_POSTFIX"
+docker rm -f tf-$TFCI_PYTHON_VERSION-$TFCI_DOCKER_CONTAINER_POSTFIX
4 changes: 2 additions & 2 deletions ci/official/utilities/setup_docker.sh
@@ -51,7 +51,7 @@ if ! docker container inspect tf >/dev/null 2>&1 ; then
echo "GCE_METADATA_HOST=$IP_ADDR" >> $env_file
fi

-docker run $TFCI_DOCKER_ARGS --name tf-$TFCI_PYTHON_VERSION -w "$WORKING_DIR" -itd --rm \
+docker run $TFCI_DOCKER_ARGS --name tf-$TFCI_PYTHON_VERSION-$TFCI_DOCKER_CONTAINER_POSTFIX -w "$WORKING_DIR" -itd --rm \
-v "$TFCI_GIT_DIR:$WORKING_DIR" \
--env-file "$env_file" \
"$TFCI_DOCKER_IMAGE" \
@@ -65,4 +65,4 @@ if ! docker container inspect tf >/dev/null 2>&1 ; then
fi

fi
-tfrun() { docker exec tf-$TFCI_PYTHON_VERSION "$@"; }
+tfrun() { docker exec tf-$TFCI_PYTHON_VERSION-$TFCI_DOCKER_CONTAINER_POSTFIX "$@"; }
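The `setup_docker.sh` and `cleanup_docker.sh` changes above switch from a per-Python-version container name to one that also carries `TFCI_DOCKER_CONTAINER_POSTFIX`, so concurrent jobs on the same host no longer collide on a single `tf-<version>` container. A minimal sketch of the naming scheme (the version and postfix values below are hypothetical, standing in for the CI environment):

```shell
#!/usr/bin/env bash
# Hypothetical stand-ins for the values the CI exports.
TFCI_PYTHON_VERSION=3.11
TFCI_DOCKER_CONTAINER_POSTFIX=pr1234

# Same composition used by setup_docker.sh and cleanup_docker.sh.
container="tf-$TFCI_PYTHON_VERSION-$TFCI_DOCKER_CONTAINER_POSTFIX"
echo "$container"   # prints tf-3.11-pr1234

# tfrun then proxies every command into that uniquely named container:
# tfrun() { docker exec "$container" "$@"; }
```

With a unique postfix per job, `docker rm -f` in cleanup only tears down the container belonging to the job that created it.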
58 changes: 26 additions & 32 deletions tensorflow/BUILD
@@ -1033,6 +1033,7 @@ package_group(
"//tensorflow_models/google/recml/...",
"//third_party/cloud_tpu/convergence_tools/sdc_monitoring/...",
"//third_party/cloud_tpu/inference_converter/...",
+"//third_party/pathways/...",
"//third_party/py/cloud_ml_autoflow/...",
"//third_party/py/envlogger/...",
"//third_party/py/gldm/...",
@@ -1180,38 +1181,31 @@ tf_cc_shared_library(
linkstatic = 1,
per_os_targets = True,
roots = [
-"//tensorflow/c/experimental/filesystem:filesystem_interface",
-"//tensorflow/c/experimental/stream_executor:stream_executor",
-"//tensorflow/c:env",
-"//tensorflow/c:kernels",
-"//tensorflow/c:kernels_experimental",
-"//tensorflow/c:logging",
-"//tensorflow/c:ops",
-"//tensorflow/cc/saved_model:fingerprinting_impl",
-"//tensorflow/cc/saved_model:loader_lite_impl",
-"//tensorflow/cc/saved_model:metrics_impl",
-"//tensorflow/compiler/tf2tensorrt:op_converter_registry_impl",
-"//tensorflow/core/common_runtime:core_cpu_impl",
-"//tensorflow/core/common_runtime/gpu:gpu_runtime_impl",
-"//tensorflow/core/common_runtime/pluggable_device:pluggable_device_runtime_impl",
-"//tensorflow/core:framework_internal_impl",
-"//tensorflow/core/framework:tensor",
-"//tensorflow/core/grappler/optimizers:custom_graph_optimizer_registry_impl",
-"//tensorflow/core:lib_internal_impl",
-"//tensorflow/core/profiler:profiler_impl",
-"//tensorflow/core/util:determinism",  # Must be linked and exported to libtensorflow_framework.so.
-"//tensorflow/lite/kernels/shim:tf_kernel_shim",
-"@local_xla//xla/stream_executor:stream_executor_impl",
-"@local_xla//xla/tsl/framework:bfc_allocator",
-"@local_xla//xla/tsl/framework:metrics",
-] + tf_additional_binary_deps() +
-# TODO(b/259305727): Remove this select and include captured_function in macos builds.
-select({
-    "//tensorflow:macos": [],
-    "//conditions:default": [
-        "//tensorflow/core/data:captured_function",
-    ],
-}),
+"//tensorflow/c/experimental/filesystem:filesystem_interface",
+"//tensorflow/c/experimental/stream_executor:stream_executor",
+"//tensorflow/c:env",
+"//tensorflow/c:kernels",
+"//tensorflow/c:kernels_experimental",
+"//tensorflow/c:ops",
+"//tensorflow/cc/saved_model:fingerprinting_impl",
+"//tensorflow/cc/saved_model:loader_lite_impl",
+"//tensorflow/cc/saved_model:metrics_impl",
+"//tensorflow/compiler/tf2tensorrt:op_converter_registry_impl",
+"//tensorflow/core/common_runtime:core_cpu_impl",
+"//tensorflow/core/common_runtime/gpu:gpu_runtime_impl",
+"//tensorflow/core/common_runtime/pluggable_device:pluggable_device_runtime_impl",
+"//tensorflow/core:framework_internal_impl",
+"//tensorflow/core/framework:tensor",
+"//tensorflow/core/grappler/optimizers:custom_graph_optimizer_registry_impl",
+"//tensorflow/core:lib_internal_impl",
+"//tensorflow/core/profiler:profiler_impl",
+"//tensorflow/core/util:determinism",  # Must be linked and exported to libtensorflow_framework.so.
+"//tensorflow/lite/kernels/shim:tf_kernel_shim",
+"@local_xla//xla/stream_executor:stream_executor_impl",
+"@local_xla//xla/tsl/framework:bfc_allocator",
+"@local_xla//xla/tsl/framework:metrics",
+"//tensorflow/core/data:captured_function",
+] + tf_additional_binary_deps(),
soversion = VERSION,
static_deps = PACKAGE_STATIC_DEPS,
visibility = ["//visibility:public"],
13 changes: 0 additions & 13 deletions tensorflow/c/BUILD
@@ -298,7 +298,6 @@ tf_cuda_library(
],
"//conditions:default": [
":env",
-":logging",
":tf_status",
":tf_tensor",
"//tensorflow/c/experimental/filesystem:modular_filesystem",
@@ -325,18 +324,6 @@
alwayslink = 1,
)

-cc_library(
-    name = "logging",
-    srcs = ["logging.cc"],
-    hdrs = ["logging.h"],
-    visibility = ["//visibility:public"],
-    deps = [
-        ":c_api_macros",
-        "//tensorflow/core/platform:logging",
-        "//tensorflow/core/platform:stringprintf",
-    ],
-)

tf_cuda_library(
name = "tf_status_internal",
hdrs = [
2 changes: 1 addition & 1 deletion tensorflow/c/c_api_function_test.cc
@@ -1171,7 +1171,7 @@ TEST_F(CApiFunctionTest, InvalidOutputTensor_BadNodePtr) {
EXPECT_EQ(TF_INVALID_ARGUMENT, TF_GetCode(s_));
EXPECT_EQ(string("Node is null\n\tEncountered while processing output 0 "
"from function 'MyFunc'"),
-string(TF_Message(s_)));
+std::string(TF_Message(s_)));
}

TEST_F(CApiFunctionTest, NodeMissingInput) {
2 changes: 1 addition & 1 deletion tensorflow/c/c_api_test.cc
@@ -2478,7 +2478,7 @@ TEST_F(CApiAttributesTest, Names) {

TF_OperationGetAttrName(oper, 0, value.get(), s_);
EXPECT_EQ(TF_OK, TF_GetCode(s_)) << TF_Message(s_);
-EXPECT_EQ("v", string(static_cast<const char*>(value.get()), 1));
+EXPECT_EQ("v", std::string(static_cast<const char*>(value.get()), 1));
}

TEST_F(CApiAttributesTest, Errors) {
6 changes: 2 additions & 4 deletions tensorflow/c/checkpoint_reader.cc
@@ -119,8 +119,7 @@ CheckpointReader::BuildV2VarMaps() {
BundleEntryProto entry;
v2_reader_->Seek(kHeaderEntryKey);
for (v2_reader_->Next(); v2_reader_->Valid(); v2_reader_->Next()) {
-CHECK(entry.ParseFromArray(v2_reader_->value().data(),
-                           v2_reader_->value().size()))
+CHECK(entry.ParseFromString(v2_reader_->value()))
<< entry.InitializationErrorString();
for (int i = 0; i < entry.slices_size(); ++i) {
const auto& slice_proto = entry.slices(i);
@@ -140,8 +139,7 @@ CheckpointReader::BuildV2VarMaps() {
v2_reader_->Seek(kHeaderEntryKey);
for (v2_reader_->Next(); v2_reader_->Valid(); v2_reader_->Next()) {
if (filtered_keys.count(string(v2_reader_->key())) > 0) continue;
-CHECK(entry.ParseFromArray(v2_reader_->value().data(),
-                           v2_reader_->value().size()))
+CHECK(entry.ParseFromString(v2_reader_->value()))
<< entry.InitializationErrorString();
string key(v2_reader_->key());
(*var_to_shape_map)[key] = TensorShape(entry.shape());
3 changes: 2 additions & 1 deletion tensorflow/c/eager/c_api.cc
@@ -939,7 +939,8 @@ void TFE_ContextAddFunctionDef(TFE_Context* ctx,
const char* serialized_function_def, size_t size,
TF_Status* status) {
tensorflow::FunctionDef function_def;
-if (!function_def.ParseFromArray(serialized_function_def, size)) {
+if (!function_def.ParseFromString(
+        absl::string_view(serialized_function_def, size))) {
status->status =
tensorflow::errors::InvalidArgument("Invalid FunctionDef proto");
return;
Expand Down