In release-1.10, `gc_managed_realloc_` is defined as:
1.10:
```c
static void *gc_managed_realloc_(jl_ptls_t ptls, void *d, size_t sz, size_t oldsz,
                                 int isaligned, jl_value_t *owner, int8_t can_collect)
{
    if (can_collect)
        maybe_collect(ptls);
    int is_old_marked = jl_astaggedvalue(owner)->bits.gc == GC_OLD_MARKED;
    size_t allocsz = LLT_ALIGN(sz, JL_CACHE_BYTE_ALIGNMENT);
    if (allocsz < sz) // overflow in adding offs, size was "negative"
        jl_throw(jl_memory_exception);

    int last_errno = errno;
#ifdef _OS_WINDOWS_
    DWORD last_error = GetLastError();
#endif
    void *b;
    if (isaligned)
        b = realloc_cache_align(d, allocsz, oldsz);
    else
        b = realloc(d, allocsz);
    if (b == NULL)
        jl_throw(jl_memory_exception);
#ifdef _OS_WINDOWS_
    SetLastError(last_error);
#endif
    errno = last_errno;
    // gc_managed_realloc_ is currently used exclusively for resizing array buffers.
    if (is_old_marked) {
        ptls->gc_cache.perm_scanned_bytes += allocsz - oldsz;
        inc_live_bytes(allocsz - oldsz);
    }
    else if (!(allocsz < oldsz))
        jl_atomic_store_relaxed(&ptls->gc_num.allocd,
            jl_atomic_load_relaxed(&ptls->gc_num.allocd) + (allocsz - oldsz));
    jl_atomic_store_relaxed(&ptls->gc_num.realloc,
        jl_atomic_load_relaxed(&ptls->gc_num.realloc) + 1);
    if (allocsz > oldsz) {
        maybe_record_alloc_to_profile((jl_value_t*)b, allocsz - oldsz, (jl_datatype_t*)jl_buff_tag);
    }
    return b;
}
```
as opposed to the implementation from 1.9:
1.9:
```c
static void *gc_managed_realloc_(jl_ptls_t ptls, void *d, size_t sz, size_t oldsz,
                                 int isaligned, jl_value_t *owner, int8_t can_collect)
{
    if (can_collect)
        maybe_collect(ptls);
    size_t allocsz = LLT_ALIGN(sz, JL_CACHE_BYTE_ALIGNMENT);
    if (allocsz < sz) // overflow in adding offs, size was "negative"
        jl_throw(jl_memory_exception);

    if (jl_astaggedvalue(owner)->bits.gc == GC_OLD_MARKED) {
        ptls->gc_cache.perm_scanned_bytes += allocsz - oldsz;
        live_bytes += allocsz - oldsz;
    }
    else if (allocsz < oldsz)
        jl_atomic_store_relaxed(&ptls->gc_num.freed,
            jl_atomic_load_relaxed(&ptls->gc_num.freed) + (oldsz - allocsz));
    else
        jl_atomic_store_relaxed(&ptls->gc_num.allocd,
            jl_atomic_load_relaxed(&ptls->gc_num.allocd) + (allocsz - oldsz));
    jl_atomic_store_relaxed(&ptls->gc_num.realloc,
        jl_atomic_load_relaxed(&ptls->gc_num.realloc) + 1);

    int last_errno = errno;
#ifdef _OS_WINDOWS_
    DWORD last_error = GetLastError();
#endif
    void *b;
    if (isaligned)
        b = realloc_cache_align(d, allocsz, oldsz);
    else
        b = realloc(d, allocsz);
    if (b == NULL)
        jl_throw(jl_memory_exception);
#ifdef _OS_WINDOWS_
    SetLastError(last_error);
#endif
    errno = last_errno;

    maybe_record_alloc_to_profile((jl_value_t*)b, sz, jl_gc_unknown_type_tag);
    return b;
}
```
For some reason, after #50144 we stopped incrementing `gc_num.freed` when a `realloc` shrinks a memory buffer: the 1.10 version only updates `gc_num.allocd` when `allocsz >= oldsz` and otherwise drops the difference on the floor.
In particular, this could lead to some issues since our heuristics use this counter to compute `live_bytes` as a proxy for the heap size.
It could be, for instance, one of the causes behind a pathological behavior we're seeing in one of our workloads on 1.10, where `live_bytes` increases monotonically despite RSS being stable.
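To make the drift concrete, here is a small self-contained toy (plain C, no Julia internals; the counters and the grow/shrink workload are made up for illustration) that replays both bookkeeping policies over a repeated grow/shrink `realloc` pattern:

```c
/* Toy replay of the two accounting policies. Not Julia code: it only shows
 * that dropping the `freed` credit on shrink makes an allocd-minus-freed
 * heap estimate ratchet upward while the true buffer size stays flat. */
#include <stdio.h>
#include <stddef.h>

int main(void)
{
    size_t bufsz = 0;
    long long allocd_v9 = 0, freed_v9 = 0;   /* 1.9-style counters  */
    long long allocd_v10 = 0, freed_v10 = 0; /* 1.10-style counters */

    /* Repeatedly grow a buffer to 1 MiB, then shrink it back to 64 KiB. */
    for (int i = 0; i < 5; i++) {
        size_t sizes[2] = { 1 << 20, 1 << 16 };
        for (int j = 0; j < 2; j++) {
            size_t newsz = sizes[j];
            if (newsz >= bufsz) { /* grow: both policies count allocd */
                allocd_v9  += newsz - bufsz;
                allocd_v10 += newsz - bufsz;
            }
            else {                /* shrink */
                freed_v9 += bufsz - newsz; /* 1.9 credits the freed bytes */
                /* 1.10: no freed credit -- the shrink is simply dropped */
            }
            bufsz = newsz;
        }
    }
    printf("true buffer size:      %zu\n", bufsz);
    printf("1.9  estimate (a - f): %lld\n", allocd_v9 - freed_v9);
    printf("1.10 estimate (a - f): %lld\n", allocd_v10 - freed_v10);
    return 0;
}
```

Under the 1.9 policy the allocd-minus-freed estimate tracks the true buffer size; under the 1.10 policy it grows by roughly the shrunk amount on every cycle, which is the same shape as the monotonic `live_bytes` growth described above.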
I think this got missed when we ported the old behavior into 1.10? Because 1.11 and forward don't use live bytes, they use heap size, so the freed amount there doesn't really matter. But in 1.10 it matters.
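A minimal sketch of what restoring the 1.9 shrink accounting could look like in the 1.10 body (untested, and assuming `ptls->gc_num.freed` is still maintained and consumed by the heuristics on release-1.10, as it is in 1.9):

```c
// Hedged sketch, not a tested patch: re-introduce the 1.9 shrink branch into
// the 1.10 bookkeeping so gc_num.freed is credited again.
if (is_old_marked) {
    ptls->gc_cache.perm_scanned_bytes += allocsz - oldsz;
    inc_live_bytes(allocsz - oldsz);
}
else if (allocsz < oldsz)
    // shrink: credit the released bytes back as freed, as 1.9 did
    jl_atomic_store_relaxed(&ptls->gc_num.freed,
        jl_atomic_load_relaxed(&ptls->gc_num.freed) + (oldsz - allocsz));
else
    jl_atomic_store_relaxed(&ptls->gc_num.allocd,
        jl_atomic_load_relaxed(&ptls->gc_num.allocd) + (allocsz - oldsz));
```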
CC: @gbaraldi