8351444: Shenandoah: Class Unloading may encounter recycled oops #23951

@@ -121,10 +121,9 @@ inline oop ShenandoahBarrierSet::load_reference_barrier(DecoratorSet decorators,
     return nullptr;
   }
 
-  // Prevent resurrection of unreachable objects that are visited during
-  // concurrent class-unloading.
+  // Allow runtime to see unreachable objects that are visited during concurrent class-unloading.
   if ((decorators & AS_NO_KEEPALIVE) != 0 &&
-      _heap->is_evacuation_in_progress() &&
+      _heap->is_concurrent_weak_root_in_progress() &&
       !_heap->marking_context()->is_marked(obj)) {
     return obj;
   }
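For orientation, AS_NO_KEEPALIVE marks loads that only peek at a referent without keeping it alive. Below is a minimal sketch of the two access flavors this barrier distinguishes; NativeAccess and the decorator names are the existing HotSpot access API, but the helper and its call sites are hypothetical and not part of this PR:

#include "oops/access.hpp"          // NativeAccess, AS_NO_KEEPALIVE, ON_PHANTOM_OOP_REF
#include "oops/oopsHierarchy.hpp"   // oop

// Hypothetical helper (not from this PR): shows the two flavors of weak load that
// reach the load-reference barrier above.
static void illustrate_weak_loads(oop* slot) {
  // Keep-alive load: the referent is handed back as a strong reference, so the
  // barrier must not return an object that died in the current marking cycle.
  oop resolved = NativeAccess<ON_PHANTOM_OOP_REF>::oop_load(slot);

  // No-keep-alive load, e.g. a "peek" performed during concurrent class unloading:
  // the caller only inspects the object, so with this change the barrier may return
  // it even when unmarked, as long as concurrent weak roots are still in progress.
  oop peeked = NativeAccess<ON_PHANTOM_OOP_REF | AS_NO_KEEPALIVE>::oop_load(slot);

  (void)resolved;
  (void)peeked;
}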
src/hotspot/share/gc/shenandoah/shenandoahConcurrentGC.cpp (11 additions, 10 deletions)
@@ -150,15 +150,22 @@ bool ShenandoahConcurrentGC::collect(GCCause::Cause cause) {
     return false;
   }
 
+  assert(heap->is_concurrent_weak_root_in_progress(), "Must be doing weak roots now");
+
   // Concurrent stack processing
   if (heap->is_evacuation_in_progress()) {
     entry_thread_roots();
   }
 
-  // Process weak roots that might still point to regions that would be broken by cleanup
-  if (heap->is_concurrent_weak_root_in_progress()) {
-    entry_weak_refs();
-    entry_weak_roots();
+  // Process weak roots that might still point to regions that would be broken by cleanup.
+  // We cannot recycle regions because weak roots need to know what is marked in trashed regions.
+  entry_weak_refs();
+  entry_weak_roots();
+
+  // Perform concurrent class unloading before any regions get recycled. Class unloading may
+  // need to inspect unmarked objects in trashed regions.
+  if (heap->unload_classes()) {
+    entry_class_unloading();
   }
 
   // Final mark might have reclaimed some immediate garbage, kick cleanup to reclaim
@@ -168,12 +175,6 @@ bool ShenandoahConcurrentGC::collect(GCCause::Cause cause) {

   heap->free_set()->log_status_under_lock();
 
-  // Perform concurrent class unloading
-  if (heap->unload_classes() &&
-      heap->is_concurrent_weak_root_in_progress()) {
-    entry_class_unloading();
-  }
-
   // Processing strong roots
   // This may be skipped if there is nothing to update/evacuate.
   // If so, strong_root_in_progress would be unset.
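Taken together, the two hunks above move concurrent class unloading ahead of region recycling instead of after it. A condensed sketch of the resulting phase order after final mark, based on the diff; entry_cleanup_early() is assumed here (as in the review discussion below) to be the cleanup phase that actually recycles trash regions:

// Condensed sketch of ShenandoahConcurrentGC::collect() after this change; not the literal body.
entry_weak_refs();           // weak reference processing; trash regions keep their mark data
entry_weak_roots();          // nulls weak roots to unmarked objects; needs that mark data/TAMS
if (heap->unload_classes()) {
  entry_class_unloading();   // may still inspect unmarked oops in trashed regions
}
entry_cleanup_early();       // only now may trash regions be recycled and handed out for allocation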
src/hotspot/share/gc/shenandoah/shenandoahFreeSet.cpp (4 additions, 1 deletion)
@@ -999,7 +999,10 @@ HeapWord* ShenandoahFreeSet::try_allocate_in(ShenandoahHeapRegion* r, Shenandoah
   assert (has_alloc_capacity(r), "Performance: should avoid full regions on this path: %zu", r->index());
   if (_heap->is_concurrent_weak_root_in_progress() && r->is_trash()) {
     // We cannot use this region for allocation when weak roots are in progress because the collector may need
-    // to reference unmarked oops during concurrent classunloading.
+    // to reference unmarked oops during concurrent class unloading. The collector also needs accurate marking
+    // information to determine which weak handles need to be nulled out. If the region is recycled before weak
+    // root processing has finished, it may fail to null out a handle into a trashed region. This turns the
+    // handle into a dangling pointer and will crash or corrupt the heap.
     return nullptr;
   }
   HeapWord* result = nullptr;
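To make the hazard in that comment concrete, here is a minimal sketch of what concurrent weak root cleaning conceptually does with each native root slot. The helper name is hypothetical and this is not the actual Shenandoah closure; RawAccess and ShenandoahMarkingContext::is_marked() are the existing HotSpot/Shenandoah APIs it leans on:

#include "gc/shenandoah/shenandoahMarkingContext.hpp"
#include "oops/access.hpp"

// Hypothetical sketch of cleaning one weak root slot. It only works while the mark
// data (and TAMS) of trashed regions is still intact: recycling a region first would
// erase that information, is_marked() could no longer be answered, and a dead handle
// could be left dangling into memory that is already being reused for allocation.
static void clean_weak_root(oop* slot, ShenandoahMarkingContext* ctx) {
  const oop obj = RawAccess<>::oop_load(slot);
  if (obj != nullptr && !ctx->is_marked(obj)) {
    // The referent died in this cycle: clear the slot before its region is recycled.
    // (The real code would go through the Access API / an atomic update; a plain
    // store keeps the sketch simple.)
    *slot = nullptr;
  }
}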
src/hotspot/share/gc/shenandoah/shenandoahHeap.cpp (23 additions, 10 deletions)
@@ -817,18 +817,31 @@ size_t ShenandoahHeap::initial_capacity() const {
 }
 
 bool ShenandoahHeap::is_in(const void* p) const {
-  if (is_in_reserved(p)) {
-    if (is_full_gc_move_in_progress()) {
-      // Full GC move is running, we do not have a consistent region
-      // information yet. But we know the pointer is in heap.
-      return true;
-    }
-    // Now check if we point to a live section in active region.
-    ShenandoahHeapRegion* r = heap_region_containing(p);
-    return (r->is_active() && p < r->top());
-  } else {
+  if (!is_in_reserved(p)) {
     return false;
   }
+
+  if (is_full_gc_move_in_progress()) {
+    // Full GC move is running, we do not have a consistent region
+    // information yet. But we know the pointer is in heap.
+    return true;
+  }
+
+  // Now check if we point to a live section in active region.
+  const ShenandoahHeapRegion* r = heap_region_containing(p);
+  if (p >= r->top()) {
+    return false;
+  }
+
+  if (r->is_active()) {
+    return true;
+  }
+
+  // The region is trash, but won't be recycled until after concurrent weak
+  // roots. We also don't allow mutators to allocate from trash regions
+  // during weak roots. Concurrent class unloading may access unmarked oops
+  // in trash regions.
+  return r->is_trash() && is_concurrent_weak_root_in_progress();

Member:
Pity to do this, but I understand the reason for it.

We should investigate whether this window is unnecessarily large. I see that we currently drop the WEAK_ROOTS gc state in ShenandoahHeap::concurrent_prepare_for_update_refs. Should we drop the flag sooner, somewhere after concurrent class unloading? This can be done separately if it snowballs into something more complicated.

Contributor Author:

Class unloading is the last thing we do before recycling trash regions. A region will be usable for allocation as soon as it is recycled, so, in a sense, this has the same effect as turning off the weak roots flag immediately after class unloading.

Also, regions cannot be recycled during the weak roots phase itself, because that phase relies on accurate mark information (recycling clears live data and resets the TAMS). We could work around this by preserving the mark data (perhaps by decoupling the TAMS reset from region recycling). But changing the gc_state currently requires either a safepoint or a handshake (while holding the Threads_lock). I haven't thought all the way through this, but something like the following (pseudo-code) might be possible:

vmop_entry_final_mark();

// Complete class unloading, since it actually _needs_ the oops (still need to forbid trash recycling here).
entry_class_unloading();

// Recycle trash, but do not reset TAMS (weak roots needs TAMS to decide reachability of referents).
entry_cleanup_early();

// Complete weak roots. There are no more trash regions and we don't have to change gc_state
entry_weak_refs();
entry_weak_roots();

What do you think? This would be a separate PR of course, but do you see any reason something like this wouldn't work? I'd expect some asserts to break if we allocate into a new region with TAMS > bottom.

Member:

> A region will be usable for allocation as soon as it is recycled, so, in a sense, this has the same effect as turning off the weak roots flag immediately after class unloading.

Right. This answers my original question.

> What do you think? This would be a separate PR of course, but do you see any reason something like this wouldn't work?

It looks to me like this stretches the definition of "trash" even further. I think it would be conceptually cleaner to never turn regular regions into trash until after weak roots are done, so that accesses to "dead" weak roots remain possible, just like a regular access to a "regular" region.

Contributor Author:

The advantage of the scheme I proposed is that it makes immediate trash regions available for allocation earlier in the cycle. I don't think it changes the way "trash" is treated during concurrent class unloading, but it would mean that weak roots/refs would no longer see "trash" regions.

}

void ShenandoahHeap::notify_soft_max_changed() {