
Task #14636: Cascading deletion not working in `theia-dev` K8s Cluster

Added by Yannik Schmidt about 2 months ago. Updated about 2 months ago.

Status:
Won't Fix
Priority:
Major
Assignee:
Start date:
06.03.2026
Due date:
% Done:

0%

Estimated time:
Reporter:
Originally created on:
13.09.2024
Originally updated on:
02.10.2024
Original due date:

Description

Hey folks,

I hope you all had a great vacation 😊

Unfortunately, I am facing some issues with cascading deletion on the theia-dev cluster (https://k8s-theia-cp.ase.cit.tum.de:6443). Even though the ownerReferences are set correctly, the child resources are not deleted when their owner/parent is deleted.

I conducted some simple tests to verify the behavior:

  1. Install a basic deployment:

{code:yaml}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21.3
        ports:
        - containerPort: 80
{code}
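For reference, this is how I applied the manifest and listed the resulting objects (`nginx-deployment.yaml` is just the local file name I saved the manifest as):

```shell
# Apply the manifest above (file name is whatever you saved it as)
kubectl apply -f nginx-deployment.yaml

# List the Deployment, its ReplicaSet, and the Pods it created
kubectl get deployment,replicaset,pods -l app=nginx
```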

  2. This creates:
  • 1 Deployment nginx-deployment
  • 1 ReplicaSet nginx-deployment-85484bf959 with ownerReference = Deployment nginx-deployment
  • 3 Pods, each with ownerReference = ReplicaSet nginx-deployment-85484bf959

All of them have "blockOwnerDeletion": true; I do not know whether this is relevant...

=> The setup should work for cascading deletes.
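In case it helps with debugging, the ownerReferences can be confirmed directly with jsonpath queries (a sketch; the label selector assumes the manifest above):

```shell
# Show the ownerReference of each ReplicaSet (should point at the Deployment)
kubectl get replicaset -l app=nginx \
  -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.metadata.ownerReferences[*].name}{"\n"}{end}'

# Same for the Pods (should point at the ReplicaSet)
kubectl get pods -l app=nginx \
  -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.metadata.ownerReferences[*].name}{"\n"}{end}'
```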

  3. Delete the deployment. The expectation is that the ReplicaSet, and consequently the Pods, are deleted as well. However, this is not the case.
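As far as I understand, cascading deletion in the background is performed by the garbage collector inside kube-controller-manager, so if the children survive, that component may be the place to look. A rough diagnostic sketch (the `--cascade=foreground` flag syntax assumes a reasonably recent kubectl):

```shell
# Background (default) cascading delete -- relies on the garbage collector
# in kube-controller-manager to clean up the children afterwards
kubectl delete deployment nginx-deployment

# Foreground cascading delete: the owner is only removed after its
# dependents are gone; if this hangs, the garbage collector is likely
# not running at all
kubectl delete deployment nginx-deployment --cascade=foreground

# Check whether kube-controller-manager is healthy on the control plane
kubectl get pods -n kube-system | grep controller-manager
```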

Please let me know if I can help with fixing this problem. I'd also greatly appreciate an expected fix date so I can plan a bit better, as this is currently blocking quite a significant part of my master's thesis ☺️
Thank you so much!

Updated by Colin Wilk about 2 months ago #1

The whole cluster is completely broken. Node 0 no longer starts. This is another one of the artifacts from the all-master clusters we had created...

Updated by Colin Wilk about 2 months ago #3

Retiring the cluster in favor of the Rancher prod cluster, see https://jira.ase.in.tum.de/browse/LS1ADMIN-38357
