Apr 13 20:08:38.970934 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 13 20:08:38.970950 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:08:38.970959 kernel: BIOS-provided physical RAM map:
Apr 13 20:08:38.970964 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 13 20:08:38.970968 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Apr 13 20:08:38.970973 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Apr 13 20:08:38.970978 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Apr 13 20:08:38.970982 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved
Apr 13 20:08:38.970987 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20
Apr 13 20:08:38.970991 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved
Apr 13 20:08:38.970995 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Apr 13 20:08:38.971002 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Apr 13 20:08:38.971006 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Apr 13 20:08:38.971011 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Apr 13 20:08:38.971016 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 13 20:08:38.971021 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 13 20:08:38.971028 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 13 20:08:38.971032 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Apr 13 20:08:38.971037 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 13 20:08:38.971042 kernel: NX (Execute Disable) protection: active
Apr 13 20:08:38.971046 kernel: APIC: Static calls initialized
Apr 13 20:08:38.971051 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 13 20:08:38.971055 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e845198
Apr 13 20:08:38.971060 kernel: efi: Remove mem135: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 13 20:08:38.971065 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 13 20:08:38.971069 kernel: SMBIOS 3.0.0 present.
Apr 13 20:08:38.971074 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Apr 13 20:08:38.971079 kernel: Hypervisor detected: KVM
Apr 13 20:08:38.971086 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 13 20:08:38.971091 kernel: kvm-clock: using sched offset of 12743105396 cycles
Apr 13 20:08:38.971095 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 13 20:08:38.971100 kernel: tsc: Detected 2399.998 MHz processor
Apr 13 20:08:38.971105 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 13 20:08:38.971110 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 13 20:08:38.971115 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Apr 13 20:08:38.971119 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 13 20:08:38.971124 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 13 20:08:38.971131 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Apr 13 20:08:38.971136 kernel: Using GB pages for direct mapping
Apr 13 20:08:38.971141 kernel: Secure boot disabled
Apr 13 20:08:38.971149 kernel: ACPI: Early table checksum verification disabled
Apr 13 20:08:38.971154 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Apr 13 20:08:38.971159 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 13 20:08:38.971163 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:08:38.971171 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:08:38.971176 kernel: ACPI: FACS 0x000000007FBDD000 000040
Apr 13 20:08:38.971181 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:08:38.971186 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:08:38.971191 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:08:38.971196 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:08:38.971200 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 13 20:08:38.971208 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Apr 13 20:08:38.971213 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Apr 13 20:08:38.971218 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Apr 13 20:08:38.971223 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Apr 13 20:08:38.971228 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Apr 13 20:08:38.971233 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Apr 13 20:08:38.971238 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Apr 13 20:08:38.971243 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Apr 13 20:08:38.971248 kernel: No NUMA configuration found
Apr 13 20:08:38.971255 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Apr 13 20:08:38.971260 kernel: NODE_DATA(0) allocated [mem 0x179ffa000-0x179ffffff]
Apr 13 20:08:38.971265 kernel: Zone ranges:
Apr 13 20:08:38.971271 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 13 20:08:38.971276 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 13 20:08:38.971281 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Apr 13 20:08:38.971286 kernel: Movable zone start for each node
Apr 13 20:08:38.971291 kernel: Early memory node ranges
Apr 13 20:08:38.971296 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 13 20:08:38.971301 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Apr 13 20:08:38.971308 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Apr 13 20:08:38.971313 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Apr 13 20:08:38.971318 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Apr 13 20:08:38.971323 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Apr 13 20:08:38.971328 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 13 20:08:38.971333 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 13 20:08:38.971338 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 13 20:08:38.971343 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 13 20:08:38.971348 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Apr 13 20:08:38.971355 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 13 20:08:38.971360 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 13 20:08:38.971365 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 13 20:08:38.971370 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 13 20:08:38.971375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 13 20:08:38.971380 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 13 20:08:38.971385 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 13 20:08:38.971390 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 13 20:08:38.971395 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 13 20:08:38.971402 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 13 20:08:38.971407 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 13 20:08:38.971412 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 13 20:08:38.971417 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 13 20:08:38.971422 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Apr 13 20:08:38.971427 kernel: Booting paravirtualized kernel on KVM
Apr 13 20:08:38.971432 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 13 20:08:38.971437 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 13 20:08:38.971442 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 13 20:08:38.971449 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 13 20:08:38.971454 kernel: pcpu-alloc: [0] 0 1
Apr 13 20:08:38.971459 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 13 20:08:38.971465 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:08:38.971470 kernel: random: crng init done
Apr 13 20:08:38.971475 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 20:08:38.971480 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 20:08:38.971485 kernel: Fallback order for Node 0: 0
Apr 13 20:08:38.971492 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632
Apr 13 20:08:38.971497 kernel: Policy zone: Normal
Apr 13 20:08:38.971502 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 20:08:38.971507 kernel: software IO TLB: area num 2.
Apr 13 20:08:38.971512 kernel: Memory: 3827836K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 263128K reserved, 0K cma-reserved)
Apr 13 20:08:38.971517 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 20:08:38.971522 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 13 20:08:38.971527 kernel: ftrace: allocated 149 pages with 4 groups
Apr 13 20:08:38.971532 kernel: Dynamic Preempt: voluntary
Apr 13 20:08:38.971539 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 20:08:38.971545 kernel: rcu: RCU event tracing is enabled.
Apr 13 20:08:38.971550 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 20:08:38.971555 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 20:08:38.971567 kernel: Rude variant of Tasks RCU enabled.
Apr 13 20:08:38.971575 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 20:08:38.971580 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 20:08:38.971585 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 20:08:38.971590 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 13 20:08:38.971596 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 20:08:38.971601 kernel: Console: colour dummy device 80x25
Apr 13 20:08:38.971606 kernel: printk: console [tty0] enabled
Apr 13 20:08:38.971613 kernel: printk: console [ttyS0] enabled
Apr 13 20:08:38.971619 kernel: ACPI: Core revision 20230628
Apr 13 20:08:38.971624 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 13 20:08:38.971631 kernel: APIC: Switch to symmetric I/O mode setup
Apr 13 20:08:38.973658 kernel: x2apic enabled
Apr 13 20:08:38.973674 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 13 20:08:38.973688 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 13 20:08:38.973694 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 13 20:08:38.973700 kernel: Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
Apr 13 20:08:38.973705 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 13 20:08:38.973710 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 13 20:08:38.973716 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 13 20:08:38.973721 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 13 20:08:38.973726 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Apr 13 20:08:38.973734 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 13 20:08:38.973740 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 13 20:08:38.973745 kernel: active return thunk: srso_alias_return_thunk
Apr 13 20:08:38.973750 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Apr 13 20:08:38.973755 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 13 20:08:38.973762 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 13 20:08:38.973767 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 13 20:08:38.973773 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 13 20:08:38.973778 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 13 20:08:38.973786 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 13 20:08:38.973791 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 13 20:08:38.973796 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 13 20:08:38.973801 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 13 20:08:38.973807 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 13 20:08:38.973812 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 13 20:08:38.973817 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 13 20:08:38.973822 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 13 20:08:38.973828 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Apr 13 20:08:38.973835 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Apr 13 20:08:38.973841 kernel: Freeing SMP alternatives memory: 32K
Apr 13 20:08:38.973846 kernel: pid_max: default: 32768 minimum: 301
Apr 13 20:08:38.973851 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 20:08:38.973856 kernel: landlock: Up and running.
Apr 13 20:08:38.973862 kernel: SELinux: Initializing.
Apr 13 20:08:38.973867 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 20:08:38.973872 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 20:08:38.973878 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Apr 13 20:08:38.973885 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:08:38.973891 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:08:38.973896 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:08:38.973901 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 13 20:08:38.973907 kernel: ... version: 0
Apr 13 20:08:38.973912 kernel: ... bit width: 48
Apr 13 20:08:38.973917 kernel: ... generic registers: 6
Apr 13 20:08:38.973922 kernel: ... value mask: 0000ffffffffffff
Apr 13 20:08:38.973927 kernel: ... max period: 00007fffffffffff
Apr 13 20:08:38.973935 kernel: ... fixed-purpose events: 0
Apr 13 20:08:38.973940 kernel: ... event mask: 000000000000003f
Apr 13 20:08:38.973946 kernel: signal: max sigframe size: 3376
Apr 13 20:08:38.973951 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 20:08:38.973957 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 20:08:38.973962 kernel: smp: Bringing up secondary CPUs ...
Apr 13 20:08:38.973967 kernel: smpboot: x86: Booting SMP configuration:
Apr 13 20:08:38.973973 kernel: .... node #0, CPUs: #1
Apr 13 20:08:38.973978 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 20:08:38.973986 kernel: smpboot: Max logical packages: 1
Apr 13 20:08:38.973991 kernel: smpboot: Total of 2 processors activated (9599.99 BogoMIPS)
Apr 13 20:08:38.973996 kernel: devtmpfs: initialized
Apr 13 20:08:38.974001 kernel: x86/mm: Memory block size: 128MB
Apr 13 20:08:38.974007 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Apr 13 20:08:38.974012 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 20:08:38.974017 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 20:08:38.974023 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 20:08:38.974028 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 20:08:38.974035 kernel: audit: initializing netlink subsys (disabled)
Apr 13 20:08:38.974041 kernel: audit: type=2000 audit(1776110917.998:1): state=initialized audit_enabled=0 res=1
Apr 13 20:08:38.974046 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 20:08:38.974051 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 13 20:08:38.974056 kernel: cpuidle: using governor menu
Apr 13 20:08:38.974062 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 20:08:38.974067 kernel: dca service started, version 1.12.1
Apr 13 20:08:38.974072 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Apr 13 20:08:38.974077 kernel: PCI: Using configuration type 1 for base access
Apr 13 20:08:38.974085 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 13 20:08:38.974091 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 20:08:38.974096 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 20:08:38.974101 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 20:08:38.974107 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 20:08:38.974112 kernel: ACPI: Added _OSI(Module Device)
Apr 13 20:08:38.974117 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 20:08:38.974122 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 20:08:38.974128 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 20:08:38.974136 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 13 20:08:38.974141 kernel: ACPI: Interpreter enabled
Apr 13 20:08:38.974146 kernel: ACPI: PM: (supports S0 S5)
Apr 13 20:08:38.974151 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 13 20:08:38.974157 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 13 20:08:38.974162 kernel: PCI: Using E820 reservations for host bridge windows
Apr 13 20:08:38.974167 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 13 20:08:38.974172 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 20:08:38.974332 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 20:08:38.974441 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 13 20:08:38.974539 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 13 20:08:38.974546 kernel: PCI host bridge to bus 0000:00
Apr 13 20:08:38.974663 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 13 20:08:38.974765 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 13 20:08:38.974854 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 13 20:08:38.974946 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Apr 13 20:08:38.975033 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 13 20:08:38.975121 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Apr 13 20:08:38.975208 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 20:08:38.975317 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 13 20:08:38.975420 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Apr 13 20:08:38.975519 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref]
Apr 13 20:08:38.975615 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref]
Apr 13 20:08:38.975784 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff]
Apr 13 20:08:38.975883 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 13 20:08:38.975980 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 13 20:08:38.976074 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 13 20:08:38.976180 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 13 20:08:38.976280 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff]
Apr 13 20:08:38.976381 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 13 20:08:38.976477 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff]
Apr 13 20:08:38.976579 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 13 20:08:38.977971 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x81387000-0x81387fff]
Apr 13 20:08:38.978086 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 13 20:08:38.978200 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff]
Apr 13 20:08:38.978337 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 13 20:08:38.978469 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff]
Apr 13 20:08:38.978605 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 13 20:08:38.978762 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff]
Apr 13 20:08:38.978866 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 13 20:08:38.978962 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff]
Apr 13 20:08:38.979067 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 13 20:08:38.979161 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff]
Apr 13 20:08:38.979264 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 13 20:08:38.979360 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff]
Apr 13 20:08:38.979460 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 13 20:08:38.979555 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 13 20:08:38.979738 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 13 20:08:38.979839 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f]
Apr 13 20:08:38.979933 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff]
Apr 13 20:08:38.980033 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 13 20:08:38.980129 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f]
Apr 13 20:08:38.980235 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 13 20:08:38.980360 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff]
Apr 13 20:08:38.980464 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref]
Apr 13 20:08:38.980565 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 13 20:08:38.980692 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 13 20:08:38.980791 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 13 20:08:38.980887 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 13 20:08:38.980994 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 13 20:08:38.981099 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit]
Apr 13 20:08:38.981194 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 13 20:08:38.981288 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 13 20:08:38.981415 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 13 20:08:38.981537 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff]
Apr 13 20:08:38.981676 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref]
Apr 13 20:08:38.981785 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 13 20:08:38.981885 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 13 20:08:38.981981 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 13 20:08:38.982088 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 13 20:08:38.982188 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref]
Apr 13 20:08:38.982282 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 13 20:08:38.982376 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 13 20:08:38.982483 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 13 20:08:38.982585 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff]
Apr 13 20:08:38.982730 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref]
Apr 13 20:08:38.982840 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 13 20:08:38.982936 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 13 20:08:38.983036 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 13 20:08:38.983147 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 13 20:08:38.983246 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff]
Apr 13 20:08:38.983351 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref]
Apr 13 20:08:38.983450 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 13 20:08:38.983545 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 13 20:08:38.983657 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 13 20:08:38.983663 kernel: acpiphp: Slot [0] registered
Apr 13 20:08:38.983784 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 13 20:08:38.983885 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff]
Apr 13 20:08:38.983984 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref]
Apr 13 20:08:38.984089 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 13 20:08:38.984184 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 13 20:08:38.984279 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 13 20:08:38.984373 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 13 20:08:38.984379 kernel: acpiphp: Slot [0-2] registered
Apr 13 20:08:38.984528 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 13 20:08:38.984624 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 13 20:08:38.984821 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 13 20:08:38.984832 kernel: acpiphp: Slot [0-3] registered
Apr 13 20:08:38.984929 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 13 20:08:38.985025 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 13 20:08:38.985121 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 13 20:08:38.985127 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 13 20:08:38.985133 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 13 20:08:38.985138 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 13 20:08:38.985143 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 13 20:08:38.985151 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 13 20:08:38.985157 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 13 20:08:38.985162 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 13 20:08:38.985167 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 13 20:08:38.985172 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 13 20:08:38.985178 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 13 20:08:38.985183 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 13 20:08:38.985188 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 13 20:08:38.985194 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 13 20:08:38.985201 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 13 20:08:38.985207 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 13 20:08:38.985212 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 13 20:08:38.985218 kernel: iommu: Default domain type: Translated
Apr 13 20:08:38.985223 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 13 20:08:38.985228 kernel: efivars: Registered efivars operations
Apr 13 20:08:38.985234 kernel: PCI: Using ACPI for IRQ routing
Apr 13 20:08:38.985239 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 13 20:08:38.985245 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Apr 13 20:08:38.985253 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Apr 13 20:08:38.985258 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Apr 13 20:08:38.985263 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Apr 13 20:08:38.985359 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 13 20:08:38.985453 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 13 20:08:38.985547 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 13 20:08:38.985553 kernel: vgaarb: loaded
Apr 13 20:08:38.985558 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 13 20:08:38.985564 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 13 20:08:38.985571 kernel: clocksource: Switched to clocksource kvm-clock
Apr 13 20:08:38.985577 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 20:08:38.985583 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 20:08:38.985588 kernel: pnp: PnP ACPI init
Apr 13 20:08:38.985728 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 13 20:08:38.985737 kernel: pnp: PnP ACPI: found 5 devices
Apr 13 20:08:38.985743 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 13 20:08:38.985748 kernel: NET: Registered PF_INET protocol family
Apr 13 20:08:38.985770 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 20:08:38.985778 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 20:08:38.985784 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 20:08:38.985790 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 20:08:38.985795 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 13 20:08:38.985801 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 13 20:08:38.985806 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 20:08:38.985812 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 20:08:38.985817 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 20:08:38.985825 kernel: NET: Registered PF_XDP protocol family
Apr 13 20:08:38.985928 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Apr 13 20:08:38.986029 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Apr 13 20:08:38.986123 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 13 20:08:38.986219 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 13 20:08:38.986314 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 13 20:08:38.986409 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Apr 13 20:08:38.986508 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Apr 13 20:08:38.986605 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Apr 13 20:08:38.986752 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref]
Apr 13 20:08:38.986849 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 13 20:08:38.986948 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 13 20:08:38.987043 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 13 20:08:38.987137 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 13 20:08:38.987232 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 13 20:08:38.987327 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 13 20:08:38.987423 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 13 20:08:38.987540 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 13 20:08:38.987668 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 13 20:08:38.987780 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 13 20:08:38.987880 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 13 20:08:38.987974 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 13 20:08:38.988069 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 13 20:08:38.988282 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 13 20:08:38.988481 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 13 20:08:38.988586 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 13 20:08:38.988741 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref]
Apr 13 20:08:38.988845 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 13 20:08:38.988953 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Apr 13 20:08:38.989052 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 13 20:08:38.989152 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 13 20:08:38.989250 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 13 20:08:38.989349 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Apr 13 20:08:38.989449 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 13 20:08:38.989550 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 13 20:08:38.989675 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 13 20:08:38.989813 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Apr 13 20:08:38.989919 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 13 20:08:38.990018 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 13 20:08:38.990116 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr
13 20:08:38.990209 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 13 20:08:38.990306 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 13 20:08:38.990398 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Apr 13 20:08:38.990490 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Apr 13 20:08:38.990582 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Apr 13 20:08:38.990713 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] Apr 13 20:08:38.990815 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Apr 13 20:08:38.990931 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Apr 13 20:08:38.991040 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Apr 13 20:08:38.991138 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Apr 13 20:08:38.991244 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Apr 13 20:08:38.991350 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff] Apr 13 20:08:38.991449 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Apr 13 20:08:38.991553 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Apr 13 20:08:38.992095 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Apr 13 20:08:38.992262 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Apr 13 20:08:38.992360 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Apr 13 20:08:38.992452 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Apr 13 20:08:38.992550 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Apr 13 20:08:38.992657 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Apr 13 20:08:38.992777 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Apr 13 20:08:38.992901 kernel: pci_bus 0000:09: resource 0 
[io 0x3000-0x3fff] Apr 13 20:08:38.992995 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Apr 13 20:08:38.993087 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Apr 13 20:08:38.993094 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 13 20:08:38.993100 kernel: PCI: CLS 0 bytes, default 64 Apr 13 20:08:38.993106 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 13 20:08:38.993112 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Apr 13 20:08:38.993117 kernel: Initialise system trusted keyrings Apr 13 20:08:38.993127 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 13 20:08:38.993148 kernel: Key type asymmetric registered Apr 13 20:08:38.993163 kernel: Asymmetric key parser 'x509' registered Apr 13 20:08:38.993169 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 13 20:08:38.993174 kernel: io scheduler mq-deadline registered Apr 13 20:08:38.993180 kernel: io scheduler kyber registered Apr 13 20:08:38.993185 kernel: io scheduler bfq registered Apr 13 20:08:38.993789 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Apr 13 20:08:38.993895 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Apr 13 20:08:38.993997 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Apr 13 20:08:38.994091 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Apr 13 20:08:38.994187 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Apr 13 20:08:38.994282 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Apr 13 20:08:38.994376 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Apr 13 20:08:38.994499 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Apr 13 20:08:38.994595 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Apr 13 20:08:38.994718 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Apr 13 20:08:38.994819 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Apr 13 
20:08:38.994915 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Apr 13 20:08:38.995011 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Apr 13 20:08:38.995106 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Apr 13 20:08:38.995200 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Apr 13 20:08:38.995295 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Apr 13 20:08:38.995302 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 13 20:08:38.995396 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Apr 13 20:08:38.995497 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Apr 13 20:08:38.995506 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 13 20:08:38.995512 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Apr 13 20:08:38.995518 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 20:08:38.995523 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 13 20:08:38.995529 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 13 20:08:38.995534 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 13 20:08:38.995539 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 13 20:08:38.995545 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 13 20:08:38.995745 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 13 20:08:38.995847 kernel: rtc_cmos 00:03: registered as rtc0 Apr 13 20:08:38.995937 kernel: rtc_cmos 00:03: setting system clock to 2026-04-13T20:08:38 UTC (1776110918) Apr 13 20:08:38.996027 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 13 20:08:38.996034 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 13 20:08:38.996040 kernel: efifb: probing for efifb Apr 13 20:08:38.996045 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k Apr 13 20:08:38.996051 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Apr 13 
20:08:38.996060 kernel: efifb: scrolling: redraw Apr 13 20:08:38.996066 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 13 20:08:38.996072 kernel: Console: switching to colour frame buffer device 160x50 Apr 13 20:08:38.996077 kernel: fb0: EFI VGA frame buffer device Apr 13 20:08:38.996082 kernel: pstore: Using crash dump compression: deflate Apr 13 20:08:38.996088 kernel: pstore: Registered efi_pstore as persistent store backend Apr 13 20:08:38.996093 kernel: NET: Registered PF_INET6 protocol family Apr 13 20:08:38.996099 kernel: Segment Routing with IPv6 Apr 13 20:08:38.996104 kernel: In-situ OAM (IOAM) with IPv6 Apr 13 20:08:38.996113 kernel: NET: Registered PF_PACKET protocol family Apr 13 20:08:38.996118 kernel: Key type dns_resolver registered Apr 13 20:08:38.996123 kernel: IPI shorthand broadcast: enabled Apr 13 20:08:38.996129 kernel: sched_clock: Marking stable (1344012401, 218737627)->(1629069587, -66319559) Apr 13 20:08:38.996134 kernel: registered taskstats version 1 Apr 13 20:08:38.996140 kernel: Loading compiled-in X.509 certificates Apr 13 20:08:38.996146 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00' Apr 13 20:08:38.996151 kernel: Key type .fscrypt registered Apr 13 20:08:38.996156 kernel: Key type fscrypt-provisioning registered Apr 13 20:08:38.996164 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 13 20:08:38.996170 kernel: ima: Allocated hash algorithm: sha1 Apr 13 20:08:38.996175 kernel: ima: No architecture policies found Apr 13 20:08:38.996181 kernel: clk: Disabling unused clocks Apr 13 20:08:38.996186 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 13 20:08:38.996192 kernel: Write protecting the kernel read-only data: 36864k Apr 13 20:08:38.996197 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 13 20:08:38.996203 kernel: Run /init as init process Apr 13 20:08:38.996208 kernel: with arguments: Apr 13 20:08:38.996216 kernel: /init Apr 13 20:08:38.996222 kernel: with environment: Apr 13 20:08:38.996227 kernel: HOME=/ Apr 13 20:08:38.996233 kernel: TERM=linux Apr 13 20:08:38.996240 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 20:08:38.996248 systemd[1]: Detected virtualization kvm. Apr 13 20:08:38.996254 systemd[1]: Detected architecture x86-64. Apr 13 20:08:38.996262 systemd[1]: Running in initrd. Apr 13 20:08:38.996268 systemd[1]: No hostname configured, using default hostname. Apr 13 20:08:38.996274 systemd[1]: Hostname set to . Apr 13 20:08:38.996279 systemd[1]: Initializing machine ID from VM UUID. Apr 13 20:08:38.996285 systemd[1]: Queued start job for default target initrd.target. Apr 13 20:08:38.996291 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:08:38.996297 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:08:38.996304 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 13 20:08:38.996312 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 20:08:38.996320 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 13 20:08:38.996326 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 13 20:08:38.996333 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 13 20:08:38.996339 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 13 20:08:38.996345 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:08:38.996351 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 20:08:38.996359 systemd[1]: Reached target paths.target - Path Units. Apr 13 20:08:38.996365 systemd[1]: Reached target slices.target - Slice Units. Apr 13 20:08:38.996371 systemd[1]: Reached target swap.target - Swaps. Apr 13 20:08:38.996377 systemd[1]: Reached target timers.target - Timer Units. Apr 13 20:08:38.996382 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 20:08:38.996388 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 20:08:38.996394 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 13 20:08:38.996400 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 20:08:38.996408 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:08:38.996414 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 20:08:38.996420 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:08:38.996425 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 13 20:08:38.996431 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 13 20:08:38.996437 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 20:08:38.996443 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 13 20:08:38.996449 systemd[1]: Starting systemd-fsck-usr.service... Apr 13 20:08:38.996454 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 20:08:38.996463 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 20:08:38.996485 systemd-journald[189]: Collecting audit messages is disabled. Apr 13 20:08:38.996500 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:08:38.996506 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 13 20:08:38.996514 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:08:38.996520 systemd[1]: Finished systemd-fsck-usr.service. Apr 13 20:08:38.996527 systemd-journald[189]: Journal started Apr 13 20:08:38.996541 systemd-journald[189]: Runtime Journal (/run/log/journal/244b37f6c1224f979c9dc447a6522cfb) is 8.0M, max 76.3M, 68.3M free. Apr 13 20:08:38.992233 systemd-modules-load[190]: Inserted module 'overlay' Apr 13 20:08:39.001870 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 20:08:39.011973 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 13 20:08:39.016109 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 20:08:39.021771 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:08:39.025684 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Apr 13 20:08:39.025705 kernel: Bridge firewalling registered Apr 13 20:08:39.023515 systemd-modules-load[190]: Inserted module 'br_netfilter' Apr 13 20:08:39.024209 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 20:08:39.026721 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 20:08:39.034894 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:08:39.035995 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 20:08:39.038767 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 20:08:39.040167 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:08:39.052422 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:08:39.053436 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:08:39.054458 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:08:39.059775 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 13 20:08:39.061785 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 20:08:39.069666 dracut-cmdline[222]: dracut-dracut-053 Apr 13 20:08:39.074208 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 20:08:39.095882 systemd-resolved[225]: Positive Trust Anchors: Apr 13 20:08:39.095892 systemd-resolved[225]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 20:08:39.095914 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 20:08:39.098983 systemd-resolved[225]: Defaulting to hostname 'linux'. Apr 13 20:08:39.101977 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 20:08:39.102427 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 20:08:39.136695 kernel: SCSI subsystem initialized Apr 13 20:08:39.144667 kernel: Loading iSCSI transport class v2.0-870. Apr 13 20:08:39.153671 kernel: iscsi: registered transport (tcp) Apr 13 20:08:39.171060 kernel: iscsi: registered transport (qla4xxx) Apr 13 20:08:39.171115 kernel: QLogic iSCSI HBA Driver Apr 13 20:08:39.223739 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 13 20:08:39.231816 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 13 20:08:39.258699 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 13 20:08:39.258772 kernel: device-mapper: uevent: version 1.0.3 Apr 13 20:08:39.258792 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 13 20:08:39.300674 kernel: raid6: avx512x4 gen() 44898 MB/s Apr 13 20:08:39.318668 kernel: raid6: avx512x2 gen() 46475 MB/s Apr 13 20:08:39.336729 kernel: raid6: avx512x1 gen() 43489 MB/s Apr 13 20:08:39.354700 kernel: raid6: avx2x4 gen() 46974 MB/s Apr 13 20:08:39.372697 kernel: raid6: avx2x2 gen() 48927 MB/s Apr 13 20:08:39.391732 kernel: raid6: avx2x1 gen() 39095 MB/s Apr 13 20:08:39.391765 kernel: raid6: using algorithm avx2x2 gen() 48927 MB/s Apr 13 20:08:39.411812 kernel: raid6: .... xor() 36417 MB/s, rmw enabled Apr 13 20:08:39.411834 kernel: raid6: using avx512x2 recovery algorithm Apr 13 20:08:39.451692 kernel: xor: automatically using best checksumming function avx Apr 13 20:08:39.562713 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 13 20:08:39.580008 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 13 20:08:39.587895 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:08:39.599561 systemd-udevd[408]: Using default interface naming scheme 'v255'. Apr 13 20:08:39.603879 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:08:39.611835 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 13 20:08:39.626701 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation Apr 13 20:08:39.659484 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 20:08:39.665911 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 20:08:39.736958 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:08:39.752439 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 13 20:08:39.774147 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 13 20:08:39.775209 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 20:08:39.776074 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:08:39.776785 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 20:08:39.784868 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 13 20:08:39.795929 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 13 20:08:39.826332 kernel: cryptd: max_cpu_qlen set to 1000 Apr 13 20:08:39.847670 kernel: ACPI: bus type USB registered Apr 13 20:08:39.848993 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 20:08:39.849775 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:08:39.851814 kernel: usbcore: registered new interface driver usbfs Apr 13 20:08:39.852259 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:08:39.852569 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:08:39.852708 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:08:39.853357 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:08:39.861026 kernel: scsi host0: Virtio SCSI HBA Apr 13 20:08:39.863607 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 13 20:08:39.863391 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:08:39.874301 kernel: usbcore: registered new interface driver hub Apr 13 20:08:39.874327 kernel: usbcore: registered new device driver usb Apr 13 20:08:39.877246 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 13 20:08:39.877345 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:08:39.881773 kernel: libata version 3.00 loaded. Apr 13 20:08:39.884928 kernel: AVX2 version of gcm_enc/dec engaged. Apr 13 20:08:39.885750 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:08:39.898809 kernel: AES CTR mode by8 optimization enabled Apr 13 20:08:39.910578 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:08:39.917676 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 13 20:08:39.919849 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:08:39.926051 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 13 20:08:39.926234 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 13 20:08:39.926357 kernel: ahci 0000:00:1f.2: version 3.0 Apr 13 20:08:39.926473 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 13 20:08:39.930875 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 13 20:08:39.931037 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 13 20:08:39.932710 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 13 20:08:39.939912 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 13 20:08:39.940082 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 13 20:08:39.940198 kernel: hub 1-0:1.0: USB hub found Apr 13 20:08:39.940334 kernel: hub 1-0:1.0: 4 ports detected Apr 13 20:08:39.940449 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Apr 13 20:08:39.948742 kernel: hub 2-0:1.0: USB hub found Apr 13 20:08:39.949033 kernel: hub 2-0:1.0: 4 ports detected Apr 13 20:08:39.952951 kernel: scsi host1: ahci Apr 13 20:08:39.961665 kernel: scsi host2: ahci Apr 13 20:08:39.966707 kernel: scsi host3: ahci Apr 13 20:08:39.968265 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:08:39.968698 kernel: scsi host4: ahci Apr 13 20:08:39.970730 kernel: scsi host5: ahci Apr 13 20:08:39.973009 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 13 20:08:39.975219 kernel: scsi host6: ahci Apr 13 20:08:39.975341 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Apr 13 20:08:39.975466 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 51 Apr 13 20:08:39.975478 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 13 20:08:39.975601 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 51 Apr 13 20:08:39.975609 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 13 20:08:39.975810 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 51 Apr 13 20:08:39.977693 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 13 20:08:39.977901 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 51 Apr 13 20:08:39.981752 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 13 20:08:39.981803 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 51 Apr 13 20:08:39.981813 kernel: GPT:17805311 != 160006143 Apr 13 20:08:39.981821 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 51 Apr 13 20:08:39.981829 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 13 20:08:39.981836 kernel: GPT:17805311 != 160006143 Apr 13 20:08:40.002574 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 13 20:08:40.002613 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:08:40.006667 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 13 20:08:40.186754 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 13 20:08:40.296682 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 13 20:08:40.296802 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 13 20:08:40.310175 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 13 20:08:40.310233 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 13 20:08:40.312663 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 13 20:08:40.318260 kernel: ata1.00: applying bridge limits Apr 13 20:08:40.325716 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 13 20:08:40.325765 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 13 20:08:40.330816 kernel: ata1.00: configured for UDMA/100 Apr 13 20:08:40.342217 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 13 20:08:40.358714 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 13 20:08:40.370212 kernel: usbcore: registered new interface driver usbhid Apr 13 20:08:40.370236 kernel: usbhid: USB HID core driver Apr 13 20:08:40.383005 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Apr 13 20:08:40.383055 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 13 20:08:40.398692 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 13 20:08:40.399056 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 13 20:08:40.412660 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (475) Apr 13 20:08:40.419053 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (472) Apr 13 20:08:40.422604 systemd[1]: Found device 
dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 13 20:08:40.423668 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Apr 13 20:08:40.432391 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 13 20:08:40.436841 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 13 20:08:40.441510 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 13 20:08:40.442242 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 13 20:08:40.448210 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 13 20:08:40.454823 disk-uuid[589]: Primary Header is updated. Apr 13 20:08:40.454823 disk-uuid[589]: Secondary Entries is updated. Apr 13 20:08:40.454823 disk-uuid[589]: Secondary Header is updated. Apr 13 20:08:40.460668 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:08:40.467669 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:08:41.473188 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:08:41.473272 disk-uuid[590]: The operation has completed successfully. Apr 13 20:08:41.537107 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 13 20:08:41.537201 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 13 20:08:41.544752 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 13 20:08:41.547838 sh[607]: Success Apr 13 20:08:41.559668 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 13 20:08:41.613731 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 13 20:08:41.621730 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 13 20:08:41.627540 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 13 20:08:41.643072 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d Apr 13 20:08:41.643153 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:08:41.647605 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 13 20:08:41.647631 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 13 20:08:41.649889 kernel: BTRFS info (device dm-0): using free space tree Apr 13 20:08:41.659659 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 13 20:08:41.661984 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 13 20:08:41.662898 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 13 20:08:41.670769 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 13 20:08:41.671752 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 13 20:08:41.686904 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:08:41.686936 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:08:41.686945 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:08:41.694081 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:08:41.694106 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:08:41.702464 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 13 20:08:41.705502 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:08:41.710580 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 13 20:08:41.717772 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 13 20:08:41.761365 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 20:08:41.771857 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 20:08:41.785686 ignition[721]: Ignition 2.19.0 Apr 13 20:08:41.786220 ignition[721]: Stage: fetch-offline Apr 13 20:08:41.786257 ignition[721]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:08:41.786266 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 20:08:41.788286 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 20:08:41.786688 ignition[721]: parsed url from cmdline: "" Apr 13 20:08:41.786701 ignition[721]: no config URL provided Apr 13 20:08:41.786707 ignition[721]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 20:08:41.786716 ignition[721]: no config at "/usr/lib/ignition/user.ign" Apr 13 20:08:41.786720 ignition[721]: failed to fetch config: resource requires networking Apr 13 20:08:41.786872 ignition[721]: Ignition finished successfully Apr 13 20:08:41.791131 systemd-networkd[788]: lo: Link UP Apr 13 20:08:41.791134 systemd-networkd[788]: lo: Gained carrier Apr 13 20:08:41.793389 systemd-networkd[788]: Enumeration completed Apr 13 20:08:41.793573 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 20:08:41.794364 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:08:41.794368 systemd-networkd[788]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 20:08:41.795048 systemd[1]: Reached target network.target - Network. Apr 13 20:08:41.796533 systemd-networkd[788]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 13 20:08:41.796537 systemd-networkd[788]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 20:08:41.797218 systemd-networkd[788]: eth0: Link UP Apr 13 20:08:41.797222 systemd-networkd[788]: eth0: Gained carrier Apr 13 20:08:41.797228 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:08:41.803777 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 13 20:08:41.804286 systemd-networkd[788]: eth1: Link UP Apr 13 20:08:41.804289 systemd-networkd[788]: eth1: Gained carrier Apr 13 20:08:41.804296 systemd-networkd[788]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:08:41.814438 ignition[795]: Ignition 2.19.0 Apr 13 20:08:41.814446 ignition[795]: Stage: fetch Apr 13 20:08:41.815634 ignition[795]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:08:41.815668 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 20:08:41.815762 ignition[795]: parsed url from cmdline: "" Apr 13 20:08:41.815766 ignition[795]: no config URL provided Apr 13 20:08:41.815770 ignition[795]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 20:08:41.815778 ignition[795]: no config at "/usr/lib/ignition/user.ign" Apr 13 20:08:41.815792 ignition[795]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Apr 13 20:08:41.815947 ignition[795]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 13 20:08:41.839691 systemd-networkd[788]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 13 20:08:41.871702 systemd-networkd[788]: eth0: DHCPv4 address 62.238.3.135/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 13 20:08:42.016524 ignition[795]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Apr 13 20:08:42.023546 
ignition[795]: GET result: OK Apr 13 20:08:42.023725 ignition[795]: parsing config with SHA512: 2142203593a4509b6c01023493479665b93dd815435f407d8f18285442bd079885cfc8e633998ceaaeb45cd09e9f60403f4ebb53b6acd70dba1a0e9ae6646a7a Apr 13 20:08:42.032789 unknown[795]: fetched base config from "system" Apr 13 20:08:42.032809 unknown[795]: fetched base config from "system" Apr 13 20:08:42.032828 unknown[795]: fetched user config from "hetzner" Apr 13 20:08:42.036092 ignition[795]: fetch: fetch complete Apr 13 20:08:42.036110 ignition[795]: fetch: fetch passed Apr 13 20:08:42.036232 ignition[795]: Ignition finished successfully Apr 13 20:08:42.041217 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 13 20:08:42.048948 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 13 20:08:42.091559 ignition[802]: Ignition 2.19.0 Apr 13 20:08:42.091587 ignition[802]: Stage: kargs Apr 13 20:08:42.092539 ignition[802]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:08:42.092560 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 20:08:42.093571 ignition[802]: kargs: kargs passed Apr 13 20:08:42.093634 ignition[802]: Ignition finished successfully Apr 13 20:08:42.098233 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 13 20:08:42.105952 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 13 20:08:42.124294 ignition[808]: Ignition 2.19.0 Apr 13 20:08:42.124709 ignition[808]: Stage: disks Apr 13 20:08:42.125918 ignition[808]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:08:42.125962 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 20:08:42.129037 ignition[808]: disks: disks passed Apr 13 20:08:42.129676 ignition[808]: Ignition finished successfully Apr 13 20:08:42.132029 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 13 20:08:42.134246 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Apr 13 20:08:42.135378 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 20:08:42.136617 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 20:08:42.138038 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 20:08:42.139214 systemd[1]: Reached target basic.target - Basic System. Apr 13 20:08:42.145904 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 13 20:08:42.165466 systemd-fsck[816]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 13 20:08:42.169423 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 13 20:08:42.176864 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 13 20:08:42.272716 kernel: EXT4-fs (sda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none. Apr 13 20:08:42.272598 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 13 20:08:42.273487 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 13 20:08:42.278717 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 20:08:42.280750 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 13 20:08:42.285779 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 13 20:08:42.286576 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 13 20:08:42.287262 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 20:08:42.295722 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Apr 13 20:08:42.299732 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (824) Apr 13 20:08:42.299747 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:08:42.299762 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:08:42.299770 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:08:42.314187 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:08:42.314219 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:08:42.315567 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 13 20:08:42.320446 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 20:08:42.343957 coreos-metadata[826]: Apr 13 20:08:42.343 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Apr 13 20:08:42.344816 coreos-metadata[826]: Apr 13 20:08:42.344 INFO Fetch successful Apr 13 20:08:42.345526 coreos-metadata[826]: Apr 13 20:08:42.345 INFO wrote hostname ci-4081-3-7-2-642afe6700 to /sysroot/etc/hostname Apr 13 20:08:42.347111 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 13 20:08:42.355431 initrd-setup-root[852]: cut: /sysroot/etc/passwd: No such file or directory Apr 13 20:08:42.360858 initrd-setup-root[859]: cut: /sysroot/etc/group: No such file or directory Apr 13 20:08:42.365486 initrd-setup-root[866]: cut: /sysroot/etc/shadow: No such file or directory Apr 13 20:08:42.369240 initrd-setup-root[873]: cut: /sysroot/etc/gshadow: No such file or directory Apr 13 20:08:42.454524 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 13 20:08:42.458741 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 13 20:08:42.459919 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Apr 13 20:08:42.471792 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:08:42.487535 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 13 20:08:42.492161 ignition[946]: INFO : Ignition 2.19.0 Apr 13 20:08:42.492894 ignition[946]: INFO : Stage: mount Apr 13 20:08:42.492894 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:08:42.493923 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 20:08:42.493923 ignition[946]: INFO : mount: mount passed Apr 13 20:08:42.493923 ignition[946]: INFO : Ignition finished successfully Apr 13 20:08:42.494852 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 13 20:08:42.505804 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 13 20:08:42.638785 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 13 20:08:42.649980 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 20:08:42.672827 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (958) Apr 13 20:08:42.677926 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:08:42.677977 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:08:42.681496 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:08:42.687044 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:08:42.687108 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:08:42.691267 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 13 20:08:42.719396 ignition[975]: INFO : Ignition 2.19.0 Apr 13 20:08:42.719396 ignition[975]: INFO : Stage: files Apr 13 20:08:42.720351 ignition[975]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:08:42.720351 ignition[975]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 20:08:42.720976 ignition[975]: DEBUG : files: compiled without relabeling support, skipping Apr 13 20:08:42.722297 ignition[975]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 13 20:08:42.722688 ignition[975]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 13 20:08:42.726602 ignition[975]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 13 20:08:42.726945 ignition[975]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 13 20:08:42.727421 unknown[975]: wrote ssh authorized keys file for user: core Apr 13 20:08:42.727955 ignition[975]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 13 20:08:42.729853 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 13 20:08:42.730520 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 13 20:08:42.891880 systemd-networkd[788]: eth1: Gained IPv6LL Apr 13 20:08:42.927363 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 13 20:08:43.147853 systemd-networkd[788]: eth0: Gained IPv6LL Apr 13 20:08:43.306365 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 13 20:08:43.307416 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 13 20:08:43.307416 
ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 13 20:08:43.307416 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 13 20:08:43.307416 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 13 20:08:43.307416 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 20:08:43.307416 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 20:08:43.307416 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 20:08:43.307416 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 20:08:43.311164 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 20:08:43.311164 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 20:08:43.311164 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 13 20:08:43.311164 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 13 20:08:43.311164 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 13 
20:08:43.311164 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Apr 13 20:08:43.762256 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 13 20:08:44.082492 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 13 20:08:44.082492 ignition[975]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 13 20:08:44.085478 ignition[975]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 20:08:44.085478 ignition[975]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 20:08:44.085478 ignition[975]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 13 20:08:44.085478 ignition[975]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 13 20:08:44.085478 ignition[975]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 13 20:08:44.085478 ignition[975]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 13 20:08:44.085478 ignition[975]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 13 20:08:44.085478 ignition[975]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Apr 13 20:08:44.085478 ignition[975]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Apr 13 20:08:44.085478 ignition[975]: INFO : files: createResultFile: createFiles: 
op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 13 20:08:44.085478 ignition[975]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 13 20:08:44.085478 ignition[975]: INFO : files: files passed Apr 13 20:08:44.085478 ignition[975]: INFO : Ignition finished successfully Apr 13 20:08:44.085407 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 13 20:08:44.093287 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 13 20:08:44.095780 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 13 20:08:44.098978 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 13 20:08:44.099067 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 13 20:08:44.111024 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:08:44.111024 initrd-setup-root-after-ignition[1003]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:08:44.113254 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:08:44.114755 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 20:08:44.115581 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 13 20:08:44.122769 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 13 20:08:44.152519 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 13 20:08:44.152621 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 13 20:08:44.154405 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 13 20:08:44.155351 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Apr 13 20:08:44.155870 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 13 20:08:44.156781 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 13 20:08:44.171184 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 20:08:44.177780 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 13 20:08:44.186064 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 13 20:08:44.186998 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:08:44.187877 systemd[1]: Stopped target timers.target - Timer Units. Apr 13 20:08:44.188729 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 13 20:08:44.189198 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 20:08:44.190184 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 13 20:08:44.191014 systemd[1]: Stopped target basic.target - Basic System. Apr 13 20:08:44.191741 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 13 20:08:44.192347 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 20:08:44.193029 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 13 20:08:44.193788 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 13 20:08:44.194488 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 20:08:44.195249 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 13 20:08:44.195974 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 13 20:08:44.196690 systemd[1]: Stopped target swap.target - Swaps. Apr 13 20:08:44.197406 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Apr 13 20:08:44.197494 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 13 20:08:44.198748 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 13 20:08:44.200215 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:08:44.201034 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 13 20:08:44.201437 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:08:44.201836 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 13 20:08:44.201909 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 13 20:08:44.202393 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 13 20:08:44.202470 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 20:08:44.202938 systemd[1]: ignition-files.service: Deactivated successfully. Apr 13 20:08:44.203004 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 13 20:08:44.203672 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 13 20:08:44.203750 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 13 20:08:44.214108 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 13 20:08:44.214469 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 13 20:08:44.214577 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:08:44.217806 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 13 20:08:44.218182 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 13 20:08:44.218287 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:08:44.220176 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Apr 13 20:08:44.220276 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 20:08:44.226587 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 13 20:08:44.226697 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 13 20:08:44.233885 ignition[1027]: INFO : Ignition 2.19.0 Apr 13 20:08:44.234419 ignition[1027]: INFO : Stage: umount Apr 13 20:08:44.235719 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:08:44.235719 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 13 20:08:44.237135 ignition[1027]: INFO : umount: umount passed Apr 13 20:08:44.237521 ignition[1027]: INFO : Ignition finished successfully Apr 13 20:08:44.240616 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 13 20:08:44.241067 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 13 20:08:44.241870 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 13 20:08:44.243088 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 13 20:08:44.243157 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 13 20:08:44.244264 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 13 20:08:44.244302 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 13 20:08:44.245047 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 13 20:08:44.245082 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 13 20:08:44.245800 systemd[1]: Stopped target network.target - Network. Apr 13 20:08:44.246446 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 13 20:08:44.246484 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 20:08:44.247189 systemd[1]: Stopped target paths.target - Path Units. Apr 13 20:08:44.247888 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Apr 13 20:08:44.247965 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:08:44.248590 systemd[1]: Stopped target slices.target - Slice Units. Apr 13 20:08:44.249248 systemd[1]: Stopped target sockets.target - Socket Units. Apr 13 20:08:44.249763 systemd[1]: iscsid.socket: Deactivated successfully. Apr 13 20:08:44.249806 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 20:08:44.250305 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 13 20:08:44.250342 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 20:08:44.250951 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 13 20:08:44.250991 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 13 20:08:44.251611 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 13 20:08:44.251658 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 13 20:08:44.252401 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 13 20:08:44.253122 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 13 20:08:44.254013 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 13 20:08:44.254101 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 13 20:08:44.255983 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 13 20:08:44.256085 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 13 20:08:44.256734 systemd-networkd[788]: eth1: DHCPv6 lease lost Apr 13 20:08:44.258732 systemd-networkd[788]: eth0: DHCPv6 lease lost Apr 13 20:08:44.259601 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 13 20:08:44.259700 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 13 20:08:44.261111 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Apr 13 20:08:44.261156 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:08:44.261984 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 13 20:08:44.262089 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 13 20:08:44.263326 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 13 20:08:44.263382 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:08:44.268773 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 13 20:08:44.269518 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 13 20:08:44.269940 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 20:08:44.270387 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 20:08:44.270427 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:08:44.271056 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 13 20:08:44.271092 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 13 20:08:44.271800 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:08:44.283570 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 13 20:08:44.283690 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 13 20:08:44.284625 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 13 20:08:44.284792 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:08:44.286282 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 13 20:08:44.286336 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 13 20:08:44.287033 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 13 20:08:44.287066 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Apr 13 20:08:44.287739 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 13 20:08:44.287782 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 13 20:08:44.288655 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 13 20:08:44.288693 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 13 20:08:44.289701 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 20:08:44.289761 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:08:44.301838 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 13 20:08:44.302179 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 13 20:08:44.302233 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:08:44.302599 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:08:44.302633 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:08:44.307606 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 13 20:08:44.307732 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 13 20:08:44.308840 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 13 20:08:44.310305 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 13 20:08:44.319215 systemd[1]: Switching root. Apr 13 20:08:44.370139 systemd-journald[189]: Journal stopped Apr 13 20:08:45.506063 systemd-journald[189]: Received SIGTERM from PID 1 (systemd). 
Apr 13 20:08:45.506147 kernel: SELinux: policy capability network_peer_controls=1 Apr 13 20:08:45.506159 kernel: SELinux: policy capability open_perms=1 Apr 13 20:08:45.506168 kernel: SELinux: policy capability extended_socket_class=1 Apr 13 20:08:45.506176 kernel: SELinux: policy capability always_check_network=0 Apr 13 20:08:45.506184 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 13 20:08:45.506195 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 13 20:08:45.506204 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 13 20:08:45.506216 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 13 20:08:45.506226 kernel: audit: type=1403 audit(1776110924.531:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 13 20:08:45.506236 systemd[1]: Successfully loaded SELinux policy in 61.869ms. Apr 13 20:08:45.506252 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.652ms. Apr 13 20:08:45.506262 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 20:08:45.506277 systemd[1]: Detected virtualization kvm. Apr 13 20:08:45.506291 systemd[1]: Detected architecture x86-64. Apr 13 20:08:45.506302 systemd[1]: Detected first boot. Apr 13 20:08:45.506311 systemd[1]: Hostname set to . Apr 13 20:08:45.506326 systemd[1]: Initializing machine ID from VM UUID. Apr 13 20:08:45.506336 zram_generator::config[1069]: No configuration found. Apr 13 20:08:45.506346 systemd[1]: Populated /etc with preset unit settings. Apr 13 20:08:45.506355 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 13 20:08:45.506364 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Apr 13 20:08:45.506373 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 13 20:08:45.506385 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 13 20:08:45.506394 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 13 20:08:45.506403 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 13 20:08:45.506412 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 13 20:08:45.506421 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 13 20:08:45.506430 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 13 20:08:45.506439 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 13 20:08:45.506448 systemd[1]: Created slice user.slice - User and Session Slice. Apr 13 20:08:45.506460 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:08:45.506469 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:08:45.506478 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 13 20:08:45.506487 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 13 20:08:45.506496 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 13 20:08:45.506509 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 20:08:45.506518 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 13 20:08:45.506527 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:08:45.506537 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Apr 13 20:08:45.506548 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 13 20:08:45.506559 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 13 20:08:45.506568 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 13 20:08:45.506577 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:08:45.506586 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 20:08:45.506595 systemd[1]: Reached target slices.target - Slice Units. Apr 13 20:08:45.506604 systemd[1]: Reached target swap.target - Swaps. Apr 13 20:08:45.506616 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 13 20:08:45.506626 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 13 20:08:45.507673 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:08:45.507690 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 20:08:45.507700 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:08:45.507711 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 13 20:08:45.507730 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 13 20:08:45.507739 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 13 20:08:45.507748 systemd[1]: Mounting media.mount - External Media Directory... Apr 13 20:08:45.507762 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:08:45.507771 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 13 20:08:45.507780 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 13 20:08:45.507790 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Apr 13 20:08:45.507800 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 13 20:08:45.507814 systemd[1]: Reached target machines.target - Containers. Apr 13 20:08:45.507824 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 13 20:08:45.507836 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 20:08:45.507847 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 20:08:45.507856 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 13 20:08:45.507866 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 20:08:45.507874 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 20:08:45.507883 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 20:08:45.507892 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 13 20:08:45.507901 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 20:08:45.507912 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 13 20:08:45.507923 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 13 20:08:45.507932 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 13 20:08:45.507941 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 13 20:08:45.507950 kernel: fuse: init (API version 7.39) Apr 13 20:08:45.507962 systemd[1]: Stopped systemd-fsck-usr.service. Apr 13 20:08:45.507971 systemd[1]: Starting systemd-journald.service - Journal Service... 
Apr 13 20:08:45.507979 kernel: ACPI: bus type drm_connector registered Apr 13 20:08:45.507989 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 20:08:45.507998 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 13 20:08:45.508009 kernel: loop: module loaded Apr 13 20:08:45.508036 systemd-journald[1159]: Collecting audit messages is disabled. Apr 13 20:08:45.508059 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 13 20:08:45.508068 systemd-journald[1159]: Journal started Apr 13 20:08:45.508084 systemd-journald[1159]: Runtime Journal (/run/log/journal/244b37f6c1224f979c9dc447a6522cfb) is 8.0M, max 76.3M, 68.3M free. Apr 13 20:08:45.145264 systemd[1]: Queued start job for default target multi-user.target. Apr 13 20:08:45.169752 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 13 20:08:45.170257 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 13 20:08:45.516655 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 20:08:45.519820 systemd[1]: verity-setup.service: Deactivated successfully. Apr 13 20:08:45.519849 systemd[1]: Stopped verity-setup.service. Apr 13 20:08:45.523656 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:08:45.527665 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 20:08:45.528233 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 13 20:08:45.528817 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 13 20:08:45.529329 systemd[1]: Mounted media.mount - External Media Directory. Apr 13 20:08:45.529870 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Apr 13 20:08:45.530387 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 13 20:08:45.530902 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 13 20:08:45.531534 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 13 20:08:45.532181 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:08:45.532856 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 13 20:08:45.533024 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 13 20:08:45.533725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 20:08:45.533905 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 20:08:45.534573 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 20:08:45.534822 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 20:08:45.535472 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 20:08:45.535909 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 20:08:45.536550 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 13 20:08:45.536738 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 13 20:08:45.537371 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 20:08:45.537547 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 20:08:45.538326 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 20:08:45.539023 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 13 20:08:45.539809 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 13 20:08:45.552433 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Apr 13 20:08:45.559769 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 13 20:08:45.568738 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 13 20:08:45.569614 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 13 20:08:45.569699 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 20:08:45.570803 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 13 20:08:45.575353 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 13 20:08:45.577112 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 13 20:08:45.578813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 20:08:45.580795 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 13 20:08:45.587799 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 13 20:08:45.588155 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 20:08:45.590818 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 13 20:08:45.591214 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 20:08:45.593782 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 20:08:45.597090 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 13 20:08:45.599968 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 13 20:08:45.603031 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Apr 13 20:08:45.603455 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 13 20:08:45.604674 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 13 20:08:45.629991 systemd-journald[1159]: Time spent on flushing to /var/log/journal/244b37f6c1224f979c9dc447a6522cfb is 45.973ms for 1178 entries. Apr 13 20:08:45.629991 systemd-journald[1159]: System Journal (/var/log/journal/244b37f6c1224f979c9dc447a6522cfb) is 8.0M, max 584.8M, 576.8M free. Apr 13 20:08:45.701616 systemd-journald[1159]: Received client request to flush runtime journal. Apr 13 20:08:45.701665 kernel: loop0: detected capacity change from 0 to 142488 Apr 13 20:08:45.701677 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 13 20:08:45.665536 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 13 20:08:45.666048 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 13 20:08:45.676872 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 13 20:08:45.678205 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:08:45.686885 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 13 20:08:45.696962 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:08:45.705151 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 13 20:08:45.717867 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 13 20:08:45.720924 kernel: loop1: detected capacity change from 0 to 140768 Apr 13 20:08:45.719566 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 13 20:08:45.723282 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Apr 13 20:08:45.728843 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 20:08:45.731792 udevadm[1200]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 13 20:08:45.768001 kernel: loop2: detected capacity change from 0 to 8 Apr 13 20:08:45.768883 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Apr 13 20:08:45.768897 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Apr 13 20:08:45.785215 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:08:45.803670 kernel: loop3: detected capacity change from 0 to 219192 Apr 13 20:08:45.847707 kernel: loop4: detected capacity change from 0 to 142488 Apr 13 20:08:45.867669 kernel: loop5: detected capacity change from 0 to 140768 Apr 13 20:08:45.887855 kernel: loop6: detected capacity change from 0 to 8 Apr 13 20:08:45.892907 kernel: loop7: detected capacity change from 0 to 219192 Apr 13 20:08:45.915207 (sd-merge)[1214]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 13 20:08:45.916188 (sd-merge)[1214]: Merged extensions into '/usr'. Apr 13 20:08:45.921825 systemd[1]: Reloading requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)... Apr 13 20:08:45.921920 systemd[1]: Reloading... Apr 13 20:08:46.002820 zram_generator::config[1240]: No configuration found. Apr 13 20:08:46.099724 ldconfig[1184]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 13 20:08:46.114664 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:08:46.150427 systemd[1]: Reloading finished in 227 ms. 
Apr 13 20:08:46.181383 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 13 20:08:46.182308 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 13 20:08:46.182988 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 13 20:08:46.191798 systemd[1]: Starting ensure-sysext.service... Apr 13 20:08:46.193291 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 20:08:46.195489 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:08:46.201586 systemd[1]: Reloading requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)... Apr 13 20:08:46.201594 systemd[1]: Reloading... Apr 13 20:08:46.217020 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 13 20:08:46.217560 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 13 20:08:46.218414 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 13 20:08:46.218696 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Apr 13 20:08:46.218818 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Apr 13 20:08:46.222338 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 20:08:46.222413 systemd-tmpfiles[1285]: Skipping /boot Apr 13 20:08:46.241244 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 20:08:46.242762 systemd-tmpfiles[1285]: Skipping /boot Apr 13 20:08:46.247166 systemd-udevd[1286]: Using default interface naming scheme 'v255'. Apr 13 20:08:46.286687 zram_generator::config[1314]: No configuration found. 
Apr 13 20:08:46.446619 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:08:46.451664 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 13 20:08:46.457021 kernel: ACPI: button: Power Button [PWRF] Apr 13 20:08:46.476669 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1328) Apr 13 20:08:46.499308 kernel: mousedev: PS/2 mouse device common for all mice Apr 13 20:08:46.522868 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 13 20:08:46.524243 systemd[1]: Reloading finished in 322 ms. Apr 13 20:08:46.545081 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:08:46.546976 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 13 20:08:46.555660 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Apr 13 20:08:46.559664 kernel: Console: switching to colour dummy device 80x25 Apr 13 20:08:46.564559 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Apr 13 20:08:46.566075 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 13 20:08:46.566094 kernel: [drm] features: -context_init Apr 13 20:08:46.566109 kernel: [drm] number of scanouts: 1 Apr 13 20:08:46.567291 kernel: [drm] number of cap sets: 0 Apr 13 20:08:46.570335 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 13 20:08:46.575684 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Apr 13 20:08:46.583929 kernel: Console: switching to colour frame buffer device 160x50 Apr 13 20:08:46.587981 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 13 20:08:46.602670 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 13 20:08:46.609512 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 13 20:08:46.622475 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 13 20:08:46.622688 kernel: EDAC MC: Ver: 3.0.0 Apr 13 20:08:46.622706 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 13 20:08:46.622929 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 13 20:08:46.632960 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 13 20:08:46.636885 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 13 20:08:46.648234 systemd[1]: Finished ensure-sysext.service. Apr 13 20:08:46.651266 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:08:46.656840 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Apr 13 20:08:46.664809 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 13 20:08:46.665004 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 20:08:46.666257 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 20:08:46.668575 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 20:08:46.670969 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 20:08:46.677803 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 20:08:46.678012 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 20:08:46.681226 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 13 20:08:46.685776 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 13 20:08:46.689785 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 20:08:46.694811 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 20:08:46.698068 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 13 20:08:46.703889 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 13 20:08:46.703977 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:08:46.704806 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 20:08:46.705075 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 20:08:46.707391 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Apr 13 20:08:46.707536 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 20:08:46.709044 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 20:08:46.709206 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 20:08:46.711398 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 20:08:46.712021 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 20:08:46.730938 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 20:08:46.731063 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 20:08:46.737841 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 13 20:08:46.739860 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:08:46.742769 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 13 20:08:46.754370 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 13 20:08:46.771996 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 13 20:08:46.782589 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 13 20:08:46.786038 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:08:46.786225 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:08:46.799877 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:08:46.811208 augenrules[1441]: No rules Apr 13 20:08:46.813685 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 20:08:46.819709 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Apr 13 20:08:46.821777 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 13 20:08:46.827040 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 13 20:08:46.829882 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 13 20:08:46.847030 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 13 20:08:46.860867 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 13 20:08:46.871500 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 20:08:46.899033 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 13 20:08:46.901559 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 20:08:46.916003 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 13 20:08:46.928665 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 13 20:08:46.929617 systemd[1]: Reached target time-set.target - System Time Set. Apr 13 20:08:46.935859 lvm[1462]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 20:08:46.941038 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:08:46.953083 systemd-networkd[1408]: lo: Link UP Apr 13 20:08:46.953095 systemd-networkd[1408]: lo: Gained carrier Apr 13 20:08:46.956181 systemd-resolved[1409]: Positive Trust Anchors: Apr 13 20:08:46.956199 systemd-resolved[1409]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:08:46.956221 systemd-resolved[1409]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 20:08:46.957784 systemd-networkd[1408]: Enumeration completed Apr 13 20:08:46.957864 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 20:08:46.962775 systemd-networkd[1408]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:08:46.962783 systemd-networkd[1408]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 20:08:46.963582 systemd-networkd[1408]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:08:46.963587 systemd-networkd[1408]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 20:08:46.964328 systemd-networkd[1408]: eth0: Link UP Apr 13 20:08:46.964372 systemd-networkd[1408]: eth0: Gained carrier Apr 13 20:08:46.964417 systemd-networkd[1408]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:08:46.964835 systemd-resolved[1409]: Using system hostname 'ci-4081-3-7-2-642afe6700'. Apr 13 20:08:46.971123 systemd-networkd[1408]: eth1: Link UP Apr 13 20:08:46.971173 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 20:08:46.971323 systemd-networkd[1408]: eth1: Gained carrier Apr 13 20:08:46.971399 systemd-networkd[1408]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:08:46.972378 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 20:08:46.973172 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 13 20:08:46.975960 systemd[1]: Reached target network.target - Network. Apr 13 20:08:46.976921 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 20:08:46.979266 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 20:08:46.979712 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 13 20:08:46.980087 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 13 20:08:46.980931 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 13 20:08:46.982936 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 13 20:08:46.985959 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 13 20:08:46.986412 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 13 20:08:46.986442 systemd[1]: Reached target paths.target - Path Units. Apr 13 20:08:46.986861 systemd[1]: Reached target timers.target - Timer Units. Apr 13 20:08:46.988832 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 13 20:08:46.991482 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 13 20:08:46.999917 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 13 20:08:47.001802 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Apr 13 20:08:47.002301 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 20:08:47.003474 systemd[1]: Reached target basic.target - Basic System. Apr 13 20:08:47.004126 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 13 20:08:47.004439 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 13 20:08:47.005553 systemd[1]: Starting containerd.service - containerd container runtime... Apr 13 20:08:47.009790 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 13 20:08:47.013308 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 13 20:08:47.014361 systemd-networkd[1408]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 13 20:08:47.017567 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection. Apr 13 20:08:47.019795 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 13 20:08:47.023790 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 13 20:08:47.024153 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 13 20:08:47.026102 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 13 20:08:47.029036 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 13 20:08:47.033775 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 13 20:08:47.035789 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 13 20:08:47.039783 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 13 20:08:47.053794 systemd[1]: Starting systemd-logind.service - User Login Management... 
Apr 13 20:08:47.055384 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 13 20:08:47.055820 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 13 20:08:47.061703 systemd-networkd[1408]: eth0: DHCPv4 address 62.238.3.135/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 13 20:08:47.064877 systemd[1]: Starting update-engine.service - Update Engine... Apr 13 20:08:47.068102 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 13 20:08:47.070756 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection. Apr 13 20:08:47.085702 extend-filesystems[1475]: Found loop4 Apr 13 20:08:47.085702 extend-filesystems[1475]: Found loop5 Apr 13 20:08:47.085702 extend-filesystems[1475]: Found loop6 Apr 13 20:08:47.085702 extend-filesystems[1475]: Found loop7 Apr 13 20:08:47.085702 extend-filesystems[1475]: Found sda Apr 13 20:08:47.085702 extend-filesystems[1475]: Found sda1 Apr 13 20:08:47.085702 extend-filesystems[1475]: Found sda2 Apr 13 20:08:47.085702 extend-filesystems[1475]: Found sda3 Apr 13 20:08:47.085702 extend-filesystems[1475]: Found usr Apr 13 20:08:47.085702 extend-filesystems[1475]: Found sda4 Apr 13 20:08:47.085702 extend-filesystems[1475]: Found sda6 Apr 13 20:08:47.085702 extend-filesystems[1475]: Found sda7 Apr 13 20:08:47.085702 extend-filesystems[1475]: Found sda9 Apr 13 20:08:47.085702 extend-filesystems[1475]: Checking size of /dev/sda9 Apr 13 20:08:47.150263 dbus-daemon[1473]: [system] SELinux support is enabled Apr 13 20:08:47.171912 extend-filesystems[1475]: Resized partition /dev/sda9 Apr 13 20:08:47.173850 jq[1474]: false Apr 13 20:08:47.097798 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Apr 13 20:08:47.173983 coreos-metadata[1472]: Apr 13 20:08:47.087 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 13 20:08:47.173983 coreos-metadata[1472]: Apr 13 20:08:47.090 INFO Fetch successful
Apr 13 20:08:47.173983 coreos-metadata[1472]: Apr 13 20:08:47.090 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 13 20:08:47.173983 coreos-metadata[1472]: Apr 13 20:08:47.090 INFO Fetch successful
Apr 13 20:08:47.174190 extend-filesystems[1502]: resize2fs 1.47.1 (20-May-2024)
Apr 13 20:08:47.098003 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 20:08:47.185468 update_engine[1485]: I20260413 20:08:47.168496 1485 main.cc:92] Flatcar Update Engine starting
Apr 13 20:08:47.192134 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks
Apr 13 20:08:47.150425 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 20:08:47.166070 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 20:08:47.166847 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 20:08:47.169162 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 20:08:47.169183 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 20:08:47.170216 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 20:08:47.200863 update_engine[1485]: I20260413 20:08:47.197618 1485 update_check_scheduler.cc:74] Next update check in 2m36s
Apr 13 20:08:47.200902 jq[1486]: true
Apr 13 20:08:47.170230 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 20:08:47.181247 (ntainerd)[1501]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 20:08:47.201328 tar[1496]: linux-amd64/LICENSE
Apr 13 20:08:47.201328 tar[1496]: linux-amd64/helm
Apr 13 20:08:47.198216 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 20:08:47.201831 jq[1504]: true
Apr 13 20:08:47.208790 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 20:08:47.225090 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1322)
Apr 13 20:08:47.228615 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 13 20:08:47.241095 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 20:08:47.242674 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 20:08:47.250497 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 13 20:08:47.251875 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 20:08:47.326520 systemd-logind[1483]: New seat seat0.
Apr 13 20:08:47.329662 bash[1540]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 20:08:47.330926 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 20:08:47.341025 systemd-logind[1483]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 13 20:08:47.341048 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 13 20:08:47.349179 systemd[1]: Starting sshkeys.service...
Apr 13 20:08:47.350930 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 20:08:47.382568 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 13 20:08:47.397088 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 13 20:08:47.437164 coreos-metadata[1550]: Apr 13 20:08:47.437 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 13 20:08:47.439006 coreos-metadata[1550]: Apr 13 20:08:47.438 INFO Fetch successful
Apr 13 20:08:47.446086 unknown[1550]: wrote ssh authorized keys file for user: core
Apr 13 20:08:47.463713 containerd[1501]: time="2026-04-13T20:08:47.461921436Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 20:08:47.471781 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 13 20:08:47.491827 containerd[1501]: time="2026-04-13T20:08:47.491784021Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:08:47.496068 containerd[1501]: time="2026-04-13T20:08:47.494140613Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:08:47.496068 containerd[1501]: time="2026-04-13T20:08:47.494173633Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 20:08:47.496068 containerd[1501]: time="2026-04-13T20:08:47.494193823Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 20:08:47.497164 containerd[1501]: time="2026-04-13T20:08:47.496870815Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 20:08:47.497164 containerd[1501]: time="2026-04-13T20:08:47.496891885Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 20:08:47.497164 containerd[1501]: time="2026-04-13T20:08:47.496950465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:08:47.497164 containerd[1501]: time="2026-04-13T20:08:47.496959695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:08:47.501893 containerd[1501]: time="2026-04-13T20:08:47.497751266Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:08:47.501893 containerd[1501]: time="2026-04-13T20:08:47.497766766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 20:08:47.501893 containerd[1501]: time="2026-04-13T20:08:47.497778376Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:08:47.501893 containerd[1501]: time="2026-04-13T20:08:47.497785836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 20:08:47.502402 containerd[1501]: time="2026-04-13T20:08:47.502026379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:08:47.502402 containerd[1501]: time="2026-04-13T20:08:47.502276729Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:08:47.502616 containerd[1501]: time="2026-04-13T20:08:47.502601720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:08:47.502703 containerd[1501]: time="2026-04-13T20:08:47.502692940Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 20:08:47.502856 containerd[1501]: time="2026-04-13T20:08:47.502845290Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 20:08:47.502957 containerd[1501]: time="2026-04-13T20:08:47.502946160Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 20:08:47.504636 update-ssh-keys[1560]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 20:08:47.505840 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 13 20:08:47.512540 systemd[1]: Finished sshkeys.service.
Apr 13 20:08:47.527832 containerd[1501]: time="2026-04-13T20:08:47.526672970Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 20:08:47.527832 containerd[1501]: time="2026-04-13T20:08:47.526761500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 13 20:08:47.527832 containerd[1501]: time="2026-04-13T20:08:47.526775040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 13 20:08:47.527832 containerd[1501]: time="2026-04-13T20:08:47.526787560Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 13 20:08:47.527832 containerd[1501]: time="2026-04-13T20:08:47.526798240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 20:08:47.527832 containerd[1501]: time="2026-04-13T20:08:47.527132920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 20:08:47.527832 containerd[1501]: time="2026-04-13T20:08:47.527302200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 13 20:08:47.527832 containerd[1501]: time="2026-04-13T20:08:47.527393000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 13 20:08:47.527832 containerd[1501]: time="2026-04-13T20:08:47.527402580Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 13 20:08:47.527832 containerd[1501]: time="2026-04-13T20:08:47.527415050Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 13 20:08:47.527832 containerd[1501]: time="2026-04-13T20:08:47.527427690Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 13 20:08:47.527832 containerd[1501]: time="2026-04-13T20:08:47.527436700Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 13 20:08:47.527832 containerd[1501]: time="2026-04-13T20:08:47.527446010Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 13 20:08:47.527832 containerd[1501]: time="2026-04-13T20:08:47.527456650Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 13 20:08:47.528084 containerd[1501]: time="2026-04-13T20:08:47.527466960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 13 20:08:47.528084 containerd[1501]: time="2026-04-13T20:08:47.527476760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 13 20:08:47.528084 containerd[1501]: time="2026-04-13T20:08:47.527485830Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 13 20:08:47.528084 containerd[1501]: time="2026-04-13T20:08:47.527493891Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 13 20:08:47.528084 containerd[1501]: time="2026-04-13T20:08:47.527508321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 13 20:08:47.528084 containerd[1501]: time="2026-04-13T20:08:47.527518401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 13 20:08:47.528084 containerd[1501]: time="2026-04-13T20:08:47.527528311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 13 20:08:47.528084 containerd[1501]: time="2026-04-13T20:08:47.527538541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 13 20:08:47.528084 containerd[1501]: time="2026-04-13T20:08:47.527547101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 13 20:08:47.528084 containerd[1501]: time="2026-04-13T20:08:47.527556791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 13 20:08:47.528084 containerd[1501]: time="2026-04-13T20:08:47.527565601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 13 20:08:47.528084 containerd[1501]: time="2026-04-13T20:08:47.527576211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 13 20:08:47.528084 containerd[1501]: time="2026-04-13T20:08:47.527587381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 13 20:08:47.528084 containerd[1501]: time="2026-04-13T20:08:47.527597301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 13 20:08:47.528293 containerd[1501]: time="2026-04-13T20:08:47.527605361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 13 20:08:47.528293 containerd[1501]: time="2026-04-13T20:08:47.527613801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 13 20:08:47.528293 containerd[1501]: time="2026-04-13T20:08:47.527622571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 13 20:08:47.528293 containerd[1501]: time="2026-04-13T20:08:47.527632921Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 13 20:08:47.529393 containerd[1501]: time="2026-04-13T20:08:47.529167662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 13 20:08:47.529393 containerd[1501]: time="2026-04-13T20:08:47.529187782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 13 20:08:47.529393 containerd[1501]: time="2026-04-13T20:08:47.529316052Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 13 20:08:47.529773 containerd[1501]: time="2026-04-13T20:08:47.529559242Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 13 20:08:47.529773 containerd[1501]: time="2026-04-13T20:08:47.529578612Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 13 20:08:47.529773 containerd[1501]: time="2026-04-13T20:08:47.529587962Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 13 20:08:47.530217 containerd[1501]: time="2026-04-13T20:08:47.529597362Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 13 20:08:47.530217 containerd[1501]: time="2026-04-13T20:08:47.530114243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 13 20:08:47.530217 containerd[1501]: time="2026-04-13T20:08:47.530127653Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 13 20:08:47.530217 containerd[1501]: time="2026-04-13T20:08:47.530136643Z" level=info msg="NRI interface is disabled by configuration."
Apr 13 20:08:47.530217 containerd[1501]: time="2026-04-13T20:08:47.530144253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 13 20:08:47.531766 containerd[1501]: time="2026-04-13T20:08:47.531140534Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 13 20:08:47.531766 containerd[1501]: time="2026-04-13T20:08:47.531321084Z" level=info msg="Connect containerd service"
Apr 13 20:08:47.531766 containerd[1501]: time="2026-04-13T20:08:47.531688684Z" level=info msg="using legacy CRI server"
Apr 13 20:08:47.531766 containerd[1501]: time="2026-04-13T20:08:47.531697554Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 13 20:08:47.532281 containerd[1501]: time="2026-04-13T20:08:47.532044204Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 13 20:08:47.533679 containerd[1501]: time="2026-04-13T20:08:47.533545606Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 13 20:08:47.534014 containerd[1501]: time="2026-04-13T20:08:47.533982536Z" level=info msg="Start subscribing containerd event"
Apr 13 20:08:47.534443 containerd[1501]: time="2026-04-13T20:08:47.534244976Z" level=info msg="Start recovering state"
Apr 13 20:08:47.534443 containerd[1501]: time="2026-04-13T20:08:47.534305776Z" level=info msg="Start event monitor"
Apr 13 20:08:47.534443 containerd[1501]: time="2026-04-13T20:08:47.534324166Z" level=info msg="Start snapshots syncer"
Apr 13 20:08:47.534443 containerd[1501]: time="2026-04-13T20:08:47.534331166Z" level=info msg="Start cni network conf syncer for default"
Apr 13 20:08:47.534443 containerd[1501]: time="2026-04-13T20:08:47.534337046Z" level=info msg="Start streaming server"
Apr 13 20:08:47.535773 containerd[1501]: time="2026-04-13T20:08:47.535620297Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 13 20:08:47.535821 containerd[1501]: time="2026-04-13T20:08:47.535810107Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 13 20:08:47.536095 systemd[1]: Started containerd.service - containerd container runtime.
Apr 13 20:08:47.540629 containerd[1501]: time="2026-04-13T20:08:47.540423641Z" level=info msg="containerd successfully booted in 0.079614s"
Apr 13 20:08:47.556277 kernel: EXT4-fs (sda9): resized filesystem to 19393531
Apr 13 20:08:47.580052 extend-filesystems[1502]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 13 20:08:47.580052 extend-filesystems[1502]: old_desc_blocks = 1, new_desc_blocks = 10
Apr 13 20:08:47.580052 extend-filesystems[1502]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long.
Apr 13 20:08:47.586860 extend-filesystems[1475]: Resized filesystem in /dev/sda9
Apr 13 20:08:47.586860 extend-filesystems[1475]: Found sr0
Apr 13 20:08:47.586392 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 20:08:47.586586 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 20:08:47.596658 sshd_keygen[1512]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 13 20:08:47.615993 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 13 20:08:47.627364 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 13 20:08:47.633694 systemd[1]: Started sshd@0-62.238.3.135:22-20.229.252.112:50600.service - OpenSSH per-connection server daemon (20.229.252.112:50600).
Apr 13 20:08:47.643254 systemd[1]: issuegen.service: Deactivated successfully.
Apr 13 20:08:47.643441 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 13 20:08:47.655898 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 13 20:08:47.679663 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 13 20:08:47.690141 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 13 20:08:47.696025 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 13 20:08:47.698363 systemd[1]: Reached target getty.target - Login Prompts.
Apr 13 20:08:47.855949 sshd[1577]: Accepted publickey for core from 20.229.252.112 port 50600 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:08:47.857844 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:08:47.869911 systemd-logind[1483]: New session 1 of user core.
Apr 13 20:08:47.870163 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 13 20:08:47.882572 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 13 20:08:47.897719 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 13 20:08:47.906866 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 13 20:08:47.913156 tar[1496]: linux-amd64/README.md
Apr 13 20:08:47.920301 (systemd)[1589]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 13 20:08:47.926382 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 13 20:08:48.010008 systemd[1589]: Queued start job for default target default.target.
Apr 13 20:08:48.016720 systemd[1589]: Created slice app.slice - User Application Slice.
Apr 13 20:08:48.016755 systemd[1589]: Reached target paths.target - Paths.
Apr 13 20:08:48.016766 systemd[1589]: Reached target timers.target - Timers.
Apr 13 20:08:48.018141 systemd[1589]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 13 20:08:48.028147 systemd[1589]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 13 20:08:48.028200 systemd[1589]: Reached target sockets.target - Sockets.
Apr 13 20:08:48.028211 systemd[1589]: Reached target basic.target - Basic System.
Apr 13 20:08:48.028245 systemd[1589]: Reached target default.target - Main User Target.
Apr 13 20:08:48.028274 systemd[1589]: Startup finished in 102ms.
Apr 13 20:08:48.028367 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 13 20:08:48.037782 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 13 20:08:48.223042 systemd[1]: Started sshd@1-62.238.3.135:22-20.229.252.112:50604.service - OpenSSH per-connection server daemon (20.229.252.112:50604).
Apr 13 20:08:48.435704 sshd[1603]: Accepted publickey for core from 20.229.252.112 port 50604 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:08:48.437696 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:08:48.446167 systemd-logind[1483]: New session 2 of user core.
Apr 13 20:08:48.451883 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 13 20:08:48.608825 sshd[1603]: pam_unix(sshd:session): session closed for user core
Apr 13 20:08:48.615340 systemd[1]: sshd@1-62.238.3.135:22-20.229.252.112:50604.service: Deactivated successfully.
Apr 13 20:08:48.618758 systemd[1]: session-2.scope: Deactivated successfully.
Apr 13 20:08:48.619871 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit.
Apr 13 20:08:48.621940 systemd-logind[1483]: Removed session 2.
Apr 13 20:08:48.661045 systemd[1]: Started sshd@2-62.238.3.135:22-20.229.252.112:50608.service - OpenSSH per-connection server daemon (20.229.252.112:50608).
Apr 13 20:08:48.779886 systemd-networkd[1408]: eth1: Gained IPv6LL
Apr 13 20:08:48.781210 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection.
Apr 13 20:08:48.785695 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 20:08:48.789541 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 20:08:48.799097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:08:48.815576 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 20:08:48.859288 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 13 20:08:48.884410 sshd[1610]: Accepted publickey for core from 20.229.252.112 port 50608 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:08:48.887190 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:08:48.895783 systemd-logind[1483]: New session 3 of user core.
Apr 13 20:08:48.903836 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 13 20:08:48.907875 systemd-networkd[1408]: eth0: Gained IPv6LL
Apr 13 20:08:48.908853 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection.
Apr 13 20:08:49.064862 sshd[1610]: pam_unix(sshd:session): session closed for user core
Apr 13 20:08:49.071563 systemd[1]: sshd@2-62.238.3.135:22-20.229.252.112:50608.service: Deactivated successfully.
Apr 13 20:08:49.074863 systemd[1]: session-3.scope: Deactivated successfully.
Apr 13 20:08:49.077488 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit.
Apr 13 20:08:49.079895 systemd-logind[1483]: Removed session 3.
Apr 13 20:08:49.739710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:08:49.743416 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 13 20:08:49.743617 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:08:49.745793 systemd[1]: Startup finished in 1.497s (kernel) + 5.775s (initrd) + 5.274s (userspace) = 12.548s.
Apr 13 20:08:50.276386 kubelet[1632]: E0413 20:08:50.276290 1632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:08:50.279199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:08:50.279505 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:08:59.113855 systemd[1]: Started sshd@3-62.238.3.135:22-20.229.252.112:60864.service - OpenSSH per-connection server daemon (20.229.252.112:60864).
Apr 13 20:08:59.336508 sshd[1644]: Accepted publickey for core from 20.229.252.112 port 60864 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:08:59.339418 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:08:59.347074 systemd-logind[1483]: New session 4 of user core.
Apr 13 20:08:59.354873 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 13 20:08:59.510236 sshd[1644]: pam_unix(sshd:session): session closed for user core
Apr 13 20:08:59.515341 systemd[1]: sshd@3-62.238.3.135:22-20.229.252.112:60864.service: Deactivated successfully.
Apr 13 20:08:59.519565 systemd[1]: session-4.scope: Deactivated successfully.
Apr 13 20:08:59.521831 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit.
Apr 13 20:08:59.523835 systemd-logind[1483]: Removed session 4.
Apr 13 20:08:59.555167 systemd[1]: Started sshd@4-62.238.3.135:22-20.229.252.112:60866.service - OpenSSH per-connection server daemon (20.229.252.112:60866).
Apr 13 20:08:59.782629 sshd[1651]: Accepted publickey for core from 20.229.252.112 port 60866 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:08:59.783452 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:08:59.791439 systemd-logind[1483]: New session 5 of user core.
Apr 13 20:08:59.797863 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 13 20:08:59.941636 sshd[1651]: pam_unix(sshd:session): session closed for user core
Apr 13 20:08:59.946310 systemd[1]: sshd@4-62.238.3.135:22-20.229.252.112:60866.service: Deactivated successfully.
Apr 13 20:08:59.949442 systemd[1]: session-5.scope: Deactivated successfully.
Apr 13 20:08:59.950437 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit.
Apr 13 20:08:59.952208 systemd-logind[1483]: Removed session 5.
Apr 13 20:08:59.987840 systemd[1]: Started sshd@5-62.238.3.135:22-20.229.252.112:60874.service - OpenSSH per-connection server daemon (20.229.252.112:60874).
Apr 13 20:09:00.214683 sshd[1658]: Accepted publickey for core from 20.229.252.112 port 60874 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:09:00.216569 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:09:00.224129 systemd-logind[1483]: New session 6 of user core.
Apr 13 20:09:00.233869 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 13 20:09:00.290448 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 13 20:09:00.297223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:09:00.385097 sshd[1658]: pam_unix(sshd:session): session closed for user core Apr 13 20:09:00.388157 systemd[1]: sshd@5-62.238.3.135:22-20.229.252.112:60874.service: Deactivated successfully. Apr 13 20:09:00.391178 systemd[1]: session-6.scope: Deactivated successfully. Apr 13 20:09:00.393344 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit. Apr 13 20:09:00.394295 systemd-logind[1483]: Removed session 6. Apr 13 20:09:00.434887 systemd[1]: Started sshd@6-62.238.3.135:22-20.229.252.112:60878.service - OpenSSH per-connection server daemon (20.229.252.112:60878). Apr 13 20:09:00.484803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:09:00.488492 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 20:09:00.521808 kubelet[1675]: E0413 20:09:00.521739 1675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 20:09:00.525508 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 20:09:00.525994 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 20:09:00.635123 sshd[1668]: Accepted publickey for core from 20.229.252.112 port 60878 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:09:00.636282 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:09:00.644727 systemd-logind[1483]: New session 7 of user core. Apr 13 20:09:00.659905 systemd[1]: Started session-7.scope - Session 7 of User core. 
Apr 13 20:09:00.792852 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 13 20:09:00.793212 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:09:00.812903 sudo[1684]: pam_unix(sudo:session): session closed for user root Apr 13 20:09:00.844778 sshd[1668]: pam_unix(sshd:session): session closed for user core Apr 13 20:09:00.849846 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit. Apr 13 20:09:00.850866 systemd[1]: sshd@6-62.238.3.135:22-20.229.252.112:60878.service: Deactivated successfully. Apr 13 20:09:00.854075 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 20:09:00.855331 systemd-logind[1483]: Removed session 7. Apr 13 20:09:00.889017 systemd[1]: Started sshd@7-62.238.3.135:22-20.229.252.112:60890.service - OpenSSH per-connection server daemon (20.229.252.112:60890). Apr 13 20:09:01.091743 sshd[1689]: Accepted publickey for core from 20.229.252.112 port 60890 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:09:01.093553 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:09:01.099749 systemd-logind[1483]: New session 8 of user core. Apr 13 20:09:01.113956 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 13 20:09:01.228931 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 13 20:09:01.229326 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:09:01.233684 sudo[1693]: pam_unix(sudo:session): session closed for user root Apr 13 20:09:01.241419 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 13 20:09:01.241926 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:09:01.257905 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 13 20:09:01.260853 auditctl[1696]: No rules Apr 13 20:09:01.261366 systemd[1]: audit-rules.service: Deactivated successfully. Apr 13 20:09:01.261595 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 13 20:09:01.267026 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 20:09:01.294239 augenrules[1714]: No rules Apr 13 20:09:01.295016 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 20:09:01.296341 sudo[1692]: pam_unix(sudo:session): session closed for user root Apr 13 20:09:01.326888 sshd[1689]: pam_unix(sshd:session): session closed for user core Apr 13 20:09:01.330862 systemd[1]: sshd@7-62.238.3.135:22-20.229.252.112:60890.service: Deactivated successfully. Apr 13 20:09:01.333076 systemd[1]: session-8.scope: Deactivated successfully. Apr 13 20:09:01.334556 systemd-logind[1483]: Session 8 logged out. Waiting for processes to exit. Apr 13 20:09:01.336039 systemd-logind[1483]: Removed session 8. Apr 13 20:09:01.376905 systemd[1]: Started sshd@8-62.238.3.135:22-20.229.252.112:60894.service - OpenSSH per-connection server daemon (20.229.252.112:60894). 
Apr 13 20:09:01.577185 sshd[1722]: Accepted publickey for core from 20.229.252.112 port 60894 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:09:01.580060 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:09:01.589744 systemd-logind[1483]: New session 9 of user core. Apr 13 20:09:01.596027 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 13 20:09:01.723635 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 13 20:09:01.724465 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:09:02.026023 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 13 20:09:02.026497 (dockerd)[1740]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 13 20:09:02.257242 dockerd[1740]: time="2026-04-13T20:09:02.257172221Z" level=info msg="Starting up" Apr 13 20:09:02.354028 dockerd[1740]: time="2026-04-13T20:09:02.353990512Z" level=info msg="Loading containers: start." Apr 13 20:09:02.443667 kernel: Initializing XFRM netlink socket Apr 13 20:09:02.464354 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection. Apr 13 20:09:03.355239 systemd-resolved[1409]: Clock change detected. Flushing caches. Apr 13 20:09:03.355557 systemd-timesyncd[1411]: Contacted time server 162.19.170.154:123 (2.flatcar.pool.ntp.org). Apr 13 20:09:03.355607 systemd-timesyncd[1411]: Initial clock synchronization to Mon 2026-04-13 20:09:03.355189 UTC. Apr 13 20:09:03.382636 systemd-networkd[1408]: docker0: Link UP Apr 13 20:09:03.396572 dockerd[1740]: time="2026-04-13T20:09:03.396541486Z" level=info msg="Loading containers: done." 
Apr 13 20:09:03.411636 dockerd[1740]: time="2026-04-13T20:09:03.411602369Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 13 20:09:03.411765 dockerd[1740]: time="2026-04-13T20:09:03.411674989Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 13 20:09:03.411765 dockerd[1740]: time="2026-04-13T20:09:03.411754729Z" level=info msg="Daemon has completed initialization" Apr 13 20:09:03.444900 dockerd[1740]: time="2026-04-13T20:09:03.444593596Z" level=info msg="API listen on /run/docker.sock" Apr 13 20:09:03.445212 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 13 20:09:03.914767 containerd[1501]: time="2026-04-13T20:09:03.914699318Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\"" Apr 13 20:09:04.586688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount294120148.mount: Deactivated successfully. 
Apr 13 20:09:05.530729 containerd[1501]: time="2026-04-13T20:09:05.530667144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:05.531771 containerd[1501]: time="2026-04-13T20:09:05.531645165Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.6: active requests=0, bytes read=26947842" Apr 13 20:09:05.533894 containerd[1501]: time="2026-04-13T20:09:05.533056206Z" level=info msg="ImageCreate event name:\"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:05.535118 containerd[1501]: time="2026-04-13T20:09:05.535089658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:05.535929 containerd[1501]: time="2026-04-13T20:09:05.535729868Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.6\" with image id \"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\", size \"26944341\" in 1.62096085s" Apr 13 20:09:05.535929 containerd[1501]: time="2026-04-13T20:09:05.535756428Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\" returns image reference \"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\"" Apr 13 20:09:05.536474 containerd[1501]: time="2026-04-13T20:09:05.536452099Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\"" Apr 13 20:09:06.519377 containerd[1501]: time="2026-04-13T20:09:06.519324478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:06.520283 containerd[1501]: time="2026-04-13T20:09:06.520112538Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.6: active requests=0, bytes read=21165834" Apr 13 20:09:06.522325 containerd[1501]: time="2026-04-13T20:09:06.521158239Z" level=info msg="ImageCreate event name:\"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:06.524037 containerd[1501]: time="2026-04-13T20:09:06.523218631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:06.524037 containerd[1501]: time="2026-04-13T20:09:06.523845791Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.6\" with image id \"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\", size \"22695997\" in 987.372752ms" Apr 13 20:09:06.524037 containerd[1501]: time="2026-04-13T20:09:06.523866891Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\" returns image reference \"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\"" Apr 13 20:09:06.524496 containerd[1501]: time="2026-04-13T20:09:06.524475722Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\"" Apr 13 20:09:07.450444 containerd[1501]: time="2026-04-13T20:09:07.450397563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:07.451532 containerd[1501]: time="2026-04-13T20:09:07.451325034Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.6: active requests=0, bytes read=15729869" Apr 13 20:09:07.452401 containerd[1501]: time="2026-04-13T20:09:07.452379715Z" level=info msg="ImageCreate event name:\"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:07.455211 containerd[1501]: time="2026-04-13T20:09:07.455180437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:07.455980 containerd[1501]: time="2026-04-13T20:09:07.455932358Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.6\" with image id \"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\", size \"17260050\" in 931.436396ms" Apr 13 20:09:07.455980 containerd[1501]: time="2026-04-13T20:09:07.455955158Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\" returns image reference \"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\"" Apr 13 20:09:07.456580 containerd[1501]: time="2026-04-13T20:09:07.456563548Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\"" Apr 13 20:09:08.446982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2111428198.mount: Deactivated successfully. 
Apr 13 20:09:08.654139 containerd[1501]: time="2026-04-13T20:09:08.654077076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:08.655218 containerd[1501]: time="2026-04-13T20:09:08.655042847Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.6: active requests=0, bytes read=25861802" Apr 13 20:09:08.656631 containerd[1501]: time="2026-04-13T20:09:08.655952378Z" level=info msg="ImageCreate event name:\"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:08.658193 containerd[1501]: time="2026-04-13T20:09:08.657638569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:08.658193 containerd[1501]: time="2026-04-13T20:09:08.658085569Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.6\" with image id \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\", size \"25860793\" in 1.201500491s" Apr 13 20:09:08.658193 containerd[1501]: time="2026-04-13T20:09:08.658108579Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\" returns image reference \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\"" Apr 13 20:09:08.658866 containerd[1501]: time="2026-04-13T20:09:08.658839680Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 13 20:09:09.178866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount992485100.mount: Deactivated successfully. 
Apr 13 20:09:10.030633 containerd[1501]: time="2026-04-13T20:09:10.030573533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:10.031563 containerd[1501]: time="2026-04-13T20:09:10.031414633Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388101" Apr 13 20:09:10.032580 containerd[1501]: time="2026-04-13T20:09:10.032549714Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:10.035363 containerd[1501]: time="2026-04-13T20:09:10.034825406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:10.035920 containerd[1501]: time="2026-04-13T20:09:10.035742207Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.376826697s" Apr 13 20:09:10.035920 containerd[1501]: time="2026-04-13T20:09:10.035779557Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 13 20:09:10.036683 containerd[1501]: time="2026-04-13T20:09:10.036660678Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 13 20:09:10.477659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3311365352.mount: Deactivated successfully. 
Apr 13 20:09:10.484125 containerd[1501]: time="2026-04-13T20:09:10.484058251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:10.485025 containerd[1501]: time="2026-04-13T20:09:10.484903091Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321240" Apr 13 20:09:10.487147 containerd[1501]: time="2026-04-13T20:09:10.485892432Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:10.488361 containerd[1501]: time="2026-04-13T20:09:10.488119014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:10.488914 containerd[1501]: time="2026-04-13T20:09:10.488885745Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 452.199847ms" Apr 13 20:09:10.488969 containerd[1501]: time="2026-04-13T20:09:10.488917185Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 13 20:09:10.489889 containerd[1501]: time="2026-04-13T20:09:10.489862605Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 13 20:09:11.031246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount482081723.mount: Deactivated successfully. Apr 13 20:09:11.404674 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Apr 13 20:09:11.410895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:09:11.529116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:09:11.533244 (kubelet)[2068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 20:09:11.573198 kubelet[2068]: E0413 20:09:11.572891 2068 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 20:09:11.576300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 20:09:11.576486 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 20:09:11.735973 containerd[1501]: time="2026-04-13T20:09:11.735666153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:11.736851 containerd[1501]: time="2026-04-13T20:09:11.736807374Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874325" Apr 13 20:09:11.737985 containerd[1501]: time="2026-04-13T20:09:11.737951775Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:11.740631 containerd[1501]: time="2026-04-13T20:09:11.740275307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:11.741390 containerd[1501]: time="2026-04-13T20:09:11.741368618Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.251479383s" Apr 13 20:09:11.741440 containerd[1501]: time="2026-04-13T20:09:11.741393708Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 13 20:09:13.414132 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:09:13.422880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:09:13.451492 systemd[1]: Reloading requested from client PID 2114 ('systemctl') (unit session-9.scope)... Apr 13 20:09:13.451509 systemd[1]: Reloading... Apr 13 20:09:13.566358 zram_generator::config[2153]: No configuration found. Apr 13 20:09:13.663268 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:09:13.724798 systemd[1]: Reloading finished in 272 ms. Apr 13 20:09:13.768868 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 13 20:09:13.768958 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 13 20:09:13.769256 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:09:13.775526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:09:13.953910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 20:09:13.962094 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:09:14.015472 kubelet[2206]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 20:09:14.015472 kubelet[2206]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:09:14.015472 kubelet[2206]: I0413 20:09:14.015234 2206 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 20:09:14.450234 kubelet[2206]: I0413 20:09:14.450167 2206 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 13 20:09:14.450234 kubelet[2206]: I0413 20:09:14.450187 2206 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 20:09:14.455437 kubelet[2206]: I0413 20:09:14.455358 2206 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 20:09:14.455437 kubelet[2206]: I0413 20:09:14.455385 2206 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 20:09:14.455665 kubelet[2206]: I0413 20:09:14.455616 2206 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 20:09:14.487085 kubelet[2206]: E0413 20:09:14.487005 2206 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://62.238.3.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 62.238.3.135:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 20:09:14.489826 kubelet[2206]: I0413 20:09:14.487715 2206 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:09:14.502357 kubelet[2206]: E0413 20:09:14.498040 2206 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 20:09:14.502357 kubelet[2206]: I0413 20:09:14.498082 2206 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 13 20:09:14.502357 kubelet[2206]: I0413 20:09:14.501871 2206 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 13 20:09:14.507342 kubelet[2206]: I0413 20:09:14.507303 2206 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:09:14.507457 kubelet[2206]: I0413 20:09:14.507330 2206 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-2-642afe6700","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 20:09:14.507457 kubelet[2206]: I0413 20:09:14.507451 2206 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 
20:09:14.507457 kubelet[2206]: I0413 20:09:14.507458 2206 container_manager_linux.go:306] "Creating device plugin manager" Apr 13 20:09:14.507573 kubelet[2206]: I0413 20:09:14.507539 2206 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 13 20:09:14.517454 kubelet[2206]: I0413 20:09:14.517426 2206 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:09:14.517639 kubelet[2206]: I0413 20:09:14.517617 2206 kubelet.go:475] "Attempting to sync node with API server" Apr 13 20:09:14.517639 kubelet[2206]: I0413 20:09:14.517630 2206 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 20:09:14.517675 kubelet[2206]: I0413 20:09:14.517647 2206 kubelet.go:387] "Adding apiserver pod source" Apr 13 20:09:14.517919 kubelet[2206]: I0413 20:09:14.517660 2206 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 20:09:14.518228 kubelet[2206]: E0413 20:09:14.518202 2206 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://62.238.3.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-2-642afe6700&limit=500&resourceVersion=0\": dial tcp 62.238.3.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 20:09:14.520174 kubelet[2206]: E0413 20:09:14.519880 2206 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://62.238.3.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 62.238.3.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:09:14.520174 kubelet[2206]: I0413 20:09:14.519982 2206 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 20:09:14.520497 kubelet[2206]: I0413 20:09:14.520486 2206 kubelet.go:940] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 20:09:14.520573 kubelet[2206]: I0413 20:09:14.520551 2206 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 20:09:14.520665 kubelet[2206]: W0413 20:09:14.520657 2206 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 13 20:09:14.523052 kubelet[2206]: I0413 20:09:14.523041 2206 server.go:1262] "Started kubelet" Apr 13 20:09:14.524317 kubelet[2206]: I0413 20:09:14.524304 2206 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 20:09:14.527314 kubelet[2206]: E0413 20:09:14.526291 2206 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://62.238.3.135:6443/api/v1/namespaces/default/events\": dial tcp 62.238.3.135:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-2-642afe6700.18a60380be0648ab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-2-642afe6700,UID:ci-4081-3-7-2-642afe6700,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-2-642afe6700,},FirstTimestamp:2026-04-13 20:09:14.523019435 +0000 UTC m=+0.554336483,LastTimestamp:2026-04-13 20:09:14.523019435 +0000 UTC m=+0.554336483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-2-642afe6700,}" Apr 13 20:09:14.527475 kubelet[2206]: I0413 20:09:14.527459 2206 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:09:14.528664 kubelet[2206]: I0413 20:09:14.528652 2206 server.go:310] "Adding debug handlers to kubelet server" Apr 13 
20:09:14.531680 kubelet[2206]: I0413 20:09:14.531650 2206 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:09:14.531725 kubelet[2206]: I0413 20:09:14.531693 2206 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 20:09:14.531848 kubelet[2206]: I0413 20:09:14.531830 2206 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 20:09:14.532195 kubelet[2206]: I0413 20:09:14.531990 2206 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 20:09:14.535317 kubelet[2206]: I0413 20:09:14.534887 2206 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 13 20:09:14.535317 kubelet[2206]: E0413 20:09:14.535035 2206 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-2-642afe6700\" not found" Apr 13 20:09:14.535540 kubelet[2206]: E0413 20:09:14.535517 2206 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 20:09:14.535614 kubelet[2206]: E0413 20:09:14.535594 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://62.238.3.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-2-642afe6700?timeout=10s\": dial tcp 62.238.3.135:6443: connect: connection refused" interval="200ms" Apr 13 20:09:14.535727 kubelet[2206]: I0413 20:09:14.535709 2206 factory.go:223] Registration of the systemd container factory successfully Apr 13 20:09:14.535790 kubelet[2206]: I0413 20:09:14.535771 2206 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:09:14.536715 kubelet[2206]: I0413 20:09:14.536697 2206 reconciler.go:29] "Reconciler: start to sync state" Apr 13 20:09:14.536754 kubelet[2206]: I0413 20:09:14.536725 2206 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 20:09:14.536945 kubelet[2206]: E0413 20:09:14.536922 2206 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://62.238.3.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 62.238.3.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:09:14.537079 kubelet[2206]: I0413 20:09:14.537063 2206 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:09:14.544171 kubelet[2206]: I0413 20:09:14.544152 2206 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 13 20:09:14.545210 kubelet[2206]: I0413 20:09:14.545198 2206 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 13 20:09:14.545268 kubelet[2206]: I0413 20:09:14.545260 2206 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 13 20:09:14.545312 kubelet[2206]: I0413 20:09:14.545306 2206 kubelet.go:2428] "Starting kubelet main sync loop" Apr 13 20:09:14.545403 kubelet[2206]: E0413 20:09:14.545391 2206 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:09:14.552082 kubelet[2206]: E0413 20:09:14.552067 2206 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://62.238.3.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 62.238.3.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 20:09:14.561542 kubelet[2206]: I0413 20:09:14.561528 2206 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 20:09:14.561788 kubelet[2206]: I0413 20:09:14.561607 2206 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 20:09:14.561788 kubelet[2206]: I0413 20:09:14.561621 2206 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:09:14.563095 kubelet[2206]: I0413 20:09:14.562918 2206 policy_none.go:49] "None policy: Start" Apr 13 20:09:14.563095 kubelet[2206]: I0413 20:09:14.562933 2206 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 13 20:09:14.563095 kubelet[2206]: I0413 20:09:14.562943 2206 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 13 20:09:14.565377 kubelet[2206]: I0413 20:09:14.563825 2206 policy_none.go:47] "Start" Apr 13 20:09:14.570960 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 13 20:09:14.583829 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 13 20:09:14.593034 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 13 20:09:14.594141 kubelet[2206]: E0413 20:09:14.594118 2206 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:09:14.594291 kubelet[2206]: I0413 20:09:14.594273 2206 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 20:09:14.594310 kubelet[2206]: I0413 20:09:14.594285 2206 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:09:14.594829 kubelet[2206]: I0413 20:09:14.594788 2206 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 20:09:14.596361 kubelet[2206]: E0413 20:09:14.596322 2206 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 20:09:14.596478 kubelet[2206]: E0413 20:09:14.596379 2206 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-2-642afe6700\" not found" Apr 13 20:09:14.663004 systemd[1]: Created slice kubepods-burstable-pod08263d610ab2e5a419e44cbe56866e2e.slice - libcontainer container kubepods-burstable-pod08263d610ab2e5a419e44cbe56866e2e.slice. Apr 13 20:09:14.684509 kubelet[2206]: E0413 20:09:14.684442 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-2-642afe6700\" not found" node="ci-4081-3-7-2-642afe6700" Apr 13 20:09:14.691727 systemd[1]: Created slice kubepods-burstable-pod4464158112ae13ce45c1bd99c0c22877.slice - libcontainer container kubepods-burstable-pod4464158112ae13ce45c1bd99c0c22877.slice. 
Apr 13 20:09:14.696417 kubelet[2206]: E0413 20:09:14.696096 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-2-642afe6700\" not found" node="ci-4081-3-7-2-642afe6700" Apr 13 20:09:14.698729 kubelet[2206]: I0413 20:09:14.698088 2206 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-2-642afe6700" Apr 13 20:09:14.698729 kubelet[2206]: E0413 20:09:14.698574 2206 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://62.238.3.135:6443/api/v1/nodes\": dial tcp 62.238.3.135:6443: connect: connection refused" node="ci-4081-3-7-2-642afe6700" Apr 13 20:09:14.702118 systemd[1]: Created slice kubepods-burstable-pod0de25791aec59abfd16016e2eac9d1a1.slice - libcontainer container kubepods-burstable-pod0de25791aec59abfd16016e2eac9d1a1.slice. Apr 13 20:09:14.706830 kubelet[2206]: E0413 20:09:14.706806 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-2-642afe6700\" not found" node="ci-4081-3-7-2-642afe6700" Apr 13 20:09:14.736641 kubelet[2206]: E0413 20:09:14.736541 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://62.238.3.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-2-642afe6700?timeout=10s\": dial tcp 62.238.3.135:6443: connect: connection refused" interval="400ms" Apr 13 20:09:14.738036 kubelet[2206]: I0413 20:09:14.737785 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4464158112ae13ce45c1bd99c0c22877-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-2-642afe6700\" (UID: \"4464158112ae13ce45c1bd99c0c22877\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-2-642afe6700" Apr 13 20:09:14.738036 kubelet[2206]: I0413 20:09:14.737833 2206 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4464158112ae13ce45c1bd99c0c22877-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-2-642afe6700\" (UID: \"4464158112ae13ce45c1bd99c0c22877\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-2-642afe6700" Apr 13 20:09:14.738036 kubelet[2206]: I0413 20:09:14.737900 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08263d610ab2e5a419e44cbe56866e2e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-2-642afe6700\" (UID: \"08263d610ab2e5a419e44cbe56866e2e\") " pod="kube-system/kube-apiserver-ci-4081-3-7-2-642afe6700" Apr 13 20:09:14.738036 kubelet[2206]: I0413 20:09:14.737959 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4464158112ae13ce45c1bd99c0c22877-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-2-642afe6700\" (UID: \"4464158112ae13ce45c1bd99c0c22877\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-2-642afe6700" Apr 13 20:09:14.738036 kubelet[2206]: I0413 20:09:14.737982 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4464158112ae13ce45c1bd99c0c22877-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-2-642afe6700\" (UID: \"4464158112ae13ce45c1bd99c0c22877\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-2-642afe6700" Apr 13 20:09:14.738360 kubelet[2206]: I0413 20:09:14.738002 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4464158112ae13ce45c1bd99c0c22877-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-2-642afe6700\" 
(UID: \"4464158112ae13ce45c1bd99c0c22877\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-2-642afe6700" Apr 13 20:09:14.738360 kubelet[2206]: I0413 20:09:14.738042 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0de25791aec59abfd16016e2eac9d1a1-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-2-642afe6700\" (UID: \"0de25791aec59abfd16016e2eac9d1a1\") " pod="kube-system/kube-scheduler-ci-4081-3-7-2-642afe6700" Apr 13 20:09:14.738360 kubelet[2206]: I0413 20:09:14.738068 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08263d610ab2e5a419e44cbe56866e2e-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-2-642afe6700\" (UID: \"08263d610ab2e5a419e44cbe56866e2e\") " pod="kube-system/kube-apiserver-ci-4081-3-7-2-642afe6700" Apr 13 20:09:14.738360 kubelet[2206]: I0413 20:09:14.738093 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08263d610ab2e5a419e44cbe56866e2e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-2-642afe6700\" (UID: \"08263d610ab2e5a419e44cbe56866e2e\") " pod="kube-system/kube-apiserver-ci-4081-3-7-2-642afe6700" Apr 13 20:09:14.901921 kubelet[2206]: I0413 20:09:14.901879 2206 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-2-642afe6700" Apr 13 20:09:14.902603 kubelet[2206]: E0413 20:09:14.902485 2206 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://62.238.3.135:6443/api/v1/nodes\": dial tcp 62.238.3.135:6443: connect: connection refused" node="ci-4081-3-7-2-642afe6700" Apr 13 20:09:14.990825 containerd[1501]: time="2026-04-13T20:09:14.990668435Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-2-642afe6700,Uid:08263d610ab2e5a419e44cbe56866e2e,Namespace:kube-system,Attempt:0,}" Apr 13 20:09:15.000186 containerd[1501]: time="2026-04-13T20:09:15.000138553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-2-642afe6700,Uid:4464158112ae13ce45c1bd99c0c22877,Namespace:kube-system,Attempt:0,}" Apr 13 20:09:15.009685 containerd[1501]: time="2026-04-13T20:09:15.009613521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-2-642afe6700,Uid:0de25791aec59abfd16016e2eac9d1a1,Namespace:kube-system,Attempt:0,}" Apr 13 20:09:15.138357 kubelet[2206]: E0413 20:09:15.138269 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://62.238.3.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-2-642afe6700?timeout=10s\": dial tcp 62.238.3.135:6443: connect: connection refused" interval="800ms" Apr 13 20:09:15.305843 kubelet[2206]: I0413 20:09:15.305085 2206 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-2-642afe6700" Apr 13 20:09:15.306098 kubelet[2206]: E0413 20:09:15.305839 2206 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://62.238.3.135:6443/api/v1/nodes\": dial tcp 62.238.3.135:6443: connect: connection refused" node="ci-4081-3-7-2-642afe6700" Apr 13 20:09:15.485726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount446832188.mount: Deactivated successfully. 
Apr 13 20:09:15.498266 containerd[1501]: time="2026-04-13T20:09:15.498155198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:09:15.501063 containerd[1501]: time="2026-04-13T20:09:15.500971220Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:09:15.507240 containerd[1501]: time="2026-04-13T20:09:15.507141015Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:09:15.510382 containerd[1501]: time="2026-04-13T20:09:15.509160407Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:09:15.510382 containerd[1501]: time="2026-04-13T20:09:15.509838297Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Apr 13 20:09:15.511844 containerd[1501]: time="2026-04-13T20:09:15.511779579Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:09:15.511928 containerd[1501]: time="2026-04-13T20:09:15.511882099Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:09:15.516801 containerd[1501]: time="2026-04-13T20:09:15.516755933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:09:15.519322 
containerd[1501]: time="2026-04-13T20:09:15.519281595Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 528.46687ms" Apr 13 20:09:15.522103 containerd[1501]: time="2026-04-13T20:09:15.522018817Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 512.310416ms" Apr 13 20:09:15.523799 containerd[1501]: time="2026-04-13T20:09:15.523754209Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 523.531716ms" Apr 13 20:09:15.632711 containerd[1501]: time="2026-04-13T20:09:15.629901327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:15.632711 containerd[1501]: time="2026-04-13T20:09:15.629962487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:15.632711 containerd[1501]: time="2026-04-13T20:09:15.629980167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:15.632711 containerd[1501]: time="2026-04-13T20:09:15.630062377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:15.635929 containerd[1501]: time="2026-04-13T20:09:15.635850592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:15.638991 containerd[1501]: time="2026-04-13T20:09:15.638750075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:15.638991 containerd[1501]: time="2026-04-13T20:09:15.638786645Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:15.638991 containerd[1501]: time="2026-04-13T20:09:15.638797895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:15.640470 containerd[1501]: time="2026-04-13T20:09:15.640442766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:15.643260 containerd[1501]: time="2026-04-13T20:09:15.640589456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:15.643260 containerd[1501]: time="2026-04-13T20:09:15.640605756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:15.643260 containerd[1501]: time="2026-04-13T20:09:15.640683626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:15.655476 systemd[1]: Started cri-containerd-6cdb979c557feb2773b6d93b8a701fd7971e611725d4eca2daa7c68602504d3e.scope - libcontainer container 6cdb979c557feb2773b6d93b8a701fd7971e611725d4eca2daa7c68602504d3e. 
Apr 13 20:09:15.677462 systemd[1]: Started cri-containerd-48ee0c53afe9e4b5548720c6084aeb0c287f6dc0a2b8dc80402ae0d49886b8ac.scope - libcontainer container 48ee0c53afe9e4b5548720c6084aeb0c287f6dc0a2b8dc80402ae0d49886b8ac. Apr 13 20:09:15.680275 systemd[1]: Started cri-containerd-4c1b9baece7a382d6571cf8eea0888249bdffa59f2771d30e94c9761228acec1.scope - libcontainer container 4c1b9baece7a382d6571cf8eea0888249bdffa59f2771d30e94c9761228acec1. Apr 13 20:09:15.685503 kubelet[2206]: E0413 20:09:15.685477 2206 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://62.238.3.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 62.238.3.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 20:09:15.720350 kubelet[2206]: E0413 20:09:15.720049 2206 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://62.238.3.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 62.238.3.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:09:15.729543 containerd[1501]: time="2026-04-13T20:09:15.729108560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-2-642afe6700,Uid:4464158112ae13ce45c1bd99c0c22877,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cdb979c557feb2773b6d93b8a701fd7971e611725d4eca2daa7c68602504d3e\"" Apr 13 20:09:15.736689 containerd[1501]: time="2026-04-13T20:09:15.736527216Z" level=info msg="CreateContainer within sandbox \"6cdb979c557feb2773b6d93b8a701fd7971e611725d4eca2daa7c68602504d3e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 20:09:15.740753 containerd[1501]: time="2026-04-13T20:09:15.740658100Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-2-642afe6700,Uid:08263d610ab2e5a419e44cbe56866e2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"48ee0c53afe9e4b5548720c6084aeb0c287f6dc0a2b8dc80402ae0d49886b8ac\"" Apr 13 20:09:15.745713 containerd[1501]: time="2026-04-13T20:09:15.745696874Z" level=info msg="CreateContainer within sandbox \"48ee0c53afe9e4b5548720c6084aeb0c287f6dc0a2b8dc80402ae0d49886b8ac\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 20:09:15.756814 containerd[1501]: time="2026-04-13T20:09:15.756790903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-2-642afe6700,Uid:0de25791aec59abfd16016e2eac9d1a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c1b9baece7a382d6571cf8eea0888249bdffa59f2771d30e94c9761228acec1\"" Apr 13 20:09:15.760965 containerd[1501]: time="2026-04-13T20:09:15.760942817Z" level=info msg="CreateContainer within sandbox \"4c1b9baece7a382d6571cf8eea0888249bdffa59f2771d30e94c9761228acec1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 20:09:15.761546 containerd[1501]: time="2026-04-13T20:09:15.761283607Z" level=info msg="CreateContainer within sandbox \"6cdb979c557feb2773b6d93b8a701fd7971e611725d4eca2daa7c68602504d3e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ef38bce67fbe5d108ef07e3ce6e181b874c653a58cac48ca425b98c7994054b1\"" Apr 13 20:09:15.762151 containerd[1501]: time="2026-04-13T20:09:15.762137518Z" level=info msg="StartContainer for \"ef38bce67fbe5d108ef07e3ce6e181b874c653a58cac48ca425b98c7994054b1\"" Apr 13 20:09:15.762663 containerd[1501]: time="2026-04-13T20:09:15.762639728Z" level=info msg="CreateContainer within sandbox \"48ee0c53afe9e4b5548720c6084aeb0c287f6dc0a2b8dc80402ae0d49886b8ac\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a87650b7c930863caf62c05998804600781a1fa22ba3bd210e89f5bbfa82b1a6\"" Apr 13 20:09:15.763057 
containerd[1501]: time="2026-04-13T20:09:15.763044368Z" level=info msg="StartContainer for \"a87650b7c930863caf62c05998804600781a1fa22ba3bd210e89f5bbfa82b1a6\"" Apr 13 20:09:15.777402 containerd[1501]: time="2026-04-13T20:09:15.777358610Z" level=info msg="CreateContainer within sandbox \"4c1b9baece7a382d6571cf8eea0888249bdffa59f2771d30e94c9761228acec1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"44851a4b36a0610d5da54d98cc87957bf5da95f5b7dc38c3302754091f480dc2\"" Apr 13 20:09:15.779481 containerd[1501]: time="2026-04-13T20:09:15.779455362Z" level=info msg="StartContainer for \"44851a4b36a0610d5da54d98cc87957bf5da95f5b7dc38c3302754091f480dc2\"" Apr 13 20:09:15.789778 systemd[1]: Started cri-containerd-a87650b7c930863caf62c05998804600781a1fa22ba3bd210e89f5bbfa82b1a6.scope - libcontainer container a87650b7c930863caf62c05998804600781a1fa22ba3bd210e89f5bbfa82b1a6. Apr 13 20:09:15.799447 systemd[1]: Started cri-containerd-ef38bce67fbe5d108ef07e3ce6e181b874c653a58cac48ca425b98c7994054b1.scope - libcontainer container ef38bce67fbe5d108ef07e3ce6e181b874c653a58cac48ca425b98c7994054b1. Apr 13 20:09:15.811436 systemd[1]: Started cri-containerd-44851a4b36a0610d5da54d98cc87957bf5da95f5b7dc38c3302754091f480dc2.scope - libcontainer container 44851a4b36a0610d5da54d98cc87957bf5da95f5b7dc38c3302754091f480dc2. 
Apr 13 20:09:15.875784 containerd[1501]: time="2026-04-13T20:09:15.873174100Z" level=info msg="StartContainer for \"ef38bce67fbe5d108ef07e3ce6e181b874c653a58cac48ca425b98c7994054b1\" returns successfully" Apr 13 20:09:15.875784 containerd[1501]: time="2026-04-13T20:09:15.873390000Z" level=info msg="StartContainer for \"a87650b7c930863caf62c05998804600781a1fa22ba3bd210e89f5bbfa82b1a6\" returns successfully" Apr 13 20:09:15.886878 containerd[1501]: time="2026-04-13T20:09:15.885822451Z" level=info msg="StartContainer for \"44851a4b36a0610d5da54d98cc87957bf5da95f5b7dc38c3302754091f480dc2\" returns successfully" Apr 13 20:09:16.109232 kubelet[2206]: I0413 20:09:16.109202 2206 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-2-642afe6700" Apr 13 20:09:16.578307 kubelet[2206]: E0413 20:09:16.578272 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-2-642afe6700\" not found" node="ci-4081-3-7-2-642afe6700" Apr 13 20:09:16.578684 kubelet[2206]: E0413 20:09:16.578630 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-2-642afe6700\" not found" node="ci-4081-3-7-2-642afe6700" Apr 13 20:09:16.582031 kubelet[2206]: E0413 20:09:16.582012 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-2-642afe6700\" not found" node="ci-4081-3-7-2-642afe6700" Apr 13 20:09:17.058192 kubelet[2206]: E0413 20:09:17.058160 2206 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-2-642afe6700\" not found" node="ci-4081-3-7-2-642afe6700" Apr 13 20:09:17.103963 kubelet[2206]: I0413 20:09:17.103454 2206 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-2-642afe6700" Apr 13 20:09:17.103963 kubelet[2206]: E0413 20:09:17.103480 2206 kubelet_node_status.go:486] "Error 
updating node status, will retry" err="error getting node \"ci-4081-3-7-2-642afe6700\": node \"ci-4081-3-7-2-642afe6700\" not found" Apr 13 20:09:17.138261 kubelet[2206]: I0413 20:09:17.136053 2206 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-2-642afe6700" Apr 13 20:09:17.140723 kubelet[2206]: E0413 20:09:17.140701 2206 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-2-642afe6700\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-2-642afe6700" Apr 13 20:09:17.140723 kubelet[2206]: I0413 20:09:17.140720 2206 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-2-642afe6700" Apr 13 20:09:17.141618 kubelet[2206]: E0413 20:09:17.141566 2206 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-2-642afe6700\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-7-2-642afe6700" Apr 13 20:09:17.141618 kubelet[2206]: I0413 20:09:17.141609 2206 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-2-642afe6700" Apr 13 20:09:17.142486 kubelet[2206]: E0413 20:09:17.142469 2206 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-2-642afe6700\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-7-2-642afe6700" Apr 13 20:09:17.520768 kubelet[2206]: I0413 20:09:17.520684 2206 apiserver.go:52] "Watching apiserver" Apr 13 20:09:17.537388 kubelet[2206]: I0413 20:09:17.537286 2206 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 20:09:17.582726 kubelet[2206]: I0413 20:09:17.582229 2206 kubelet.go:3220] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4081-3-7-2-642afe6700" Apr 13 20:09:17.582726 kubelet[2206]: I0413 20:09:17.582309 2206 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-2-642afe6700" Apr 13 20:09:17.584941 kubelet[2206]: E0413 20:09:17.584861 2206 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-2-642afe6700\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-2-642afe6700" Apr 13 20:09:17.586889 kubelet[2206]: E0413 20:09:17.586841 2206 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-2-642afe6700\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-7-2-642afe6700" Apr 13 20:09:18.584225 kubelet[2206]: I0413 20:09:18.584171 2206 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-2-642afe6700" Apr 13 20:09:18.764665 kubelet[2206]: I0413 20:09:18.764620 2206 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-2-642afe6700" Apr 13 20:09:19.417720 systemd[1]: Reloading requested from client PID 2491 ('systemctl') (unit session-9.scope)... Apr 13 20:09:19.417736 systemd[1]: Reloading... Apr 13 20:09:19.511393 zram_generator::config[2531]: No configuration found. Apr 13 20:09:19.605314 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:09:19.678141 systemd[1]: Reloading finished in 259 ms. Apr 13 20:09:19.732022 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:09:19.742855 systemd[1]: kubelet.service: Deactivated successfully. 
Apr 13 20:09:19.743051 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:09:19.751852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:09:19.866420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:09:19.870097 (kubelet)[2582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:09:19.904650 kubelet[2582]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 20:09:19.904650 kubelet[2582]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:09:19.907903 kubelet[2582]: I0413 20:09:19.904691 2582 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 20:09:19.916539 kubelet[2582]: I0413 20:09:19.916510 2582 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 13 20:09:19.916539 kubelet[2582]: I0413 20:09:19.916531 2582 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 20:09:19.916640 kubelet[2582]: I0413 20:09:19.916554 2582 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 20:09:19.916640 kubelet[2582]: I0413 20:09:19.916560 2582 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 20:09:19.916747 kubelet[2582]: I0413 20:09:19.916726 2582 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 20:09:19.917647 kubelet[2582]: I0413 20:09:19.917624 2582 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 20:09:19.919854 kubelet[2582]: I0413 20:09:19.918974 2582 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:09:19.921725 kubelet[2582]: E0413 20:09:19.921689 2582 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 20:09:19.921869 kubelet[2582]: I0413 20:09:19.921789 2582 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 13 20:09:19.925388 kubelet[2582]: I0413 20:09:19.925369 2582 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 13 20:09:19.925688 kubelet[2582]: I0413 20:09:19.925658 2582 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:09:19.925783 kubelet[2582]: I0413 20:09:19.925683 2582 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-2-642afe6700","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 20:09:19.925783 kubelet[2582]: I0413 20:09:19.925780 2582 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 
20:09:19.925862 kubelet[2582]: I0413 20:09:19.925787 2582 container_manager_linux.go:306] "Creating device plugin manager" Apr 13 20:09:19.925862 kubelet[2582]: I0413 20:09:19.925804 2582 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 13 20:09:19.925955 kubelet[2582]: I0413 20:09:19.925937 2582 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:09:19.926085 kubelet[2582]: I0413 20:09:19.926068 2582 kubelet.go:475] "Attempting to sync node with API server" Apr 13 20:09:19.926085 kubelet[2582]: I0413 20:09:19.926081 2582 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 20:09:19.926120 kubelet[2582]: I0413 20:09:19.926095 2582 kubelet.go:387] "Adding apiserver pod source" Apr 13 20:09:19.926120 kubelet[2582]: I0413 20:09:19.926108 2582 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 20:09:19.927695 kubelet[2582]: I0413 20:09:19.927673 2582 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 20:09:19.929102 kubelet[2582]: I0413 20:09:19.929042 2582 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 20:09:19.929102 kubelet[2582]: I0413 20:09:19.929066 2582 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 20:09:19.938913 kubelet[2582]: I0413 20:09:19.938707 2582 server.go:1262] "Started kubelet" Apr 13 20:09:19.941369 kubelet[2582]: I0413 20:09:19.941301 2582 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 20:09:19.944366 kubelet[2582]: I0413 20:09:19.942710 2582 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:09:19.944366 kubelet[2582]: I0413 20:09:19.943383 2582 server.go:310] "Adding debug handlers to 
kubelet server" Apr 13 20:09:19.944366 kubelet[2582]: I0413 20:09:19.939689 2582 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:09:19.944366 kubelet[2582]: I0413 20:09:19.943979 2582 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 20:09:19.944366 kubelet[2582]: I0413 20:09:19.944101 2582 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 20:09:19.950074 kubelet[2582]: I0413 20:09:19.950044 2582 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 20:09:19.952773 kubelet[2582]: E0413 20:09:19.952750 2582 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 20:09:19.953919 kubelet[2582]: I0413 20:09:19.953908 2582 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 13 20:09:19.954492 kubelet[2582]: I0413 20:09:19.954473 2582 factory.go:223] Registration of the systemd container factory successfully Apr 13 20:09:19.954579 kubelet[2582]: I0413 20:09:19.954550 2582 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:09:19.954886 kubelet[2582]: I0413 20:09:19.954858 2582 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 20:09:19.955144 kubelet[2582]: I0413 20:09:19.955125 2582 reconciler.go:29] "Reconciler: start to sync state" Apr 13 20:09:19.955968 kubelet[2582]: I0413 20:09:19.955636 2582 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:09:19.958888 kubelet[2582]: I0413 20:09:19.958855 2582 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 13 20:09:19.960053 kubelet[2582]: I0413 20:09:19.960040 2582 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 13 20:09:19.960106 kubelet[2582]: I0413 20:09:19.960098 2582 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 13 20:09:19.960150 kubelet[2582]: I0413 20:09:19.960144 2582 kubelet.go:2428] "Starting kubelet main sync loop" Apr 13 20:09:19.960218 kubelet[2582]: E0413 20:09:19.960202 2582 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:09:20.001382 kubelet[2582]: I0413 20:09:20.001363 2582 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 20:09:20.001514 kubelet[2582]: I0413 20:09:20.001505 2582 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 20:09:20.001606 kubelet[2582]: I0413 20:09:20.001548 2582 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:09:20.001797 kubelet[2582]: I0413 20:09:20.001773 2582 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 20:09:20.001907 kubelet[2582]: I0413 20:09:20.001847 2582 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 20:09:20.001972 kubelet[2582]: I0413 20:09:20.001941 2582 policy_none.go:49] "None policy: Start" Apr 13 20:09:20.002035 kubelet[2582]: I0413 20:09:20.002027 2582 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 13 20:09:20.002096 kubelet[2582]: I0413 20:09:20.002089 2582 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 13 20:09:20.002243 kubelet[2582]: I0413 20:09:20.002236 2582 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 13 20:09:20.002281 kubelet[2582]: I0413 20:09:20.002276 2582 policy_none.go:47] "Start" Apr 13 20:09:20.007805 kubelet[2582]: E0413 20:09:20.007776 2582 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:09:20.008558 kubelet[2582]: I0413 20:09:20.008538 2582 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 20:09:20.008606 kubelet[2582]: I0413 20:09:20.008556 2582 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:09:20.008787 kubelet[2582]: I0413 20:09:20.008765 2582 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 20:09:20.009670 kubelet[2582]: E0413 20:09:20.009649 2582 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 20:09:20.061229 kubelet[2582]: I0413 20:09:20.061194 2582 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.061421 kubelet[2582]: I0413 20:09:20.061407 2582 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.061517 kubelet[2582]: I0413 20:09:20.061498 2582 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.067842 kubelet[2582]: E0413 20:09:20.067730 2582 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-2-642afe6700\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.068547 kubelet[2582]: E0413 20:09:20.068523 2582 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-2-642afe6700\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.117128 kubelet[2582]: I0413 20:09:20.117051 2582 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.129206 kubelet[2582]: I0413 20:09:20.128877 2582 
kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.129206 kubelet[2582]: I0413 20:09:20.128962 2582 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.155785 kubelet[2582]: I0413 20:09:20.155700 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0de25791aec59abfd16016e2eac9d1a1-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-2-642afe6700\" (UID: \"0de25791aec59abfd16016e2eac9d1a1\") " pod="kube-system/kube-scheduler-ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.155785 kubelet[2582]: I0413 20:09:20.155787 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08263d610ab2e5a419e44cbe56866e2e-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-2-642afe6700\" (UID: \"08263d610ab2e5a419e44cbe56866e2e\") " pod="kube-system/kube-apiserver-ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.155960 kubelet[2582]: I0413 20:09:20.155809 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4464158112ae13ce45c1bd99c0c22877-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-2-642afe6700\" (UID: \"4464158112ae13ce45c1bd99c0c22877\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.155960 kubelet[2582]: I0413 20:09:20.155865 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4464158112ae13ce45c1bd99c0c22877-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-2-642afe6700\" (UID: \"4464158112ae13ce45c1bd99c0c22877\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.155960 kubelet[2582]: I0413 20:09:20.155885 2582 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4464158112ae13ce45c1bd99c0c22877-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-2-642afe6700\" (UID: \"4464158112ae13ce45c1bd99c0c22877\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.156100 kubelet[2582]: I0413 20:09:20.155970 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08263d610ab2e5a419e44cbe56866e2e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-2-642afe6700\" (UID: \"08263d610ab2e5a419e44cbe56866e2e\") " pod="kube-system/kube-apiserver-ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.156100 kubelet[2582]: I0413 20:09:20.155989 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08263d610ab2e5a419e44cbe56866e2e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-2-642afe6700\" (UID: \"08263d610ab2e5a419e44cbe56866e2e\") " pod="kube-system/kube-apiserver-ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.156100 kubelet[2582]: I0413 20:09:20.156041 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4464158112ae13ce45c1bd99c0c22877-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-2-642afe6700\" (UID: \"4464158112ae13ce45c1bd99c0c22877\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.156217 kubelet[2582]: I0413 20:09:20.156058 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4464158112ae13ce45c1bd99c0c22877-k8s-certs\") pod 
\"kube-controller-manager-ci-4081-3-7-2-642afe6700\" (UID: \"4464158112ae13ce45c1bd99c0c22877\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.927445 kubelet[2582]: I0413 20:09:20.927301 2582 apiserver.go:52] "Watching apiserver" Apr 13 20:09:20.955201 kubelet[2582]: I0413 20:09:20.955135 2582 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 20:09:20.988496 kubelet[2582]: I0413 20:09:20.987936 2582 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.988496 kubelet[2582]: I0413 20:09:20.988398 2582 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.999356 kubelet[2582]: E0413 20:09:20.997268 2582 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-2-642afe6700\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-2-642afe6700" Apr 13 20:09:20.999356 kubelet[2582]: E0413 20:09:20.998112 2582 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-2-642afe6700\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-2-642afe6700" Apr 13 20:09:21.020315 kubelet[2582]: I0413 20:09:21.020243 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-2-642afe6700" podStartSLOduration=1.020222408 podStartE2EDuration="1.020222408s" podCreationTimestamp="2026-04-13 20:09:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:09:21.009199589 +0000 UTC m=+1.136084618" watchObservedRunningTime="2026-04-13 20:09:21.020222408 +0000 UTC m=+1.147107437" Apr 13 20:09:21.032163 kubelet[2582]: I0413 20:09:21.032087 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4081-3-7-2-642afe6700" podStartSLOduration=3.031706137 podStartE2EDuration="3.031706137s" podCreationTimestamp="2026-04-13 20:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:09:21.031516237 +0000 UTC m=+1.158401266" watchObservedRunningTime="2026-04-13 20:09:21.031706137 +0000 UTC m=+1.158591166" Apr 13 20:09:21.032287 kubelet[2582]: I0413 20:09:21.032267 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-2-642afe6700" podStartSLOduration=3.032262498 podStartE2EDuration="3.032262498s" podCreationTimestamp="2026-04-13 20:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:09:21.020895848 +0000 UTC m=+1.147780877" watchObservedRunningTime="2026-04-13 20:09:21.032262498 +0000 UTC m=+1.159147517" Apr 13 20:09:24.392280 kubelet[2582]: I0413 20:09:24.392216 2582 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 20:09:24.392815 containerd[1501]: time="2026-04-13T20:09:24.392764217Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 13 20:09:24.393053 kubelet[2582]: I0413 20:09:24.393022 2582 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 20:09:25.408587 systemd[1]: Created slice kubepods-besteffort-pod8365ebd7_f3b9_440a_8897_d229d083ce23.slice - libcontainer container kubepods-besteffort-pod8365ebd7_f3b9_440a_8897_d229d083ce23.slice. 
Apr 13 20:09:25.490288 kubelet[2582]: I0413 20:09:25.490211 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8365ebd7-f3b9-440a-8897-d229d083ce23-kube-proxy\") pod \"kube-proxy-9qqw9\" (UID: \"8365ebd7-f3b9-440a-8897-d229d083ce23\") " pod="kube-system/kube-proxy-9qqw9" Apr 13 20:09:25.490288 kubelet[2582]: I0413 20:09:25.490248 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8365ebd7-f3b9-440a-8897-d229d083ce23-lib-modules\") pod \"kube-proxy-9qqw9\" (UID: \"8365ebd7-f3b9-440a-8897-d229d083ce23\") " pod="kube-system/kube-proxy-9qqw9" Apr 13 20:09:25.490288 kubelet[2582]: I0413 20:09:25.490273 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84zdj\" (UniqueName: \"kubernetes.io/projected/8365ebd7-f3b9-440a-8897-d229d083ce23-kube-api-access-84zdj\") pod \"kube-proxy-9qqw9\" (UID: \"8365ebd7-f3b9-440a-8897-d229d083ce23\") " pod="kube-system/kube-proxy-9qqw9" Apr 13 20:09:25.490288 kubelet[2582]: I0413 20:09:25.490290 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8365ebd7-f3b9-440a-8897-d229d083ce23-xtables-lock\") pod \"kube-proxy-9qqw9\" (UID: \"8365ebd7-f3b9-440a-8897-d229d083ce23\") " pod="kube-system/kube-proxy-9qqw9" Apr 13 20:09:25.561764 systemd[1]: Created slice kubepods-besteffort-pod790ec4e6_3240_4795_bf98_9753489fa169.slice - libcontainer container kubepods-besteffort-pod790ec4e6_3240_4795_bf98_9753489fa169.slice. 
Apr 13 20:09:25.592197 kubelet[2582]: I0413 20:09:25.591298 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/790ec4e6-3240-4795-bf98-9753489fa169-var-lib-calico\") pod \"tigera-operator-5588576f44-6g474\" (UID: \"790ec4e6-3240-4795-bf98-9753489fa169\") " pod="tigera-operator/tigera-operator-5588576f44-6g474" Apr 13 20:09:25.592197 kubelet[2582]: I0413 20:09:25.591327 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcssn\" (UniqueName: \"kubernetes.io/projected/790ec4e6-3240-4795-bf98-9753489fa169-kube-api-access-hcssn\") pod \"tigera-operator-5588576f44-6g474\" (UID: \"790ec4e6-3240-4795-bf98-9753489fa169\") " pod="tigera-operator/tigera-operator-5588576f44-6g474" Apr 13 20:09:25.719544 containerd[1501]: time="2026-04-13T20:09:25.719383143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9qqw9,Uid:8365ebd7-f3b9-440a-8897-d229d083ce23,Namespace:kube-system,Attempt:0,}" Apr 13 20:09:25.760188 containerd[1501]: time="2026-04-13T20:09:25.759867246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:25.760188 containerd[1501]: time="2026-04-13T20:09:25.759933116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:25.760188 containerd[1501]: time="2026-04-13T20:09:25.759951906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:25.760770 containerd[1501]: time="2026-04-13T20:09:25.760075926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:25.786470 systemd[1]: Started cri-containerd-39192dbb600a7ac16e8603e622d625439676c46ac897c4589432d532d01b58c1.scope - libcontainer container 39192dbb600a7ac16e8603e622d625439676c46ac897c4589432d532d01b58c1. Apr 13 20:09:25.805422 containerd[1501]: time="2026-04-13T20:09:25.805369974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9qqw9,Uid:8365ebd7-f3b9-440a-8897-d229d083ce23,Namespace:kube-system,Attempt:0,} returns sandbox id \"39192dbb600a7ac16e8603e622d625439676c46ac897c4589432d532d01b58c1\"" Apr 13 20:09:25.811652 containerd[1501]: time="2026-04-13T20:09:25.811556189Z" level=info msg="CreateContainer within sandbox \"39192dbb600a7ac16e8603e622d625439676c46ac897c4589432d532d01b58c1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 20:09:25.825665 containerd[1501]: time="2026-04-13T20:09:25.825630561Z" level=info msg="CreateContainer within sandbox \"39192dbb600a7ac16e8603e622d625439676c46ac897c4589432d532d01b58c1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6701c0785a85b31c35d7480544d829f5fc2bda7f0b0412f4a43c1b95e7974cd0\"" Apr 13 20:09:25.826145 containerd[1501]: time="2026-04-13T20:09:25.826121301Z" level=info msg="StartContainer for \"6701c0785a85b31c35d7480544d829f5fc2bda7f0b0412f4a43c1b95e7974cd0\"" Apr 13 20:09:25.849449 systemd[1]: Started cri-containerd-6701c0785a85b31c35d7480544d829f5fc2bda7f0b0412f4a43c1b95e7974cd0.scope - libcontainer container 6701c0785a85b31c35d7480544d829f5fc2bda7f0b0412f4a43c1b95e7974cd0. 
Apr 13 20:09:25.869430 containerd[1501]: time="2026-04-13T20:09:25.869174007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-6g474,Uid:790ec4e6-3240-4795-bf98-9753489fa169,Namespace:tigera-operator,Attempt:0,}" Apr 13 20:09:25.873974 containerd[1501]: time="2026-04-13T20:09:25.873935971Z" level=info msg="StartContainer for \"6701c0785a85b31c35d7480544d829f5fc2bda7f0b0412f4a43c1b95e7974cd0\" returns successfully" Apr 13 20:09:25.897613 containerd[1501]: time="2026-04-13T20:09:25.896571620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:25.897613 containerd[1501]: time="2026-04-13T20:09:25.896631510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:25.897613 containerd[1501]: time="2026-04-13T20:09:25.896639290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:25.898584 containerd[1501]: time="2026-04-13T20:09:25.898109491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:25.917457 systemd[1]: Started cri-containerd-aa3a351dea63a2a6a70df2cec5d47ef8d69f490c4c3d93968950affb9e98c0a7.scope - libcontainer container aa3a351dea63a2a6a70df2cec5d47ef8d69f490c4c3d93968950affb9e98c0a7. 
Apr 13 20:09:25.963269 containerd[1501]: time="2026-04-13T20:09:25.963229396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-6g474,Uid:790ec4e6-3240-4795-bf98-9753489fa169,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"aa3a351dea63a2a6a70df2cec5d47ef8d69f490c4c3d93968950affb9e98c0a7\"" Apr 13 20:09:25.965127 containerd[1501]: time="2026-04-13T20:09:25.965111577Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 13 20:09:26.014549 kubelet[2582]: I0413 20:09:26.014145 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9qqw9" podStartSLOduration=1.014119508 podStartE2EDuration="1.014119508s" podCreationTimestamp="2026-04-13 20:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:09:26.006693062 +0000 UTC m=+6.133578091" watchObservedRunningTime="2026-04-13 20:09:26.014119508 +0000 UTC m=+6.141004537" Apr 13 20:09:27.525884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1866088331.mount: Deactivated successfully. 
Apr 13 20:09:28.091101 containerd[1501]: time="2026-04-13T20:09:28.091051708Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:28.091910 containerd[1501]: time="2026-04-13T20:09:28.091819289Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 13 20:09:28.092511 containerd[1501]: time="2026-04-13T20:09:28.092479590Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:28.094064 containerd[1501]: time="2026-04-13T20:09:28.094043101Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:28.094957 containerd[1501]: time="2026-04-13T20:09:28.094483631Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.129216104s" Apr 13 20:09:28.094957 containerd[1501]: time="2026-04-13T20:09:28.094507881Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 13 20:09:28.097454 containerd[1501]: time="2026-04-13T20:09:28.097429694Z" level=info msg="CreateContainer within sandbox \"aa3a351dea63a2a6a70df2cec5d47ef8d69f490c4c3d93968950affb9e98c0a7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 13 20:09:28.111460 containerd[1501]: time="2026-04-13T20:09:28.111433725Z" level=info msg="CreateContainer within sandbox 
\"aa3a351dea63a2a6a70df2cec5d47ef8d69f490c4c3d93968950affb9e98c0a7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7ee54c37751ba8cb8507494e82ef57a30e16d658df7c495a2a732cf527f7ad90\"" Apr 13 20:09:28.112366 containerd[1501]: time="2026-04-13T20:09:28.111876686Z" level=info msg="StartContainer for \"7ee54c37751ba8cb8507494e82ef57a30e16d658df7c495a2a732cf527f7ad90\"" Apr 13 20:09:28.140456 systemd[1]: Started cri-containerd-7ee54c37751ba8cb8507494e82ef57a30e16d658df7c495a2a732cf527f7ad90.scope - libcontainer container 7ee54c37751ba8cb8507494e82ef57a30e16d658df7c495a2a732cf527f7ad90. Apr 13 20:09:28.161119 containerd[1501]: time="2026-04-13T20:09:28.161088787Z" level=info msg="StartContainer for \"7ee54c37751ba8cb8507494e82ef57a30e16d658df7c495a2a732cf527f7ad90\" returns successfully" Apr 13 20:09:29.020214 kubelet[2582]: I0413 20:09:29.019985 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-6g474" podStartSLOduration=1.889297777 podStartE2EDuration="4.019973442s" podCreationTimestamp="2026-04-13 20:09:25 +0000 UTC" firstStartedPulling="2026-04-13 20:09:25.964491727 +0000 UTC m=+6.091376746" lastFinishedPulling="2026-04-13 20:09:28.095167382 +0000 UTC m=+8.222052411" observedRunningTime="2026-04-13 20:09:29.018839151 +0000 UTC m=+9.145724170" watchObservedRunningTime="2026-04-13 20:09:29.019973442 +0000 UTC m=+9.146858471" Apr 13 20:09:33.064053 update_engine[1485]: I20260413 20:09:33.063966 1485 update_attempter.cc:509] Updating boot flags... 
Apr 13 20:09:33.156376 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2964) Apr 13 20:09:33.263811 sudo[1725]: pam_unix(sudo:session): session closed for user root Apr 13 20:09:33.297004 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2965) Apr 13 20:09:33.302917 sshd[1722]: pam_unix(sshd:session): session closed for user core Apr 13 20:09:33.308450 systemd[1]: sshd@8-62.238.3.135:22-20.229.252.112:60894.service: Deactivated successfully. Apr 13 20:09:33.311317 systemd[1]: session-9.scope: Deactivated successfully. Apr 13 20:09:33.311541 systemd[1]: session-9.scope: Consumed 3.629s CPU time, 159.9M memory peak, 0B memory swap peak. Apr 13 20:09:33.313412 systemd-logind[1483]: Session 9 logged out. Waiting for processes to exit. Apr 13 20:09:33.320290 systemd-logind[1483]: Removed session 9. Apr 13 20:09:33.366663 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2965) Apr 13 20:09:35.495612 systemd[1]: Created slice kubepods-besteffort-podfeaa34bd_87c3_492a_8913_be97ffc362e9.slice - libcontainer container kubepods-besteffort-podfeaa34bd_87c3_492a_8913_be97ffc362e9.slice. Apr 13 20:09:35.545448 systemd[1]: Created slice kubepods-besteffort-pod3571c896_a45b_40b1_883f_96dcf544c2c6.slice - libcontainer container kubepods-besteffort-pod3571c896_a45b_40b1_883f_96dcf544c2c6.slice. 
Apr 13 20:09:35.558180 kubelet[2582]: I0413 20:09:35.558124 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feaa34bd-87c3-492a-8913-be97ffc362e9-tigera-ca-bundle\") pod \"calico-typha-6db99d56dc-dpr84\" (UID: \"feaa34bd-87c3-492a-8913-be97ffc362e9\") " pod="calico-system/calico-typha-6db99d56dc-dpr84" Apr 13 20:09:35.558180 kubelet[2582]: I0413 20:09:35.558167 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3571c896-a45b-40b1-883f-96dcf544c2c6-cni-bin-dir\") pod \"calico-node-m8sjt\" (UID: \"3571c896-a45b-40b1-883f-96dcf544c2c6\") " pod="calico-system/calico-node-m8sjt" Apr 13 20:09:35.558180 kubelet[2582]: I0413 20:09:35.558186 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/feaa34bd-87c3-492a-8913-be97ffc362e9-typha-certs\") pod \"calico-typha-6db99d56dc-dpr84\" (UID: \"feaa34bd-87c3-492a-8913-be97ffc362e9\") " pod="calico-system/calico-typha-6db99d56dc-dpr84" Apr 13 20:09:35.558657 kubelet[2582]: I0413 20:09:35.558200 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3571c896-a45b-40b1-883f-96dcf544c2c6-cni-net-dir\") pod \"calico-node-m8sjt\" (UID: \"3571c896-a45b-40b1-883f-96dcf544c2c6\") " pod="calico-system/calico-node-m8sjt" Apr 13 20:09:35.558657 kubelet[2582]: I0413 20:09:35.558214 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3571c896-a45b-40b1-883f-96dcf544c2c6-lib-modules\") pod \"calico-node-m8sjt\" (UID: \"3571c896-a45b-40b1-883f-96dcf544c2c6\") " pod="calico-system/calico-node-m8sjt" Apr 13 20:09:35.558657 kubelet[2582]: 
I0413 20:09:35.558231 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3571c896-a45b-40b1-883f-96dcf544c2c6-tigera-ca-bundle\") pod \"calico-node-m8sjt\" (UID: \"3571c896-a45b-40b1-883f-96dcf544c2c6\") " pod="calico-system/calico-node-m8sjt" Apr 13 20:09:35.558657 kubelet[2582]: I0413 20:09:35.558261 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf5bd\" (UniqueName: \"kubernetes.io/projected/feaa34bd-87c3-492a-8913-be97ffc362e9-kube-api-access-tf5bd\") pod \"calico-typha-6db99d56dc-dpr84\" (UID: \"feaa34bd-87c3-492a-8913-be97ffc362e9\") " pod="calico-system/calico-typha-6db99d56dc-dpr84" Apr 13 20:09:35.558657 kubelet[2582]: I0413 20:09:35.558274 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3571c896-a45b-40b1-883f-96dcf544c2c6-flexvol-driver-host\") pod \"calico-node-m8sjt\" (UID: \"3571c896-a45b-40b1-883f-96dcf544c2c6\") " pod="calico-system/calico-node-m8sjt" Apr 13 20:09:35.558780 kubelet[2582]: I0413 20:09:35.558288 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/3571c896-a45b-40b1-883f-96dcf544c2c6-bpffs\") pod \"calico-node-m8sjt\" (UID: \"3571c896-a45b-40b1-883f-96dcf544c2c6\") " pod="calico-system/calico-node-m8sjt" Apr 13 20:09:35.558780 kubelet[2582]: I0413 20:09:35.558311 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/3571c896-a45b-40b1-883f-96dcf544c2c6-nodeproc\") pod \"calico-node-m8sjt\" (UID: \"3571c896-a45b-40b1-883f-96dcf544c2c6\") " pod="calico-system/calico-node-m8sjt" Apr 13 20:09:35.558780 kubelet[2582]: I0413 20:09:35.558329 2582 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3571c896-a45b-40b1-883f-96dcf544c2c6-policysync\") pod \"calico-node-m8sjt\" (UID: \"3571c896-a45b-40b1-883f-96dcf544c2c6\") " pod="calico-system/calico-node-m8sjt" Apr 13 20:09:35.559801 kubelet[2582]: I0413 20:09:35.559516 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3571c896-a45b-40b1-883f-96dcf544c2c6-xtables-lock\") pod \"calico-node-m8sjt\" (UID: \"3571c896-a45b-40b1-883f-96dcf544c2c6\") " pod="calico-system/calico-node-m8sjt" Apr 13 20:09:35.559801 kubelet[2582]: I0413 20:09:35.559540 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/3571c896-a45b-40b1-883f-96dcf544c2c6-sys-fs\") pod \"calico-node-m8sjt\" (UID: \"3571c896-a45b-40b1-883f-96dcf544c2c6\") " pod="calico-system/calico-node-m8sjt" Apr 13 20:09:35.559801 kubelet[2582]: I0413 20:09:35.559588 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3571c896-a45b-40b1-883f-96dcf544c2c6-cni-log-dir\") pod \"calico-node-m8sjt\" (UID: \"3571c896-a45b-40b1-883f-96dcf544c2c6\") " pod="calico-system/calico-node-m8sjt" Apr 13 20:09:35.559801 kubelet[2582]: I0413 20:09:35.559601 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3571c896-a45b-40b1-883f-96dcf544c2c6-node-certs\") pod \"calico-node-m8sjt\" (UID: \"3571c896-a45b-40b1-883f-96dcf544c2c6\") " pod="calico-system/calico-node-m8sjt" Apr 13 20:09:35.559801 kubelet[2582]: I0413 20:09:35.559614 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3571c896-a45b-40b1-883f-96dcf544c2c6-var-lib-calico\") pod \"calico-node-m8sjt\" (UID: \"3571c896-a45b-40b1-883f-96dcf544c2c6\") " pod="calico-system/calico-node-m8sjt" Apr 13 20:09:35.559938 kubelet[2582]: I0413 20:09:35.559628 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3571c896-a45b-40b1-883f-96dcf544c2c6-var-run-calico\") pod \"calico-node-m8sjt\" (UID: \"3571c896-a45b-40b1-883f-96dcf544c2c6\") " pod="calico-system/calico-node-m8sjt" Apr 13 20:09:35.559938 kubelet[2582]: I0413 20:09:35.559709 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nlmw\" (UniqueName: \"kubernetes.io/projected/3571c896-a45b-40b1-883f-96dcf544c2c6-kube-api-access-5nlmw\") pod \"calico-node-m8sjt\" (UID: \"3571c896-a45b-40b1-883f-96dcf544c2c6\") " pod="calico-system/calico-node-m8sjt" Apr 13 20:09:35.657032 kubelet[2582]: E0413 20:09:35.656796 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bd8jl" podUID="63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e" Apr 13 20:09:35.673231 kubelet[2582]: E0413 20:09:35.673213 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.673395 kubelet[2582]: W0413 20:09:35.673267 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.673395 kubelet[2582]: E0413 20:09:35.673289 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin 
from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.675972 kubelet[2582]: E0413 20:09:35.675716 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.676143 kubelet[2582]: W0413 20:09:35.676025 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.676143 kubelet[2582]: E0413 20:09:35.676039 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.684377 kubelet[2582]: E0413 20:09:35.683885 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.684377 kubelet[2582]: W0413 20:09:35.683903 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.684377 kubelet[2582]: E0413 20:09:35.683921 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.688435 kubelet[2582]: E0413 20:09:35.688410 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.688435 kubelet[2582]: W0413 20:09:35.688428 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.688543 kubelet[2582]: E0413 20:09:35.688441 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.742621 kubelet[2582]: E0413 20:09:35.742588 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.742621 kubelet[2582]: W0413 20:09:35.742625 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.742803 kubelet[2582]: E0413 20:09:35.742646 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.742920 kubelet[2582]: E0413 20:09:35.742898 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.742920 kubelet[2582]: W0413 20:09:35.742912 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.742999 kubelet[2582]: E0413 20:09:35.742923 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.743166 kubelet[2582]: E0413 20:09:35.743148 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.743166 kubelet[2582]: W0413 20:09:35.743159 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.743210 kubelet[2582]: E0413 20:09:35.743168 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.743434 kubelet[2582]: E0413 20:09:35.743419 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.743434 kubelet[2582]: W0413 20:09:35.743430 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.743479 kubelet[2582]: E0413 20:09:35.743438 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.743705 kubelet[2582]: E0413 20:09:35.743690 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.743705 kubelet[2582]: W0413 20:09:35.743701 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.743752 kubelet[2582]: E0413 20:09:35.743708 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.743937 kubelet[2582]: E0413 20:09:35.743923 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.743937 kubelet[2582]: W0413 20:09:35.743933 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.743977 kubelet[2582]: E0413 20:09:35.743940 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.744128 kubelet[2582]: E0413 20:09:35.744113 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.744128 kubelet[2582]: W0413 20:09:35.744122 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.744172 kubelet[2582]: E0413 20:09:35.744129 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.744346 kubelet[2582]: E0413 20:09:35.744319 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.744346 kubelet[2582]: W0413 20:09:35.744328 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.744387 kubelet[2582]: E0413 20:09:35.744349 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.744573 kubelet[2582]: E0413 20:09:35.744559 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.744573 kubelet[2582]: W0413 20:09:35.744569 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.744617 kubelet[2582]: E0413 20:09:35.744578 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.744798 kubelet[2582]: E0413 20:09:35.744784 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.744798 kubelet[2582]: W0413 20:09:35.744794 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.744842 kubelet[2582]: E0413 20:09:35.744802 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.745015 kubelet[2582]: E0413 20:09:35.745001 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.745015 kubelet[2582]: W0413 20:09:35.745011 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.745058 kubelet[2582]: E0413 20:09:35.745019 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.745222 kubelet[2582]: E0413 20:09:35.745209 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.745222 kubelet[2582]: W0413 20:09:35.745219 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.745267 kubelet[2582]: E0413 20:09:35.745226 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.745455 kubelet[2582]: E0413 20:09:35.745441 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.745455 kubelet[2582]: W0413 20:09:35.745451 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.745502 kubelet[2582]: E0413 20:09:35.745458 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.745681 kubelet[2582]: E0413 20:09:35.745655 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.745681 kubelet[2582]: W0413 20:09:35.745665 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.745720 kubelet[2582]: E0413 20:09:35.745680 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.745881 kubelet[2582]: E0413 20:09:35.745867 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.745950 kubelet[2582]: W0413 20:09:35.745890 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.745950 kubelet[2582]: E0413 20:09:35.745898 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.746111 kubelet[2582]: E0413 20:09:35.746097 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.746111 kubelet[2582]: W0413 20:09:35.746107 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.746143 kubelet[2582]: E0413 20:09:35.746115 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.746364 kubelet[2582]: E0413 20:09:35.746322 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.746364 kubelet[2582]: W0413 20:09:35.746353 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.746364 kubelet[2582]: E0413 20:09:35.746361 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.747121 kubelet[2582]: E0413 20:09:35.746555 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.747121 kubelet[2582]: W0413 20:09:35.746565 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.747121 kubelet[2582]: E0413 20:09:35.746572 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.747460 kubelet[2582]: E0413 20:09:35.747449 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.747653 kubelet[2582]: W0413 20:09:35.747512 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.747653 kubelet[2582]: E0413 20:09:35.747525 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.747801 kubelet[2582]: E0413 20:09:35.747791 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.747908 kubelet[2582]: W0413 20:09:35.747840 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.747908 kubelet[2582]: E0413 20:09:35.747850 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.761087 kubelet[2582]: E0413 20:09:35.761066 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.761087 kubelet[2582]: W0413 20:09:35.761080 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.761087 kubelet[2582]: E0413 20:09:35.761091 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.761189 kubelet[2582]: I0413 20:09:35.761110 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e-socket-dir\") pod \"csi-node-driver-bd8jl\" (UID: \"63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e\") " pod="calico-system/csi-node-driver-bd8jl" Apr 13 20:09:35.761348 kubelet[2582]: E0413 20:09:35.761321 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.761348 kubelet[2582]: W0413 20:09:35.761344 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.761392 kubelet[2582]: E0413 20:09:35.761351 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.761392 kubelet[2582]: I0413 20:09:35.761364 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e-varrun\") pod \"csi-node-driver-bd8jl\" (UID: \"63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e\") " pod="calico-system/csi-node-driver-bd8jl" Apr 13 20:09:35.761587 kubelet[2582]: E0413 20:09:35.761568 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.761587 kubelet[2582]: W0413 20:09:35.761581 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.761640 kubelet[2582]: E0413 20:09:35.761589 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.761640 kubelet[2582]: I0413 20:09:35.761605 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e-registration-dir\") pod \"csi-node-driver-bd8jl\" (UID: \"63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e\") " pod="calico-system/csi-node-driver-bd8jl" Apr 13 20:09:35.761847 kubelet[2582]: E0413 20:09:35.761832 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.761847 kubelet[2582]: W0413 20:09:35.761842 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.761887 kubelet[2582]: E0413 20:09:35.761851 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.761887 kubelet[2582]: I0413 20:09:35.761863 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e-kubelet-dir\") pod \"csi-node-driver-bd8jl\" (UID: \"63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e\") " pod="calico-system/csi-node-driver-bd8jl" Apr 13 20:09:35.762072 kubelet[2582]: E0413 20:09:35.762057 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.762096 kubelet[2582]: W0413 20:09:35.762067 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.762119 kubelet[2582]: E0413 20:09:35.762096 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.762119 kubelet[2582]: I0413 20:09:35.762109 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlrc8\" (UniqueName: \"kubernetes.io/projected/63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e-kube-api-access-rlrc8\") pod \"csi-node-driver-bd8jl\" (UID: \"63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e\") " pod="calico-system/csi-node-driver-bd8jl" Apr 13 20:09:35.762322 kubelet[2582]: E0413 20:09:35.762307 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.762322 kubelet[2582]: W0413 20:09:35.762316 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.762398 kubelet[2582]: E0413 20:09:35.762324 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.762532 kubelet[2582]: E0413 20:09:35.762513 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.762532 kubelet[2582]: W0413 20:09:35.762526 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.762568 kubelet[2582]: E0413 20:09:35.762533 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.762762 kubelet[2582]: E0413 20:09:35.762727 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.762762 kubelet[2582]: W0413 20:09:35.762733 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.762762 kubelet[2582]: E0413 20:09:35.762740 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.763157 kubelet[2582]: E0413 20:09:35.763131 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.763157 kubelet[2582]: W0413 20:09:35.763139 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.763157 kubelet[2582]: E0413 20:09:35.763145 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.763409 kubelet[2582]: E0413 20:09:35.763388 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.763409 kubelet[2582]: W0413 20:09:35.763394 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.763409 kubelet[2582]: E0413 20:09:35.763400 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.763604 kubelet[2582]: E0413 20:09:35.763590 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.763604 kubelet[2582]: W0413 20:09:35.763600 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.763647 kubelet[2582]: E0413 20:09:35.763606 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.763822 kubelet[2582]: E0413 20:09:35.763806 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.763822 kubelet[2582]: W0413 20:09:35.763816 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.763822 kubelet[2582]: E0413 20:09:35.763822 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.764007 kubelet[2582]: E0413 20:09:35.763994 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.764007 kubelet[2582]: W0413 20:09:35.764004 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.764052 kubelet[2582]: E0413 20:09:35.764009 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.764202 kubelet[2582]: E0413 20:09:35.764188 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.764202 kubelet[2582]: W0413 20:09:35.764199 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.764250 kubelet[2582]: E0413 20:09:35.764205 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.764432 kubelet[2582]: E0413 20:09:35.764418 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.764432 kubelet[2582]: W0413 20:09:35.764428 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.764475 kubelet[2582]: E0413 20:09:35.764435 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.805103 containerd[1501]: time="2026-04-13T20:09:35.804762214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6db99d56dc-dpr84,Uid:feaa34bd-87c3-492a-8913-be97ffc362e9,Namespace:calico-system,Attempt:0,}" Apr 13 20:09:35.828379 containerd[1501]: time="2026-04-13T20:09:35.827944414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:35.828379 containerd[1501]: time="2026-04-13T20:09:35.827987964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:35.828379 containerd[1501]: time="2026-04-13T20:09:35.827998364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:35.828379 containerd[1501]: time="2026-04-13T20:09:35.828062694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:35.849488 systemd[1]: Started cri-containerd-f7395c94237185afccb128ff5693157da1c4d5a9af9a2d44afd995cdce71a720.scope - libcontainer container f7395c94237185afccb128ff5693157da1c4d5a9af9a2d44afd995cdce71a720. Apr 13 20:09:35.850518 containerd[1501]: time="2026-04-13T20:09:35.850442452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m8sjt,Uid:3571c896-a45b-40b1-883f-96dcf544c2c6,Namespace:calico-system,Attempt:0,}" Apr 13 20:09:35.863696 kubelet[2582]: E0413 20:09:35.863608 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.863696 kubelet[2582]: W0413 20:09:35.863626 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.863696 kubelet[2582]: E0413 20:09:35.863642 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.876825 kubelet[2582]: E0413 20:09:35.876794 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.876825 kubelet[2582]: W0413 20:09:35.876802 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.876825 kubelet[2582]: E0413 20:09:35.876808 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.877798 containerd[1501]: time="2026-04-13T20:09:35.876486224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:35.877798 containerd[1501]: time="2026-04-13T20:09:35.876522994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:35.877798 containerd[1501]: time="2026-04-13T20:09:35.876582294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:35.877798 containerd[1501]: time="2026-04-13T20:09:35.876656334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:35.877912 kubelet[2582]: E0413 20:09:35.876992 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.877912 kubelet[2582]: W0413 20:09:35.876998 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.877912 kubelet[2582]: E0413 20:09:35.877004 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.878486 kubelet[2582]: E0413 20:09:35.878470 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.878486 kubelet[2582]: W0413 20:09:35.878483 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.878554 kubelet[2582]: E0413 20:09:35.878492 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:35.878725 kubelet[2582]: E0413 20:09:35.878709 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.878725 kubelet[2582]: W0413 20:09:35.878721 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.878880 kubelet[2582]: E0413 20:09:35.878729 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.887836 kubelet[2582]: E0413 20:09:35.887813 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:35.887836 kubelet[2582]: W0413 20:09:35.887830 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:35.887836 kubelet[2582]: E0413 20:09:35.887839 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:35.901494 systemd[1]: Started cri-containerd-7c57d0a0da10b6daab3b0f5d410cb7fcd69ed4e45d874f1ed0505dab8796faa0.scope - libcontainer container 7c57d0a0da10b6daab3b0f5d410cb7fcd69ed4e45d874f1ed0505dab8796faa0. 
Apr 13 20:09:35.903573 containerd[1501]: time="2026-04-13T20:09:35.903044496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6db99d56dc-dpr84,Uid:feaa34bd-87c3-492a-8913-be97ffc362e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"f7395c94237185afccb128ff5693157da1c4d5a9af9a2d44afd995cdce71a720\"" Apr 13 20:09:35.905729 containerd[1501]: time="2026-04-13T20:09:35.905716418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 13 20:09:35.924755 containerd[1501]: time="2026-04-13T20:09:35.924713144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m8sjt,Uid:3571c896-a45b-40b1-883f-96dcf544c2c6,Namespace:calico-system,Attempt:0,} returns sandbox id \"7c57d0a0da10b6daab3b0f5d410cb7fcd69ed4e45d874f1ed0505dab8796faa0\"" Apr 13 20:09:37.633489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1225968929.mount: Deactivated successfully. Apr 13 20:09:37.961744 kubelet[2582]: E0413 20:09:37.961702 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bd8jl" podUID="63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e" Apr 13 20:09:37.997078 containerd[1501]: time="2026-04-13T20:09:37.996401657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:37.998103 containerd[1501]: time="2026-04-13T20:09:37.998072832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 13 20:09:37.999324 containerd[1501]: time="2026-04-13T20:09:37.999299779Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:38.001513 
containerd[1501]: time="2026-04-13T20:09:38.001489789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:38.001953 containerd[1501]: time="2026-04-13T20:09:38.001920685Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.096077456s" Apr 13 20:09:38.001993 containerd[1501]: time="2026-04-13T20:09:38.001953588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 13 20:09:38.002932 containerd[1501]: time="2026-04-13T20:09:38.002917606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 13 20:09:38.015740 containerd[1501]: time="2026-04-13T20:09:38.015706492Z" level=info msg="CreateContainer within sandbox \"f7395c94237185afccb128ff5693157da1c4d5a9af9a2d44afd995cdce71a720\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 13 20:09:38.030557 containerd[1501]: time="2026-04-13T20:09:38.030511931Z" level=info msg="CreateContainer within sandbox \"f7395c94237185afccb128ff5693157da1c4d5a9af9a2d44afd995cdce71a720\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ee77c0b9a80c70c00f33fff7e08dea304ff9932174ef1d15227dd6da014d40ba\"" Apr 13 20:09:38.031175 containerd[1501]: time="2026-04-13T20:09:38.031144704Z" level=info msg="StartContainer for \"ee77c0b9a80c70c00f33fff7e08dea304ff9932174ef1d15227dd6da014d40ba\"" Apr 13 20:09:38.060617 systemd[1]: Started 
cri-containerd-ee77c0b9a80c70c00f33fff7e08dea304ff9932174ef1d15227dd6da014d40ba.scope - libcontainer container ee77c0b9a80c70c00f33fff7e08dea304ff9932174ef1d15227dd6da014d40ba. Apr 13 20:09:38.094670 containerd[1501]: time="2026-04-13T20:09:38.094612342Z" level=info msg="StartContainer for \"ee77c0b9a80c70c00f33fff7e08dea304ff9932174ef1d15227dd6da014d40ba\" returns successfully" Apr 13 20:09:39.010126 systemd[1]: run-containerd-runc-k8s.io-ee77c0b9a80c70c00f33fff7e08dea304ff9932174ef1d15227dd6da014d40ba-runc.psqvZ1.mount: Deactivated successfully. Apr 13 20:09:39.070361 kubelet[2582]: E0413 20:09:39.070294 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.070361 kubelet[2582]: W0413 20:09:39.070323 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.070361 kubelet[2582]: E0413 20:09:39.070355 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.071559 kubelet[2582]: E0413 20:09:39.071486 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.071559 kubelet[2582]: W0413 20:09:39.071498 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.071559 kubelet[2582]: E0413 20:09:39.071508 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.082414 kubelet[2582]: E0413 20:09:39.082391 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.082464 kubelet[2582]: W0413 20:09:39.082415 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.082464 kubelet[2582]: E0413 20:09:39.082424 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.092572 kubelet[2582]: E0413 20:09:39.092399 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.092572 kubelet[2582]: W0413 20:09:39.092413 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.092572 kubelet[2582]: E0413 20:09:39.092424 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.092719 kubelet[2582]: E0413 20:09:39.092701 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.092719 kubelet[2582]: W0413 20:09:39.092714 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.092719 kubelet[2582]: E0413 20:09:39.092721 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.093542 kubelet[2582]: E0413 20:09:39.093524 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.093542 kubelet[2582]: W0413 20:09:39.093537 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.093603 kubelet[2582]: E0413 20:09:39.093545 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.093832 kubelet[2582]: E0413 20:09:39.093816 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.093832 kubelet[2582]: W0413 20:09:39.093828 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.093832 kubelet[2582]: E0413 20:09:39.093834 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.094629 kubelet[2582]: E0413 20:09:39.094612 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.094629 kubelet[2582]: W0413 20:09:39.094625 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.094673 kubelet[2582]: E0413 20:09:39.094632 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.094878 kubelet[2582]: E0413 20:09:39.094862 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.094878 kubelet[2582]: W0413 20:09:39.094874 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.094924 kubelet[2582]: E0413 20:09:39.094880 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.096512 kubelet[2582]: E0413 20:09:39.096492 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.096512 kubelet[2582]: W0413 20:09:39.096507 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.096512 kubelet[2582]: E0413 20:09:39.096514 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.098557 kubelet[2582]: E0413 20:09:39.098538 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.098557 kubelet[2582]: W0413 20:09:39.098551 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.098557 kubelet[2582]: E0413 20:09:39.098559 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.098811 kubelet[2582]: E0413 20:09:39.098795 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.098811 kubelet[2582]: W0413 20:09:39.098807 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.098852 kubelet[2582]: E0413 20:09:39.098813 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.099055 kubelet[2582]: E0413 20:09:39.099040 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.099055 kubelet[2582]: W0413 20:09:39.099052 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.099101 kubelet[2582]: E0413 20:09:39.099066 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.099390 kubelet[2582]: E0413 20:09:39.099330 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.099390 kubelet[2582]: W0413 20:09:39.099362 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.099390 kubelet[2582]: E0413 20:09:39.099368 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.099865 kubelet[2582]: E0413 20:09:39.099849 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.099865 kubelet[2582]: W0413 20:09:39.099861 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.099907 kubelet[2582]: E0413 20:09:39.099868 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.100127 kubelet[2582]: E0413 20:09:39.100112 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.100127 kubelet[2582]: W0413 20:09:39.100124 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.100168 kubelet[2582]: E0413 20:09:39.100130 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.100411 kubelet[2582]: E0413 20:09:39.100396 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.100411 kubelet[2582]: W0413 20:09:39.100407 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.100461 kubelet[2582]: E0413 20:09:39.100415 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.100773 kubelet[2582]: E0413 20:09:39.100758 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.100773 kubelet[2582]: W0413 20:09:39.100770 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.100814 kubelet[2582]: E0413 20:09:39.100776 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.101034 kubelet[2582]: E0413 20:09:39.101019 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.101034 kubelet[2582]: W0413 20:09:39.101030 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.101072 kubelet[2582]: E0413 20:09:39.101037 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.101771 kubelet[2582]: E0413 20:09:39.101751 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.101801 kubelet[2582]: W0413 20:09:39.101775 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.101801 kubelet[2582]: E0413 20:09:39.101783 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.103376 kubelet[2582]: E0413 20:09:39.102050 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.103376 kubelet[2582]: W0413 20:09:39.102059 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.103376 kubelet[2582]: E0413 20:09:39.102066 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.724957 containerd[1501]: time="2026-04-13T20:09:39.724898960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:39.726102 containerd[1501]: time="2026-04-13T20:09:39.726058549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 13 20:09:39.727300 containerd[1501]: time="2026-04-13T20:09:39.727247421Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:39.730136 containerd[1501]: time="2026-04-13T20:09:39.730096789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:39.730988 containerd[1501]: time="2026-04-13T20:09:39.730604868Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.727618227s" Apr 13 20:09:39.730988 containerd[1501]: time="2026-04-13T20:09:39.730631790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 13 20:09:39.734759 containerd[1501]: time="2026-04-13T20:09:39.734725804Z" level=info msg="CreateContainer within sandbox \"7c57d0a0da10b6daab3b0f5d410cb7fcd69ed4e45d874f1ed0505dab8796faa0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 13 20:09:39.749508 containerd[1501]: time="2026-04-13T20:09:39.749471035Z" level=info msg="CreateContainer within sandbox \"7c57d0a0da10b6daab3b0f5d410cb7fcd69ed4e45d874f1ed0505dab8796faa0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a4a0b95e1057e15db3fd84cd79b9b5812ed9fbc69a3625500d2d8f099b7c0bc1\"" Apr 13 20:09:39.750528 containerd[1501]: time="2026-04-13T20:09:39.750503014Z" level=info msg="StartContainer for \"a4a0b95e1057e15db3fd84cd79b9b5812ed9fbc69a3625500d2d8f099b7c0bc1\"" Apr 13 20:09:39.781460 systemd[1]: Started cri-containerd-a4a0b95e1057e15db3fd84cd79b9b5812ed9fbc69a3625500d2d8f099b7c0bc1.scope - libcontainer container a4a0b95e1057e15db3fd84cd79b9b5812ed9fbc69a3625500d2d8f099b7c0bc1. Apr 13 20:09:39.808003 containerd[1501]: time="2026-04-13T20:09:39.807965931Z" level=info msg="StartContainer for \"a4a0b95e1057e15db3fd84cd79b9b5812ed9fbc69a3625500d2d8f099b7c0bc1\" returns successfully" Apr 13 20:09:39.817164 systemd[1]: cri-containerd-a4a0b95e1057e15db3fd84cd79b9b5812ed9fbc69a3625500d2d8f099b7c0bc1.scope: Deactivated successfully. 
Apr 13 20:09:39.836419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4a0b95e1057e15db3fd84cd79b9b5812ed9fbc69a3625500d2d8f099b7c0bc1-rootfs.mount: Deactivated successfully. Apr 13 20:09:39.962917 kubelet[2582]: E0413 20:09:39.962759 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bd8jl" podUID="63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e" Apr 13 20:09:39.973012 containerd[1501]: time="2026-04-13T20:09:39.972940403Z" level=info msg="shim disconnected" id=a4a0b95e1057e15db3fd84cd79b9b5812ed9fbc69a3625500d2d8f099b7c0bc1 namespace=k8s.io Apr 13 20:09:39.973151 containerd[1501]: time="2026-04-13T20:09:39.973019450Z" level=warning msg="cleaning up after shim disconnected" id=a4a0b95e1057e15db3fd84cd79b9b5812ed9fbc69a3625500d2d8f099b7c0bc1 namespace=k8s.io Apr 13 20:09:39.973151 containerd[1501]: time="2026-04-13T20:09:39.973036681Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:09:40.036113 kubelet[2582]: I0413 20:09:40.035537 2582 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:09:40.038289 containerd[1501]: time="2026-04-13T20:09:40.037997059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 13 20:09:40.051626 kubelet[2582]: I0413 20:09:40.051567 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6db99d56dc-dpr84" podStartSLOduration=2.953916082 podStartE2EDuration="5.051555754s" podCreationTimestamp="2026-04-13 20:09:35 +0000 UTC" firstStartedPulling="2026-04-13 20:09:35.904959468 +0000 UTC m=+16.031844497" lastFinishedPulling="2026-04-13 20:09:38.00259915 +0000 UTC m=+18.129484169" observedRunningTime="2026-04-13 20:09:39.064617451 +0000 UTC m=+19.191502500" watchObservedRunningTime="2026-04-13 
20:09:40.051555754 +0000 UTC m=+20.178440773" Apr 13 20:09:41.962373 kubelet[2582]: E0413 20:09:41.962296 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bd8jl" podUID="63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e" Apr 13 20:09:43.695398 kubelet[2582]: I0413 20:09:43.694910 2582 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:09:43.963788 kubelet[2582]: E0413 20:09:43.963076 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bd8jl" podUID="63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e" Apr 13 20:09:45.961794 kubelet[2582]: E0413 20:09:45.961762 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bd8jl" podUID="63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e" Apr 13 20:09:46.684312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount691890968.mount: Deactivated successfully. 
Apr 13 20:09:46.709247 containerd[1501]: time="2026-04-13T20:09:46.709203451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:46.710234 containerd[1501]: time="2026-04-13T20:09:46.710162828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 13 20:09:46.711508 containerd[1501]: time="2026-04-13T20:09:46.711087593Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:46.712796 containerd[1501]: time="2026-04-13T20:09:46.712765476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:46.713172 containerd[1501]: time="2026-04-13T20:09:46.713149124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 6.675117213s" Apr 13 20:09:46.713199 containerd[1501]: time="2026-04-13T20:09:46.713173826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 13 20:09:46.716757 containerd[1501]: time="2026-04-13T20:09:46.716682708Z" level=info msg="CreateContainer within sandbox \"7c57d0a0da10b6daab3b0f5d410cb7fcd69ed4e45d874f1ed0505dab8796faa0\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 13 20:09:46.729757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3234500571.mount: 
Deactivated successfully. Apr 13 20:09:46.732581 containerd[1501]: time="2026-04-13T20:09:46.732556138Z" level=info msg="CreateContainer within sandbox \"7c57d0a0da10b6daab3b0f5d410cb7fcd69ed4e45d874f1ed0505dab8796faa0\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"e6979d6ed6f4fe6437425468db887565303da640488d8766e7e12c879d575359\"" Apr 13 20:09:46.733840 containerd[1501]: time="2026-04-13T20:09:46.733363718Z" level=info msg="StartContainer for \"e6979d6ed6f4fe6437425468db887565303da640488d8766e7e12c879d575359\"" Apr 13 20:09:46.762459 systemd[1]: Started cri-containerd-e6979d6ed6f4fe6437425468db887565303da640488d8766e7e12c879d575359.scope - libcontainer container e6979d6ed6f4fe6437425468db887565303da640488d8766e7e12c879d575359. Apr 13 20:09:46.790654 containerd[1501]: time="2026-04-13T20:09:46.790617760Z" level=info msg="StartContainer for \"e6979d6ed6f4fe6437425468db887565303da640488d8766e7e12c879d575359\" returns successfully" Apr 13 20:09:46.818011 systemd[1]: cri-containerd-e6979d6ed6f4fe6437425468db887565303da640488d8766e7e12c879d575359.scope: Deactivated successfully. 
Apr 13 20:09:46.920611 containerd[1501]: time="2026-04-13T20:09:46.920495380Z" level=info msg="shim disconnected" id=e6979d6ed6f4fe6437425468db887565303da640488d8766e7e12c879d575359 namespace=k8s.io Apr 13 20:09:46.920611 containerd[1501]: time="2026-04-13T20:09:46.920590924Z" level=warning msg="cleaning up after shim disconnected" id=e6979d6ed6f4fe6437425468db887565303da640488d8766e7e12c879d575359 namespace=k8s.io Apr 13 20:09:46.920611 containerd[1501]: time="2026-04-13T20:09:46.920599155Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:09:47.053443 containerd[1501]: time="2026-04-13T20:09:47.052881266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 13 20:09:47.687831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6979d6ed6f4fe6437425468db887565303da640488d8766e7e12c879d575359-rootfs.mount: Deactivated successfully. Apr 13 20:09:47.961868 kubelet[2582]: E0413 20:09:47.960728 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bd8jl" podUID="63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e" Apr 13 20:09:49.961745 kubelet[2582]: E0413 20:09:49.961050 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bd8jl" podUID="63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e" Apr 13 20:09:50.852328 containerd[1501]: time="2026-04-13T20:09:50.852269281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:50.853324 containerd[1501]: time="2026-04-13T20:09:50.853289530Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 13 20:09:50.854179 containerd[1501]: time="2026-04-13T20:09:50.854146683Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:50.855735 containerd[1501]: time="2026-04-13T20:09:50.855707822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:50.856513 containerd[1501]: time="2026-04-13T20:09:50.856149188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.803216789s" Apr 13 20:09:50.856513 containerd[1501]: time="2026-04-13T20:09:50.856173189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 13 20:09:50.858788 containerd[1501]: time="2026-04-13T20:09:50.858765319Z" level=info msg="CreateContainer within sandbox \"7c57d0a0da10b6daab3b0f5d410cb7fcd69ed4e45d874f1ed0505dab8796faa0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 13 20:09:50.873293 containerd[1501]: time="2026-04-13T20:09:50.873261082Z" level=info msg="CreateContainer within sandbox \"7c57d0a0da10b6daab3b0f5d410cb7fcd69ed4e45d874f1ed0505dab8796faa0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1bf8624239219f28be278adbed5cc6c448ca04b587e188c0a13404ec3827bca0\"" Apr 13 20:09:50.873889 containerd[1501]: time="2026-04-13T20:09:50.873871744Z" level=info msg="StartContainer 
for \"1bf8624239219f28be278adbed5cc6c448ca04b587e188c0a13404ec3827bca0\"" Apr 13 20:09:50.897445 systemd[1]: Started cri-containerd-1bf8624239219f28be278adbed5cc6c448ca04b587e188c0a13404ec3827bca0.scope - libcontainer container 1bf8624239219f28be278adbed5cc6c448ca04b587e188c0a13404ec3827bca0. Apr 13 20:09:50.931737 containerd[1501]: time="2026-04-13T20:09:50.931707640Z" level=info msg="StartContainer for \"1bf8624239219f28be278adbed5cc6c448ca04b587e188c0a13404ec3827bca0\" returns successfully" Apr 13 20:09:51.381473 systemd[1]: cri-containerd-1bf8624239219f28be278adbed5cc6c448ca04b587e188c0a13404ec3827bca0.scope: Deactivated successfully. Apr 13 20:09:51.397120 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bf8624239219f28be278adbed5cc6c448ca04b587e188c0a13404ec3827bca0-rootfs.mount: Deactivated successfully. Apr 13 20:09:51.403344 containerd[1501]: time="2026-04-13T20:09:51.402098821Z" level=info msg="shim disconnected" id=1bf8624239219f28be278adbed5cc6c448ca04b587e188c0a13404ec3827bca0 namespace=k8s.io Apr 13 20:09:51.403344 containerd[1501]: time="2026-04-13T20:09:51.402152063Z" level=warning msg="cleaning up after shim disconnected" id=1bf8624239219f28be278adbed5cc6c448ca04b587e188c0a13404ec3827bca0 namespace=k8s.io Apr 13 20:09:51.403344 containerd[1501]: time="2026-04-13T20:09:51.402159373Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:09:51.430397 kubelet[2582]: I0413 20:09:51.430381 2582 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 13 20:09:51.470378 systemd[1]: Created slice kubepods-burstable-podb1dedca3_06af_44ff_b14b_b383f1cac2f6.slice - libcontainer container kubepods-burstable-podb1dedca3_06af_44ff_b14b_b383f1cac2f6.slice. Apr 13 20:09:51.479416 systemd[1]: Created slice kubepods-besteffort-pod1febeed9_7aaa_4a97_a2b4_1f1caf66c1e4.slice - libcontainer container kubepods-besteffort-pod1febeed9_7aaa_4a97_a2b4_1f1caf66c1e4.slice. 
Apr 13 20:09:51.486378 kubelet[2582]: I0413 20:09:51.485924 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0454e9bc-ec53-4cc9-a0f4-2ba8ec7662fb-config-volume\") pod \"coredns-66bc5c9577-4df44\" (UID: \"0454e9bc-ec53-4cc9-a0f4-2ba8ec7662fb\") " pod="kube-system/coredns-66bc5c9577-4df44" Apr 13 20:09:51.486378 kubelet[2582]: I0413 20:09:51.485945 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tknb\" (UniqueName: \"kubernetes.io/projected/0454e9bc-ec53-4cc9-a0f4-2ba8ec7662fb-kube-api-access-5tknb\") pod \"coredns-66bc5c9577-4df44\" (UID: \"0454e9bc-ec53-4cc9-a0f4-2ba8ec7662fb\") " pod="kube-system/coredns-66bc5c9577-4df44" Apr 13 20:09:51.486378 kubelet[2582]: I0413 20:09:51.485960 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22znj\" (UniqueName: \"kubernetes.io/projected/b1dedca3-06af-44ff-b14b-b383f1cac2f6-kube-api-access-22znj\") pod \"coredns-66bc5c9577-tddkd\" (UID: \"b1dedca3-06af-44ff-b14b-b383f1cac2f6\") " pod="kube-system/coredns-66bc5c9577-tddkd" Apr 13 20:09:51.486378 kubelet[2582]: I0413 20:09:51.485970 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef5a4fd8-83c5-4b36-9eb8-ac26cc2345f3-tigera-ca-bundle\") pod \"calico-kube-controllers-ffb7f679-bnvkh\" (UID: \"ef5a4fd8-83c5-4b36-9eb8-ac26cc2345f3\") " pod="calico-system/calico-kube-controllers-ffb7f679-bnvkh" Apr 13 20:09:51.486378 kubelet[2582]: I0413 20:09:51.485980 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtprk\" (UniqueName: \"kubernetes.io/projected/ef5a4fd8-83c5-4b36-9eb8-ac26cc2345f3-kube-api-access-vtprk\") pod \"calico-kube-controllers-ffb7f679-bnvkh\" (UID: 
\"ef5a4fd8-83c5-4b36-9eb8-ac26cc2345f3\") " pod="calico-system/calico-kube-controllers-ffb7f679-bnvkh" Apr 13 20:09:51.486534 kubelet[2582]: I0413 20:09:51.485990 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1febeed9-7aaa-4a97-a2b4-1f1caf66c1e4-calico-apiserver-certs\") pod \"calico-apiserver-5d559f55b6-fdgdr\" (UID: \"1febeed9-7aaa-4a97-a2b4-1f1caf66c1e4\") " pod="calico-system/calico-apiserver-5d559f55b6-fdgdr" Apr 13 20:09:51.486534 kubelet[2582]: I0413 20:09:51.486001 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4b45\" (UniqueName: \"kubernetes.io/projected/1febeed9-7aaa-4a97-a2b4-1f1caf66c1e4-kube-api-access-l4b45\") pod \"calico-apiserver-5d559f55b6-fdgdr\" (UID: \"1febeed9-7aaa-4a97-a2b4-1f1caf66c1e4\") " pod="calico-system/calico-apiserver-5d559f55b6-fdgdr" Apr 13 20:09:51.486534 kubelet[2582]: I0413 20:09:51.486016 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1dedca3-06af-44ff-b14b-b383f1cac2f6-config-volume\") pod \"coredns-66bc5c9577-tddkd\" (UID: \"b1dedca3-06af-44ff-b14b-b383f1cac2f6\") " pod="kube-system/coredns-66bc5c9577-tddkd" Apr 13 20:09:51.486431 systemd[1]: Created slice kubepods-burstable-pod0454e9bc_ec53_4cc9_a0f4_2ba8ec7662fb.slice - libcontainer container kubepods-burstable-pod0454e9bc_ec53_4cc9_a0f4_2ba8ec7662fb.slice. Apr 13 20:09:51.494715 systemd[1]: Created slice kubepods-besteffort-podef5a4fd8_83c5_4b36_9eb8_ac26cc2345f3.slice - libcontainer container kubepods-besteffort-podef5a4fd8_83c5_4b36_9eb8_ac26cc2345f3.slice. Apr 13 20:09:51.501134 systemd[1]: Created slice kubepods-besteffort-pod81e9c78f_9c13_4884_ab29_0daba08c8e1a.slice - libcontainer container kubepods-besteffort-pod81e9c78f_9c13_4884_ab29_0daba08c8e1a.slice. 
Apr 13 20:09:51.507813 systemd[1]: Created slice kubepods-besteffort-pod9b1ddd38_936b_4249_ae6d_50277142aab0.slice - libcontainer container kubepods-besteffort-pod9b1ddd38_936b_4249_ae6d_50277142aab0.slice. Apr 13 20:09:51.512399 systemd[1]: Created slice kubepods-besteffort-pod95e2e62a_c377_4112_9481_1c4f900ed72b.slice - libcontainer container kubepods-besteffort-pod95e2e62a_c377_4112_9481_1c4f900ed72b.slice. Apr 13 20:09:51.588898 kubelet[2582]: I0413 20:09:51.587512 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81e9c78f-9c13-4884-ab29-0daba08c8e1a-whisker-backend-key-pair\") pod \"whisker-5c8c5b9bcf-vb6pl\" (UID: \"81e9c78f-9c13-4884-ab29-0daba08c8e1a\") " pod="calico-system/whisker-5c8c5b9bcf-vb6pl" Apr 13 20:09:51.588898 kubelet[2582]: I0413 20:09:51.587589 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81e9c78f-9c13-4884-ab29-0daba08c8e1a-whisker-ca-bundle\") pod \"whisker-5c8c5b9bcf-vb6pl\" (UID: \"81e9c78f-9c13-4884-ab29-0daba08c8e1a\") " pod="calico-system/whisker-5c8c5b9bcf-vb6pl" Apr 13 20:09:51.588898 kubelet[2582]: I0413 20:09:51.587636 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b1ddd38-936b-4249-ae6d-50277142aab0-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-ngkh6\" (UID: \"9b1ddd38-936b-4249-ae6d-50277142aab0\") " pod="calico-system/goldmane-cccfbd5cf-ngkh6" Apr 13 20:09:51.588898 kubelet[2582]: I0413 20:09:51.587656 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqgb2\" (UniqueName: \"kubernetes.io/projected/9b1ddd38-936b-4249-ae6d-50277142aab0-kube-api-access-pqgb2\") pod \"goldmane-cccfbd5cf-ngkh6\" (UID: 
\"9b1ddd38-936b-4249-ae6d-50277142aab0\") " pod="calico-system/goldmane-cccfbd5cf-ngkh6" Apr 13 20:09:51.588898 kubelet[2582]: I0413 20:09:51.587769 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b1ddd38-936b-4249-ae6d-50277142aab0-config\") pod \"goldmane-cccfbd5cf-ngkh6\" (UID: \"9b1ddd38-936b-4249-ae6d-50277142aab0\") " pod="calico-system/goldmane-cccfbd5cf-ngkh6" Apr 13 20:09:51.589186 kubelet[2582]: I0413 20:09:51.587817 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74sv4\" (UniqueName: \"kubernetes.io/projected/81e9c78f-9c13-4884-ab29-0daba08c8e1a-kube-api-access-74sv4\") pod \"whisker-5c8c5b9bcf-vb6pl\" (UID: \"81e9c78f-9c13-4884-ab29-0daba08c8e1a\") " pod="calico-system/whisker-5c8c5b9bcf-vb6pl" Apr 13 20:09:51.589186 kubelet[2582]: I0413 20:09:51.587837 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/95e2e62a-c377-4112-9481-1c4f900ed72b-calico-apiserver-certs\") pod \"calico-apiserver-5d559f55b6-jwmwb\" (UID: \"95e2e62a-c377-4112-9481-1c4f900ed72b\") " pod="calico-system/calico-apiserver-5d559f55b6-jwmwb" Apr 13 20:09:51.589186 kubelet[2582]: I0413 20:09:51.587855 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnfds\" (UniqueName: \"kubernetes.io/projected/95e2e62a-c377-4112-9481-1c4f900ed72b-kube-api-access-xnfds\") pod \"calico-apiserver-5d559f55b6-jwmwb\" (UID: \"95e2e62a-c377-4112-9481-1c4f900ed72b\") " pod="calico-system/calico-apiserver-5d559f55b6-jwmwb" Apr 13 20:09:51.589186 kubelet[2582]: I0413 20:09:51.587877 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: 
\"kubernetes.io/configmap/81e9c78f-9c13-4884-ab29-0daba08c8e1a-nginx-config\") pod \"whisker-5c8c5b9bcf-vb6pl\" (UID: \"81e9c78f-9c13-4884-ab29-0daba08c8e1a\") " pod="calico-system/whisker-5c8c5b9bcf-vb6pl" Apr 13 20:09:51.589186 kubelet[2582]: I0413 20:09:51.587923 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9b1ddd38-936b-4249-ae6d-50277142aab0-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-ngkh6\" (UID: \"9b1ddd38-936b-4249-ae6d-50277142aab0\") " pod="calico-system/goldmane-cccfbd5cf-ngkh6" Apr 13 20:09:51.782641 containerd[1501]: time="2026-04-13T20:09:51.782548012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tddkd,Uid:b1dedca3-06af-44ff-b14b-b383f1cac2f6,Namespace:kube-system,Attempt:0,}" Apr 13 20:09:51.786416 containerd[1501]: time="2026-04-13T20:09:51.786313877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d559f55b6-fdgdr,Uid:1febeed9-7aaa-4a97-a2b4-1f1caf66c1e4,Namespace:calico-system,Attempt:0,}" Apr 13 20:09:51.793658 containerd[1501]: time="2026-04-13T20:09:51.793614658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4df44,Uid:0454e9bc-ec53-4cc9-a0f4-2ba8ec7662fb,Namespace:kube-system,Attempt:0,}" Apr 13 20:09:51.800617 containerd[1501]: time="2026-04-13T20:09:51.800163043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ffb7f679-bnvkh,Uid:ef5a4fd8-83c5-4b36-9eb8-ac26cc2345f3,Namespace:calico-system,Attempt:0,}" Apr 13 20:09:51.814944 containerd[1501]: time="2026-04-13T20:09:51.814849749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-ngkh6,Uid:9b1ddd38-936b-4249-ae6d-50277142aab0,Namespace:calico-system,Attempt:0,}" Apr 13 20:09:51.815908 containerd[1501]: time="2026-04-13T20:09:51.815692380Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5c8c5b9bcf-vb6pl,Uid:81e9c78f-9c13-4884-ab29-0daba08c8e1a,Namespace:calico-system,Attempt:0,}" Apr 13 20:09:51.816149 containerd[1501]: time="2026-04-13T20:09:51.816100894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d559f55b6-jwmwb,Uid:95e2e62a-c377-4112-9481-1c4f900ed72b,Namespace:calico-system,Attempt:0,}" Apr 13 20:09:51.969242 systemd[1]: Created slice kubepods-besteffort-pod63fb5bd2_87bc_48b2_990d_3ba3eaa6c20e.slice - libcontainer container kubepods-besteffort-pod63fb5bd2_87bc_48b2_990d_3ba3eaa6c20e.slice. Apr 13 20:09:51.972965 containerd[1501]: time="2026-04-13T20:09:51.972670630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bd8jl,Uid:63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e,Namespace:calico-system,Attempt:0,}" Apr 13 20:09:52.014173 containerd[1501]: time="2026-04-13T20:09:52.014136374Z" level=error msg="Failed to destroy network for sandbox \"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.014642 containerd[1501]: time="2026-04-13T20:09:52.014616800Z" level=error msg="encountered an error cleaning up failed sandbox \"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.014741 containerd[1501]: time="2026-04-13T20:09:52.014724584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c8c5b9bcf-vb6pl,Uid:81e9c78f-9c13-4884-ab29-0daba08c8e1a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.015031 kubelet[2582]: E0413 20:09:52.015006 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.015125 kubelet[2582]: E0413 20:09:52.015113 2582 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c8c5b9bcf-vb6pl" Apr 13 20:09:52.015950 kubelet[2582]: E0413 20:09:52.015164 2582 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c8c5b9bcf-vb6pl" Apr 13 20:09:52.015950 kubelet[2582]: E0413 20:09:52.015211 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c8c5b9bcf-vb6pl_calico-system(81e9c78f-9c13-4884-ab29-0daba08c8e1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-5c8c5b9bcf-vb6pl_calico-system(81e9c78f-9c13-4884-ab29-0daba08c8e1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c8c5b9bcf-vb6pl" podUID="81e9c78f-9c13-4884-ab29-0daba08c8e1a" Apr 13 20:09:52.035526 containerd[1501]: time="2026-04-13T20:09:52.035434050Z" level=error msg="Failed to destroy network for sandbox \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.036420 containerd[1501]: time="2026-04-13T20:09:52.036398483Z" level=error msg="encountered an error cleaning up failed sandbox \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.036598 containerd[1501]: time="2026-04-13T20:09:52.036580919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tddkd,Uid:b1dedca3-06af-44ff-b14b-b383f1cac2f6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.037389 kubelet[2582]: E0413 20:09:52.036830 2582 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.037546 kubelet[2582]: E0413 20:09:52.036875 2582 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tddkd" Apr 13 20:09:52.037546 kubelet[2582]: E0413 20:09:52.037466 2582 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tddkd" Apr 13 20:09:52.037546 kubelet[2582]: E0413 20:09:52.037517 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-tddkd_kube-system(b1dedca3-06af-44ff-b14b-b383f1cac2f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-tddkd_kube-system(b1dedca3-06af-44ff-b14b-b383f1cac2f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-tddkd" podUID="b1dedca3-06af-44ff-b14b-b383f1cac2f6" Apr 13 20:09:52.058263 containerd[1501]: time="2026-04-13T20:09:52.058220596Z" level=error msg="Failed to destroy network for sandbox \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.058637 containerd[1501]: time="2026-04-13T20:09:52.058587778Z" level=error msg="encountered an error cleaning up failed sandbox \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.058671 containerd[1501]: time="2026-04-13T20:09:52.058655250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d559f55b6-fdgdr,Uid:1febeed9-7aaa-4a97-a2b4-1f1caf66c1e4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.059954 kubelet[2582]: E0413 20:09:52.058874 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.059954 kubelet[2582]: E0413 20:09:52.058930 
2582 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5d559f55b6-fdgdr" Apr 13 20:09:52.059954 kubelet[2582]: E0413 20:09:52.058949 2582 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5d559f55b6-fdgdr" Apr 13 20:09:52.060051 kubelet[2582]: E0413 20:09:52.058991 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d559f55b6-fdgdr_calico-system(1febeed9-7aaa-4a97-a2b4-1f1caf66c1e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d559f55b6-fdgdr_calico-system(1febeed9-7aaa-4a97-a2b4-1f1caf66c1e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5d559f55b6-fdgdr" podUID="1febeed9-7aaa-4a97-a2b4-1f1caf66c1e4" Apr 13 20:09:52.065891 containerd[1501]: time="2026-04-13T20:09:52.065865753Z" level=error msg="Failed to destroy network for sandbox \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.066266 containerd[1501]: time="2026-04-13T20:09:52.066246915Z" level=error msg="encountered an error cleaning up failed sandbox \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.066378 containerd[1501]: time="2026-04-13T20:09:52.066326779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4df44,Uid:0454e9bc-ec53-4cc9-a0f4-2ba8ec7662fb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.068024 kubelet[2582]: E0413 20:09:52.066571 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.068024 kubelet[2582]: E0413 20:09:52.067390 2582 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4df44" Apr 13 20:09:52.068024 kubelet[2582]: E0413 20:09:52.067415 2582 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4df44" Apr 13 20:09:52.068117 kubelet[2582]: E0413 20:09:52.067557 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-4df44_kube-system(0454e9bc-ec53-4cc9-a0f4-2ba8ec7662fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-4df44_kube-system(0454e9bc-ec53-4cc9-a0f4-2ba8ec7662fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4df44" podUID="0454e9bc-ec53-4cc9-a0f4-2ba8ec7662fb" Apr 13 20:09:52.083053 kubelet[2582]: I0413 20:09:52.083008 2582 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Apr 13 20:09:52.084761 containerd[1501]: time="2026-04-13T20:09:52.084643504Z" level=error msg="Failed to destroy network for sandbox \"439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 
20:09:52.085150 containerd[1501]: time="2026-04-13T20:09:52.085130310Z" level=error msg="encountered an error cleaning up failed sandbox \"439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.085288 containerd[1501]: time="2026-04-13T20:09:52.085201933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d559f55b6-jwmwb,Uid:95e2e62a-c377-4112-9481-1c4f900ed72b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.085436 containerd[1501]: time="2026-04-13T20:09:52.085392729Z" level=error msg="Failed to destroy network for sandbox \"2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.085768 containerd[1501]: time="2026-04-13T20:09:52.085659589Z" level=error msg="encountered an error cleaning up failed sandbox \"2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.085768 containerd[1501]: time="2026-04-13T20:09:52.085718931Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-cccfbd5cf-ngkh6,Uid:9b1ddd38-936b-4249-ae6d-50277142aab0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.086019 kubelet[2582]: E0413 20:09:52.085936 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.086019 kubelet[2582]: E0413 20:09:52.085964 2582 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5d559f55b6-jwmwb" Apr 13 20:09:52.086019 kubelet[2582]: E0413 20:09:52.085978 2582 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5d559f55b6-jwmwb" Apr 13 20:09:52.086156 containerd[1501]: time="2026-04-13T20:09:52.086137764Z" level=info msg="StopPodSandbox for 
\"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\"" Apr 13 20:09:52.086232 kubelet[2582]: E0413 20:09:52.086190 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d559f55b6-jwmwb_calico-system(95e2e62a-c377-4112-9481-1c4f900ed72b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d559f55b6-jwmwb_calico-system(95e2e62a-c377-4112-9481-1c4f900ed72b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5d559f55b6-jwmwb" podUID="95e2e62a-c377-4112-9481-1c4f900ed72b" Apr 13 20:09:52.086876 kubelet[2582]: E0413 20:09:52.086782 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.086876 kubelet[2582]: E0413 20:09:52.086803 2582 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-ngkh6" Apr 13 20:09:52.086876 kubelet[2582]: E0413 20:09:52.086861 2582 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-ngkh6" Apr 13 20:09:52.086964 kubelet[2582]: E0413 20:09:52.086894 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-ngkh6_calico-system(9b1ddd38-936b-4249-ae6d-50277142aab0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-ngkh6_calico-system(9b1ddd38-936b-4249-ae6d-50277142aab0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-ngkh6" podUID="9b1ddd38-936b-4249-ae6d-50277142aab0" Apr 13 20:09:52.088786 containerd[1501]: time="2026-04-13T20:09:52.088766423Z" level=info msg="Ensure that sandbox aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f in task-service has been cleanup successfully" Apr 13 20:09:52.091516 kubelet[2582]: I0413 20:09:52.091482 2582 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Apr 13 20:09:52.094381 containerd[1501]: time="2026-04-13T20:09:52.094049941Z" level=info msg="StopPodSandbox for \"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\"" Apr 13 20:09:52.094381 containerd[1501]: time="2026-04-13T20:09:52.094203366Z" level=info msg="Ensure that sandbox ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e in task-service has been cleanup successfully" Apr 13 
20:09:52.098002 containerd[1501]: time="2026-04-13T20:09:52.097978763Z" level=info msg="CreateContainer within sandbox \"7c57d0a0da10b6daab3b0f5d410cb7fcd69ed4e45d874f1ed0505dab8796faa0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 13 20:09:52.099151 containerd[1501]: time="2026-04-13T20:09:52.099126261Z" level=error msg="Failed to destroy network for sandbox \"ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.101366 kubelet[2582]: I0413 20:09:52.100751 2582 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Apr 13 20:09:52.102222 containerd[1501]: time="2026-04-13T20:09:52.102206954Z" level=info msg="StopPodSandbox for \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\"" Apr 13 20:09:52.102314 containerd[1501]: time="2026-04-13T20:09:52.102290718Z" level=error msg="encountered an error cleaning up failed sandbox \"ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.102421 containerd[1501]: time="2026-04-13T20:09:52.102327509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ffb7f679-bnvkh,Uid:ef5a4fd8-83c5-4b36-9eb8-ac26cc2345f3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Apr 13 20:09:52.102909 kubelet[2582]: E0413 20:09:52.102667 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.102909 kubelet[2582]: E0413 20:09:52.102717 2582 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ffb7f679-bnvkh" Apr 13 20:09:52.102909 kubelet[2582]: E0413 20:09:52.102732 2582 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ffb7f679-bnvkh" Apr 13 20:09:52.103086 kubelet[2582]: E0413 20:09:52.102876 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ffb7f679-bnvkh_calico-system(ef5a4fd8-83c5-4b36-9eb8-ac26cc2345f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-ffb7f679-bnvkh_calico-system(ef5a4fd8-83c5-4b36-9eb8-ac26cc2345f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ffb7f679-bnvkh" podUID="ef5a4fd8-83c5-4b36-9eb8-ac26cc2345f3" Apr 13 20:09:52.103230 containerd[1501]: time="2026-04-13T20:09:52.102802325Z" level=info msg="Ensure that sandbox 77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e in task-service has been cleanup successfully" Apr 13 20:09:52.110269 kubelet[2582]: I0413 20:09:52.109643 2582 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Apr 13 20:09:52.110841 containerd[1501]: time="2026-04-13T20:09:52.110820185Z" level=info msg="StopPodSandbox for \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\"" Apr 13 20:09:52.110978 containerd[1501]: time="2026-04-13T20:09:52.110955379Z" level=info msg="Ensure that sandbox 8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed in task-service has been cleanup successfully" Apr 13 20:09:52.130201 containerd[1501]: time="2026-04-13T20:09:52.130162685Z" level=info msg="CreateContainer within sandbox \"7c57d0a0da10b6daab3b0f5d410cb7fcd69ed4e45d874f1ed0505dab8796faa0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"99c636de26e6529ec002b0859d5b1670328491a21a300730f1539b5dc2224bfd\"" Apr 13 20:09:52.131557 containerd[1501]: time="2026-04-13T20:09:52.130767265Z" level=info msg="StartContainer for \"99c636de26e6529ec002b0859d5b1670328491a21a300730f1539b5dc2224bfd\"" Apr 13 20:09:52.139287 containerd[1501]: time="2026-04-13T20:09:52.139243429Z" level=error msg="StopPodSandbox for \"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\" failed" error="failed to destroy network for sandbox 
\"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.139621 kubelet[2582]: E0413 20:09:52.139423 2582 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Apr 13 20:09:52.139621 kubelet[2582]: E0413 20:09:52.139459 2582 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e"} Apr 13 20:09:52.139621 kubelet[2582]: E0413 20:09:52.139505 2582 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"81e9c78f-9c13-4884-ab29-0daba08c8e1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:09:52.139621 kubelet[2582]: E0413 20:09:52.139533 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"81e9c78f-9c13-4884-ab29-0daba08c8e1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c8c5b9bcf-vb6pl" podUID="81e9c78f-9c13-4884-ab29-0daba08c8e1a" Apr 13 20:09:52.168623 containerd[1501]: time="2026-04-13T20:09:52.168283216Z" level=error msg="StopPodSandbox for \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\" failed" error="failed to destroy network for sandbox \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.168746 kubelet[2582]: E0413 20:09:52.168512 2582 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Apr 13 20:09:52.168746 kubelet[2582]: E0413 20:09:52.168546 2582 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f"} Apr 13 20:09:52.168746 kubelet[2582]: E0413 20:09:52.168571 2582 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0454e9bc-ec53-4cc9-a0f4-2ba8ec7662fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" Apr 13 20:09:52.168746 kubelet[2582]: E0413 20:09:52.168593 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0454e9bc-ec53-4cc9-a0f4-2ba8ec7662fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4df44" podUID="0454e9bc-ec53-4cc9-a0f4-2ba8ec7662fb" Apr 13 20:09:52.175254 containerd[1501]: time="2026-04-13T20:09:52.174980551Z" level=error msg="StopPodSandbox for \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\" failed" error="failed to destroy network for sandbox \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.177915 kubelet[2582]: E0413 20:09:52.177688 2582 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Apr 13 20:09:52.178851 kubelet[2582]: E0413 20:09:52.178019 2582 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed"} Apr 13 20:09:52.178851 kubelet[2582]: E0413 20:09:52.178051 2582 
kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b1dedca3-06af-44ff-b14b-b383f1cac2f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:09:52.178851 kubelet[2582]: E0413 20:09:52.178087 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b1dedca3-06af-44ff-b14b-b383f1cac2f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-tddkd" podUID="b1dedca3-06af-44ff-b14b-b383f1cac2f6" Apr 13 20:09:52.181207 containerd[1501]: time="2026-04-13T20:09:52.181178069Z" level=error msg="Failed to destroy network for sandbox \"b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.181725 containerd[1501]: time="2026-04-13T20:09:52.181612645Z" level=error msg="encountered an error cleaning up failed sandbox \"b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.181725 
containerd[1501]: time="2026-04-13T20:09:52.181654696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bd8jl,Uid:63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.181805 kubelet[2582]: E0413 20:09:52.181779 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.181829 kubelet[2582]: E0413 20:09:52.181814 2582 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bd8jl" Apr 13 20:09:52.181852 kubelet[2582]: E0413 20:09:52.181828 2582 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bd8jl" Apr 13 20:09:52.181897 kubelet[2582]: E0413 
20:09:52.181863 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bd8jl_calico-system(63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bd8jl_calico-system(63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bd8jl" podUID="63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e" Apr 13 20:09:52.185216 systemd[1]: Started cri-containerd-99c636de26e6529ec002b0859d5b1670328491a21a300730f1539b5dc2224bfd.scope - libcontainer container 99c636de26e6529ec002b0859d5b1670328491a21a300730f1539b5dc2224bfd. Apr 13 20:09:52.190791 containerd[1501]: time="2026-04-13T20:09:52.190769782Z" level=error msg="StopPodSandbox for \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\" failed" error="failed to destroy network for sandbox \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.191040 kubelet[2582]: E0413 20:09:52.191015 2582 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Apr 13 
20:09:52.191078 kubelet[2582]: E0413 20:09:52.191044 2582 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e"} Apr 13 20:09:52.191078 kubelet[2582]: E0413 20:09:52.191061 2582 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1febeed9-7aaa-4a97-a2b4-1f1caf66c1e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:09:52.191140 kubelet[2582]: E0413 20:09:52.191078 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1febeed9-7aaa-4a97-a2b4-1f1caf66c1e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5d559f55b6-fdgdr" podUID="1febeed9-7aaa-4a97-a2b4-1f1caf66c1e4" Apr 13 20:09:52.216699 containerd[1501]: time="2026-04-13T20:09:52.216596890Z" level=info msg="StartContainer for \"99c636de26e6529ec002b0859d5b1670328491a21a300730f1539b5dc2224bfd\" returns successfully" Apr 13 20:09:52.879188 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e-shm.mount: Deactivated successfully. 
Apr 13 20:09:52.879472 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599-shm.mount: Deactivated successfully. Apr 13 20:09:52.879642 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e-shm.mount: Deactivated successfully. Apr 13 20:09:52.879917 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575-shm.mount: Deactivated successfully. Apr 13 20:09:52.880106 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f-shm.mount: Deactivated successfully. Apr 13 20:09:52.880615 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e-shm.mount: Deactivated successfully. Apr 13 20:09:52.881034 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed-shm.mount: Deactivated successfully. 
Apr 13 20:09:53.114996 kubelet[2582]: I0413 20:09:53.113950 2582 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Apr 13 20:09:53.116647 containerd[1501]: time="2026-04-13T20:09:53.115259204Z" level=info msg="StopPodSandbox for \"b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3\"" Apr 13 20:09:53.116647 containerd[1501]: time="2026-04-13T20:09:53.115630457Z" level=info msg="Ensure that sandbox b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3 in task-service has been cleanup successfully" Apr 13 20:09:53.118424 containerd[1501]: time="2026-04-13T20:09:53.117734972Z" level=info msg="StopPodSandbox for \"ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575\"" Apr 13 20:09:53.118424 containerd[1501]: time="2026-04-13T20:09:53.117976111Z" level=info msg="Ensure that sandbox ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575 in task-service has been cleanup successfully" Apr 13 20:09:53.118554 kubelet[2582]: I0413 20:09:53.116739 2582 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Apr 13 20:09:53.128162 kubelet[2582]: I0413 20:09:53.128088 2582 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Apr 13 20:09:53.131858 containerd[1501]: time="2026-04-13T20:09:53.131656952Z" level=info msg="StopPodSandbox for \"2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599\"" Apr 13 20:09:53.131960 containerd[1501]: time="2026-04-13T20:09:53.131926721Z" level=info msg="Ensure that sandbox 2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599 in task-service has been cleanup successfully" Apr 13 20:09:53.149786 kubelet[2582]: I0413 20:09:53.148476 2582 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Apr 13 20:09:53.149978 containerd[1501]: time="2026-04-13T20:09:53.149789235Z" level=info msg="StopPodSandbox for \"439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e\"" Apr 13 20:09:53.150050 containerd[1501]: time="2026-04-13T20:09:53.149990521Z" level=info msg="Ensure that sandbox 439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e in task-service has been cleanup successfully" Apr 13 20:09:53.162154 containerd[1501]: time="2026-04-13T20:09:53.161787203Z" level=info msg="StopPodSandbox for \"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\"" Apr 13 20:09:53.166783 kubelet[2582]: I0413 20:09:53.166714 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m8sjt" podStartSLOduration=3.23559267 podStartE2EDuration="18.166700648s" podCreationTimestamp="2026-04-13 20:09:35 +0000 UTC" firstStartedPulling="2026-04-13 20:09:35.925688905 +0000 UTC m=+16.052573934" lastFinishedPulling="2026-04-13 20:09:50.856796883 +0000 UTC m=+30.983681912" observedRunningTime="2026-04-13 20:09:53.165593404 +0000 UTC m=+33.292478433" watchObservedRunningTime="2026-04-13 20:09:53.166700648 +0000 UTC m=+33.293585667" Apr 13 20:09:53.347903 containerd[1501]: 2026-04-13 20:09:53.269 [INFO][3798] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Apr 13 20:09:53.347903 containerd[1501]: 2026-04-13 20:09:53.269 [INFO][3798] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" iface="eth0" netns="/var/run/netns/cni-fbd13eec-4b55-c708-7eaa-4f33f74f8b8c" Apr 13 20:09:53.347903 containerd[1501]: 2026-04-13 20:09:53.270 [INFO][3798] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" iface="eth0" netns="/var/run/netns/cni-fbd13eec-4b55-c708-7eaa-4f33f74f8b8c" Apr 13 20:09:53.347903 containerd[1501]: 2026-04-13 20:09:53.271 [INFO][3798] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" iface="eth0" netns="/var/run/netns/cni-fbd13eec-4b55-c708-7eaa-4f33f74f8b8c" Apr 13 20:09:53.347903 containerd[1501]: 2026-04-13 20:09:53.271 [INFO][3798] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Apr 13 20:09:53.347903 containerd[1501]: 2026-04-13 20:09:53.271 [INFO][3798] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Apr 13 20:09:53.347903 containerd[1501]: 2026-04-13 20:09:53.314 [INFO][3889] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" HandleID="k8s-pod-network.ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0" Apr 13 20:09:53.347903 containerd[1501]: 2026-04-13 20:09:53.316 [INFO][3889] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:09:53.347903 containerd[1501]: 2026-04-13 20:09:53.316 [INFO][3889] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:09:53.347903 containerd[1501]: 2026-04-13 20:09:53.329 [WARNING][3889] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" HandleID="k8s-pod-network.ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0" Apr 13 20:09:53.347903 containerd[1501]: 2026-04-13 20:09:53.329 [INFO][3889] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" HandleID="k8s-pod-network.ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0" Apr 13 20:09:53.347903 containerd[1501]: 2026-04-13 20:09:53.332 [INFO][3889] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:09:53.347903 containerd[1501]: 2026-04-13 20:09:53.346 [INFO][3798] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Apr 13 20:09:53.348670 containerd[1501]: time="2026-04-13T20:09:53.348438776Z" level=info msg="TearDown network for sandbox \"ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575\" successfully" Apr 13 20:09:53.348670 containerd[1501]: time="2026-04-13T20:09:53.348463527Z" level=info msg="StopPodSandbox for \"ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575\" returns successfully" Apr 13 20:09:53.351012 systemd[1]: run-netns-cni\x2dfbd13eec\x2d4b55\x2dc708\x2d7eaa\x2d4f33f74f8b8c.mount: Deactivated successfully. 
Apr 13 20:09:53.352479 containerd[1501]: time="2026-04-13T20:09:53.352457683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ffb7f679-bnvkh,Uid:ef5a4fd8-83c5-4b36-9eb8-ac26cc2345f3,Namespace:calico-system,Attempt:1,}" Apr 13 20:09:53.361604 containerd[1501]: 2026-04-13 20:09:53.287 [INFO][3844] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Apr 13 20:09:53.361604 containerd[1501]: 2026-04-13 20:09:53.289 [INFO][3844] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" iface="eth0" netns="/var/run/netns/cni-4ba75794-17aa-4614-cac6-4b46558222ea" Apr 13 20:09:53.361604 containerd[1501]: 2026-04-13 20:09:53.289 [INFO][3844] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" iface="eth0" netns="/var/run/netns/cni-4ba75794-17aa-4614-cac6-4b46558222ea" Apr 13 20:09:53.361604 containerd[1501]: 2026-04-13 20:09:53.289 [INFO][3844] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" iface="eth0" netns="/var/run/netns/cni-4ba75794-17aa-4614-cac6-4b46558222ea" Apr 13 20:09:53.361604 containerd[1501]: 2026-04-13 20:09:53.289 [INFO][3844] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Apr 13 20:09:53.361604 containerd[1501]: 2026-04-13 20:09:53.289 [INFO][3844] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Apr 13 20:09:53.361604 containerd[1501]: 2026-04-13 20:09:53.332 [INFO][3896] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" HandleID="k8s-pod-network.439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0" Apr 13 20:09:53.361604 containerd[1501]: 2026-04-13 20:09:53.333 [INFO][3896] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:09:53.361604 containerd[1501]: 2026-04-13 20:09:53.333 [INFO][3896] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:09:53.361604 containerd[1501]: 2026-04-13 20:09:53.341 [WARNING][3896] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" HandleID="k8s-pod-network.439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0" Apr 13 20:09:53.361604 containerd[1501]: 2026-04-13 20:09:53.341 [INFO][3896] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" HandleID="k8s-pod-network.439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0" Apr 13 20:09:53.361604 containerd[1501]: 2026-04-13 20:09:53.346 [INFO][3896] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:09:53.361604 containerd[1501]: 2026-04-13 20:09:53.356 [INFO][3844] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Apr 13 20:09:53.363692 containerd[1501]: time="2026-04-13T20:09:53.363605564Z" level=info msg="TearDown network for sandbox \"439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e\" successfully" Apr 13 20:09:53.363692 containerd[1501]: time="2026-04-13T20:09:53.363640916Z" level=info msg="StopPodSandbox for \"439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e\" returns successfully" Apr 13 20:09:53.366772 systemd[1]: run-netns-cni\x2d4ba75794\x2d17aa\x2d4614\x2dcac6\x2d4b46558222ea.mount: Deactivated successfully. 
Apr 13 20:09:53.368724 containerd[1501]: time="2026-04-13T20:09:53.368689395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d559f55b6-jwmwb,Uid:95e2e62a-c377-4112-9481-1c4f900ed72b,Namespace:calico-system,Attempt:1,}" Apr 13 20:09:53.376417 containerd[1501]: 2026-04-13 20:09:53.262 [INFO][3843] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Apr 13 20:09:53.376417 containerd[1501]: 2026-04-13 20:09:53.263 [INFO][3843] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" iface="eth0" netns="/var/run/netns/cni-67c48f07-7f33-d763-b01d-3206af9141ca" Apr 13 20:09:53.376417 containerd[1501]: 2026-04-13 20:09:53.264 [INFO][3843] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" iface="eth0" netns="/var/run/netns/cni-67c48f07-7f33-d763-b01d-3206af9141ca" Apr 13 20:09:53.376417 containerd[1501]: 2026-04-13 20:09:53.264 [INFO][3843] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" iface="eth0" netns="/var/run/netns/cni-67c48f07-7f33-d763-b01d-3206af9141ca" Apr 13 20:09:53.376417 containerd[1501]: 2026-04-13 20:09:53.264 [INFO][3843] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Apr 13 20:09:53.376417 containerd[1501]: 2026-04-13 20:09:53.264 [INFO][3843] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Apr 13 20:09:53.376417 containerd[1501]: 2026-04-13 20:09:53.349 [INFO][3877] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" HandleID="k8s-pod-network.ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Workload="ci--4081--3--7--2--642afe6700-k8s-whisker--5c8c5b9bcf--vb6pl-eth0" Apr 13 20:09:53.376417 containerd[1501]: 2026-04-13 20:09:53.349 [INFO][3877] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:09:53.376417 containerd[1501]: 2026-04-13 20:09:53.350 [INFO][3877] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:09:53.376417 containerd[1501]: 2026-04-13 20:09:53.359 [WARNING][3877] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" HandleID="k8s-pod-network.ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Workload="ci--4081--3--7--2--642afe6700-k8s-whisker--5c8c5b9bcf--vb6pl-eth0" Apr 13 20:09:53.376417 containerd[1501]: 2026-04-13 20:09:53.359 [INFO][3877] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" HandleID="k8s-pod-network.ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Workload="ci--4081--3--7--2--642afe6700-k8s-whisker--5c8c5b9bcf--vb6pl-eth0" Apr 13 20:09:53.376417 containerd[1501]: 2026-04-13 20:09:53.364 [INFO][3877] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:09:53.376417 containerd[1501]: 2026-04-13 20:09:53.369 [INFO][3843] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Apr 13 20:09:53.378408 containerd[1501]: time="2026-04-13T20:09:53.378384952Z" level=info msg="TearDown network for sandbox \"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\" successfully" Apr 13 20:09:53.378515 containerd[1501]: time="2026-04-13T20:09:53.378487585Z" level=info msg="StopPodSandbox for \"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\" returns successfully" Apr 13 20:09:53.394176 containerd[1501]: 2026-04-13 20:09:53.260 [INFO][3818] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Apr 13 20:09:53.394176 containerd[1501]: 2026-04-13 20:09:53.262 [INFO][3818] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" iface="eth0" netns="/var/run/netns/cni-27f9eeaf-1a0f-3e33-8c28-21db9b0ea933" Apr 13 20:09:53.394176 containerd[1501]: 2026-04-13 20:09:53.263 [INFO][3818] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" iface="eth0" netns="/var/run/netns/cni-27f9eeaf-1a0f-3e33-8c28-21db9b0ea933" Apr 13 20:09:53.394176 containerd[1501]: 2026-04-13 20:09:53.263 [INFO][3818] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" iface="eth0" netns="/var/run/netns/cni-27f9eeaf-1a0f-3e33-8c28-21db9b0ea933" Apr 13 20:09:53.394176 containerd[1501]: 2026-04-13 20:09:53.263 [INFO][3818] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Apr 13 20:09:53.394176 containerd[1501]: 2026-04-13 20:09:53.263 [INFO][3818] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Apr 13 20:09:53.394176 containerd[1501]: 2026-04-13 20:09:53.355 [INFO][3880] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" HandleID="k8s-pod-network.2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Workload="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0" Apr 13 20:09:53.394176 containerd[1501]: 2026-04-13 20:09:53.355 [INFO][3880] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:09:53.394176 containerd[1501]: 2026-04-13 20:09:53.366 [INFO][3880] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:09:53.394176 containerd[1501]: 2026-04-13 20:09:53.372 [WARNING][3880] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" HandleID="k8s-pod-network.2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Workload="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0" Apr 13 20:09:53.394176 containerd[1501]: 2026-04-13 20:09:53.373 [INFO][3880] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" HandleID="k8s-pod-network.2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Workload="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0" Apr 13 20:09:53.394176 containerd[1501]: 2026-04-13 20:09:53.374 [INFO][3880] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:09:53.394176 containerd[1501]: 2026-04-13 20:09:53.382 [INFO][3818] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Apr 13 20:09:53.394176 containerd[1501]: time="2026-04-13T20:09:53.394093598Z" level=info msg="TearDown network for sandbox \"2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599\" successfully" Apr 13 20:09:53.394535 containerd[1501]: time="2026-04-13T20:09:53.394108798Z" level=info msg="StopPodSandbox for \"2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599\" returns successfully" Apr 13 20:09:53.396768 containerd[1501]: time="2026-04-13T20:09:53.396567405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-ngkh6,Uid:9b1ddd38-936b-4249-ae6d-50277142aab0,Namespace:calico-system,Attempt:1,}" Apr 13 20:09:53.396969 containerd[1501]: 2026-04-13 20:09:53.282 [INFO][3812] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Apr 13 20:09:53.396969 containerd[1501]: 2026-04-13 20:09:53.284 [INFO][3812] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" iface="eth0" netns="/var/run/netns/cni-1800d1de-e501-d436-3597-5ed6a61bfda5" Apr 13 20:09:53.396969 containerd[1501]: 2026-04-13 20:09:53.286 [INFO][3812] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" iface="eth0" netns="/var/run/netns/cni-1800d1de-e501-d436-3597-5ed6a61bfda5" Apr 13 20:09:53.396969 containerd[1501]: 2026-04-13 20:09:53.286 [INFO][3812] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" iface="eth0" netns="/var/run/netns/cni-1800d1de-e501-d436-3597-5ed6a61bfda5" Apr 13 20:09:53.396969 containerd[1501]: 2026-04-13 20:09:53.286 [INFO][3812] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Apr 13 20:09:53.396969 containerd[1501]: 2026-04-13 20:09:53.286 [INFO][3812] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Apr 13 20:09:53.396969 containerd[1501]: 2026-04-13 20:09:53.370 [INFO][3894] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" HandleID="k8s-pod-network.b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Workload="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0" Apr 13 20:09:53.396969 containerd[1501]: 2026-04-13 20:09:53.373 [INFO][3894] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:09:53.396969 containerd[1501]: 2026-04-13 20:09:53.375 [INFO][3894] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:09:53.396969 containerd[1501]: 2026-04-13 20:09:53.381 [WARNING][3894] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" HandleID="k8s-pod-network.b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Workload="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0" Apr 13 20:09:53.396969 containerd[1501]: 2026-04-13 20:09:53.381 [INFO][3894] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" HandleID="k8s-pod-network.b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Workload="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0" Apr 13 20:09:53.396969 containerd[1501]: 2026-04-13 20:09:53.383 [INFO][3894] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:09:53.396969 containerd[1501]: 2026-04-13 20:09:53.387 [INFO][3812] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Apr 13 20:09:53.398467 containerd[1501]: time="2026-04-13T20:09:53.397371720Z" level=info msg="TearDown network for sandbox \"b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3\" successfully" Apr 13 20:09:53.398467 containerd[1501]: time="2026-04-13T20:09:53.397386681Z" level=info msg="StopPodSandbox for \"b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3\" returns successfully" Apr 13 20:09:53.399957 containerd[1501]: time="2026-04-13T20:09:53.399942892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bd8jl,Uid:63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e,Namespace:calico-system,Attempt:1,}" Apr 13 20:09:53.500042 kubelet[2582]: I0413 20:09:53.499188 2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81e9c78f-9c13-4884-ab29-0daba08c8e1a-whisker-ca-bundle\") pod \"81e9c78f-9c13-4884-ab29-0daba08c8e1a\" (UID: \"81e9c78f-9c13-4884-ab29-0daba08c8e1a\") " Apr 13 
20:09:53.500042 kubelet[2582]: I0413 20:09:53.499228 2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/81e9c78f-9c13-4884-ab29-0daba08c8e1a-nginx-config\") pod \"81e9c78f-9c13-4884-ab29-0daba08c8e1a\" (UID: \"81e9c78f-9c13-4884-ab29-0daba08c8e1a\") " Apr 13 20:09:53.500042 kubelet[2582]: I0413 20:09:53.499249 2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74sv4\" (UniqueName: \"kubernetes.io/projected/81e9c78f-9c13-4884-ab29-0daba08c8e1a-kube-api-access-74sv4\") pod \"81e9c78f-9c13-4884-ab29-0daba08c8e1a\" (UID: \"81e9c78f-9c13-4884-ab29-0daba08c8e1a\") " Apr 13 20:09:53.500042 kubelet[2582]: I0413 20:09:53.499270 2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81e9c78f-9c13-4884-ab29-0daba08c8e1a-whisker-backend-key-pair\") pod \"81e9c78f-9c13-4884-ab29-0daba08c8e1a\" (UID: \"81e9c78f-9c13-4884-ab29-0daba08c8e1a\") " Apr 13 20:09:53.501472 kubelet[2582]: I0413 20:09:53.500847 2582 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e9c78f-9c13-4884-ab29-0daba08c8e1a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "81e9c78f-9c13-4884-ab29-0daba08c8e1a" (UID: "81e9c78f-9c13-4884-ab29-0daba08c8e1a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:09:53.503240 kubelet[2582]: I0413 20:09:53.502971 2582 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e9c78f-9c13-4884-ab29-0daba08c8e1a-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "81e9c78f-9c13-4884-ab29-0daba08c8e1a" (UID: "81e9c78f-9c13-4884-ab29-0daba08c8e1a"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:09:53.509808 kubelet[2582]: I0413 20:09:53.509789 2582 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e9c78f-9c13-4884-ab29-0daba08c8e1a-kube-api-access-74sv4" (OuterVolumeSpecName: "kube-api-access-74sv4") pod "81e9c78f-9c13-4884-ab29-0daba08c8e1a" (UID: "81e9c78f-9c13-4884-ab29-0daba08c8e1a"). InnerVolumeSpecName "kube-api-access-74sv4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 20:09:53.515487 kubelet[2582]: I0413 20:09:53.515468 2582 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81e9c78f-9c13-4884-ab29-0daba08c8e1a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "81e9c78f-9c13-4884-ab29-0daba08c8e1a" (UID: "81e9c78f-9c13-4884-ab29-0daba08c8e1a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 20:09:53.599558 kubelet[2582]: I0413 20:09:53.599528 2582 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81e9c78f-9c13-4884-ab29-0daba08c8e1a-whisker-ca-bundle\") on node \"ci-4081-3-7-2-642afe6700\" DevicePath \"\"" Apr 13 20:09:53.600056 kubelet[2582]: I0413 20:09:53.600046 2582 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/81e9c78f-9c13-4884-ab29-0daba08c8e1a-nginx-config\") on node \"ci-4081-3-7-2-642afe6700\" DevicePath \"\"" Apr 13 20:09:53.600114 kubelet[2582]: I0413 20:09:53.600097 2582 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-74sv4\" (UniqueName: \"kubernetes.io/projected/81e9c78f-9c13-4884-ab29-0daba08c8e1a-kube-api-access-74sv4\") on node \"ci-4081-3-7-2-642afe6700\" DevicePath \"\"" Apr 13 20:09:53.600234 kubelet[2582]: I0413 20:09:53.600163 2582 reconciler_common.go:299] "Volume detached for volume 
\"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81e9c78f-9c13-4884-ab29-0daba08c8e1a-whisker-backend-key-pair\") on node \"ci-4081-3-7-2-642afe6700\" DevicePath \"\"" Apr 13 20:09:53.660503 systemd-networkd[1408]: cali4afa56b62cd: Link UP Apr 13 20:09:53.660703 systemd-networkd[1408]: cali4afa56b62cd: Gained carrier Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.417 [ERROR][3916] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.431 [INFO][3916] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0 calico-kube-controllers-ffb7f679- calico-system ef5a4fd8-83c5-4b36-9eb8-ac26cc2345f3 909 0 2026-04-13 20:09:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:ffb7f679 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-7-2-642afe6700 calico-kube-controllers-ffb7f679-bnvkh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4afa56b62cd [] [] }} ContainerID="d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0" Namespace="calico-system" Pod="calico-kube-controllers-ffb7f679-bnvkh" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-" Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.432 [INFO][3916] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0" Namespace="calico-system" Pod="calico-kube-controllers-ffb7f679-bnvkh" 
WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0" Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.540 [INFO][3965] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0" HandleID="k8s-pod-network.d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0" Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.557 [INFO][3965] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0" HandleID="k8s-pod-network.d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-2-642afe6700", "pod":"calico-kube-controllers-ffb7f679-bnvkh", "timestamp":"2026-04-13 20:09:53.540965044 +0000 UTC"}, Hostname:"ci-4081-3-7-2-642afe6700", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188dc0)} Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.557 [INFO][3965] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.557 [INFO][3965] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.557 [INFO][3965] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-2-642afe6700' Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.562 [INFO][3965] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.578 [INFO][3965] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.610 [INFO][3965] ipam/ipam.go 526: Trying affinity for 192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.615 [INFO][3965] ipam/ipam.go 160: Attempting to load block cidr=192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.618 [INFO][3965] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.618 [INFO][3965] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.623 [INFO][3965] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0 Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.631 [INFO][3965] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.645 [INFO][3965] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.44.65/26] block=192.168.44.64/26 handle="k8s-pod-network.d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.645 [INFO][3965] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.44.65/26] handle="k8s-pod-network.d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.645 [INFO][3965] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:09:53.707515 containerd[1501]: 2026-04-13 20:09:53.645 [INFO][3965] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.44.65/26] IPv6=[] ContainerID="d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0" HandleID="k8s-pod-network.d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0" Apr 13 20:09:53.708002 containerd[1501]: 2026-04-13 20:09:53.647 [INFO][3916] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0" Namespace="calico-system" Pod="calico-kube-controllers-ffb7f679-bnvkh" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0", GenerateName:"calico-kube-controllers-ffb7f679-", Namespace:"calico-system", SelfLink:"", UID:"ef5a4fd8-83c5-4b36-9eb8-ac26cc2345f3", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ffb7f679", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"", Pod:"calico-kube-controllers-ffb7f679-bnvkh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4afa56b62cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:09:53.708002 containerd[1501]: 2026-04-13 20:09:53.647 [INFO][3916] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.65/32] ContainerID="d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0" Namespace="calico-system" Pod="calico-kube-controllers-ffb7f679-bnvkh" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0" Apr 13 20:09:53.708002 containerd[1501]: 2026-04-13 20:09:53.647 [INFO][3916] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4afa56b62cd ContainerID="d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0" Namespace="calico-system" Pod="calico-kube-controllers-ffb7f679-bnvkh" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0" Apr 13 20:09:53.708002 containerd[1501]: 2026-04-13 20:09:53.661 [INFO][3916] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0" Namespace="calico-system" Pod="calico-kube-controllers-ffb7f679-bnvkh" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0" Apr 13 20:09:53.708002 containerd[1501]: 2026-04-13 20:09:53.662 [INFO][3916] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0" Namespace="calico-system" Pod="calico-kube-controllers-ffb7f679-bnvkh" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0", GenerateName:"calico-kube-controllers-ffb7f679-", Namespace:"calico-system", SelfLink:"", UID:"ef5a4fd8-83c5-4b36-9eb8-ac26cc2345f3", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ffb7f679", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0", Pod:"calico-kube-controllers-ffb7f679-bnvkh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4afa56b62cd", MAC:"ba:fd:85:88:0a:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:09:53.708002 containerd[1501]: 2026-04-13 20:09:53.701 [INFO][3916] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0" Namespace="calico-system" Pod="calico-kube-controllers-ffb7f679-bnvkh" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0" Apr 13 20:09:53.737594 containerd[1501]: time="2026-04-13T20:09:53.736887438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:53.737736 containerd[1501]: time="2026-04-13T20:09:53.736977821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:53.737736 containerd[1501]: time="2026-04-13T20:09:53.736990812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:53.737736 containerd[1501]: time="2026-04-13T20:09:53.737093625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:53.757456 systemd[1]: Started cri-containerd-d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0.scope - libcontainer container d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0. 
Apr 13 20:09:53.808603 systemd-networkd[1408]: calid10fcd8f375: Link UP Apr 13 20:09:53.811031 systemd-networkd[1408]: calid10fcd8f375: Gained carrier Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.427 [ERROR][3925] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.439 [INFO][3925] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0 calico-apiserver-5d559f55b6- calico-system 95e2e62a-c377-4112-9481-1c4f900ed72b 911 0 2026-04-13 20:09:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d559f55b6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-2-642afe6700 calico-apiserver-5d559f55b6-jwmwb eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calid10fcd8f375 [] [] }} ContainerID="afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665" Namespace="calico-system" Pod="calico-apiserver-5d559f55b6-jwmwb" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-" Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.439 [INFO][3925] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665" Namespace="calico-system" Pod="calico-apiserver-5d559f55b6-jwmwb" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0" Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.566 [INFO][3967] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665" HandleID="k8s-pod-network.afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0" Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.577 [INFO][3967] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665" HandleID="k8s-pod-network.afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fec0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-2-642afe6700", "pod":"calico-apiserver-5d559f55b6-jwmwb", "timestamp":"2026-04-13 20:09:53.566961865 +0000 UTC"}, Hostname:"ci-4081-3-7-2-642afe6700", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003acdc0)} Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.577 [INFO][3967] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.645 [INFO][3967] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.645 [INFO][3967] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-2-642afe6700' Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.677 [INFO][3967] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.703 [INFO][3967] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.714 [INFO][3967] ipam/ipam.go 526: Trying affinity for 192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.719 [INFO][3967] ipam/ipam.go 160: Attempting to load block cidr=192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.725 [INFO][3967] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.727 [INFO][3967] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.742 [INFO][3967] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665 Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.763 [INFO][3967] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.789 [INFO][3967] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.44.66/26] block=192.168.44.64/26 handle="k8s-pod-network.afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.789 [INFO][3967] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.44.66/26] handle="k8s-pod-network.afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.789 [INFO][3967] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:09:53.833034 containerd[1501]: 2026-04-13 20:09:53.789 [INFO][3967] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.44.66/26] IPv6=[] ContainerID="afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665" HandleID="k8s-pod-network.afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0" Apr 13 20:09:53.834197 containerd[1501]: 2026-04-13 20:09:53.795 [INFO][3925] cni-plugin/k8s.go 418: Populated endpoint ContainerID="afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665" Namespace="calico-system" Pod="calico-apiserver-5d559f55b6-jwmwb" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0", GenerateName:"calico-apiserver-5d559f55b6-", Namespace:"calico-system", SelfLink:"", UID:"95e2e62a-c377-4112-9481-1c4f900ed72b", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5d559f55b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"", Pod:"calico-apiserver-5d559f55b6-jwmwb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid10fcd8f375", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:09:53.834197 containerd[1501]: 2026-04-13 20:09:53.795 [INFO][3925] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.66/32] ContainerID="afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665" Namespace="calico-system" Pod="calico-apiserver-5d559f55b6-jwmwb" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0" Apr 13 20:09:53.834197 containerd[1501]: 2026-04-13 20:09:53.795 [INFO][3925] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid10fcd8f375 ContainerID="afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665" Namespace="calico-system" Pod="calico-apiserver-5d559f55b6-jwmwb" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0" Apr 13 20:09:53.834197 containerd[1501]: 2026-04-13 20:09:53.811 [INFO][3925] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665" Namespace="calico-system" Pod="calico-apiserver-5d559f55b6-jwmwb" 
WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0" Apr 13 20:09:53.834197 containerd[1501]: 2026-04-13 20:09:53.815 [INFO][3925] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665" Namespace="calico-system" Pod="calico-apiserver-5d559f55b6-jwmwb" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0", GenerateName:"calico-apiserver-5d559f55b6-", Namespace:"calico-system", SelfLink:"", UID:"95e2e62a-c377-4112-9481-1c4f900ed72b", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d559f55b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665", Pod:"calico-apiserver-5d559f55b6-jwmwb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid10fcd8f375", MAC:"9e:fa:11:fd:3d:12", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:09:53.834197 containerd[1501]: 2026-04-13 20:09:53.826 [INFO][3925] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665" Namespace="calico-system" Pod="calico-apiserver-5d559f55b6-jwmwb" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0" Apr 13 20:09:53.864435 systemd-networkd[1408]: cali15a0b3b10d3: Link UP Apr 13 20:09:53.865848 systemd-networkd[1408]: cali15a0b3b10d3: Gained carrier Apr 13 20:09:53.880103 systemd[1]: run-netns-cni\x2d1800d1de\x2de501\x2dd436\x2d3597\x2d5ed6a61bfda5.mount: Deactivated successfully. Apr 13 20:09:53.880186 systemd[1]: run-netns-cni\x2d27f9eeaf\x2d1a0f\x2d3e33\x2d8c28\x2d21db9b0ea933.mount: Deactivated successfully. Apr 13 20:09:53.880249 systemd[1]: run-netns-cni\x2d67c48f07\x2d7f33\x2dd763\x2db01d\x2d3206af9141ca.mount: Deactivated successfully. Apr 13 20:09:53.880307 systemd[1]: var-lib-kubelet-pods-81e9c78f\x2d9c13\x2d4884\x2dab29\x2d0daba08c8e1a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d74sv4.mount: Deactivated successfully. Apr 13 20:09:53.880390 systemd[1]: var-lib-kubelet-pods-81e9c78f\x2d9c13\x2d4884\x2dab29\x2d0daba08c8e1a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 13 20:09:53.892512 containerd[1501]: time="2026-04-13T20:09:53.879378997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:53.892512 containerd[1501]: time="2026-04-13T20:09:53.879425218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:53.892512 containerd[1501]: time="2026-04-13T20:09:53.879433199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:53.892512 containerd[1501]: time="2026-04-13T20:09:53.879847861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:53.914040 systemd[1]: Started cri-containerd-afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665.scope - libcontainer container afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665. Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.495 [ERROR][3937] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.532 [INFO][3937] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0 goldmane-cccfbd5cf- calico-system 9b1ddd38-936b-4249-ae6d-50277142aab0 908 0 2026-04-13 20:09:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-7-2-642afe6700 goldmane-cccfbd5cf-ngkh6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali15a0b3b10d3 [] [] }} ContainerID="3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ngkh6" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-" Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.532 [INFO][3937] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ngkh6" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0" Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.603 [INFO][4024] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75" HandleID="k8s-pod-network.3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75" Workload="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0" Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.618 [INFO][4024] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75" HandleID="k8s-pod-network.3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75" Workload="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000379d70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-2-642afe6700", "pod":"goldmane-cccfbd5cf-ngkh6", "timestamp":"2026-04-13 20:09:53.603638872 +0000 UTC"}, Hostname:"ci-4081-3-7-2-642afe6700", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188580)} Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.618 [INFO][4024] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.790 [INFO][4024] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.790 [INFO][4024] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-2-642afe6700' Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.795 [INFO][4024] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.805 [INFO][4024] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.830 [INFO][4024] ipam/ipam.go 526: Trying affinity for 192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.834 [INFO][4024] ipam/ipam.go 160: Attempting to load block cidr=192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.838 [INFO][4024] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.838 [INFO][4024] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.840 [INFO][4024] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75 Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.846 [INFO][4024] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.855 [INFO][4024] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.44.67/26] block=192.168.44.64/26 handle="k8s-pod-network.3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.855 [INFO][4024] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.44.67/26] handle="k8s-pod-network.3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.855 [INFO][4024] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:09:53.928231 containerd[1501]: 2026-04-13 20:09:53.855 [INFO][4024] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.44.67/26] IPv6=[] ContainerID="3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75" HandleID="k8s-pod-network.3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75" Workload="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0" Apr 13 20:09:53.929003 containerd[1501]: 2026-04-13 20:09:53.860 [INFO][3937] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ngkh6" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"9b1ddd38-936b-4249-ae6d-50277142aab0", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"", Pod:"goldmane-cccfbd5cf-ngkh6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.44.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali15a0b3b10d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:09:53.929003 containerd[1501]: 2026-04-13 20:09:53.860 [INFO][3937] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.67/32] ContainerID="3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ngkh6" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0" Apr 13 20:09:53.929003 containerd[1501]: 2026-04-13 20:09:53.860 [INFO][3937] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15a0b3b10d3 ContainerID="3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ngkh6" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0" Apr 13 20:09:53.929003 containerd[1501]: 2026-04-13 20:09:53.870 [INFO][3937] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ngkh6" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0" Apr 13 20:09:53.929003 containerd[1501]: 2026-04-13 20:09:53.872 [INFO][3937] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ngkh6" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"9b1ddd38-936b-4249-ae6d-50277142aab0", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75", Pod:"goldmane-cccfbd5cf-ngkh6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.44.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali15a0b3b10d3", MAC:"7e:1c:ca:04:be:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:09:53.929003 containerd[1501]: 2026-04-13 20:09:53.905 [INFO][3937] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75" Namespace="calico-system" Pod="goldmane-cccfbd5cf-ngkh6" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0" Apr 13 20:09:53.933156 containerd[1501]: time="2026-04-13T20:09:53.931765551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ffb7f679-bnvkh,Uid:ef5a4fd8-83c5-4b36-9eb8-ac26cc2345f3,Namespace:calico-system,Attempt:1,} returns sandbox id \"d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0\"" Apr 13 20:09:53.937649 containerd[1501]: time="2026-04-13T20:09:53.937458420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 13 20:09:53.959920 containerd[1501]: time="2026-04-13T20:09:53.959838997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:53.960122 containerd[1501]: time="2026-04-13T20:09:53.960061374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:53.961357 containerd[1501]: time="2026-04-13T20:09:53.960100225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:53.967487 containerd[1501]: time="2026-04-13T20:09:53.963668948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:53.989505 systemd-networkd[1408]: calia2bff448f7d: Link UP Apr 13 20:09:53.990676 systemd-networkd[1408]: calia2bff448f7d: Gained carrier Apr 13 20:09:53.992453 systemd[1]: Started cri-containerd-3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75.scope - libcontainer container 3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75. 
Apr 13 20:09:53.992873 systemd[1]: Removed slice kubepods-besteffort-pod81e9c78f_9c13_4884_ab29_0daba08c8e1a.slice - libcontainer container kubepods-besteffort-pod81e9c78f_9c13_4884_ab29_0daba08c8e1a.slice. Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.519 [ERROR][3943] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.554 [INFO][3943] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0 csi-node-driver- calico-system 63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e 910 0 2026-04-13 20:09:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-7-2-642afe6700 csi-node-driver-bd8jl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia2bff448f7d [] [] }} ContainerID="31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7" Namespace="calico-system" Pod="csi-node-driver-bd8jl" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-" Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.554 [INFO][3943] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7" Namespace="calico-system" Pod="csi-node-driver-bd8jl" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0" Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.612 [INFO][4032] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7" HandleID="k8s-pod-network.31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7" Workload="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0" Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.623 [INFO][4032] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7" HandleID="k8s-pod-network.31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7" Workload="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdad0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-2-642afe6700", "pod":"csi-node-driver-bd8jl", "timestamp":"2026-04-13 20:09:53.612219623 +0000 UTC"}, Hostname:"ci-4081-3-7-2-642afe6700", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188c60)} Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.624 [INFO][4032] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.856 [INFO][4032] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.856 [INFO][4032] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-2-642afe6700' Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.900 [INFO][4032] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.923 [INFO][4032] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.935 [INFO][4032] ipam/ipam.go 526: Trying affinity for 192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.939 [INFO][4032] ipam/ipam.go 160: Attempting to load block cidr=192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.942 [INFO][4032] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.943 [INFO][4032] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.946 [INFO][4032] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7 Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.953 [INFO][4032] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.960 [INFO][4032] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.44.68/26] block=192.168.44.64/26 handle="k8s-pod-network.31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.961 [INFO][4032] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.44.68/26] handle="k8s-pod-network.31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.961 [INFO][4032] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:09:54.029989 containerd[1501]: 2026-04-13 20:09:53.961 [INFO][4032] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.44.68/26] IPv6=[] ContainerID="31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7" HandleID="k8s-pod-network.31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7" Workload="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0" Apr 13 20:09:54.030534 containerd[1501]: 2026-04-13 20:09:53.984 [INFO][3943] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7" Namespace="calico-system" Pod="csi-node-driver-bd8jl" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"", Pod:"csi-node-driver-bd8jl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia2bff448f7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:09:54.030534 containerd[1501]: 2026-04-13 20:09:53.984 [INFO][3943] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.68/32] ContainerID="31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7" Namespace="calico-system" Pod="csi-node-driver-bd8jl" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0" Apr 13 20:09:54.030534 containerd[1501]: 2026-04-13 20:09:53.984 [INFO][3943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2bff448f7d ContainerID="31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7" Namespace="calico-system" Pod="csi-node-driver-bd8jl" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0" Apr 13 20:09:54.030534 containerd[1501]: 2026-04-13 20:09:53.991 [INFO][3943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7" Namespace="calico-system" Pod="csi-node-driver-bd8jl" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0" Apr 13 20:09:54.030534 
containerd[1501]: 2026-04-13 20:09:53.993 [INFO][3943] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7" Namespace="calico-system" Pod="csi-node-driver-bd8jl" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7", Pod:"csi-node-driver-bd8jl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia2bff448f7d", MAC:"f2:a7:44:c7:b8:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:09:54.030534 containerd[1501]: 
2026-04-13 20:09:54.010 [INFO][3943] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7" Namespace="calico-system" Pod="csi-node-driver-bd8jl" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0" Apr 13 20:09:54.068388 containerd[1501]: time="2026-04-13T20:09:54.068289670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:54.068623 containerd[1501]: time="2026-04-13T20:09:54.068497036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:54.068623 containerd[1501]: time="2026-04-13T20:09:54.068525716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:54.068696 containerd[1501]: time="2026-04-13T20:09:54.068614420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:54.089985 systemd[1]: Started cri-containerd-31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7.scope - libcontainer container 31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7. 
Apr 13 20:09:54.108421 containerd[1501]: time="2026-04-13T20:09:54.108381799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d559f55b6-jwmwb,Uid:95e2e62a-c377-4112-9481-1c4f900ed72b,Namespace:calico-system,Attempt:1,} returns sandbox id \"afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665\"" Apr 13 20:09:54.133606 containerd[1501]: time="2026-04-13T20:09:54.133527624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-ngkh6,Uid:9b1ddd38-936b-4249-ae6d-50277142aab0,Namespace:calico-system,Attempt:1,} returns sandbox id \"3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75\"" Apr 13 20:09:54.172419 containerd[1501]: time="2026-04-13T20:09:54.172161970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bd8jl,Uid:63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e,Namespace:calico-system,Attempt:1,} returns sandbox id \"31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7\"" Apr 13 20:09:54.243745 systemd[1]: Created slice kubepods-besteffort-podee103d82_0cb7_40cb_bf33_533df4decf1a.slice - libcontainer container kubepods-besteffort-podee103d82_0cb7_40cb_bf33_533df4decf1a.slice. 
Apr 13 20:09:54.305444 kernel: calico-node[4318]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 13 20:09:54.307890 kubelet[2582]: I0413 20:09:54.307736 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ee103d82-0cb7-40cb-bf33-533df4decf1a-whisker-backend-key-pair\") pod \"whisker-7787466b5b-xn5n8\" (UID: \"ee103d82-0cb7-40cb-bf33-533df4decf1a\") " pod="calico-system/whisker-7787466b5b-xn5n8" Apr 13 20:09:54.307890 kubelet[2582]: I0413 20:09:54.307780 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svfqg\" (UniqueName: \"kubernetes.io/projected/ee103d82-0cb7-40cb-bf33-533df4decf1a-kube-api-access-svfqg\") pod \"whisker-7787466b5b-xn5n8\" (UID: \"ee103d82-0cb7-40cb-bf33-533df4decf1a\") " pod="calico-system/whisker-7787466b5b-xn5n8" Apr 13 20:09:54.307890 kubelet[2582]: I0413 20:09:54.307814 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee103d82-0cb7-40cb-bf33-533df4decf1a-whisker-ca-bundle\") pod \"whisker-7787466b5b-xn5n8\" (UID: \"ee103d82-0cb7-40cb-bf33-533df4decf1a\") " pod="calico-system/whisker-7787466b5b-xn5n8" Apr 13 20:09:54.307890 kubelet[2582]: I0413 20:09:54.307828 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ee103d82-0cb7-40cb-bf33-533df4decf1a-nginx-config\") pod \"whisker-7787466b5b-xn5n8\" (UID: \"ee103d82-0cb7-40cb-bf33-533df4decf1a\") " pod="calico-system/whisker-7787466b5b-xn5n8" Apr 13 20:09:54.548152 containerd[1501]: time="2026-04-13T20:09:54.548035073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7787466b5b-xn5n8,Uid:ee103d82-0cb7-40cb-bf33-533df4decf1a,Namespace:calico-system,Attempt:0,}" Apr 13 
20:09:54.639938 systemd-networkd[1408]: calif78191e5a6f: Link UP Apr 13 20:09:54.640081 systemd-networkd[1408]: calif78191e5a6f: Gained carrier Apr 13 20:09:54.656296 containerd[1501]: 2026-04-13 20:09:54.583 [INFO][4339] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--2--642afe6700-k8s-whisker--7787466b5b--xn5n8-eth0 whisker-7787466b5b- calico-system ee103d82-0cb7-40cb-bf33-533df4decf1a 943 0 2026-04-13 20:09:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7787466b5b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-7-2-642afe6700 whisker-7787466b5b-xn5n8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif78191e5a6f [] [] }} ContainerID="be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5" Namespace="calico-system" Pod="whisker-7787466b5b-xn5n8" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-whisker--7787466b5b--xn5n8-" Apr 13 20:09:54.656296 containerd[1501]: 2026-04-13 20:09:54.583 [INFO][4339] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5" Namespace="calico-system" Pod="whisker-7787466b5b-xn5n8" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-whisker--7787466b5b--xn5n8-eth0" Apr 13 20:09:54.656296 containerd[1501]: 2026-04-13 20:09:54.605 [INFO][4350] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5" HandleID="k8s-pod-network.be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5" Workload="ci--4081--3--7--2--642afe6700-k8s-whisker--7787466b5b--xn5n8-eth0" Apr 13 20:09:54.656296 containerd[1501]: 2026-04-13 20:09:54.610 [INFO][4350] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5" HandleID="k8s-pod-network.be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5" Workload="ci--4081--3--7--2--642afe6700-k8s-whisker--7787466b5b--xn5n8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003648c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-2-642afe6700", "pod":"whisker-7787466b5b-xn5n8", "timestamp":"2026-04-13 20:09:54.605230789 +0000 UTC"}, Hostname:"ci-4081-3-7-2-642afe6700", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003a0580)} Apr 13 20:09:54.656296 containerd[1501]: 2026-04-13 20:09:54.610 [INFO][4350] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:09:54.656296 containerd[1501]: 2026-04-13 20:09:54.610 [INFO][4350] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:09:54.656296 containerd[1501]: 2026-04-13 20:09:54.610 [INFO][4350] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-2-642afe6700' Apr 13 20:09:54.656296 containerd[1501]: 2026-04-13 20:09:54.612 [INFO][4350] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:54.656296 containerd[1501]: 2026-04-13 20:09:54.615 [INFO][4350] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:54.656296 containerd[1501]: 2026-04-13 20:09:54.618 [INFO][4350] ipam/ipam.go 526: Trying affinity for 192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:54.656296 containerd[1501]: 2026-04-13 20:09:54.620 [INFO][4350] ipam/ipam.go 160: Attempting to load block cidr=192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:54.656296 containerd[1501]: 2026-04-13 20:09:54.622 [INFO][4350] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:54.656296 containerd[1501]: 2026-04-13 20:09:54.622 [INFO][4350] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:54.656296 containerd[1501]: 2026-04-13 20:09:54.623 [INFO][4350] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5 Apr 13 20:09:54.656296 containerd[1501]: 2026-04-13 20:09:54.628 [INFO][4350] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:54.656296 containerd[1501]: 2026-04-13 20:09:54.632 [INFO][4350] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.44.69/26] block=192.168.44.64/26 handle="k8s-pod-network.be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:54.656296 containerd[1501]: 2026-04-13 20:09:54.632 [INFO][4350] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.44.69/26] handle="k8s-pod-network.be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5" host="ci-4081-3-7-2-642afe6700" Apr 13 20:09:54.656296 containerd[1501]: 2026-04-13 20:09:54.632 [INFO][4350] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:09:54.656296 containerd[1501]: 2026-04-13 20:09:54.632 [INFO][4350] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.44.69/26] IPv6=[] ContainerID="be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5" HandleID="k8s-pod-network.be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5" Workload="ci--4081--3--7--2--642afe6700-k8s-whisker--7787466b5b--xn5n8-eth0" Apr 13 20:09:54.656904 containerd[1501]: 2026-04-13 20:09:54.635 [INFO][4339] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5" Namespace="calico-system" Pod="whisker-7787466b5b-xn5n8" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-whisker--7787466b5b--xn5n8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-whisker--7787466b5b--xn5n8-eth0", GenerateName:"whisker-7787466b5b-", Namespace:"calico-system", SelfLink:"", UID:"ee103d82-0cb7-40cb-bf33-533df4decf1a", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7787466b5b", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"", Pod:"whisker-7787466b5b-xn5n8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.44.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif78191e5a6f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:09:54.656904 containerd[1501]: 2026-04-13 20:09:54.635 [INFO][4339] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.69/32] ContainerID="be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5" Namespace="calico-system" Pod="whisker-7787466b5b-xn5n8" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-whisker--7787466b5b--xn5n8-eth0" Apr 13 20:09:54.656904 containerd[1501]: 2026-04-13 20:09:54.635 [INFO][4339] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif78191e5a6f ContainerID="be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5" Namespace="calico-system" Pod="whisker-7787466b5b-xn5n8" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-whisker--7787466b5b--xn5n8-eth0" Apr 13 20:09:54.656904 containerd[1501]: 2026-04-13 20:09:54.641 [INFO][4339] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5" Namespace="calico-system" Pod="whisker-7787466b5b-xn5n8" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-whisker--7787466b5b--xn5n8-eth0" Apr 13 20:09:54.656904 containerd[1501]: 2026-04-13 20:09:54.643 [INFO][4339] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5" Namespace="calico-system" Pod="whisker-7787466b5b-xn5n8" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-whisker--7787466b5b--xn5n8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-whisker--7787466b5b--xn5n8-eth0", GenerateName:"whisker-7787466b5b-", Namespace:"calico-system", SelfLink:"", UID:"ee103d82-0cb7-40cb-bf33-533df4decf1a", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7787466b5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5", Pod:"whisker-7787466b5b-xn5n8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.44.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif78191e5a6f", MAC:"d2:6e:7c:38:c1:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:09:54.656904 containerd[1501]: 2026-04-13 20:09:54.652 [INFO][4339] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5" Namespace="calico-system" Pod="whisker-7787466b5b-xn5n8" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-whisker--7787466b5b--xn5n8-eth0" Apr 13 20:09:54.677010 containerd[1501]: time="2026-04-13T20:09:54.676734748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:54.677010 containerd[1501]: time="2026-04-13T20:09:54.676838161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:54.677010 containerd[1501]: time="2026-04-13T20:09:54.676855522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:54.677010 containerd[1501]: time="2026-04-13T20:09:54.676943805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:54.695468 systemd[1]: Started cri-containerd-be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5.scope - libcontainer container be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5. 
Apr 13 20:09:54.736137 containerd[1501]: time="2026-04-13T20:09:54.736100059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7787466b5b-xn5n8,Uid:ee103d82-0cb7-40cb-bf33-533df4decf1a,Namespace:calico-system,Attempt:0,} returns sandbox id \"be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5\"" Apr 13 20:09:54.744221 systemd-networkd[1408]: vxlan.calico: Link UP Apr 13 20:09:54.744228 systemd-networkd[1408]: vxlan.calico: Gained carrier Apr 13 20:09:55.437560 systemd-networkd[1408]: calid10fcd8f375: Gained IPv6LL Apr 13 20:09:55.500978 systemd-networkd[1408]: calia2bff448f7d: Gained IPv6LL Apr 13 20:09:55.503148 systemd-networkd[1408]: cali4afa56b62cd: Gained IPv6LL Apr 13 20:09:55.694544 systemd-networkd[1408]: cali15a0b3b10d3: Gained IPv6LL Apr 13 20:09:55.756928 systemd-networkd[1408]: calif78191e5a6f: Gained IPv6LL Apr 13 20:09:55.963748 kubelet[2582]: I0413 20:09:55.963041 2582 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e9c78f-9c13-4884-ab29-0daba08c8e1a" path="/var/lib/kubelet/pods/81e9c78f-9c13-4884-ab29-0daba08c8e1a/volumes" Apr 13 20:09:56.140564 systemd-networkd[1408]: vxlan.calico: Gained IPv6LL Apr 13 20:09:56.360394 containerd[1501]: time="2026-04-13T20:09:56.360135630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:56.361583 containerd[1501]: time="2026-04-13T20:09:56.361461886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 13 20:09:56.362673 containerd[1501]: time="2026-04-13T20:09:56.362628786Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:56.364741 containerd[1501]: time="2026-04-13T20:09:56.364720190Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:56.365561 containerd[1501]: time="2026-04-13T20:09:56.365184763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.427697442s" Apr 13 20:09:56.365561 containerd[1501]: time="2026-04-13T20:09:56.365213584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 13 20:09:56.367434 containerd[1501]: time="2026-04-13T20:09:56.367408321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 20:09:56.380079 containerd[1501]: time="2026-04-13T20:09:56.380041751Z" level=info msg="CreateContainer within sandbox \"d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 13 20:09:56.395745 containerd[1501]: time="2026-04-13T20:09:56.395647499Z" level=info msg="CreateContainer within sandbox \"d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c307066066c21d6ad9012802526728be4ab542814ededd4e31f48658e391a544\"" Apr 13 20:09:56.397948 containerd[1501]: time="2026-04-13T20:09:56.396266316Z" level=info msg="StartContainer for \"c307066066c21d6ad9012802526728be4ab542814ededd4e31f48658e391a544\"" Apr 13 20:09:56.427531 systemd[1]: Started cri-containerd-c307066066c21d6ad9012802526728be4ab542814ededd4e31f48658e391a544.scope 
- libcontainer container c307066066c21d6ad9012802526728be4ab542814ededd4e31f48658e391a544. Apr 13 20:09:56.462278 containerd[1501]: time="2026-04-13T20:09:56.462161009Z" level=info msg="StartContainer for \"c307066066c21d6ad9012802526728be4ab542814ededd4e31f48658e391a544\" returns successfully" Apr 13 20:09:57.256783 kubelet[2582]: I0413 20:09:57.256486 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-ffb7f679-bnvkh" podStartSLOduration=19.8277166 podStartE2EDuration="22.256471351s" podCreationTimestamp="2026-04-13 20:09:35 +0000 UTC" firstStartedPulling="2026-04-13 20:09:53.937005566 +0000 UTC m=+34.063890585" lastFinishedPulling="2026-04-13 20:09:56.365760307 +0000 UTC m=+36.492645336" observedRunningTime="2026-04-13 20:09:57.203176412 +0000 UTC m=+37.330061481" watchObservedRunningTime="2026-04-13 20:09:57.256471351 +0000 UTC m=+37.383356380" Apr 13 20:10:00.220099 containerd[1501]: time="2026-04-13T20:10:00.220055867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:00.221320 containerd[1501]: time="2026-04-13T20:10:00.221169080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 13 20:10:00.222351 containerd[1501]: time="2026-04-13T20:10:00.222242282Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:00.224120 containerd[1501]: time="2026-04-13T20:10:00.224091979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:00.224906 containerd[1501]: time="2026-04-13T20:10:00.224573790Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.857140128s" Apr 13 20:10:00.224906 containerd[1501]: time="2026-04-13T20:10:00.224595130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 13 20:10:00.226308 containerd[1501]: time="2026-04-13T20:10:00.226097300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 13 20:10:00.228574 containerd[1501]: time="2026-04-13T20:10:00.228551090Z" level=info msg="CreateContainer within sandbox \"afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 20:10:00.250219 containerd[1501]: time="2026-04-13T20:10:00.250177782Z" level=info msg="CreateContainer within sandbox \"afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"38a68db65ea73f271c9c4e3b417f014f5e0c9723e16a1f642857d989534774ce\"" Apr 13 20:10:00.250592 containerd[1501]: time="2026-04-13T20:10:00.250556099Z" level=info msg="StartContainer for \"38a68db65ea73f271c9c4e3b417f014f5e0c9723e16a1f642857d989534774ce\"" Apr 13 20:10:00.291467 systemd[1]: Started cri-containerd-38a68db65ea73f271c9c4e3b417f014f5e0c9723e16a1f642857d989534774ce.scope - libcontainer container 38a68db65ea73f271c9c4e3b417f014f5e0c9723e16a1f642857d989534774ce. 
Apr 13 20:10:00.329844 containerd[1501]: time="2026-04-13T20:10:00.329733814Z" level=info msg="StartContainer for \"38a68db65ea73f271c9c4e3b417f014f5e0c9723e16a1f642857d989534774ce\" returns successfully" Apr 13 20:10:02.198950 kubelet[2582]: I0413 20:10:02.198883 2582 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:10:02.965198 containerd[1501]: time="2026-04-13T20:10:02.964604270Z" level=info msg="StopPodSandbox for \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\"" Apr 13 20:10:03.028666 kubelet[2582]: I0413 20:10:03.028443 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5d559f55b6-jwmwb" podStartSLOduration=22.913705814 podStartE2EDuration="29.028423982s" podCreationTimestamp="2026-04-13 20:09:34 +0000 UTC" firstStartedPulling="2026-04-13 20:09:54.110710878 +0000 UTC m=+34.237595907" lastFinishedPulling="2026-04-13 20:10:00.225429046 +0000 UTC m=+40.352314075" observedRunningTime="2026-04-13 20:10:01.20771986 +0000 UTC m=+41.334604929" watchObservedRunningTime="2026-04-13 20:10:03.028423982 +0000 UTC m=+43.155309021" Apr 13 20:10:03.060989 containerd[1501]: 2026-04-13 20:10:03.024 [INFO][4654] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Apr 13 20:10:03.060989 containerd[1501]: 2026-04-13 20:10:03.024 [INFO][4654] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" iface="eth0" netns="/var/run/netns/cni-70b12ff5-8dae-85b5-8ca2-e00c3309e5a8" Apr 13 20:10:03.060989 containerd[1501]: 2026-04-13 20:10:03.024 [INFO][4654] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" iface="eth0" netns="/var/run/netns/cni-70b12ff5-8dae-85b5-8ca2-e00c3309e5a8" Apr 13 20:10:03.060989 containerd[1501]: 2026-04-13 20:10:03.025 [INFO][4654] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" iface="eth0" netns="/var/run/netns/cni-70b12ff5-8dae-85b5-8ca2-e00c3309e5a8" Apr 13 20:10:03.060989 containerd[1501]: 2026-04-13 20:10:03.025 [INFO][4654] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Apr 13 20:10:03.060989 containerd[1501]: 2026-04-13 20:10:03.025 [INFO][4654] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Apr 13 20:10:03.060989 containerd[1501]: 2026-04-13 20:10:03.046 [INFO][4661] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" HandleID="k8s-pod-network.77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0" Apr 13 20:10:03.060989 containerd[1501]: 2026-04-13 20:10:03.046 [INFO][4661] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:03.060989 containerd[1501]: 2026-04-13 20:10:03.046 [INFO][4661] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:03.060989 containerd[1501]: 2026-04-13 20:10:03.053 [WARNING][4661] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" HandleID="k8s-pod-network.77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0" Apr 13 20:10:03.060989 containerd[1501]: 2026-04-13 20:10:03.053 [INFO][4661] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" HandleID="k8s-pod-network.77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0" Apr 13 20:10:03.060989 containerd[1501]: 2026-04-13 20:10:03.056 [INFO][4661] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:03.060989 containerd[1501]: 2026-04-13 20:10:03.058 [INFO][4654] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Apr 13 20:10:03.063461 containerd[1501]: time="2026-04-13T20:10:03.063412345Z" level=info msg="TearDown network for sandbox \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\" successfully" Apr 13 20:10:03.064092 containerd[1501]: time="2026-04-13T20:10:03.063639639Z" level=info msg="StopPodSandbox for \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\" returns successfully" Apr 13 20:10:03.066657 systemd[1]: run-netns-cni\x2d70b12ff5\x2d8dae\x2d85b5\x2d8ca2\x2de00c3309e5a8.mount: Deactivated successfully. 
Apr 13 20:10:03.068772 containerd[1501]: time="2026-04-13T20:10:03.068718795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d559f55b6-fdgdr,Uid:1febeed9-7aaa-4a97-a2b4-1f1caf66c1e4,Namespace:calico-system,Attempt:1,}" Apr 13 20:10:03.164744 systemd-networkd[1408]: cali88e0eabeac6: Link UP Apr 13 20:10:03.165787 systemd-networkd[1408]: cali88e0eabeac6: Gained carrier Apr 13 20:10:03.181550 containerd[1501]: 2026-04-13 20:10:03.107 [INFO][4668] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0 calico-apiserver-5d559f55b6- calico-system 1febeed9-7aaa-4a97-a2b4-1f1caf66c1e4 984 0 2026-04-13 20:09:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d559f55b6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-2-642afe6700 calico-apiserver-5d559f55b6-fdgdr eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali88e0eabeac6 [] [] }} ContainerID="8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121" Namespace="calico-system" Pod="calico-apiserver-5d559f55b6-fdgdr" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-" Apr 13 20:10:03.181550 containerd[1501]: 2026-04-13 20:10:03.108 [INFO][4668] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121" Namespace="calico-system" Pod="calico-apiserver-5d559f55b6-fdgdr" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0" Apr 13 20:10:03.181550 containerd[1501]: 2026-04-13 20:10:03.133 [INFO][4679] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121" HandleID="k8s-pod-network.8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0" Apr 13 20:10:03.181550 containerd[1501]: 2026-04-13 20:10:03.138 [INFO][4679] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121" HandleID="k8s-pod-network.8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fb7c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-2-642afe6700", "pod":"calico-apiserver-5d559f55b6-fdgdr", "timestamp":"2026-04-13 20:10:03.133961581 +0000 UTC"}, Hostname:"ci-4081-3-7-2-642afe6700", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00053af20)} Apr 13 20:10:03.181550 containerd[1501]: 2026-04-13 20:10:03.138 [INFO][4679] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:03.181550 containerd[1501]: 2026-04-13 20:10:03.138 [INFO][4679] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:03.181550 containerd[1501]: 2026-04-13 20:10:03.139 [INFO][4679] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-2-642afe6700' Apr 13 20:10:03.181550 containerd[1501]: 2026-04-13 20:10:03.140 [INFO][4679] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121" host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:03.181550 containerd[1501]: 2026-04-13 20:10:03.143 [INFO][4679] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:03.181550 containerd[1501]: 2026-04-13 20:10:03.147 [INFO][4679] ipam/ipam.go 526: Trying affinity for 192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:03.181550 containerd[1501]: 2026-04-13 20:10:03.148 [INFO][4679] ipam/ipam.go 160: Attempting to load block cidr=192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:03.181550 containerd[1501]: 2026-04-13 20:10:03.150 [INFO][4679] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:03.181550 containerd[1501]: 2026-04-13 20:10:03.150 [INFO][4679] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121" host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:03.181550 containerd[1501]: 2026-04-13 20:10:03.151 [INFO][4679] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121 Apr 13 20:10:03.181550 containerd[1501]: 2026-04-13 20:10:03.154 [INFO][4679] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121" host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:03.181550 containerd[1501]: 2026-04-13 20:10:03.159 [INFO][4679] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.44.70/26] block=192.168.44.64/26 handle="k8s-pod-network.8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121" host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:03.181550 containerd[1501]: 2026-04-13 20:10:03.159 [INFO][4679] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.44.70/26] handle="k8s-pod-network.8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121" host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:03.181550 containerd[1501]: 2026-04-13 20:10:03.159 [INFO][4679] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:03.181550 containerd[1501]: 2026-04-13 20:10:03.159 [INFO][4679] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.44.70/26] IPv6=[] ContainerID="8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121" HandleID="k8s-pod-network.8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0" Apr 13 20:10:03.182012 containerd[1501]: 2026-04-13 20:10:03.162 [INFO][4668] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121" Namespace="calico-system" Pod="calico-apiserver-5d559f55b6-fdgdr" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0", GenerateName:"calico-apiserver-5d559f55b6-", Namespace:"calico-system", SelfLink:"", UID:"1febeed9-7aaa-4a97-a2b4-1f1caf66c1e4", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5d559f55b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"", Pod:"calico-apiserver-5d559f55b6-fdgdr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali88e0eabeac6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:03.182012 containerd[1501]: 2026-04-13 20:10:03.162 [INFO][4668] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.70/32] ContainerID="8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121" Namespace="calico-system" Pod="calico-apiserver-5d559f55b6-fdgdr" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0" Apr 13 20:10:03.182012 containerd[1501]: 2026-04-13 20:10:03.162 [INFO][4668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali88e0eabeac6 ContainerID="8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121" Namespace="calico-system" Pod="calico-apiserver-5d559f55b6-fdgdr" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0" Apr 13 20:10:03.182012 containerd[1501]: 2026-04-13 20:10:03.165 [INFO][4668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121" Namespace="calico-system" Pod="calico-apiserver-5d559f55b6-fdgdr" 
WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0" Apr 13 20:10:03.182012 containerd[1501]: 2026-04-13 20:10:03.166 [INFO][4668] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121" Namespace="calico-system" Pod="calico-apiserver-5d559f55b6-fdgdr" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0", GenerateName:"calico-apiserver-5d559f55b6-", Namespace:"calico-system", SelfLink:"", UID:"1febeed9-7aaa-4a97-a2b4-1f1caf66c1e4", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d559f55b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121", Pod:"calico-apiserver-5d559f55b6-fdgdr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali88e0eabeac6", MAC:"8e:d4:83:23:33:2e", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:03.182012 containerd[1501]: 2026-04-13 20:10:03.175 [INFO][4668] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121" Namespace="calico-system" Pod="calico-apiserver-5d559f55b6-fdgdr" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0" Apr 13 20:10:03.205650 containerd[1501]: time="2026-04-13T20:10:03.205399472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:03.205650 containerd[1501]: time="2026-04-13T20:10:03.205453643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:03.205650 containerd[1501]: time="2026-04-13T20:10:03.205476593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:03.205650 containerd[1501]: time="2026-04-13T20:10:03.205566815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:03.228456 systemd[1]: Started cri-containerd-8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121.scope - libcontainer container 8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121. 
Apr 13 20:10:03.269755 containerd[1501]: time="2026-04-13T20:10:03.269719963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d559f55b6-fdgdr,Uid:1febeed9-7aaa-4a97-a2b4-1f1caf66c1e4,Namespace:calico-system,Attempt:1,} returns sandbox id \"8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121\"" Apr 13 20:10:03.274446 containerd[1501]: time="2026-04-13T20:10:03.274420032Z" level=info msg="CreateContainer within sandbox \"8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 20:10:03.287367 containerd[1501]: time="2026-04-13T20:10:03.287305650Z" level=info msg="CreateContainer within sandbox \"8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a8551a792761327717c0e2d45692b0f14f69f6ae0c457f008b9c16e4a2807579\"" Apr 13 20:10:03.288024 containerd[1501]: time="2026-04-13T20:10:03.287731348Z" level=info msg="StartContainer for \"a8551a792761327717c0e2d45692b0f14f69f6ae0c457f008b9c16e4a2807579\"" Apr 13 20:10:03.326490 systemd[1]: Started cri-containerd-a8551a792761327717c0e2d45692b0f14f69f6ae0c457f008b9c16e4a2807579.scope - libcontainer container a8551a792761327717c0e2d45692b0f14f69f6ae0c457f008b9c16e4a2807579. 
Apr 13 20:10:03.371572 containerd[1501]: time="2026-04-13T20:10:03.371524818Z" level=info msg="StartContainer for \"a8551a792761327717c0e2d45692b0f14f69f6ae0c457f008b9c16e4a2807579\" returns successfully" Apr 13 20:10:04.407486 containerd[1501]: time="2026-04-13T20:10:04.407425010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:04.408844 containerd[1501]: time="2026-04-13T20:10:04.408562498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 13 20:10:04.409601 containerd[1501]: time="2026-04-13T20:10:04.409547014Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:04.411682 containerd[1501]: time="2026-04-13T20:10:04.411658058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:04.412630 containerd[1501]: time="2026-04-13T20:10:04.412221847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 4.186034525s" Apr 13 20:10:04.412630 containerd[1501]: time="2026-04-13T20:10:04.412246037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 13 20:10:04.414607 containerd[1501]: time="2026-04-13T20:10:04.414242109Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 13 20:10:04.416477 containerd[1501]: time="2026-04-13T20:10:04.416280582Z" level=info msg="CreateContainer within sandbox \"3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 13 20:10:04.431412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount228785502.mount: Deactivated successfully. Apr 13 20:10:04.435998 containerd[1501]: time="2026-04-13T20:10:04.435966786Z" level=info msg="CreateContainer within sandbox \"3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"5f0e980b7383694351951533dabc438b013f638123cdfe3d8cc581a257c96b99\"" Apr 13 20:10:04.437132 containerd[1501]: time="2026-04-13T20:10:04.437099254Z" level=info msg="StartContainer for \"5f0e980b7383694351951533dabc438b013f638123cdfe3d8cc581a257c96b99\"" Apr 13 20:10:04.467501 systemd[1]: Started cri-containerd-5f0e980b7383694351951533dabc438b013f638123cdfe3d8cc581a257c96b99.scope - libcontainer container 5f0e980b7383694351951533dabc438b013f638123cdfe3d8cc581a257c96b99. 
Apr 13 20:10:04.510540 containerd[1501]: time="2026-04-13T20:10:04.510507364Z" level=info msg="StartContainer for \"5f0e980b7383694351951533dabc438b013f638123cdfe3d8cc581a257c96b99\" returns successfully" Apr 13 20:10:04.588544 systemd-networkd[1408]: cali88e0eabeac6: Gained IPv6LL Apr 13 20:10:04.961727 containerd[1501]: time="2026-04-13T20:10:04.961587147Z" level=info msg="StopPodSandbox for \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\"" Apr 13 20:10:05.025124 kubelet[2582]: I0413 20:10:05.024897 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5d559f55b6-fdgdr" podStartSLOduration=30.024883484 podStartE2EDuration="30.024883484s" podCreationTimestamp="2026-04-13 20:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:10:04.215787605 +0000 UTC m=+44.342672634" watchObservedRunningTime="2026-04-13 20:10:05.024883484 +0000 UTC m=+45.151768503" Apr 13 20:10:05.072101 containerd[1501]: 2026-04-13 20:10:05.023 [INFO][4851] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Apr 13 20:10:05.072101 containerd[1501]: 2026-04-13 20:10:05.024 [INFO][4851] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" iface="eth0" netns="/var/run/netns/cni-07a3e9d6-6820-5f2e-7d94-5f0d2a496b5f" Apr 13 20:10:05.072101 containerd[1501]: 2026-04-13 20:10:05.026 [INFO][4851] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" iface="eth0" netns="/var/run/netns/cni-07a3e9d6-6820-5f2e-7d94-5f0d2a496b5f" Apr 13 20:10:05.072101 containerd[1501]: 2026-04-13 20:10:05.027 [INFO][4851] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" iface="eth0" netns="/var/run/netns/cni-07a3e9d6-6820-5f2e-7d94-5f0d2a496b5f" Apr 13 20:10:05.072101 containerd[1501]: 2026-04-13 20:10:05.029 [INFO][4851] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Apr 13 20:10:05.072101 containerd[1501]: 2026-04-13 20:10:05.029 [INFO][4851] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Apr 13 20:10:05.072101 containerd[1501]: 2026-04-13 20:10:05.059 [INFO][4861] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" HandleID="k8s-pod-network.aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0" Apr 13 20:10:05.072101 containerd[1501]: 2026-04-13 20:10:05.059 [INFO][4861] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:05.072101 containerd[1501]: 2026-04-13 20:10:05.060 [INFO][4861] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:05.072101 containerd[1501]: 2026-04-13 20:10:05.067 [WARNING][4861] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" HandleID="k8s-pod-network.aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0" Apr 13 20:10:05.072101 containerd[1501]: 2026-04-13 20:10:05.067 [INFO][4861] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" HandleID="k8s-pod-network.aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0" Apr 13 20:10:05.072101 containerd[1501]: 2026-04-13 20:10:05.068 [INFO][4861] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:05.072101 containerd[1501]: 2026-04-13 20:10:05.070 [INFO][4851] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Apr 13 20:10:05.073974 containerd[1501]: time="2026-04-13T20:10:05.072296115Z" level=info msg="TearDown network for sandbox \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\" successfully" Apr 13 20:10:05.073974 containerd[1501]: time="2026-04-13T20:10:05.072321845Z" level=info msg="StopPodSandbox for \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\" returns successfully" Apr 13 20:10:05.076631 systemd[1]: run-netns-cni\x2d07a3e9d6\x2d6820\x2d5f2e\x2d7d94\x2d5f0d2a496b5f.mount: Deactivated successfully. 
Apr 13 20:10:05.077986 containerd[1501]: time="2026-04-13T20:10:05.077963870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4df44,Uid:0454e9bc-ec53-4cc9-a0f4-2ba8ec7662fb,Namespace:kube-system,Attempt:1,}" Apr 13 20:10:05.168239 systemd-networkd[1408]: cali2ba631761ca: Link UP Apr 13 20:10:05.170178 systemd-networkd[1408]: cali2ba631761ca: Gained carrier Apr 13 20:10:05.183147 containerd[1501]: 2026-04-13 20:10:05.115 [INFO][4876] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0 coredns-66bc5c9577- kube-system 0454e9bc-ec53-4cc9-a0f4-2ba8ec7662fb 1001 0 2026-04-13 20:09:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-2-642afe6700 coredns-66bc5c9577-4df44 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2ba631761ca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61" Namespace="kube-system" Pod="coredns-66bc5c9577-4df44" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-" Apr 13 20:10:05.183147 containerd[1501]: 2026-04-13 20:10:05.115 [INFO][4876] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61" Namespace="kube-system" Pod="coredns-66bc5c9577-4df44" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0" Apr 13 20:10:05.183147 containerd[1501]: 2026-04-13 20:10:05.135 [INFO][4888] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61" 
HandleID="k8s-pod-network.d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0" Apr 13 20:10:05.183147 containerd[1501]: 2026-04-13 20:10:05.141 [INFO][4888] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61" HandleID="k8s-pod-network.d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277380), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-2-642afe6700", "pod":"coredns-66bc5c9577-4df44", "timestamp":"2026-04-13 20:10:05.135127598 +0000 UTC"}, Hostname:"ci-4081-3-7-2-642afe6700", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001142c0)} Apr 13 20:10:05.183147 containerd[1501]: 2026-04-13 20:10:05.141 [INFO][4888] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:05.183147 containerd[1501]: 2026-04-13 20:10:05.141 [INFO][4888] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:05.183147 containerd[1501]: 2026-04-13 20:10:05.141 [INFO][4888] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-2-642afe6700' Apr 13 20:10:05.183147 containerd[1501]: 2026-04-13 20:10:05.143 [INFO][4888] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61" host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:05.183147 containerd[1501]: 2026-04-13 20:10:05.146 [INFO][4888] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:05.183147 containerd[1501]: 2026-04-13 20:10:05.149 [INFO][4888] ipam/ipam.go 526: Trying affinity for 192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:05.183147 containerd[1501]: 2026-04-13 20:10:05.151 [INFO][4888] ipam/ipam.go 160: Attempting to load block cidr=192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:05.183147 containerd[1501]: 2026-04-13 20:10:05.152 [INFO][4888] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:05.183147 containerd[1501]: 2026-04-13 20:10:05.152 [INFO][4888] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61" host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:05.183147 containerd[1501]: 2026-04-13 20:10:05.154 [INFO][4888] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61 Apr 13 20:10:05.183147 containerd[1501]: 2026-04-13 20:10:05.157 [INFO][4888] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61" host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:05.183147 containerd[1501]: 2026-04-13 20:10:05.162 [INFO][4888] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.44.71/26] block=192.168.44.64/26 handle="k8s-pod-network.d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61" host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:05.183147 containerd[1501]: 2026-04-13 20:10:05.162 [INFO][4888] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.44.71/26] handle="k8s-pod-network.d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61" host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:05.183147 containerd[1501]: 2026-04-13 20:10:05.162 [INFO][4888] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:05.183147 containerd[1501]: 2026-04-13 20:10:05.162 [INFO][4888] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.44.71/26] IPv6=[] ContainerID="d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61" HandleID="k8s-pod-network.d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0" Apr 13 20:10:05.183559 containerd[1501]: 2026-04-13 20:10:05.165 [INFO][4876] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61" Namespace="kube-system" Pod="coredns-66bc5c9577-4df44" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0454e9bc-ec53-4cc9-a0f4-2ba8ec7662fb", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"", Pod:"coredns-66bc5c9577-4df44", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ba631761ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:05.183559 containerd[1501]: 2026-04-13 20:10:05.165 [INFO][4876] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.71/32] ContainerID="d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61" Namespace="kube-system" Pod="coredns-66bc5c9577-4df44" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0" Apr 13 20:10:05.183559 containerd[1501]: 2026-04-13 20:10:05.165 [INFO][4876] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ba631761ca 
ContainerID="d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61" Namespace="kube-system" Pod="coredns-66bc5c9577-4df44" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0" Apr 13 20:10:05.183559 containerd[1501]: 2026-04-13 20:10:05.169 [INFO][4876] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61" Namespace="kube-system" Pod="coredns-66bc5c9577-4df44" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0" Apr 13 20:10:05.183559 containerd[1501]: 2026-04-13 20:10:05.169 [INFO][4876] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61" Namespace="kube-system" Pod="coredns-66bc5c9577-4df44" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0454e9bc-ec53-4cc9-a0f4-2ba8ec7662fb", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", 
ContainerID:"d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61", Pod:"coredns-66bc5c9577-4df44", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ba631761ca", MAC:"a2:b5:c1:ad:fd:0f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:05.183706 containerd[1501]: 2026-04-13 20:10:05.178 [INFO][4876] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61" Namespace="kube-system" Pod="coredns-66bc5c9577-4df44" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0" Apr 13 20:10:05.208213 containerd[1501]: time="2026-04-13T20:10:05.207914630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:05.208213 containerd[1501]: time="2026-04-13T20:10:05.208008011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:05.208213 containerd[1501]: time="2026-04-13T20:10:05.208019261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:05.208631 containerd[1501]: time="2026-04-13T20:10:05.208412838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:05.209225 kubelet[2582]: I0413 20:10:05.208789 2582 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:10:05.227193 kubelet[2582]: I0413 20:10:05.226827 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-ngkh6" podStartSLOduration=20.948916747 podStartE2EDuration="31.226813913s" podCreationTimestamp="2026-04-13 20:09:34 +0000 UTC" firstStartedPulling="2026-04-13 20:09:54.135299476 +0000 UTC m=+34.262184495" lastFinishedPulling="2026-04-13 20:10:04.413196632 +0000 UTC m=+44.540081661" observedRunningTime="2026-04-13 20:10:05.224755232 +0000 UTC m=+45.351640261" watchObservedRunningTime="2026-04-13 20:10:05.226813913 +0000 UTC m=+45.353698942" Apr 13 20:10:05.241454 systemd[1]: Started cri-containerd-d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61.scope - libcontainer container d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61. 
Apr 13 20:10:05.296453 containerd[1501]: time="2026-04-13T20:10:05.296417818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4df44,Uid:0454e9bc-ec53-4cc9-a0f4-2ba8ec7662fb,Namespace:kube-system,Attempt:1,} returns sandbox id \"d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61\"" Apr 13 20:10:05.301591 containerd[1501]: time="2026-04-13T20:10:05.301505794Z" level=info msg="CreateContainer within sandbox \"d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:10:05.318724 containerd[1501]: time="2026-04-13T20:10:05.318674981Z" level=info msg="CreateContainer within sandbox \"d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1db2dd2a3ceb02a2e6399febb9239cdd1031e04924156cca34f3f472548c2964\"" Apr 13 20:10:05.319588 containerd[1501]: time="2026-04-13T20:10:05.319570904Z" level=info msg="StartContainer for \"1db2dd2a3ceb02a2e6399febb9239cdd1031e04924156cca34f3f472548c2964\"" Apr 13 20:10:05.353499 systemd[1]: Started cri-containerd-1db2dd2a3ceb02a2e6399febb9239cdd1031e04924156cca34f3f472548c2964.scope - libcontainer container 1db2dd2a3ceb02a2e6399febb9239cdd1031e04924156cca34f3f472548c2964. Apr 13 20:10:05.386237 containerd[1501]: time="2026-04-13T20:10:05.385955340Z" level=info msg="StartContainer for \"1db2dd2a3ceb02a2e6399febb9239cdd1031e04924156cca34f3f472548c2964\" returns successfully" Apr 13 20:10:06.068161 systemd[1]: run-containerd-runc-k8s.io-d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61-runc.XVSkVZ.mount: Deactivated successfully. 
Apr 13 20:10:06.238788 kubelet[2582]: I0413 20:10:06.238642 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4df44" podStartSLOduration=41.238621051 podStartE2EDuration="41.238621051s" podCreationTimestamp="2026-04-13 20:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:10:06.238076443 +0000 UTC m=+46.364961512" watchObservedRunningTime="2026-04-13 20:10:06.238621051 +0000 UTC m=+46.365506110" Apr 13 20:10:06.447436 systemd-networkd[1408]: cali2ba631761ca: Gained IPv6LL Apr 13 20:10:06.555113 containerd[1501]: time="2026-04-13T20:10:06.555057567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:06.556191 containerd[1501]: time="2026-04-13T20:10:06.556108802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 13 20:10:06.557189 containerd[1501]: time="2026-04-13T20:10:06.557162927Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:06.559165 containerd[1501]: time="2026-04-13T20:10:06.559033824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:06.559921 containerd[1501]: time="2026-04-13T20:10:06.559586541Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size 
\"10348547\" in 2.145322411s" Apr 13 20:10:06.559921 containerd[1501]: time="2026-04-13T20:10:06.559611541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 13 20:10:06.560791 containerd[1501]: time="2026-04-13T20:10:06.560460364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 13 20:10:06.563830 containerd[1501]: time="2026-04-13T20:10:06.563800261Z" level=info msg="CreateContainer within sandbox \"31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 13 20:10:06.585317 containerd[1501]: time="2026-04-13T20:10:06.585269024Z" level=info msg="CreateContainer within sandbox \"31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8dd6e8162124728d2ae82a4ed52e3dbdc8d1eb06e3e32718edb2fbe8f0f4d2dd\"" Apr 13 20:10:06.586894 containerd[1501]: time="2026-04-13T20:10:06.585870912Z" level=info msg="StartContainer for \"8dd6e8162124728d2ae82a4ed52e3dbdc8d1eb06e3e32718edb2fbe8f0f4d2dd\"" Apr 13 20:10:06.614506 systemd[1]: Started cri-containerd-8dd6e8162124728d2ae82a4ed52e3dbdc8d1eb06e3e32718edb2fbe8f0f4d2dd.scope - libcontainer container 8dd6e8162124728d2ae82a4ed52e3dbdc8d1eb06e3e32718edb2fbe8f0f4d2dd. 
Apr 13 20:10:06.644916 containerd[1501]: time="2026-04-13T20:10:06.644673582Z" level=info msg="StartContainer for \"8dd6e8162124728d2ae82a4ed52e3dbdc8d1eb06e3e32718edb2fbe8f0f4d2dd\" returns successfully" Apr 13 20:10:06.962308 containerd[1501]: time="2026-04-13T20:10:06.961528775Z" level=info msg="StopPodSandbox for \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\"" Apr 13 20:10:07.045823 containerd[1501]: 2026-04-13 20:10:07.008 [INFO][5096] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Apr 13 20:10:07.045823 containerd[1501]: 2026-04-13 20:10:07.008 [INFO][5096] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" iface="eth0" netns="/var/run/netns/cni-31d7d1bd-7ca6-3d69-c52b-670c5f258a72" Apr 13 20:10:07.045823 containerd[1501]: 2026-04-13 20:10:07.009 [INFO][5096] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" iface="eth0" netns="/var/run/netns/cni-31d7d1bd-7ca6-3d69-c52b-670c5f258a72" Apr 13 20:10:07.045823 containerd[1501]: 2026-04-13 20:10:07.010 [INFO][5096] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" iface="eth0" netns="/var/run/netns/cni-31d7d1bd-7ca6-3d69-c52b-670c5f258a72" Apr 13 20:10:07.045823 containerd[1501]: 2026-04-13 20:10:07.010 [INFO][5096] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Apr 13 20:10:07.045823 containerd[1501]: 2026-04-13 20:10:07.010 [INFO][5096] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Apr 13 20:10:07.045823 containerd[1501]: 2026-04-13 20:10:07.036 [INFO][5104] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" HandleID="k8s-pod-network.8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0" Apr 13 20:10:07.045823 containerd[1501]: 2026-04-13 20:10:07.036 [INFO][5104] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:07.045823 containerd[1501]: 2026-04-13 20:10:07.036 [INFO][5104] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:07.045823 containerd[1501]: 2026-04-13 20:10:07.040 [WARNING][5104] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" HandleID="k8s-pod-network.8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0" Apr 13 20:10:07.045823 containerd[1501]: 2026-04-13 20:10:07.040 [INFO][5104] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" HandleID="k8s-pod-network.8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0" Apr 13 20:10:07.045823 containerd[1501]: 2026-04-13 20:10:07.042 [INFO][5104] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:07.045823 containerd[1501]: 2026-04-13 20:10:07.043 [INFO][5096] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Apr 13 20:10:07.046274 containerd[1501]: time="2026-04-13T20:10:07.045971520Z" level=info msg="TearDown network for sandbox \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\" successfully" Apr 13 20:10:07.046274 containerd[1501]: time="2026-04-13T20:10:07.045992370Z" level=info msg="StopPodSandbox for \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\" returns successfully" Apr 13 20:10:07.048244 containerd[1501]: time="2026-04-13T20:10:07.048217320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tddkd,Uid:b1dedca3-06af-44ff-b14b-b383f1cac2f6,Namespace:kube-system,Attempt:1,}" Apr 13 20:10:07.065700 systemd[1]: run-netns-cni\x2d31d7d1bd\x2d7ca6\x2d3d69\x2dc52b\x2d670c5f258a72.mount: Deactivated successfully. 
Apr 13 20:10:07.137946 systemd-networkd[1408]: calie98806f6d73: Link UP Apr 13 20:10:07.138091 systemd-networkd[1408]: calie98806f6d73: Gained carrier Apr 13 20:10:07.154031 containerd[1501]: 2026-04-13 20:10:07.084 [INFO][5110] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0 coredns-66bc5c9577- kube-system b1dedca3-06af-44ff-b14b-b383f1cac2f6 1029 0 2026-04-13 20:09:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-2-642afe6700 coredns-66bc5c9577-tddkd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie98806f6d73 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b" Namespace="kube-system" Pod="coredns-66bc5c9577-tddkd" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-" Apr 13 20:10:07.154031 containerd[1501]: 2026-04-13 20:10:07.084 [INFO][5110] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b" Namespace="kube-system" Pod="coredns-66bc5c9577-tddkd" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0" Apr 13 20:10:07.154031 containerd[1501]: 2026-04-13 20:10:07.105 [INFO][5123] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b" HandleID="k8s-pod-network.4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0" Apr 13 20:10:07.154031 containerd[1501]: 2026-04-13 20:10:07.110 [INFO][5123] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b" HandleID="k8s-pod-network.4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277a60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-2-642afe6700", "pod":"coredns-66bc5c9577-tddkd", "timestamp":"2026-04-13 20:10:07.105138837 +0000 UTC"}, Hostname:"ci-4081-3-7-2-642afe6700", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000206dc0)} Apr 13 20:10:07.154031 containerd[1501]: 2026-04-13 20:10:07.110 [INFO][5123] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:07.154031 containerd[1501]: 2026-04-13 20:10:07.110 [INFO][5123] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:07.154031 containerd[1501]: 2026-04-13 20:10:07.110 [INFO][5123] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-2-642afe6700' Apr 13 20:10:07.154031 containerd[1501]: 2026-04-13 20:10:07.112 [INFO][5123] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b" host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:07.154031 containerd[1501]: 2026-04-13 20:10:07.115 [INFO][5123] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:07.154031 containerd[1501]: 2026-04-13 20:10:07.119 [INFO][5123] ipam/ipam.go 526: Trying affinity for 192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:07.154031 containerd[1501]: 2026-04-13 20:10:07.120 [INFO][5123] ipam/ipam.go 160: Attempting to load block cidr=192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:07.154031 containerd[1501]: 2026-04-13 20:10:07.122 [INFO][5123] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:07.154031 containerd[1501]: 2026-04-13 20:10:07.122 [INFO][5123] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b" host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:07.154031 containerd[1501]: 2026-04-13 20:10:07.123 [INFO][5123] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b Apr 13 20:10:07.154031 containerd[1501]: 2026-04-13 20:10:07.126 [INFO][5123] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b" host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:07.154031 containerd[1501]: 2026-04-13 20:10:07.132 [INFO][5123] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.44.72/26] block=192.168.44.64/26 handle="k8s-pod-network.4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b" host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:07.154031 containerd[1501]: 2026-04-13 20:10:07.132 [INFO][5123] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.44.72/26] handle="k8s-pod-network.4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b" host="ci-4081-3-7-2-642afe6700" Apr 13 20:10:07.154031 containerd[1501]: 2026-04-13 20:10:07.132 [INFO][5123] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:07.154031 containerd[1501]: 2026-04-13 20:10:07.132 [INFO][5123] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.44.72/26] IPv6=[] ContainerID="4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b" HandleID="k8s-pod-network.4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0" Apr 13 20:10:07.154866 containerd[1501]: 2026-04-13 20:10:07.135 [INFO][5110] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b" Namespace="kube-system" Pod="coredns-66bc5c9577-tddkd" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b1dedca3-06af-44ff-b14b-b383f1cac2f6", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"", Pod:"coredns-66bc5c9577-tddkd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie98806f6d73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:07.154866 containerd[1501]: 2026-04-13 20:10:07.135 [INFO][5110] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.72/32] ContainerID="4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b" Namespace="kube-system" Pod="coredns-66bc5c9577-tddkd" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0" Apr 13 20:10:07.154866 containerd[1501]: 2026-04-13 20:10:07.135 [INFO][5110] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie98806f6d73 
ContainerID="4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b" Namespace="kube-system" Pod="coredns-66bc5c9577-tddkd" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0" Apr 13 20:10:07.154866 containerd[1501]: 2026-04-13 20:10:07.137 [INFO][5110] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b" Namespace="kube-system" Pod="coredns-66bc5c9577-tddkd" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0" Apr 13 20:10:07.154866 containerd[1501]: 2026-04-13 20:10:07.140 [INFO][5110] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b" Namespace="kube-system" Pod="coredns-66bc5c9577-tddkd" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b1dedca3-06af-44ff-b14b-b383f1cac2f6", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", 
ContainerID:"4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b", Pod:"coredns-66bc5c9577-tddkd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie98806f6d73", MAC:"6e:96:02:20:b5:7d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:07.155010 containerd[1501]: 2026-04-13 20:10:07.148 [INFO][5110] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b" Namespace="kube-system" Pod="coredns-66bc5c9577-tddkd" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0" Apr 13 20:10:07.173256 containerd[1501]: time="2026-04-13T20:10:07.172974537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:07.173256 containerd[1501]: time="2026-04-13T20:10:07.173015268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:07.173256 containerd[1501]: time="2026-04-13T20:10:07.173022918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:07.173256 containerd[1501]: time="2026-04-13T20:10:07.173077048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:07.192929 systemd[1]: run-containerd-runc-k8s.io-4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b-runc.OzRjoK.mount: Deactivated successfully. Apr 13 20:10:07.203570 systemd[1]: Started cri-containerd-4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b.scope - libcontainer container 4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b. Apr 13 20:10:07.237814 containerd[1501]: time="2026-04-13T20:10:07.237761448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tddkd,Uid:b1dedca3-06af-44ff-b14b-b383f1cac2f6,Namespace:kube-system,Attempt:1,} returns sandbox id \"4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b\"" Apr 13 20:10:07.244690 containerd[1501]: time="2026-04-13T20:10:07.244572879Z" level=info msg="CreateContainer within sandbox \"4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:10:07.257174 containerd[1501]: time="2026-04-13T20:10:07.257100075Z" level=info msg="CreateContainer within sandbox \"4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d3d07da22369c7e59bdb706b710d6ad061671ef1de9137e8a8069502f02eeb2\"" Apr 13 20:10:07.258357 containerd[1501]: time="2026-04-13T20:10:07.258188289Z" level=info msg="StartContainer for \"6d3d07da22369c7e59bdb706b710d6ad061671ef1de9137e8a8069502f02eeb2\"" Apr 13 20:10:07.291475 systemd[1]: 
Started cri-containerd-6d3d07da22369c7e59bdb706b710d6ad061671ef1de9137e8a8069502f02eeb2.scope - libcontainer container 6d3d07da22369c7e59bdb706b710d6ad061671ef1de9137e8a8069502f02eeb2. Apr 13 20:10:07.312650 containerd[1501]: time="2026-04-13T20:10:07.312604722Z" level=info msg="StartContainer for \"6d3d07da22369c7e59bdb706b710d6ad061671ef1de9137e8a8069502f02eeb2\" returns successfully" Apr 13 20:10:08.264756 kubelet[2582]: I0413 20:10:08.264655 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tddkd" podStartSLOduration=43.264635266 podStartE2EDuration="43.264635266s" podCreationTimestamp="2026-04-13 20:09:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:10:08.242442778 +0000 UTC m=+48.369327837" watchObservedRunningTime="2026-04-13 20:10:08.264635266 +0000 UTC m=+48.391520325" Apr 13 20:10:08.685764 systemd-networkd[1408]: calie98806f6d73: Gained IPv6LL Apr 13 20:10:08.800511 containerd[1501]: time="2026-04-13T20:10:08.800468857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:08.801488 containerd[1501]: time="2026-04-13T20:10:08.801453730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 13 20:10:08.802378 containerd[1501]: time="2026-04-13T20:10:08.802312360Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:08.804174 containerd[1501]: time="2026-04-13T20:10:08.804150723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 
20:10:08.804655 containerd[1501]: time="2026-04-13T20:10:08.804638069Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.244155035s" Apr 13 20:10:08.804760 containerd[1501]: time="2026-04-13T20:10:08.804694350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 13 20:10:08.805788 containerd[1501]: time="2026-04-13T20:10:08.805474870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 13 20:10:08.808416 containerd[1501]: time="2026-04-13T20:10:08.808400507Z" level=info msg="CreateContainer within sandbox \"be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 20:10:08.828764 containerd[1501]: time="2026-04-13T20:10:08.828736220Z" level=info msg="CreateContainer within sandbox \"be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"93081237083241980a2d6eb76a14bd7ecd0622edac67d36d31a5f501d0b0374c\"" Apr 13 20:10:08.829912 containerd[1501]: time="2026-04-13T20:10:08.829173906Z" level=info msg="StartContainer for \"93081237083241980a2d6eb76a14bd7ecd0622edac67d36d31a5f501d0b0374c\"" Apr 13 20:10:08.857458 systemd[1]: Started cri-containerd-93081237083241980a2d6eb76a14bd7ecd0622edac67d36d31a5f501d0b0374c.scope - libcontainer container 93081237083241980a2d6eb76a14bd7ecd0622edac67d36d31a5f501d0b0374c. 
Apr 13 20:10:08.894964 containerd[1501]: time="2026-04-13T20:10:08.894810698Z" level=info msg="StartContainer for \"93081237083241980a2d6eb76a14bd7ecd0622edac67d36d31a5f501d0b0374c\" returns successfully" Apr 13 20:10:11.132643 containerd[1501]: time="2026-04-13T20:10:11.132570581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:11.133926 containerd[1501]: time="2026-04-13T20:10:11.133728903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 13 20:10:11.135393 containerd[1501]: time="2026-04-13T20:10:11.134690723Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:11.137005 containerd[1501]: time="2026-04-13T20:10:11.136941846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:11.137491 containerd[1501]: time="2026-04-13T20:10:11.137455852Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.331962652s" Apr 13 20:10:11.137491 containerd[1501]: time="2026-04-13T20:10:11.137486152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 13 20:10:11.139721 containerd[1501]: 
time="2026-04-13T20:10:11.139693206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 13 20:10:11.142717 containerd[1501]: time="2026-04-13T20:10:11.142678386Z" level=info msg="CreateContainer within sandbox \"31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 13 20:10:11.161663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3477378361.mount: Deactivated successfully. Apr 13 20:10:11.162507 containerd[1501]: time="2026-04-13T20:10:11.162423343Z" level=info msg="CreateContainer within sandbox \"31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8e58664154febb39e67c6982060cc6302c451c0ddb581eea0f2fb13c55a4f713\"" Apr 13 20:10:11.163383 containerd[1501]: time="2026-04-13T20:10:11.163053469Z" level=info msg="StartContainer for \"8e58664154febb39e67c6982060cc6302c451c0ddb581eea0f2fb13c55a4f713\"" Apr 13 20:10:11.193446 systemd[1]: Started cri-containerd-8e58664154febb39e67c6982060cc6302c451c0ddb581eea0f2fb13c55a4f713.scope - libcontainer container 8e58664154febb39e67c6982060cc6302c451c0ddb581eea0f2fb13c55a4f713. 
Apr 13 20:10:11.219477 containerd[1501]: time="2026-04-13T20:10:11.219428269Z" level=info msg="StartContainer for \"8e58664154febb39e67c6982060cc6302c451c0ddb581eea0f2fb13c55a4f713\" returns successfully" Apr 13 20:10:12.048366 kubelet[2582]: I0413 20:10:12.048221 2582 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 13 20:10:12.050464 kubelet[2582]: I0413 20:10:12.050417 2582 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 13 20:10:13.071933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2352408953.mount: Deactivated successfully. Apr 13 20:10:13.082645 containerd[1501]: time="2026-04-13T20:10:13.082607966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:13.083507 containerd[1501]: time="2026-04-13T20:10:13.083412864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 13 20:10:13.084163 containerd[1501]: time="2026-04-13T20:10:13.084003439Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:13.085704 containerd[1501]: time="2026-04-13T20:10:13.085677155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:13.086135 containerd[1501]: time="2026-04-13T20:10:13.086103308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id 
\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.946385442s" Apr 13 20:10:13.086135 containerd[1501]: time="2026-04-13T20:10:13.086127999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 13 20:10:13.089192 containerd[1501]: time="2026-04-13T20:10:13.089156297Z" level=info msg="CreateContainer within sandbox \"be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 13 20:10:13.107345 containerd[1501]: time="2026-04-13T20:10:13.107312586Z" level=info msg="CreateContainer within sandbox \"be37c425a9b3b89a09cc83d3b1dd52ccbc2514eacfca37b0278cc5148234d8d5\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"cc6c73c60948cbd0547519f37a5fc4ad78db3e3b54dcce95e042d3b0f3576f39\"" Apr 13 20:10:13.107686 containerd[1501]: time="2026-04-13T20:10:13.107670639Z" level=info msg="StartContainer for \"cc6c73c60948cbd0547519f37a5fc4ad78db3e3b54dcce95e042d3b0f3576f39\"" Apr 13 20:10:13.139456 systemd[1]: Started cri-containerd-cc6c73c60948cbd0547519f37a5fc4ad78db3e3b54dcce95e042d3b0f3576f39.scope - libcontainer container cc6c73c60948cbd0547519f37a5fc4ad78db3e3b54dcce95e042d3b0f3576f39. 
Apr 13 20:10:13.175027 containerd[1501]: time="2026-04-13T20:10:13.174847133Z" level=info msg="StartContainer for \"cc6c73c60948cbd0547519f37a5fc4ad78db3e3b54dcce95e042d3b0f3576f39\" returns successfully" Apr 13 20:10:13.257697 kubelet[2582]: I0413 20:10:13.257642 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7787466b5b-xn5n8" podStartSLOduration=0.908405719 podStartE2EDuration="19.257628681s" podCreationTimestamp="2026-04-13 20:09:54 +0000 UTC" firstStartedPulling="2026-04-13 20:09:54.738058597 +0000 UTC m=+34.864943626" lastFinishedPulling="2026-04-13 20:10:13.087281559 +0000 UTC m=+53.214166588" observedRunningTime="2026-04-13 20:10:13.255856396 +0000 UTC m=+53.382741415" watchObservedRunningTime="2026-04-13 20:10:13.257628681 +0000 UTC m=+53.384513700" Apr 13 20:10:13.258497 kubelet[2582]: I0413 20:10:13.257841 2582 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bd8jl" podStartSLOduration=21.292005953 podStartE2EDuration="38.257837133s" podCreationTimestamp="2026-04-13 20:09:35 +0000 UTC" firstStartedPulling="2026-04-13 20:09:54.173118168 +0000 UTC m=+34.300003197" lastFinishedPulling="2026-04-13 20:10:11.138949348 +0000 UTC m=+51.265834377" observedRunningTime="2026-04-13 20:10:11.253857829 +0000 UTC m=+51.380742858" watchObservedRunningTime="2026-04-13 20:10:13.257837133 +0000 UTC m=+53.384722152" Apr 13 20:10:19.956443 containerd[1501]: time="2026-04-13T20:10:19.956257605Z" level=info msg="StopPodSandbox for \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\"" Apr 13 20:10:20.060235 containerd[1501]: 2026-04-13 20:10:20.021 [WARNING][5401] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0", GenerateName:"calico-apiserver-5d559f55b6-", Namespace:"calico-system", SelfLink:"", UID:"1febeed9-7aaa-4a97-a2b4-1f1caf66c1e4", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d559f55b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121", Pod:"calico-apiserver-5d559f55b6-fdgdr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali88e0eabeac6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:20.060235 containerd[1501]: 2026-04-13 20:10:20.021 [INFO][5401] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Apr 13 20:10:20.060235 containerd[1501]: 2026-04-13 20:10:20.021 [INFO][5401] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" iface="eth0" netns="" Apr 13 20:10:20.060235 containerd[1501]: 2026-04-13 20:10:20.021 [INFO][5401] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Apr 13 20:10:20.060235 containerd[1501]: 2026-04-13 20:10:20.021 [INFO][5401] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Apr 13 20:10:20.060235 containerd[1501]: 2026-04-13 20:10:20.050 [INFO][5410] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" HandleID="k8s-pod-network.77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0" Apr 13 20:10:20.060235 containerd[1501]: 2026-04-13 20:10:20.050 [INFO][5410] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.060235 containerd[1501]: 2026-04-13 20:10:20.050 [INFO][5410] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.060235 containerd[1501]: 2026-04-13 20:10:20.055 [WARNING][5410] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" HandleID="k8s-pod-network.77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0" Apr 13 20:10:20.060235 containerd[1501]: 2026-04-13 20:10:20.055 [INFO][5410] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" HandleID="k8s-pod-network.77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0" Apr 13 20:10:20.060235 containerd[1501]: 2026-04-13 20:10:20.056 [INFO][5410] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.060235 containerd[1501]: 2026-04-13 20:10:20.058 [INFO][5401] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Apr 13 20:10:20.060750 containerd[1501]: time="2026-04-13T20:10:20.060267388Z" level=info msg="TearDown network for sandbox \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\" successfully" Apr 13 20:10:20.060750 containerd[1501]: time="2026-04-13T20:10:20.060289348Z" level=info msg="StopPodSandbox for \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\" returns successfully" Apr 13 20:10:20.060834 containerd[1501]: time="2026-04-13T20:10:20.060807461Z" level=info msg="RemovePodSandbox for \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\"" Apr 13 20:10:20.060834 containerd[1501]: time="2026-04-13T20:10:20.060829612Z" level=info msg="Forcibly stopping sandbox \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\"" Apr 13 20:10:20.112821 containerd[1501]: 2026-04-13 20:10:20.086 [WARNING][5425] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0", GenerateName:"calico-apiserver-5d559f55b6-", Namespace:"calico-system", SelfLink:"", UID:"1febeed9-7aaa-4a97-a2b4-1f1caf66c1e4", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d559f55b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"8bdd82822d74d2a6c6343e744aa2ac5701d4a11f2893bf47002ec9ac82455121", Pod:"calico-apiserver-5d559f55b6-fdgdr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali88e0eabeac6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:20.112821 containerd[1501]: 2026-04-13 20:10:20.086 [INFO][5425] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Apr 13 20:10:20.112821 containerd[1501]: 2026-04-13 20:10:20.086 [INFO][5425] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" iface="eth0" netns="" Apr 13 20:10:20.112821 containerd[1501]: 2026-04-13 20:10:20.086 [INFO][5425] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Apr 13 20:10:20.112821 containerd[1501]: 2026-04-13 20:10:20.086 [INFO][5425] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Apr 13 20:10:20.112821 containerd[1501]: 2026-04-13 20:10:20.103 [INFO][5432] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" HandleID="k8s-pod-network.77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0" Apr 13 20:10:20.112821 containerd[1501]: 2026-04-13 20:10:20.103 [INFO][5432] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.112821 containerd[1501]: 2026-04-13 20:10:20.103 [INFO][5432] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.112821 containerd[1501]: 2026-04-13 20:10:20.108 [WARNING][5432] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" HandleID="k8s-pod-network.77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0" Apr 13 20:10:20.112821 containerd[1501]: 2026-04-13 20:10:20.108 [INFO][5432] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" HandleID="k8s-pod-network.77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--fdgdr-eth0" Apr 13 20:10:20.112821 containerd[1501]: 2026-04-13 20:10:20.109 [INFO][5432] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.112821 containerd[1501]: 2026-04-13 20:10:20.111 [INFO][5425] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e" Apr 13 20:10:20.113151 containerd[1501]: time="2026-04-13T20:10:20.112842075Z" level=info msg="TearDown network for sandbox \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\" successfully" Apr 13 20:10:20.117708 containerd[1501]: time="2026-04-13T20:10:20.117682455Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:20.117778 containerd[1501]: time="2026-04-13T20:10:20.117729265Z" level=info msg="RemovePodSandbox \"77d2961b052efc227807e50fac66904b615cd80aa8642efcb830040e35b3390e\" returns successfully" Apr 13 20:10:20.118135 containerd[1501]: time="2026-04-13T20:10:20.118103957Z" level=info msg="StopPodSandbox for \"ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575\"" Apr 13 20:10:20.172218 containerd[1501]: 2026-04-13 20:10:20.144 [WARNING][5447] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0", GenerateName:"calico-kube-controllers-ffb7f679-", Namespace:"calico-system", SelfLink:"", UID:"ef5a4fd8-83c5-4b36-9eb8-ac26cc2345f3", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ffb7f679", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0", Pod:"calico-kube-controllers-ffb7f679-bnvkh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4afa56b62cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:20.172218 containerd[1501]: 2026-04-13 20:10:20.145 [INFO][5447] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Apr 13 20:10:20.172218 containerd[1501]: 2026-04-13 20:10:20.145 [INFO][5447] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" iface="eth0" netns="" Apr 13 20:10:20.172218 containerd[1501]: 2026-04-13 20:10:20.145 [INFO][5447] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Apr 13 20:10:20.172218 containerd[1501]: 2026-04-13 20:10:20.145 [INFO][5447] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Apr 13 20:10:20.172218 containerd[1501]: 2026-04-13 20:10:20.162 [INFO][5455] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" HandleID="k8s-pod-network.ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0" Apr 13 20:10:20.172218 containerd[1501]: 2026-04-13 20:10:20.163 [INFO][5455] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.172218 containerd[1501]: 2026-04-13 20:10:20.163 [INFO][5455] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.172218 containerd[1501]: 2026-04-13 20:10:20.167 [WARNING][5455] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" HandleID="k8s-pod-network.ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0" Apr 13 20:10:20.172218 containerd[1501]: 2026-04-13 20:10:20.167 [INFO][5455] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" HandleID="k8s-pod-network.ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0" Apr 13 20:10:20.172218 containerd[1501]: 2026-04-13 20:10:20.168 [INFO][5455] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.172218 containerd[1501]: 2026-04-13 20:10:20.170 [INFO][5447] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Apr 13 20:10:20.173551 containerd[1501]: time="2026-04-13T20:10:20.172260734Z" level=info msg="TearDown network for sandbox \"ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575\" successfully" Apr 13 20:10:20.173551 containerd[1501]: time="2026-04-13T20:10:20.172323624Z" level=info msg="StopPodSandbox for \"ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575\" returns successfully" Apr 13 20:10:20.173551 containerd[1501]: time="2026-04-13T20:10:20.173220760Z" level=info msg="RemovePodSandbox for \"ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575\"" Apr 13 20:10:20.173551 containerd[1501]: time="2026-04-13T20:10:20.173262430Z" level=info msg="Forcibly stopping sandbox \"ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575\"" Apr 13 20:10:20.237377 containerd[1501]: 2026-04-13 20:10:20.201 [WARNING][5470] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0", GenerateName:"calico-kube-controllers-ffb7f679-", Namespace:"calico-system", SelfLink:"", UID:"ef5a4fd8-83c5-4b36-9eb8-ac26cc2345f3", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ffb7f679", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"d92d786cd67c5a6857b185e8e2df1db3bb53520dc4a9e367c2f0d6a27f76ffa0", Pod:"calico-kube-controllers-ffb7f679-bnvkh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4afa56b62cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:20.237377 containerd[1501]: 2026-04-13 20:10:20.201 [INFO][5470] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Apr 13 20:10:20.237377 containerd[1501]: 2026-04-13 20:10:20.202 [INFO][5470] cni-plugin/dataplane_linux.go 
555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" iface="eth0" netns="" Apr 13 20:10:20.237377 containerd[1501]: 2026-04-13 20:10:20.202 [INFO][5470] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Apr 13 20:10:20.237377 containerd[1501]: 2026-04-13 20:10:20.202 [INFO][5470] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Apr 13 20:10:20.237377 containerd[1501]: 2026-04-13 20:10:20.227 [INFO][5477] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" HandleID="k8s-pod-network.ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0" Apr 13 20:10:20.237377 containerd[1501]: 2026-04-13 20:10:20.227 [INFO][5477] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.237377 containerd[1501]: 2026-04-13 20:10:20.227 [INFO][5477] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.237377 containerd[1501]: 2026-04-13 20:10:20.232 [WARNING][5477] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" HandleID="k8s-pod-network.ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0" Apr 13 20:10:20.237377 containerd[1501]: 2026-04-13 20:10:20.232 [INFO][5477] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" HandleID="k8s-pod-network.ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--kube--controllers--ffb7f679--bnvkh-eth0" Apr 13 20:10:20.237377 containerd[1501]: 2026-04-13 20:10:20.233 [INFO][5477] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.237377 containerd[1501]: 2026-04-13 20:10:20.234 [INFO][5470] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575" Apr 13 20:10:20.237377 containerd[1501]: time="2026-04-13T20:10:20.236562484Z" level=info msg="TearDown network for sandbox \"ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575\" successfully" Apr 13 20:10:20.240661 containerd[1501]: time="2026-04-13T20:10:20.240634659Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:20.240729 containerd[1501]: time="2026-04-13T20:10:20.240679469Z" level=info msg="RemovePodSandbox \"ac575e092587398097af7818177f6202f1810e34e771edf5529b00004a377575\" returns successfully" Apr 13 20:10:20.241017 containerd[1501]: time="2026-04-13T20:10:20.240998301Z" level=info msg="StopPodSandbox for \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\"" Apr 13 20:10:20.297237 containerd[1501]: 2026-04-13 20:10:20.270 [WARNING][5492] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b1dedca3-06af-44ff-b14b-b383f1cac2f6", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b", Pod:"coredns-66bc5c9577-tddkd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie98806f6d73", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:20.297237 containerd[1501]: 2026-04-13 20:10:20.270 [INFO][5492] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Apr 13 20:10:20.297237 containerd[1501]: 2026-04-13 20:10:20.270 [INFO][5492] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" iface="eth0" netns="" Apr 13 20:10:20.297237 containerd[1501]: 2026-04-13 20:10:20.270 [INFO][5492] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Apr 13 20:10:20.297237 containerd[1501]: 2026-04-13 20:10:20.270 [INFO][5492] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Apr 13 20:10:20.297237 containerd[1501]: 2026-04-13 20:10:20.285 [INFO][5499] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" HandleID="k8s-pod-network.8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0" Apr 13 20:10:20.297237 containerd[1501]: 2026-04-13 20:10:20.286 [INFO][5499] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.297237 containerd[1501]: 2026-04-13 20:10:20.286 [INFO][5499] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.297237 containerd[1501]: 2026-04-13 20:10:20.291 [WARNING][5499] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" HandleID="k8s-pod-network.8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0" Apr 13 20:10:20.297237 containerd[1501]: 2026-04-13 20:10:20.291 [INFO][5499] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" HandleID="k8s-pod-network.8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0" Apr 13 20:10:20.297237 containerd[1501]: 2026-04-13 20:10:20.293 [INFO][5499] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.297237 containerd[1501]: 2026-04-13 20:10:20.295 [INFO][5492] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Apr 13 20:10:20.297858 containerd[1501]: time="2026-04-13T20:10:20.297356702Z" level=info msg="TearDown network for sandbox \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\" successfully" Apr 13 20:10:20.297858 containerd[1501]: time="2026-04-13T20:10:20.297436802Z" level=info msg="StopPodSandbox for \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\" returns successfully" Apr 13 20:10:20.298253 containerd[1501]: time="2026-04-13T20:10:20.298225726Z" level=info msg="RemovePodSandbox for \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\"" Apr 13 20:10:20.298286 containerd[1501]: time="2026-04-13T20:10:20.298260967Z" level=info msg="Forcibly stopping sandbox \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\"" Apr 13 20:10:20.354837 containerd[1501]: 2026-04-13 20:10:20.327 [WARNING][5513] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b1dedca3-06af-44ff-b14b-b383f1cac2f6", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"4c1cff125e741e961dcdca56c9f64b3c50f640794eeb2df9622d8e8187c8272b", Pod:"coredns-66bc5c9577-tddkd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie98806f6d73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:20.354837 containerd[1501]: 2026-04-13 20:10:20.328 [INFO][5513] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Apr 13 20:10:20.354837 containerd[1501]: 2026-04-13 20:10:20.328 [INFO][5513] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" iface="eth0" netns="" Apr 13 20:10:20.354837 containerd[1501]: 2026-04-13 20:10:20.328 [INFO][5513] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Apr 13 20:10:20.354837 containerd[1501]: 2026-04-13 20:10:20.328 [INFO][5513] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Apr 13 20:10:20.354837 containerd[1501]: 2026-04-13 20:10:20.344 [INFO][5520] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" HandleID="k8s-pod-network.8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0" Apr 13 20:10:20.354837 containerd[1501]: 2026-04-13 20:10:20.344 [INFO][5520] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.354837 containerd[1501]: 2026-04-13 20:10:20.344 [INFO][5520] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.354837 containerd[1501]: 2026-04-13 20:10:20.349 [WARNING][5520] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" HandleID="k8s-pod-network.8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0" Apr 13 20:10:20.354837 containerd[1501]: 2026-04-13 20:10:20.349 [INFO][5520] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" HandleID="k8s-pod-network.8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--tddkd-eth0" Apr 13 20:10:20.354837 containerd[1501]: 2026-04-13 20:10:20.351 [INFO][5520] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.354837 containerd[1501]: 2026-04-13 20:10:20.353 [INFO][5513] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed" Apr 13 20:10:20.355161 containerd[1501]: time="2026-04-13T20:10:20.354879638Z" level=info msg="TearDown network for sandbox \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\" successfully" Apr 13 20:10:20.358133 containerd[1501]: time="2026-04-13T20:10:20.358108499Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:20.358458 containerd[1501]: time="2026-04-13T20:10:20.358166339Z" level=info msg="RemovePodSandbox \"8ceb0043be36931260406799b39638060af8e572154ff39738cbd8848251e9ed\" returns successfully" Apr 13 20:10:20.358644 containerd[1501]: time="2026-04-13T20:10:20.358618202Z" level=info msg="StopPodSandbox for \"439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e\"" Apr 13 20:10:20.409121 containerd[1501]: 2026-04-13 20:10:20.382 [WARNING][5535] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0", GenerateName:"calico-apiserver-5d559f55b6-", Namespace:"calico-system", SelfLink:"", UID:"95e2e62a-c377-4112-9481-1c4f900ed72b", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d559f55b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665", Pod:"calico-apiserver-5d559f55b6-jwmwb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid10fcd8f375", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:20.409121 containerd[1501]: 2026-04-13 20:10:20.383 [INFO][5535] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Apr 13 20:10:20.409121 containerd[1501]: 2026-04-13 20:10:20.383 [INFO][5535] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" iface="eth0" netns="" Apr 13 20:10:20.409121 containerd[1501]: 2026-04-13 20:10:20.383 [INFO][5535] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Apr 13 20:10:20.409121 containerd[1501]: 2026-04-13 20:10:20.383 [INFO][5535] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Apr 13 20:10:20.409121 containerd[1501]: 2026-04-13 20:10:20.397 [INFO][5542] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" HandleID="k8s-pod-network.439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0" Apr 13 20:10:20.409121 containerd[1501]: 2026-04-13 20:10:20.398 [INFO][5542] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.409121 containerd[1501]: 2026-04-13 20:10:20.398 [INFO][5542] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.409121 containerd[1501]: 2026-04-13 20:10:20.404 [WARNING][5542] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" HandleID="k8s-pod-network.439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0" Apr 13 20:10:20.409121 containerd[1501]: 2026-04-13 20:10:20.404 [INFO][5542] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" HandleID="k8s-pod-network.439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0" Apr 13 20:10:20.409121 containerd[1501]: 2026-04-13 20:10:20.405 [INFO][5542] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.409121 containerd[1501]: 2026-04-13 20:10:20.407 [INFO][5535] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Apr 13 20:10:20.410126 containerd[1501]: time="2026-04-13T20:10:20.409131046Z" level=info msg="TearDown network for sandbox \"439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e\" successfully" Apr 13 20:10:20.410126 containerd[1501]: time="2026-04-13T20:10:20.409151936Z" level=info msg="StopPodSandbox for \"439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e\" returns successfully" Apr 13 20:10:20.410126 containerd[1501]: time="2026-04-13T20:10:20.409675219Z" level=info msg="RemovePodSandbox for \"439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e\"" Apr 13 20:10:20.410126 containerd[1501]: time="2026-04-13T20:10:20.409694149Z" level=info msg="Forcibly stopping sandbox \"439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e\"" Apr 13 20:10:20.461899 containerd[1501]: 2026-04-13 20:10:20.435 [WARNING][5557] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0", GenerateName:"calico-apiserver-5d559f55b6-", Namespace:"calico-system", SelfLink:"", UID:"95e2e62a-c377-4112-9481-1c4f900ed72b", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d559f55b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"afae89ceb04899f1d8c176e1dec7e50c6f50c74b7386b26aae4fdc2c37bc9665", Pod:"calico-apiserver-5d559f55b6-jwmwb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid10fcd8f375", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:20.461899 containerd[1501]: 2026-04-13 20:10:20.435 [INFO][5557] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Apr 13 20:10:20.461899 containerd[1501]: 2026-04-13 20:10:20.435 [INFO][5557] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" iface="eth0" netns="" Apr 13 20:10:20.461899 containerd[1501]: 2026-04-13 20:10:20.435 [INFO][5557] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Apr 13 20:10:20.461899 containerd[1501]: 2026-04-13 20:10:20.435 [INFO][5557] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Apr 13 20:10:20.461899 containerd[1501]: 2026-04-13 20:10:20.452 [INFO][5564] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" HandleID="k8s-pod-network.439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0" Apr 13 20:10:20.461899 containerd[1501]: 2026-04-13 20:10:20.452 [INFO][5564] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.461899 containerd[1501]: 2026-04-13 20:10:20.452 [INFO][5564] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.461899 containerd[1501]: 2026-04-13 20:10:20.456 [WARNING][5564] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" HandleID="k8s-pod-network.439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0" Apr 13 20:10:20.461899 containerd[1501]: 2026-04-13 20:10:20.457 [INFO][5564] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" HandleID="k8s-pod-network.439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Workload="ci--4081--3--7--2--642afe6700-k8s-calico--apiserver--5d559f55b6--jwmwb-eth0" Apr 13 20:10:20.461899 containerd[1501]: 2026-04-13 20:10:20.458 [INFO][5564] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.461899 containerd[1501]: 2026-04-13 20:10:20.459 [INFO][5557] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e" Apr 13 20:10:20.462239 containerd[1501]: time="2026-04-13T20:10:20.461926164Z" level=info msg="TearDown network for sandbox \"439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e\" successfully" Apr 13 20:10:20.465222 containerd[1501]: time="2026-04-13T20:10:20.465157844Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:20.465222 containerd[1501]: time="2026-04-13T20:10:20.465201204Z" level=info msg="RemovePodSandbox \"439e86fa056fe6aeccbdd26661352f897aef1e4af254f076021e37e51fa7fe9e\" returns successfully" Apr 13 20:10:20.465578 containerd[1501]: time="2026-04-13T20:10:20.465555297Z" level=info msg="StopPodSandbox for \"b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3\"" Apr 13 20:10:20.517139 containerd[1501]: 2026-04-13 20:10:20.490 [WARNING][5577] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7", Pod:"csi-node-driver-bd8jl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia2bff448f7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:20.517139 containerd[1501]: 2026-04-13 20:10:20.490 [INFO][5577] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Apr 13 20:10:20.517139 containerd[1501]: 2026-04-13 20:10:20.490 [INFO][5577] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" iface="eth0" netns="" Apr 13 20:10:20.517139 containerd[1501]: 2026-04-13 20:10:20.490 [INFO][5577] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Apr 13 20:10:20.517139 containerd[1501]: 2026-04-13 20:10:20.490 [INFO][5577] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Apr 13 20:10:20.517139 containerd[1501]: 2026-04-13 20:10:20.506 [INFO][5584] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" HandleID="k8s-pod-network.b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Workload="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0" Apr 13 20:10:20.517139 containerd[1501]: 2026-04-13 20:10:20.506 [INFO][5584] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.517139 containerd[1501]: 2026-04-13 20:10:20.506 [INFO][5584] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.517139 containerd[1501]: 2026-04-13 20:10:20.511 [WARNING][5584] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" HandleID="k8s-pod-network.b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Workload="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0" Apr 13 20:10:20.517139 containerd[1501]: 2026-04-13 20:10:20.511 [INFO][5584] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" HandleID="k8s-pod-network.b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Workload="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0" Apr 13 20:10:20.517139 containerd[1501]: 2026-04-13 20:10:20.512 [INFO][5584] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.517139 containerd[1501]: 2026-04-13 20:10:20.514 [INFO][5577] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Apr 13 20:10:20.517139 containerd[1501]: time="2026-04-13T20:10:20.516032471Z" level=info msg="TearDown network for sandbox \"b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3\" successfully" Apr 13 20:10:20.517139 containerd[1501]: time="2026-04-13T20:10:20.516054011Z" level=info msg="StopPodSandbox for \"b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3\" returns successfully" Apr 13 20:10:20.517139 containerd[1501]: time="2026-04-13T20:10:20.516521183Z" level=info msg="RemovePodSandbox for \"b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3\"" Apr 13 20:10:20.517139 containerd[1501]: time="2026-04-13T20:10:20.516548513Z" level=info msg="Forcibly stopping sandbox \"b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3\"" Apr 13 20:10:20.574622 containerd[1501]: 2026-04-13 20:10:20.548 [WARNING][5599] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"63fb5bd2-87bc-48b2-990d-3ba3eaa6c20e", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"31e17871cf8dfaa7f8e2e119c0572bd9038d6fa0fb3f8f19f1131aa147a044e7", Pod:"csi-node-driver-bd8jl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia2bff448f7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:20.574622 containerd[1501]: 2026-04-13 20:10:20.548 [INFO][5599] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Apr 13 20:10:20.574622 containerd[1501]: 2026-04-13 20:10:20.548 [INFO][5599] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" iface="eth0" netns="" Apr 13 20:10:20.574622 containerd[1501]: 2026-04-13 20:10:20.548 [INFO][5599] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Apr 13 20:10:20.574622 containerd[1501]: 2026-04-13 20:10:20.548 [INFO][5599] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Apr 13 20:10:20.574622 containerd[1501]: 2026-04-13 20:10:20.564 [INFO][5607] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" HandleID="k8s-pod-network.b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Workload="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0" Apr 13 20:10:20.574622 containerd[1501]: 2026-04-13 20:10:20.564 [INFO][5607] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.574622 containerd[1501]: 2026-04-13 20:10:20.564 [INFO][5607] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.574622 containerd[1501]: 2026-04-13 20:10:20.568 [WARNING][5607] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" HandleID="k8s-pod-network.b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Workload="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0" Apr 13 20:10:20.574622 containerd[1501]: 2026-04-13 20:10:20.568 [INFO][5607] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" HandleID="k8s-pod-network.b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Workload="ci--4081--3--7--2--642afe6700-k8s-csi--node--driver--bd8jl-eth0" Apr 13 20:10:20.574622 containerd[1501]: 2026-04-13 20:10:20.571 [INFO][5607] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.574622 containerd[1501]: 2026-04-13 20:10:20.572 [INFO][5599] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3" Apr 13 20:10:20.574951 containerd[1501]: time="2026-04-13T20:10:20.574652025Z" level=info msg="TearDown network for sandbox \"b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3\" successfully" Apr 13 20:10:20.578037 containerd[1501]: time="2026-04-13T20:10:20.578008695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:20.578117 containerd[1501]: time="2026-04-13T20:10:20.578064125Z" level=info msg="RemovePodSandbox \"b7d51a7eac8f578216bfa4e826b1bc08319d87a1fb77780838bb413428bf0ff3\" returns successfully" Apr 13 20:10:20.578565 containerd[1501]: time="2026-04-13T20:10:20.578522419Z" level=info msg="StopPodSandbox for \"2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599\"" Apr 13 20:10:20.630163 containerd[1501]: 2026-04-13 20:10:20.604 [WARNING][5622] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"9b1ddd38-936b-4249-ae6d-50277142aab0", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75", Pod:"goldmane-cccfbd5cf-ngkh6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.44.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali15a0b3b10d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:20.630163 containerd[1501]: 2026-04-13 20:10:20.604 [INFO][5622] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Apr 13 20:10:20.630163 containerd[1501]: 2026-04-13 20:10:20.604 [INFO][5622] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" iface="eth0" netns="" Apr 13 20:10:20.630163 containerd[1501]: 2026-04-13 20:10:20.604 [INFO][5622] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Apr 13 20:10:20.630163 containerd[1501]: 2026-04-13 20:10:20.604 [INFO][5622] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Apr 13 20:10:20.630163 containerd[1501]: 2026-04-13 20:10:20.620 [INFO][5629] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" HandleID="k8s-pod-network.2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Workload="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0" Apr 13 20:10:20.630163 containerd[1501]: 2026-04-13 20:10:20.620 [INFO][5629] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.630163 containerd[1501]: 2026-04-13 20:10:20.620 [INFO][5629] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.630163 containerd[1501]: 2026-04-13 20:10:20.625 [WARNING][5629] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" HandleID="k8s-pod-network.2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Workload="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0" Apr 13 20:10:20.630163 containerd[1501]: 2026-04-13 20:10:20.625 [INFO][5629] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" HandleID="k8s-pod-network.2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Workload="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0" Apr 13 20:10:20.630163 containerd[1501]: 2026-04-13 20:10:20.626 [INFO][5629] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.630163 containerd[1501]: 2026-04-13 20:10:20.628 [INFO][5622] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Apr 13 20:10:20.630711 containerd[1501]: time="2026-04-13T20:10:20.630589852Z" level=info msg="TearDown network for sandbox \"2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599\" successfully" Apr 13 20:10:20.630711 containerd[1501]: time="2026-04-13T20:10:20.630625822Z" level=info msg="StopPodSandbox for \"2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599\" returns successfully" Apr 13 20:10:20.631165 containerd[1501]: time="2026-04-13T20:10:20.631110456Z" level=info msg="RemovePodSandbox for \"2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599\"" Apr 13 20:10:20.631165 containerd[1501]: time="2026-04-13T20:10:20.631142946Z" level=info msg="Forcibly stopping sandbox \"2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599\"" Apr 13 20:10:20.688892 containerd[1501]: 2026-04-13 20:10:20.658 [WARNING][5643] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"9b1ddd38-936b-4249-ae6d-50277142aab0", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"3c0be56a221462b956db528b39b12f661de049b1ac0c32ff50333c86527c4a75", Pod:"goldmane-cccfbd5cf-ngkh6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.44.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali15a0b3b10d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:20.688892 containerd[1501]: 2026-04-13 20:10:20.658 [INFO][5643] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Apr 13 20:10:20.688892 containerd[1501]: 2026-04-13 20:10:20.658 [INFO][5643] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" iface="eth0" netns="" Apr 13 20:10:20.688892 containerd[1501]: 2026-04-13 20:10:20.658 [INFO][5643] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Apr 13 20:10:20.688892 containerd[1501]: 2026-04-13 20:10:20.658 [INFO][5643] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Apr 13 20:10:20.688892 containerd[1501]: 2026-04-13 20:10:20.679 [INFO][5651] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" HandleID="k8s-pod-network.2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Workload="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0" Apr 13 20:10:20.688892 containerd[1501]: 2026-04-13 20:10:20.679 [INFO][5651] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.688892 containerd[1501]: 2026-04-13 20:10:20.679 [INFO][5651] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.688892 containerd[1501]: 2026-04-13 20:10:20.684 [WARNING][5651] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" HandleID="k8s-pod-network.2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Workload="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0" Apr 13 20:10:20.688892 containerd[1501]: 2026-04-13 20:10:20.684 [INFO][5651] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" HandleID="k8s-pod-network.2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Workload="ci--4081--3--7--2--642afe6700-k8s-goldmane--cccfbd5cf--ngkh6-eth0" Apr 13 20:10:20.688892 containerd[1501]: 2026-04-13 20:10:20.685 [INFO][5651] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.688892 containerd[1501]: 2026-04-13 20:10:20.686 [INFO][5643] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599" Apr 13 20:10:20.689255 containerd[1501]: time="2026-04-13T20:10:20.688922555Z" level=info msg="TearDown network for sandbox \"2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599\" successfully" Apr 13 20:10:20.692193 containerd[1501]: time="2026-04-13T20:10:20.692166514Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:20.692280 containerd[1501]: time="2026-04-13T20:10:20.692226075Z" level=info msg="RemovePodSandbox \"2bedd366eed71cce0feab495d6771b7d4fac997fd09b9b6d518aeed2f98aa599\" returns successfully" Apr 13 20:10:20.693185 containerd[1501]: time="2026-04-13T20:10:20.692838309Z" level=info msg="StopPodSandbox for \"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\"" Apr 13 20:10:20.752032 containerd[1501]: 2026-04-13 20:10:20.720 [WARNING][5665] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-whisker--5c8c5b9bcf--vb6pl-eth0" Apr 13 20:10:20.752032 containerd[1501]: 2026-04-13 20:10:20.720 [INFO][5665] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Apr 13 20:10:20.752032 containerd[1501]: 2026-04-13 20:10:20.720 [INFO][5665] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" iface="eth0" netns="" Apr 13 20:10:20.752032 containerd[1501]: 2026-04-13 20:10:20.720 [INFO][5665] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Apr 13 20:10:20.752032 containerd[1501]: 2026-04-13 20:10:20.720 [INFO][5665] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Apr 13 20:10:20.752032 containerd[1501]: 2026-04-13 20:10:20.740 [INFO][5672] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" HandleID="k8s-pod-network.ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Workload="ci--4081--3--7--2--642afe6700-k8s-whisker--5c8c5b9bcf--vb6pl-eth0" Apr 13 20:10:20.752032 containerd[1501]: 2026-04-13 20:10:20.740 [INFO][5672] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.752032 containerd[1501]: 2026-04-13 20:10:20.740 [INFO][5672] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.752032 containerd[1501]: 2026-04-13 20:10:20.746 [WARNING][5672] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" HandleID="k8s-pod-network.ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Workload="ci--4081--3--7--2--642afe6700-k8s-whisker--5c8c5b9bcf--vb6pl-eth0" Apr 13 20:10:20.752032 containerd[1501]: 2026-04-13 20:10:20.746 [INFO][5672] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" HandleID="k8s-pod-network.ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Workload="ci--4081--3--7--2--642afe6700-k8s-whisker--5c8c5b9bcf--vb6pl-eth0" Apr 13 20:10:20.752032 containerd[1501]: 2026-04-13 20:10:20.747 [INFO][5672] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.752032 containerd[1501]: 2026-04-13 20:10:20.750 [INFO][5665] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Apr 13 20:10:20.752457 containerd[1501]: time="2026-04-13T20:10:20.752403599Z" level=info msg="TearDown network for sandbox \"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\" successfully" Apr 13 20:10:20.752457 containerd[1501]: time="2026-04-13T20:10:20.752433550Z" level=info msg="StopPodSandbox for \"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\" returns successfully" Apr 13 20:10:20.753103 containerd[1501]: time="2026-04-13T20:10:20.753079963Z" level=info msg="RemovePodSandbox for \"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\"" Apr 13 20:10:20.753103 containerd[1501]: time="2026-04-13T20:10:20.753103213Z" level=info msg="Forcibly stopping sandbox \"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\"" Apr 13 20:10:20.816050 containerd[1501]: 2026-04-13 20:10:20.786 [WARNING][5686] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" WorkloadEndpoint="ci--4081--3--7--2--642afe6700-k8s-whisker--5c8c5b9bcf--vb6pl-eth0" Apr 13 20:10:20.816050 containerd[1501]: 2026-04-13 20:10:20.786 [INFO][5686] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Apr 13 20:10:20.816050 containerd[1501]: 2026-04-13 20:10:20.786 [INFO][5686] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" iface="eth0" netns="" Apr 13 20:10:20.816050 containerd[1501]: 2026-04-13 20:10:20.786 [INFO][5686] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Apr 13 20:10:20.816050 containerd[1501]: 2026-04-13 20:10:20.786 [INFO][5686] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Apr 13 20:10:20.816050 containerd[1501]: 2026-04-13 20:10:20.804 [INFO][5693] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" HandleID="k8s-pod-network.ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Workload="ci--4081--3--7--2--642afe6700-k8s-whisker--5c8c5b9bcf--vb6pl-eth0" Apr 13 20:10:20.816050 containerd[1501]: 2026-04-13 20:10:20.804 [INFO][5693] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.816050 containerd[1501]: 2026-04-13 20:10:20.804 [INFO][5693] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.816050 containerd[1501]: 2026-04-13 20:10:20.810 [WARNING][5693] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" HandleID="k8s-pod-network.ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Workload="ci--4081--3--7--2--642afe6700-k8s-whisker--5c8c5b9bcf--vb6pl-eth0" Apr 13 20:10:20.816050 containerd[1501]: 2026-04-13 20:10:20.810 [INFO][5693] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" HandleID="k8s-pod-network.ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Workload="ci--4081--3--7--2--642afe6700-k8s-whisker--5c8c5b9bcf--vb6pl-eth0" Apr 13 20:10:20.816050 containerd[1501]: 2026-04-13 20:10:20.812 [INFO][5693] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.816050 containerd[1501]: 2026-04-13 20:10:20.813 [INFO][5686] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e" Apr 13 20:10:20.817027 containerd[1501]: time="2026-04-13T20:10:20.815994055Z" level=info msg="TearDown network for sandbox \"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\" successfully" Apr 13 20:10:20.820790 containerd[1501]: time="2026-04-13T20:10:20.820736144Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:20.820891 containerd[1501]: time="2026-04-13T20:10:20.820858805Z" level=info msg="RemovePodSandbox \"ccc0628fcddd8c528ecbdc7c6232476117a8385b7522bf0b33189a5836ee756e\" returns successfully" Apr 13 20:10:20.821473 containerd[1501]: time="2026-04-13T20:10:20.821438658Z" level=info msg="StopPodSandbox for \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\"" Apr 13 20:10:20.888411 containerd[1501]: 2026-04-13 20:10:20.850 [WARNING][5708] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0454e9bc-ec53-4cc9-a0f4-2ba8ec7662fb", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61", Pod:"coredns-66bc5c9577-4df44", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ba631761ca", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:20.888411 containerd[1501]: 2026-04-13 20:10:20.850 [INFO][5708] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Apr 13 20:10:20.888411 containerd[1501]: 2026-04-13 20:10:20.850 [INFO][5708] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" iface="eth0" netns="" Apr 13 20:10:20.888411 containerd[1501]: 2026-04-13 20:10:20.850 [INFO][5708] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Apr 13 20:10:20.888411 containerd[1501]: 2026-04-13 20:10:20.850 [INFO][5708] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Apr 13 20:10:20.888411 containerd[1501]: 2026-04-13 20:10:20.877 [INFO][5715] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" HandleID="k8s-pod-network.aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0" Apr 13 20:10:20.888411 containerd[1501]: 2026-04-13 20:10:20.877 [INFO][5715] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.888411 containerd[1501]: 2026-04-13 20:10:20.877 [INFO][5715] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.888411 containerd[1501]: 2026-04-13 20:10:20.882 [WARNING][5715] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" HandleID="k8s-pod-network.aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0" Apr 13 20:10:20.888411 containerd[1501]: 2026-04-13 20:10:20.882 [INFO][5715] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" HandleID="k8s-pod-network.aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0" Apr 13 20:10:20.888411 containerd[1501]: 2026-04-13 20:10:20.884 [INFO][5715] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.888411 containerd[1501]: 2026-04-13 20:10:20.886 [INFO][5708] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Apr 13 20:10:20.889222 containerd[1501]: time="2026-04-13T20:10:20.888458975Z" level=info msg="TearDown network for sandbox \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\" successfully" Apr 13 20:10:20.889222 containerd[1501]: time="2026-04-13T20:10:20.888480415Z" level=info msg="StopPodSandbox for \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\" returns successfully" Apr 13 20:10:20.889222 containerd[1501]: time="2026-04-13T20:10:20.888977217Z" level=info msg="RemovePodSandbox for \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\"" Apr 13 20:10:20.889222 containerd[1501]: time="2026-04-13T20:10:20.888996077Z" level=info msg="Forcibly stopping sandbox \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\"" Apr 13 20:10:20.960095 containerd[1501]: 2026-04-13 20:10:20.928 [WARNING][5729] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0454e9bc-ec53-4cc9-a0f4-2ba8ec7662fb", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2-642afe6700", ContainerID:"d235d020184cdd19760999be6db8bbe0d76caf09376819b4fae99f08f7077a61", Pod:"coredns-66bc5c9577-4df44", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ba631761ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:20.960095 containerd[1501]: 2026-04-13 20:10:20.928 [INFO][5729] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Apr 13 20:10:20.960095 containerd[1501]: 2026-04-13 20:10:20.929 [INFO][5729] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" iface="eth0" netns="" Apr 13 20:10:20.960095 containerd[1501]: 2026-04-13 20:10:20.929 [INFO][5729] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Apr 13 20:10:20.960095 containerd[1501]: 2026-04-13 20:10:20.929 [INFO][5729] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Apr 13 20:10:20.960095 containerd[1501]: 2026-04-13 20:10:20.947 [INFO][5737] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" HandleID="k8s-pod-network.aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0" Apr 13 20:10:20.960095 containerd[1501]: 2026-04-13 20:10:20.947 [INFO][5737] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.960095 containerd[1501]: 2026-04-13 20:10:20.947 [INFO][5737] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.960095 containerd[1501]: 2026-04-13 20:10:20.952 [WARNING][5737] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" HandleID="k8s-pod-network.aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0" Apr 13 20:10:20.960095 containerd[1501]: 2026-04-13 20:10:20.953 [INFO][5737] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" HandleID="k8s-pod-network.aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Workload="ci--4081--3--7--2--642afe6700-k8s-coredns--66bc5c9577--4df44-eth0" Apr 13 20:10:20.960095 containerd[1501]: 2026-04-13 20:10:20.954 [INFO][5737] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.960095 containerd[1501]: 2026-04-13 20:10:20.956 [INFO][5729] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f" Apr 13 20:10:20.960095 containerd[1501]: time="2026-04-13T20:10:20.958593530Z" level=info msg="TearDown network for sandbox \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\" successfully" Apr 13 20:10:20.963331 containerd[1501]: time="2026-04-13T20:10:20.963252679Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 20:10:20.963331 containerd[1501]: time="2026-04-13T20:10:20.963321109Z" level=info msg="RemovePodSandbox \"aae2db38c81ed8d8473980a8fd607e02c98f4fc2d3ff61fff31fa9905b6aae7f\" returns successfully" Apr 13 20:10:27.208008 systemd[1]: run-containerd-runc-k8s.io-c307066066c21d6ad9012802526728be4ab542814ededd4e31f48658e391a544-runc.xGbLHw.mount: Deactivated successfully. 
Apr 13 20:10:34.778598 kubelet[2582]: I0413 20:10:34.777872 2582 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:10:40.020438 kubelet[2582]: I0413 20:10:40.019836 2582 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:10:56.073398 systemd[1]: Started sshd@9-62.238.3.135:22-20.229.252.112:53352.service - OpenSSH per-connection server daemon (20.229.252.112:53352). Apr 13 20:10:56.281434 sshd[5910]: Accepted publickey for core from 20.229.252.112 port 53352 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:10:56.282749 sshd[5910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:10:56.289915 systemd-logind[1483]: New session 10 of user core. Apr 13 20:10:56.294642 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 13 20:10:56.514162 sshd[5910]: pam_unix(sshd:session): session closed for user core Apr 13 20:10:56.516856 systemd[1]: sshd@9-62.238.3.135:22-20.229.252.112:53352.service: Deactivated successfully. Apr 13 20:10:56.518886 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 20:10:56.520384 systemd-logind[1483]: Session 10 logged out. Waiting for processes to exit. Apr 13 20:10:56.523020 systemd-logind[1483]: Removed session 10. Apr 13 20:10:57.212308 systemd[1]: run-containerd-runc-k8s.io-c307066066c21d6ad9012802526728be4ab542814ededd4e31f48658e391a544-runc.mnJMza.mount: Deactivated successfully. Apr 13 20:11:01.561734 systemd[1]: Started sshd@10-62.238.3.135:22-20.229.252.112:53366.service - OpenSSH per-connection server daemon (20.229.252.112:53366). Apr 13 20:11:01.788747 sshd[5947]: Accepted publickey for core from 20.229.252.112 port 53366 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:11:01.791759 sshd[5947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:11:01.800538 systemd-logind[1483]: New session 11 of user core. 
Apr 13 20:11:01.805643 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 13 20:11:02.044194 sshd[5947]: pam_unix(sshd:session): session closed for user core Apr 13 20:11:02.048033 systemd[1]: sshd@10-62.238.3.135:22-20.229.252.112:53366.service: Deactivated successfully. Apr 13 20:11:02.052712 systemd[1]: session-11.scope: Deactivated successfully. Apr 13 20:11:02.054927 systemd-logind[1483]: Session 11 logged out. Waiting for processes to exit. Apr 13 20:11:02.055882 systemd-logind[1483]: Removed session 11. Apr 13 20:11:07.094759 systemd[1]: Started sshd@11-62.238.3.135:22-20.229.252.112:55966.service - OpenSSH per-connection server daemon (20.229.252.112:55966). Apr 13 20:11:07.296701 sshd[5982]: Accepted publickey for core from 20.229.252.112 port 55966 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:11:07.297371 sshd[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:11:07.301995 systemd-logind[1483]: New session 12 of user core. Apr 13 20:11:07.306462 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 13 20:11:07.521535 sshd[5982]: pam_unix(sshd:session): session closed for user core Apr 13 20:11:07.526437 systemd[1]: sshd@11-62.238.3.135:22-20.229.252.112:55966.service: Deactivated successfully. Apr 13 20:11:07.528768 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 20:11:07.529537 systemd-logind[1483]: Session 12 logged out. Waiting for processes to exit. Apr 13 20:11:07.530490 systemd-logind[1483]: Removed session 12. Apr 13 20:11:12.568648 systemd[1]: Started sshd@12-62.238.3.135:22-20.229.252.112:55970.service - OpenSSH per-connection server daemon (20.229.252.112:55970). 
Apr 13 20:11:12.781063 sshd[6012]: Accepted publickey for core from 20.229.252.112 port 55970 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:11:12.783241 sshd[6012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:11:12.790893 systemd-logind[1483]: New session 13 of user core. Apr 13 20:11:12.796874 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 13 20:11:12.993584 sshd[6012]: pam_unix(sshd:session): session closed for user core Apr 13 20:11:12.996545 systemd[1]: sshd@12-62.238.3.135:22-20.229.252.112:55970.service: Deactivated successfully. Apr 13 20:11:12.999324 systemd[1]: session-13.scope: Deactivated successfully. Apr 13 20:11:13.001915 systemd-logind[1483]: Session 13 logged out. Waiting for processes to exit. Apr 13 20:11:13.002751 systemd-logind[1483]: Removed session 13. Apr 13 20:11:13.033537 systemd[1]: Started sshd@13-62.238.3.135:22-20.229.252.112:55982.service - OpenSSH per-connection server daemon (20.229.252.112:55982). Apr 13 20:11:13.236947 sshd[6026]: Accepted publickey for core from 20.229.252.112 port 55982 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:11:13.239411 sshd[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:11:13.245838 systemd-logind[1483]: New session 14 of user core. Apr 13 20:11:13.253567 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 13 20:11:13.486892 sshd[6026]: pam_unix(sshd:session): session closed for user core Apr 13 20:11:13.490428 systemd-logind[1483]: Session 14 logged out. Waiting for processes to exit. Apr 13 20:11:13.490828 systemd[1]: sshd@13-62.238.3.135:22-20.229.252.112:55982.service: Deactivated successfully. Apr 13 20:11:13.492620 systemd[1]: session-14.scope: Deactivated successfully. Apr 13 20:11:13.493580 systemd-logind[1483]: Removed session 14. 
Apr 13 20:11:13.530817 systemd[1]: Started sshd@14-62.238.3.135:22-20.229.252.112:55986.service - OpenSSH per-connection server daemon (20.229.252.112:55986).
Apr 13 20:11:13.728602 sshd[6037]: Accepted publickey for core from 20.229.252.112 port 55986 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:11:13.731510 sshd[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:11:13.738894 systemd-logind[1483]: New session 15 of user core.
Apr 13 20:11:13.744843 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 13 20:11:13.987175 sshd[6037]: pam_unix(sshd:session): session closed for user core
Apr 13 20:11:13.991225 systemd-logind[1483]: Session 15 logged out. Waiting for processes to exit.
Apr 13 20:11:13.991314 systemd[1]: sshd@14-62.238.3.135:22-20.229.252.112:55986.service: Deactivated successfully.
Apr 13 20:11:13.993904 systemd[1]: session-15.scope: Deactivated successfully.
Apr 13 20:11:13.994810 systemd-logind[1483]: Removed session 15.
Apr 13 20:11:19.035808 systemd[1]: Started sshd@15-62.238.3.135:22-20.229.252.112:47816.service - OpenSSH per-connection server daemon (20.229.252.112:47816).
Apr 13 20:11:19.235380 sshd[6059]: Accepted publickey for core from 20.229.252.112 port 47816 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:11:19.238303 sshd[6059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:11:19.246928 systemd-logind[1483]: New session 16 of user core.
Apr 13 20:11:19.252603 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 13 20:11:19.486554 sshd[6059]: pam_unix(sshd:session): session closed for user core
Apr 13 20:11:19.491712 systemd[1]: sshd@15-62.238.3.135:22-20.229.252.112:47816.service: Deactivated successfully.
Apr 13 20:11:19.494957 systemd[1]: session-16.scope: Deactivated successfully.
Apr 13 20:11:19.496390 systemd-logind[1483]: Session 16 logged out. Waiting for processes to exit.
Apr 13 20:11:19.497553 systemd-logind[1483]: Removed session 16.
Apr 13 20:11:19.533724 systemd[1]: Started sshd@16-62.238.3.135:22-20.229.252.112:47824.service - OpenSSH per-connection server daemon (20.229.252.112:47824).
Apr 13 20:11:19.757952 sshd[6072]: Accepted publickey for core from 20.229.252.112 port 47824 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:11:19.763175 sshd[6072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:11:19.772439 systemd-logind[1483]: New session 17 of user core.
Apr 13 20:11:19.779560 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 13 20:11:20.229162 sshd[6072]: pam_unix(sshd:session): session closed for user core
Apr 13 20:11:20.237733 systemd[1]: sshd@16-62.238.3.135:22-20.229.252.112:47824.service: Deactivated successfully.
Apr 13 20:11:20.242037 systemd[1]: session-17.scope: Deactivated successfully.
Apr 13 20:11:20.243960 systemd-logind[1483]: Session 17 logged out. Waiting for processes to exit.
Apr 13 20:11:20.246680 systemd-logind[1483]: Removed session 17.
Apr 13 20:11:20.279326 systemd[1]: Started sshd@17-62.238.3.135:22-20.229.252.112:47834.service - OpenSSH per-connection server daemon (20.229.252.112:47834).
Apr 13 20:11:20.508573 sshd[6085]: Accepted publickey for core from 20.229.252.112 port 47834 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:11:20.511744 sshd[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:11:20.522174 systemd-logind[1483]: New session 18 of user core.
Apr 13 20:11:20.527543 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 13 20:11:21.310293 sshd[6085]: pam_unix(sshd:session): session closed for user core
Apr 13 20:11:21.314676 systemd-logind[1483]: Session 18 logged out. Waiting for processes to exit.
Apr 13 20:11:21.315869 systemd[1]: sshd@17-62.238.3.135:22-20.229.252.112:47834.service: Deactivated successfully.
Apr 13 20:11:21.320321 systemd[1]: session-18.scope: Deactivated successfully.
Apr 13 20:11:21.323188 systemd-logind[1483]: Removed session 18.
Apr 13 20:11:21.348420 systemd[1]: Started sshd@18-62.238.3.135:22-20.229.252.112:47842.service - OpenSSH per-connection server daemon (20.229.252.112:47842).
Apr 13 20:11:21.562247 sshd[6109]: Accepted publickey for core from 20.229.252.112 port 47842 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:11:21.565161 sshd[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:11:21.573468 systemd-logind[1483]: New session 19 of user core.
Apr 13 20:11:21.581657 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 13 20:11:21.901219 sshd[6109]: pam_unix(sshd:session): session closed for user core
Apr 13 20:11:21.904832 systemd[1]: sshd@18-62.238.3.135:22-20.229.252.112:47842.service: Deactivated successfully.
Apr 13 20:11:21.906970 systemd[1]: session-19.scope: Deactivated successfully.
Apr 13 20:11:21.907633 systemd-logind[1483]: Session 19 logged out. Waiting for processes to exit.
Apr 13 20:11:21.908446 systemd-logind[1483]: Removed session 19.
Apr 13 20:11:21.951645 systemd[1]: Started sshd@19-62.238.3.135:22-20.229.252.112:47854.service - OpenSSH per-connection server daemon (20.229.252.112:47854).
Apr 13 20:11:22.151708 sshd[6120]: Accepted publickey for core from 20.229.252.112 port 47854 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:11:22.154095 sshd[6120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:11:22.160186 systemd-logind[1483]: New session 20 of user core.
Apr 13 20:11:22.169542 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 13 20:11:22.377308 sshd[6120]: pam_unix(sshd:session): session closed for user core
Apr 13 20:11:22.380963 systemd[1]: sshd@19-62.238.3.135:22-20.229.252.112:47854.service: Deactivated successfully.
Apr 13 20:11:22.383735 systemd[1]: session-20.scope: Deactivated successfully.
Apr 13 20:11:22.386010 systemd-logind[1483]: Session 20 logged out. Waiting for processes to exit.
Apr 13 20:11:22.387391 systemd-logind[1483]: Removed session 20.
Apr 13 20:11:24.064489 update_engine[1485]: I20260413 20:11:24.064378 1485 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 13 20:11:24.064489 update_engine[1485]: I20260413 20:11:24.064460 1485 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 13 20:11:24.065206 update_engine[1485]: I20260413 20:11:24.064819 1485 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 13 20:11:24.066081 update_engine[1485]: I20260413 20:11:24.065948 1485 omaha_request_params.cc:62] Current group set to lts
Apr 13 20:11:24.068063 update_engine[1485]: I20260413 20:11:24.067232 1485 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 13 20:11:24.068063 update_engine[1485]: I20260413 20:11:24.067273 1485 update_attempter.cc:643] Scheduling an action processor start.
Apr 13 20:11:24.068063 update_engine[1485]: I20260413 20:11:24.067306 1485 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 13 20:11:24.068063 update_engine[1485]: I20260413 20:11:24.067415 1485 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 13 20:11:24.068063 update_engine[1485]: I20260413 20:11:24.067548 1485 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 13 20:11:24.068063 update_engine[1485]: I20260413 20:11:24.067564 1485 omaha_request_action.cc:272] Request:
Apr 13 20:11:24.068063 update_engine[1485]:
Apr 13 20:11:24.068063 update_engine[1485]:
Apr 13 20:11:24.068063 update_engine[1485]:
Apr 13 20:11:24.068063 update_engine[1485]:
Apr 13 20:11:24.068063 update_engine[1485]:
Apr 13 20:11:24.068063 update_engine[1485]:
Apr 13 20:11:24.068063 update_engine[1485]:
Apr 13 20:11:24.068063 update_engine[1485]:
Apr 13 20:11:24.068063 update_engine[1485]: I20260413 20:11:24.067580 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 13 20:11:24.068728 locksmithd[1517]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 13 20:11:24.074021 update_engine[1485]: I20260413 20:11:24.073966 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 13 20:11:24.074525 update_engine[1485]: I20260413 20:11:24.074465 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 13 20:11:24.075341 update_engine[1485]: E20260413 20:11:24.075285 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 13 20:11:24.075461 update_engine[1485]: I20260413 20:11:24.075421 1485 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 13 20:11:27.206982 systemd[1]: run-containerd-runc-k8s.io-c307066066c21d6ad9012802526728be4ab542814ededd4e31f48658e391a544-runc.UohKv0.mount: Deactivated successfully.
Apr 13 20:11:27.430795 systemd[1]: Started sshd@20-62.238.3.135:22-20.229.252.112:42774.service - OpenSSH per-connection server daemon (20.229.252.112:42774).
Apr 13 20:11:27.648740 sshd[6205]: Accepted publickey for core from 20.229.252.112 port 42774 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:11:27.652569 sshd[6205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:11:27.660104 systemd-logind[1483]: New session 21 of user core.
Apr 13 20:11:27.664473 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 13 20:11:27.903670 sshd[6205]: pam_unix(sshd:session): session closed for user core
Apr 13 20:11:27.907519 systemd-logind[1483]: Session 21 logged out. Waiting for processes to exit.
Apr 13 20:11:27.908494 systemd[1]: sshd@20-62.238.3.135:22-20.229.252.112:42774.service: Deactivated successfully.
Apr 13 20:11:27.910391 systemd[1]: session-21.scope: Deactivated successfully.
Apr 13 20:11:27.912561 systemd-logind[1483]: Removed session 21.
Apr 13 20:11:32.951712 systemd[1]: Started sshd@21-62.238.3.135:22-20.229.252.112:42782.service - OpenSSH per-connection server daemon (20.229.252.112:42782).
Apr 13 20:11:33.168185 sshd[6251]: Accepted publickey for core from 20.229.252.112 port 42782 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:11:33.170580 sshd[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:11:33.177503 systemd-logind[1483]: New session 22 of user core.
Apr 13 20:11:33.183503 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 13 20:11:33.411294 sshd[6251]: pam_unix(sshd:session): session closed for user core
Apr 13 20:11:33.418013 systemd[1]: sshd@21-62.238.3.135:22-20.229.252.112:42782.service: Deactivated successfully.
Apr 13 20:11:33.418185 systemd-logind[1483]: Session 22 logged out. Waiting for processes to exit.
Apr 13 20:11:33.419787 systemd[1]: session-22.scope: Deactivated successfully.
Apr 13 20:11:33.421006 systemd-logind[1483]: Removed session 22.
Apr 13 20:11:34.063258 update_engine[1485]: I20260413 20:11:34.063150 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 13 20:11:34.063813 update_engine[1485]: I20260413 20:11:34.063637 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 13 20:11:34.064031 update_engine[1485]: I20260413 20:11:34.063983 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 13 20:11:34.064800 update_engine[1485]: E20260413 20:11:34.064750 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 13 20:11:34.064861 update_engine[1485]: I20260413 20:11:34.064836 1485 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 13 20:11:44.071725 update_engine[1485]: I20260413 20:11:44.071640 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 13 20:11:44.072492 update_engine[1485]: I20260413 20:11:44.071913 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 13 20:11:44.072492 update_engine[1485]: I20260413 20:11:44.072145 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 13 20:11:44.073081 update_engine[1485]: E20260413 20:11:44.073041 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 13 20:11:44.073161 update_engine[1485]: I20260413 20:11:44.073089 1485 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 13 20:11:50.701842 kubelet[2582]: E0413 20:11:50.701782 2582 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:41802->10.0.0.2:2379: read: connection timed out"
Apr 13 20:11:51.643865 systemd[1]: cri-containerd-ef38bce67fbe5d108ef07e3ce6e181b874c653a58cac48ca425b98c7994054b1.scope: Deactivated successfully.
Apr 13 20:11:51.644272 systemd[1]: cri-containerd-ef38bce67fbe5d108ef07e3ce6e181b874c653a58cac48ca425b98c7994054b1.scope: Consumed 3.656s CPU time, 18.3M memory peak, 0B memory swap peak.
Apr 13 20:11:51.664307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef38bce67fbe5d108ef07e3ce6e181b874c653a58cac48ca425b98c7994054b1-rootfs.mount: Deactivated successfully.
Apr 13 20:11:51.664943 containerd[1501]: time="2026-04-13T20:11:51.664777437Z" level=info msg="shim disconnected" id=ef38bce67fbe5d108ef07e3ce6e181b874c653a58cac48ca425b98c7994054b1 namespace=k8s.io
Apr 13 20:11:51.664943 containerd[1501]: time="2026-04-13T20:11:51.664838837Z" level=warning msg="cleaning up after shim disconnected" id=ef38bce67fbe5d108ef07e3ce6e181b874c653a58cac48ca425b98c7994054b1 namespace=k8s.io
Apr 13 20:11:51.664943 containerd[1501]: time="2026-04-13T20:11:51.664848197Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:11:51.750007 systemd[1]: cri-containerd-7ee54c37751ba8cb8507494e82ef57a30e16d658df7c495a2a732cf527f7ad90.scope: Deactivated successfully.
Apr 13 20:11:51.750489 systemd[1]: cri-containerd-7ee54c37751ba8cb8507494e82ef57a30e16d658df7c495a2a732cf527f7ad90.scope: Consumed 6.814s CPU time.
Apr 13 20:11:51.775704 containerd[1501]: time="2026-04-13T20:11:51.775609220Z" level=info msg="shim disconnected" id=7ee54c37751ba8cb8507494e82ef57a30e16d658df7c495a2a732cf527f7ad90 namespace=k8s.io
Apr 13 20:11:51.775917 containerd[1501]: time="2026-04-13T20:11:51.775699371Z" level=warning msg="cleaning up after shim disconnected" id=7ee54c37751ba8cb8507494e82ef57a30e16d658df7c495a2a732cf527f7ad90 namespace=k8s.io
Apr 13 20:11:51.775917 containerd[1501]: time="2026-04-13T20:11:51.775728741Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:11:51.778048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ee54c37751ba8cb8507494e82ef57a30e16d658df7c495a2a732cf527f7ad90-rootfs.mount: Deactivated successfully.
Apr 13 20:11:51.791083 containerd[1501]: time="2026-04-13T20:11:51.791026844Z" level=warning msg="cleanup warnings time=\"2026-04-13T20:11:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 13 20:11:52.511301 kubelet[2582]: I0413 20:11:52.510687 2582 scope.go:117] "RemoveContainer" containerID="ef38bce67fbe5d108ef07e3ce6e181b874c653a58cac48ca425b98c7994054b1"
Apr 13 20:11:52.513632 kubelet[2582]: I0413 20:11:52.513588 2582 scope.go:117] "RemoveContainer" containerID="7ee54c37751ba8cb8507494e82ef57a30e16d658df7c495a2a732cf527f7ad90"
Apr 13 20:11:52.514686 containerd[1501]: time="2026-04-13T20:11:52.514641184Z" level=info msg="CreateContainer within sandbox \"6cdb979c557feb2773b6d93b8a701fd7971e611725d4eca2daa7c68602504d3e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 13 20:11:52.515420 containerd[1501]: time="2026-04-13T20:11:52.515249525Z" level=info msg="CreateContainer within sandbox \"aa3a351dea63a2a6a70df2cec5d47ef8d69f490c4c3d93968950affb9e98c0a7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 13 20:11:52.532162 containerd[1501]: time="2026-04-13T20:11:52.532089612Z" level=info msg="CreateContainer within sandbox \"6cdb979c557feb2773b6d93b8a701fd7971e611725d4eca2daa7c68602504d3e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"5e56a032e54bce924b77d701b8dd86fdd5ff0dd5786af8e50fc0d4a968af8cb4\""
Apr 13 20:11:52.536003 containerd[1501]: time="2026-04-13T20:11:52.535662530Z" level=info msg="StartContainer for \"5e56a032e54bce924b77d701b8dd86fdd5ff0dd5786af8e50fc0d4a968af8cb4\""
Apr 13 20:11:52.539252 containerd[1501]: time="2026-04-13T20:11:52.538375145Z" level=info msg="CreateContainer within sandbox \"aa3a351dea63a2a6a70df2cec5d47ef8d69f490c4c3d93968950affb9e98c0a7\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b9af6a893865a90d12be82268f90044082138c52d29761f28ee4b485f34f56ae\""
Apr 13 20:11:52.539252 containerd[1501]: time="2026-04-13T20:11:52.538664857Z" level=info msg="StartContainer for \"b9af6a893865a90d12be82268f90044082138c52d29761f28ee4b485f34f56ae\""
Apr 13 20:11:52.540214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1515282105.mount: Deactivated successfully.
Apr 13 20:11:52.572437 systemd[1]: Started cri-containerd-b9af6a893865a90d12be82268f90044082138c52d29761f28ee4b485f34f56ae.scope - libcontainer container b9af6a893865a90d12be82268f90044082138c52d29761f28ee4b485f34f56ae.
Apr 13 20:11:52.575770 systemd[1]: Started cri-containerd-5e56a032e54bce924b77d701b8dd86fdd5ff0dd5786af8e50fc0d4a968af8cb4.scope - libcontainer container 5e56a032e54bce924b77d701b8dd86fdd5ff0dd5786af8e50fc0d4a968af8cb4.
Apr 13 20:11:52.607626 containerd[1501]: time="2026-04-13T20:11:52.607597536Z" level=info msg="StartContainer for \"b9af6a893865a90d12be82268f90044082138c52d29761f28ee4b485f34f56ae\" returns successfully"
Apr 13 20:11:52.618994 containerd[1501]: time="2026-04-13T20:11:52.618958001Z" level=info msg="StartContainer for \"5e56a032e54bce924b77d701b8dd86fdd5ff0dd5786af8e50fc0d4a968af8cb4\" returns successfully"
Apr 13 20:11:53.520576 kubelet[2582]: E0413 20:11:53.519323 2582 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:41426->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-7-2-642afe6700.18a603a353c39126 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-7-2-642afe6700,UID:08263d610ab2e5a419e44cbe56866e2e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-2-642afe6700,},FirstTimestamp:2026-04-13 20:11:43.064117542 +0000 UTC m=+143.191002601,LastTimestamp:2026-04-13 20:11:43.064117542 +0000 UTC m=+143.191002601,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-2-642afe6700,}"
Apr 13 20:11:54.064009 update_engine[1485]: I20260413 20:11:54.063446 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 13 20:11:54.064009 update_engine[1485]: I20260413 20:11:54.064015 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 13 20:11:54.065112 update_engine[1485]: I20260413 20:11:54.064429 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 13 20:11:54.065390 update_engine[1485]: E20260413 20:11:54.065286 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 13 20:11:54.065550 update_engine[1485]: I20260413 20:11:54.065408 1485 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 13 20:11:54.065550 update_engine[1485]: I20260413 20:11:54.065428 1485 omaha_request_action.cc:617] Omaha request response:
Apr 13 20:11:54.065632 update_engine[1485]: E20260413 20:11:54.065552 1485 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 13 20:11:54.065632 update_engine[1485]: I20260413 20:11:54.065584 1485 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 13 20:11:54.065632 update_engine[1485]: I20260413 20:11:54.065600 1485 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 13 20:11:54.065632 update_engine[1485]: I20260413 20:11:54.065614 1485 update_attempter.cc:306] Processing Done.
Apr 13 20:11:54.065786 update_engine[1485]: E20260413 20:11:54.065640 1485 update_attempter.cc:619] Update failed.
Apr 13 20:11:54.065786 update_engine[1485]: I20260413 20:11:54.065656 1485 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 13 20:11:54.065786 update_engine[1485]: I20260413 20:11:54.065670 1485 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 13 20:11:54.065786 update_engine[1485]: I20260413 20:11:54.065685 1485 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 13 20:11:54.065944 update_engine[1485]: I20260413 20:11:54.065789 1485 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 13 20:11:54.065944 update_engine[1485]: I20260413 20:11:54.065822 1485 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 13 20:11:54.065944 update_engine[1485]: I20260413 20:11:54.065836 1485 omaha_request_action.cc:272] Request:
Apr 13 20:11:54.065944 update_engine[1485]:
Apr 13 20:11:54.065944 update_engine[1485]:
Apr 13 20:11:54.065944 update_engine[1485]:
Apr 13 20:11:54.065944 update_engine[1485]:
Apr 13 20:11:54.065944 update_engine[1485]:
Apr 13 20:11:54.065944 update_engine[1485]:
Apr 13 20:11:54.065944 update_engine[1485]: I20260413 20:11:54.065852 1485 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 13 20:11:54.066286 update_engine[1485]: I20260413 20:11:54.066201 1485 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 13 20:11:54.066616 update_engine[1485]: I20260413 20:11:54.066563 1485 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 13 20:11:54.066740 locksmithd[1517]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 13 20:11:54.068171 update_engine[1485]: E20260413 20:11:54.068097 1485 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 13 20:11:54.068281 update_engine[1485]: I20260413 20:11:54.068183 1485 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 13 20:11:54.068281 update_engine[1485]: I20260413 20:11:54.068202 1485 omaha_request_action.cc:617] Omaha request response:
Apr 13 20:11:54.068281 update_engine[1485]: I20260413 20:11:54.068218 1485 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 13 20:11:54.068281 update_engine[1485]: I20260413 20:11:54.068233 1485 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 13 20:11:54.068281 update_engine[1485]: I20260413 20:11:54.068248 1485 update_attempter.cc:306] Processing Done.
Apr 13 20:11:54.068281 update_engine[1485]: I20260413 20:11:54.068264 1485 update_attempter.cc:310] Error event sent.
Apr 13 20:11:54.068880 update_engine[1485]: I20260413 20:11:54.068282 1485 update_check_scheduler.cc:74] Next update check in 48m43s
Apr 13 20:11:54.068934 locksmithd[1517]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 13 20:11:56.282606 systemd[1]: cri-containerd-44851a4b36a0610d5da54d98cc87957bf5da95f5b7dc38c3302754091f480dc2.scope: Deactivated successfully.
Apr 13 20:11:56.284222 systemd[1]: cri-containerd-44851a4b36a0610d5da54d98cc87957bf5da95f5b7dc38c3302754091f480dc2.scope: Consumed 1.699s CPU time, 16.1M memory peak, 0B memory swap peak.
Apr 13 20:11:56.324675 containerd[1501]: time="2026-04-13T20:11:56.324575425Z" level=info msg="shim disconnected" id=44851a4b36a0610d5da54d98cc87957bf5da95f5b7dc38c3302754091f480dc2 namespace=k8s.io
Apr 13 20:11:56.324675 containerd[1501]: time="2026-04-13T20:11:56.324647216Z" level=warning msg="cleaning up after shim disconnected" id=44851a4b36a0610d5da54d98cc87957bf5da95f5b7dc38c3302754091f480dc2 namespace=k8s.io
Apr 13 20:11:56.324675 containerd[1501]: time="2026-04-13T20:11:56.324664706Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:11:56.327213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44851a4b36a0610d5da54d98cc87957bf5da95f5b7dc38c3302754091f480dc2-rootfs.mount: Deactivated successfully.
Apr 13 20:11:56.532250 kubelet[2582]: I0413 20:11:56.532200 2582 scope.go:117] "RemoveContainer" containerID="44851a4b36a0610d5da54d98cc87957bf5da95f5b7dc38c3302754091f480dc2"
Apr 13 20:11:56.536060 containerd[1501]: time="2026-04-13T20:11:56.534891899Z" level=info msg="CreateContainer within sandbox \"4c1b9baece7a382d6571cf8eea0888249bdffa59f2771d30e94c9761228acec1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 13 20:11:56.559319 containerd[1501]: time="2026-04-13T20:11:56.559235600Z" level=info msg="CreateContainer within sandbox \"4c1b9baece7a382d6571cf8eea0888249bdffa59f2771d30e94c9761228acec1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"83dc5a16cc417b1d83a0b8e2c107ee3e52fb651822708b8d2eef1a8816b7e3b6\""
Apr 13 20:11:56.560395 containerd[1501]: time="2026-04-13T20:11:56.560367732Z" level=info msg="StartContainer for \"83dc5a16cc417b1d83a0b8e2c107ee3e52fb651822708b8d2eef1a8816b7e3b6\""
Apr 13 20:11:56.602433 systemd[1]: Started cri-containerd-83dc5a16cc417b1d83a0b8e2c107ee3e52fb651822708b8d2eef1a8816b7e3b6.scope - libcontainer container 83dc5a16cc417b1d83a0b8e2c107ee3e52fb651822708b8d2eef1a8816b7e3b6.
Apr 13 20:11:56.636892 containerd[1501]: time="2026-04-13T20:11:56.636853383Z" level=info msg="StartContainer for \"83dc5a16cc417b1d83a0b8e2c107ee3e52fb651822708b8d2eef1a8816b7e3b6\" returns successfully"
Apr 13 20:12:00.704019 kubelet[2582]: E0413 20:12:00.703966 2582 controller.go:195] "Failed to update lease" err="Put \"https://62.238.3.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-2-642afe6700?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 13 20:12:03.778072 systemd[1]: cri-containerd-b9af6a893865a90d12be82268f90044082138c52d29761f28ee4b485f34f56ae.scope: Deactivated successfully.
Apr 13 20:12:03.820224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9af6a893865a90d12be82268f90044082138c52d29761f28ee4b485f34f56ae-rootfs.mount: Deactivated successfully.
Apr 13 20:12:03.828387 containerd[1501]: time="2026-04-13T20:12:03.828253537Z" level=info msg="shim disconnected" id=b9af6a893865a90d12be82268f90044082138c52d29761f28ee4b485f34f56ae namespace=k8s.io
Apr 13 20:12:03.829296 containerd[1501]: time="2026-04-13T20:12:03.828417387Z" level=warning msg="cleaning up after shim disconnected" id=b9af6a893865a90d12be82268f90044082138c52d29761f28ee4b485f34f56ae namespace=k8s.io
Apr 13 20:12:03.829296 containerd[1501]: time="2026-04-13T20:12:03.828450967Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:12:04.558800 kubelet[2582]: I0413 20:12:04.558755 2582 scope.go:117] "RemoveContainer" containerID="7ee54c37751ba8cb8507494e82ef57a30e16d658df7c495a2a732cf527f7ad90"
Apr 13 20:12:04.559468 kubelet[2582]: I0413 20:12:04.559241 2582 scope.go:117] "RemoveContainer" containerID="b9af6a893865a90d12be82268f90044082138c52d29761f28ee4b485f34f56ae"
Apr 13 20:12:04.559468 kubelet[2582]: E0413 20:12:04.559453 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-5588576f44-6g474_tigera-operator(790ec4e6-3240-4795-bf98-9753489fa169)\"" pod="tigera-operator/tigera-operator-5588576f44-6g474" podUID="790ec4e6-3240-4795-bf98-9753489fa169"
Apr 13 20:12:04.561165 containerd[1501]: time="2026-04-13T20:12:04.561110904Z" level=info msg="RemoveContainer for \"7ee54c37751ba8cb8507494e82ef57a30e16d658df7c495a2a732cf527f7ad90\""
Apr 13 20:12:04.565733 containerd[1501]: time="2026-04-13T20:12:04.565700493Z" level=info msg="RemoveContainer for \"7ee54c37751ba8cb8507494e82ef57a30e16d658df7c495a2a732cf527f7ad90\" returns successfully"
Apr 13 20:12:10.705950 kubelet[2582]: E0413 20:12:10.705899 2582 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ci-4081-3-7-2-642afe6700)"